My video card can store a 34359.738368MB 3D texture

K.I.L.E.R

What is wrong with my calculations?
2048^3 * 4 * 1e-6 = 34359.738368MB

Can someone explain to me how it is possible to store a 3D image with the following properties on a 128MB video card (R300)?

w = h = d = 2048, GL_RGBA with borders. :???:

The texture proxy returns that this is possible.
 
What is wrong with my calculations?
2048^3 * 4 * 1e-6 = 34359.738368MB
The only real change to make here is that 1 MB != 1,000,000 bytes. 1 MB = 2^20 bytes = 1,048,576 bytes.
If you use that, you should get exactly 32768 MB (which is what it should be: exactly 32 GB).

Can someone explain to me how it is possible to store a 3D image with the following properties on a 128MB video card (R300)?
Well, aside from the fact that your texture need not be wholly stored on the card at once (it can be buffered a few layers at a time), it really just indicates that the GPU hardware can address and perform texture reads on a map of that size -- it doesn't mean you can actually use it, or even *should* if you could.
 
The texture proxy returns that this is possible.
The driver's just lazy. You'll get an allocation error anyway before you even get to create such a texture. ;)
And to think that someone's actually using proxy textures...
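For anyone following along, the proxy-texture check under discussion looks roughly like this -- a sketch only, assuming a current OpenGL 1.2+ context and a platform where glTexImage3D is declared in GL/gl.h (otherwise load it through an extension loader):

```c
#include <GL/gl.h>

/* Sketch: a proxy target allocates nothing; the driver just records
   whether this combination of parameters would be accepted. On
   rejection, the proxy's state (width, height, etc.) reads back as 0.
   Assumes a current OpenGL context. */
int texture_3d_would_fit(GLsizei w, GLsizei h, GLsizei d, GLint border)
{
    GLint got_width = 0;
    glTexImage3D(GL_PROXY_TEXTURE_3D, 0, GL_RGBA8,
                 w, h, d, border,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glGetTexLevelParameteriv(GL_PROXY_TEXTURE_3D, 0,
                             GL_TEXTURE_WIDTH, &got_width);
    return got_width != 0;
}
```

As the thread shows, a "yes" here is only a statement about what the hardware can address, not a promise that the allocation will succeed.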

The only real change to make here is that 1 MB != 1,000,000 bytes. 1 MB = 2^20 bytes = 1,048,576 bytes.
He's perfectly right. You're talking about 1 MiB.
 
I don't normally keep up with stuff, but why is that bad?

And to think that someone's actually using proxy textures...
Thanks.

I keep forgetting what an MB is; it's all the damned commercialisation of the MB by ISPs. :(
1GB to them is exactly 1000MB. :?

Thanks SMM:
I found what you mentioned:
http://www.opengl.org/discussion_boards/ubb/Forum3/HTML/009367.html

I find it disturbing that 3D proxy textures behave differently from 2D ones.
Even if this behaviour falls within the spec, I still feel it should be changed.

Here's what I'm reading now:
http://wscg.zcu.cz/wscg2006/Papers_2006/Journal/C23-full.pdf
 
I don't normally keep up with stuff, but why is that bad?
It's not bad at all, just very unusual. Most people would just go ahead and create a texture, then check for errors. That's why no-one saw a reason to keep proxy textures in OpenGL ES.

What you're seeing might be an integer overflow bug in the driver.
 
In OpenGL ES I can understand it, but personally, since I do some very bad things to my programs, I need to be able to handle things like out-of-memory errors.
I'm guessing a fix (assuming what you've said is occurring) is out of the question, assuming the integer calculations are done internally and not on the driver level?

 
Everything computer related is powers of 2, and 2^10 = 1024 is the basic unit for binary prefixes.
He is right, though, that ISPs and disc/disk manufacturers represent capacities and bandwidths as if 1 G# = 1 billion #. 200 GB hard drives are still only 200 billion bytes (~186 GB in binary terms). The 6 Mbit/s bandwidth from your ISP is typically only 6 million bits per second. And while DVDs and HD discs like Blu-ray all have capacities measured as 1 GB = 1 billion bytes, the exception is CDs, where 700 MB means 700 actual (binary) MB (and it's actually more like 703 if you have only one file).
 
He is right, though, that ISPs and disc/disk manufacturers represent capacities and bandwidths as if 1 G# = 1 billion #.
True.
But in the case of bandwidth, you could change "ISPs" to pretty much everybody. Only a few people who get overly excited in their "decimal" to "binary" prefix conversion would use the binary version for bandwidth. The reason is that you always use decimal prefixes for frequency, and the big prefix in a bandwidth figure usually comes from a high frequency. So it's usually much more convenient to quote bandwidths with decimal prefixes.

But I must agree with Xmas, it's better to use the explicitly binary prefixes instead.
 
In OpenGL ES I can understand it, but personally, since I do some very bad things to my programs, I need to be able to handle things like out-of-memory errors.
So instead of using a proxy texture and checking its state, you create a normal texture and check the GL error state. It's not really that different.
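The create-then-check approach looks roughly like this -- again a sketch only, assuming a current OpenGL context with glTexImage3D available:

```c
#include <GL/gl.h>

/* Sketch: instead of a proxy query, attempt the real allocation and
   inspect the GL error state. Assumes a current OpenGL context. */
int try_create_texture_3d(GLuint *tex, GLsizei w, GLsizei h, GLsizei d)
{
    while (glGetError() != GL_NO_ERROR) { }   /* drain stale errors */

    glGenTextures(1, tex);
    glBindTexture(GL_TEXTURE_3D, *tex);
    glTexImage3D(GL_TEXTURE_3D, 0, GL_RGBA8, w, h, d, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);

    if (glGetError() != GL_NO_ERROR) {        /* e.g. GL_OUT_OF_MEMORY */
        glDeleteTextures(1, tex);
        *tex = 0;
        return 0;
    }
    return 1;
}
```
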

I'm guessing a fix (assuming what you've said is occurring) is out of the question, assuming the integer calculations are done internally and not on the driver level?
What do you mean, "internally"? I'm talking about the OpenGL driver.
 
That works too.
I'm thinking there may be a performance difference?
Although I do think the driver will have to check whether the texture would fit anyway before sending it across the bus.

If the driver is having issues handling large numbers for things like proxies, I would hope that they use the host CPU and a larger type.

 
If the driver is having issues handling large numbers for things like proxies, I would hope that they use the host CPU and a larger type.
Err, the driver is running on the host CPU. And integer overflow is just a guess, it could go wrong for any number of reasons.
 