Basic said:
OpenGL guy was quite clear: it's a 24-bit floating point format, not 32-bit.
Well, ATI's own educational material on the 9700 specifies 32 bits for the vertex engine. They call the rendering pipes "128-bit", but when they get down to the nitty-gritty of it they specify 96 bits per pixel, which I would have taken as 32 bits per RGB component.
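As a quick sanity check on those figures (assuming the bits divide evenly across colour components, and that the "128-bit" label is just the same per-component width with an alpha channel added - neither assumption confirmed by ATI's material):

    # Sanity check on the quoted figures. Assumptions: bits split evenly
    # across colour components; the "128-bit" pipe figure is the same
    # per-component width with an alpha channel added.
    bits_per_pixel = 96
    rgb_components = 3
    print(bits_per_pixel // rgb_components)  # -> 32 bits per R, G and B

    rgba_components = 4
    print(rgba_components * 32)              # -> 128, matching the "128-bit" label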
Furthermore, there is that IEEE reference, although I admit that it could have referred to the vertex engine alone. My mind hasn't kept a hammerlock on that.
And of course there is always that ambiguity when talking about the precision of a floating point number - do you specify the size of the full floating point format, or of the mantissa alone, since the mantissa is the only part that matters as far as accuracy and error propagation go? I myself and at least one other (I fail to remember who) have specifically asked for information on how the bits are actually allocated - there was never any answer.
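To make that point concrete, here is a minimal sketch of why the mantissa width alone sets the relative accuracy. Only the fp32 row matches a published standard (IEEE 754 single precision); the two fp24 splits are assumptions for illustration, since the actual allocation was never given:

    # With an implicit leading 1, a stored m-bit mantissa gives m+1
    # significant bits and a unit roundoff of 2^-(m+1). The fp24
    # splits below are hypothetical - the real allocation is unknown.
    formats = {
        "fp32 (IEEE single, 1s/8e/23m)": 23,
        "fp24 (assumed 1s/7e/16m)": 16,
        "fp24 (assumed 1s/8e/15m)": 15,
    }

    for name, mantissa_bits in formats.items():
        u = 2.0 ** -(mantissa_bits + 1)
        print(f"{name}: {mantissa_bits + 1} significant bits, "
              f"relative error per rounding ~ {u:.1e}")

Note that the two assumed fp24 splits differ in range (exponent bits) but only by a factor of two in relative error, which is exactly why the total bit count by itself tells you so little.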
So if an anonymous source on a BBS such as this says "it's 24 bits" without going further into it, how should that be interpreted? Is it even necessarily accurate, regardless of good intentions? For my own part, I don't depend on it either way, nor can I see that it makes any practical difference, although it's always possible to construct examples to the contrary. But if I actually _needed_ to know, anecdotal evidence just wouldn't cut it.
And yes, I'm a bit miffed that the actual floating point format was never given.
But the reason could simply have been that nobody actually knew exactly. And if that was the case...
Entropy