Is 24bit FP a recognized standard?

This thread seems silly. If you use 4096-bit floating point precision in your video cards, is THAT a "real standard?" If it's not, does that make it any less tasty?
 
The reason I asked this is a statement NVIDIA made at their Editors' Day this past October: they said 24bit FP is not a recognized standard.

The way I'm interpreting that is that it’s not a recognized standard by the IEEE.

I just didn't know if it was a recognized standard anywhere else though, like whether any of the high-end movie/cinematic rendering companies use it or anything.

Sorry for the confusion, I probably didn't word the question correctly :)
 
It doesn't have to be recognized by the IEEE to be a recognized standard of one particular section of the industry and the devices pertaining to that section, of course.

Meanwhile, there are mentions and uses of 24 bit (and 40 bit, and other "strange-looking" values) peppered all over the tech industry, and the IEEE itself. Using the word "standard" can be a bit strange in and of itself, since it brings market penetration and similar factors into play; many times "standards" are headbutting all the way until one comes to dominate. And many times devices will use different bit values in different areas, so which ones become "most crucial"?

As for nVidia's comments, it seems a bit strange to complain when we haven't remotely seen the full extent of 24-bit yet, their higher-precision implementation comes at a massive performance cost, and the entire industry in which they are releasing those products has adopted 24 bits as a "standard" if anything is, and it was a known factor from long before.
 
Back in the day when we all thought that 1MB on your videocard was pretty rocking and (if I recall correctly) Matrox Millenniums ruled the "high performance" world, 24-bit color was called "True Color" for 2D (16-bit was called "High Color"). Then, and I never quite caught why, it got promoted to 32-bit color, though I think there was a time (maybe still?) when the 24-bit was still the output and the 32-bit was the internal palette.

Or something like that.
 
geo said:
Back in the day when we all thought that 1MB on your videocard was pretty rocking and (if I recall correctly) Matrox Millenniums ruled the "high performance" world, 24-bit color was called "True Color" for 2D (16-bit was called "High Color"). Then, and I never quite caught why, it got promoted to 32-bit color, though I think there was a time (maybe still?) when the 24-bit was still the output and the 32-bit was the internal palette.

Or something like that.

Oh. Scanners went through the same progression, and are up to 48bit now. I would imagine 48bit would be subject to the same criticisms as 24bit. The scanner people have also used 36bit and 42bit. I would think the scanner industry people would be a particularly good comparison for the vid card industry people, as it is much the same considerations (better image quality) that are driving the ratcheting up of the number, and clearly the scanner people have decided that the fractional increase in bits is more important than the "power of 2" fetish/aesthetic/efficiency arguments.

EDIT: More on scanners.
 
If nVidia had their way they would do away with D3D and Microsoft altogether and be their own standard, which they could then gleefully beat everyone into submission or oblivion with.

It is fortunate for the entire game-playing world that M$ is as big and powerful as they are, or nVidia would "proprietary-up" so big that Glide would look like a kid's after-school project.
 
24bit floating point is the standard minimum requirement for DirectX 9 hardware compliance.

IEEE-32 (32bit floating point) is another standard.

There are many recognized "standards". It depends on where you stand.

Brent, what "standard" are you looking for to paste onto your reviews and articles?
 
Reverend said:
24bit floating point is the standard minimum requirement for DirectX 9 hardware compliance.

IEEE-32 (32bit floating point) is another standard.

There are many recognized "standards". It depends on where you stand.

Brent, what "standard" are you looking for to paste onto your reviews and articles?

I wasn't looking for anything to paste onto reviews, just for my own knowledge. I was just curious if 24bit FP in video cards was a recognized standard after hearing what NVIDIA said about it.

I think Ostsol answered it for me

Well, the IEEE doesn't have a specification for it...

But from what I understand it's the minimum spec for full precision in DX9, right?

And FP16 is partial precision?
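
If it helps to put numbers on that, here's a rough sketch comparing the three formats by their bit split. The layouts are my own assumption from the commonly quoted figures (FP16 as 1 sign / 5 exponent / 10 mantissa per DX9 partial precision, ATI's FP24 as 1/7/16, IEEE-754 single as 1/8/23), so treat it as an illustration rather than gospel:

    #include <math.h>
    #include <stdio.h>

    /* Rough comparison of shader float formats by exponent/mantissa split.
       The FP24 split (1/7/16) is the figure usually quoted for ATI's R3xx
       parts, not something the IEEE defines. */
    struct fmt { const char *name; int sign, exponent, mantissa; };

    int main(void)
    {
        const struct fmt fmts[] = {
            { "FP16 (DX9 partial precision)",      1, 5, 10 },
            { "FP24 (DX9 full-precision minimum)", 1, 7, 16 },
            { "FP32 (IEEE-754 single)",            1, 8, 23 },
        };

        for (int i = 0; i < 3; i++) {
            /* relative precision is roughly 2^-mantissa_bits */
            printf("%-36s  %d+%d+%d bits, ~1 part in %.0f\n",
                   fmts[i].name, fmts[i].sign, fmts[i].exponent, fmts[i].mantissa,
                   pow(2.0, fmts[i].mantissa));
        }
        return 0;
    }

By that reckoning FP24 gives you about 64 times the relative precision of FP16, and it's no more or less of an "IEEE standard" than FP16 is.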
 
Well, the IEEE has a standard for 32-bit floating point (IEEE 754). However, I seriously doubt nVidia implements this standard.

It dictates support for a number of special cases that have zero use in a pixel shader, such as a value of infinity (well, you _might_ want this one), denormalized values, and NaN (Not a Number). That, and different rounding modes.

I would imagine that the only part of IEEE-754 that NV uses is the layout of the bits in a number, which incidentally is what Microsoft dictates as the storage format (16- and 32-bit FP framebuffer support).

Cheers
Gubbi
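
Just to illustrate what Gubbi means by the special cases, here's a little C sketch (purely an illustration of the IEEE-754 single layout, nothing to do with what NV's hardware actually does) that pulls a float apart into sign/exponent/mantissa and names the oddball encodings a pixel shader rarely cares about:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* IEEE-754 single: 1 sign bit, 8 exponent bits, 23 mantissa bits.
       An all-ones exponent encodes infinity/NaN, an all-zeros exponent
       encodes zero/denormals -- the special cases Gubbi is talking about. */
    static void classify(float f)
    {
        uint32_t bits;
        memcpy(&bits, &f, sizeof bits);   /* reinterpret the bit pattern */

        uint32_t sign = bits >> 31;
        uint32_t exp  = (bits >> 23) & 0xFFu;
        uint32_t mant = bits & 0x7FFFFFu;

        const char *kind = "normal";
        if (exp == 0xFFu)   kind = mant ? "NaN" : "infinity";
        else if (exp == 0u) kind = mant ? "denormal" : "zero";

        printf("%-14g sign=%u exp=%3u mantissa=0x%06X -> %s\n",
               (double)f, sign, exp, mant, kind);
    }

    int main(void)
    {
        float zero = 0.0f;        /* keep the compiler from folding the divisions */
        classify(1.0f);
        classify(zero);
        classify(1e-45f);         /* a denormal */
        classify(1.0f / zero);    /* +infinity */
        classify(zero / zero);    /* NaN */
        return 0;
    }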
 
Brent said:
I wasn't looking for anything to paste onto reviews, just for my own knowledge. I was just curious if 24bit FP in video cards was a recognized standard after hearing what NVIDIA said about it.

So what NVidia is saying is basically, "ATI's 24-bit FP might work fine without any problems, but because it isn't an IEEE standard it's evil".

Perhaps ATI should start saying, "NVidia's 16-bit FP might work fine, but because it isn't an IEEE standard it's evil"?

This original statement from NVidia about 24-bit FP not being a "standard" is the worst type of FUD, as it seeks to cast doubt on the capabilities of ATI cards where there is none. 16-bit FP is not an IEEE standard either, so why didn't they mention this as well?
 