High versus low colour precision shading in reality?

g__day

Who has seen examples of high versus low colour precision shading that has made you sit up and say - I don't care how - we simply have to have this effect - just figure out a way to do it! Is high versus low colour precision shading a must-have, or simply a marketing strategy to sell more product?
 
car1.jpg


Just look at the better reflections in the right part of the screen. Looks way better imho. No artifacts in the reflections. This is even more obvious when it's fullscreen of course.
 
High-precision processing is a must for certain applications. For example, any sort of texture addressing must be done with at least 24-bit FP accuracy for proper texturing (nVidia claims that 32-bit is required, but I'm not certain their claim has much application to games).
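
To make that concrete, here's a quick numerical sketch (plain Python/NumPy, nothing to do with any real shader hardware - the 2048-texel texture width is just an illustrative assumption). fp16 has a 10-bit mantissa, so normalized coordinates can't resolve every texel of a large texture, while fp24 (16-bit mantissa) and fp32 can:

Code:
import numpy as np

# Address a 2048-texel-wide texture with normalized u coordinates stored
# at different precisions (illustrative numbers only).
width = 2048
u = (np.arange(width) + 0.5) / width                       # one coordinate per texel centre

texel_fp32 = np.floor(u.astype(np.float32) * width).astype(int)
texel_fp16 = np.floor(u.astype(np.float16).astype(np.float32) * width).astype(int)

# fp32 (and fp24, with its 16-bit mantissa) resolves every texel centre;
# fp16's 10-bit mantissa collapses neighbouring texels in the upper half
# of the coordinate range.
print("distinct texels, fp32:", len(set(texel_fp32)))      # 2048
print("distinct texels, fp16:", len(set(texel_fp16)))      # noticeably fewer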

I would tend to expect, however, that the vast majority of color operations will not benefit from anything higher than 16-bit FP accuracy. That said, it will all depend on the situation.

But I will say that there will be a marked difference between 16-bit FP and 12-bit integer (they have roughly the same mantissa accuracy, but 16-bit FP has a far higher dynamic range, meaning it can properly represent both darker and brighter colors). There is a simple situation where 12-bit int falls far short of all of the FP color formats: specular lighting. If you want shiny surfaces to look right, they'd better be calculated in floating-point color.
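
Here's a rough sketch of what I mean, again just NumPy with made-up numbers (the specular exponent of 60 and the 12-bit [0,1] fixed-point format are assumptions for illustration):

Code:
import numpy as np

# Specular falloff: spec = (N.H)^60, stored in 12-bit fixed point [0,1]
# versus fp16 (illustrative numbers only).
n_dot_h = np.linspace(0.80, 1.00, 256)
spec = np.power(n_dot_h, 60.0)                        # smooth highlight falloff

spec_int12 = np.round(spec * 4095.0) / 4095.0         # quantize to 12-bit steps
spec_fp16  = np.float16(spec)                         # keep floating point

# In the dim tail of the highlight the fixed-point result collapses to a
# handful of coarse steps (banded rings on screen); fp16 stays smooth
# because its exponent gives extra resolution for small values.
tail = spec < 0.001
print("distinct tail values, int12:", len(np.unique(spec_int12[tail])))
print("distinct tail values, fp16 :", len(np.unique(spec_fp16[tail])))

# The other half of the story is overbright lighting: 4.0 * spec simply
# clamps in a [0,1] fixed-point format, but is representable in fp16.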

Regardless, these aren't the old days of 16-bit vs. 32-bit color. Back then it was obvious: when too little color depth was available, there was dithering or banding. Today, simply because so many other types of values can be stored and calculated on the video card, the differences won't be so cut and dried. The exact nature of the differences will vary with the shader, and therefore won't always be as easy to spot (though they are easy to see if you know what to look for).

Update:
One last thing. I still think nVidia was doing the right thing by including different precisions in their NV3x line. However, their implementations definitely do have flaws. Fortunately, the primary flaw (lots of 12-bit int calcs required for optimal performance) appears to have been solved for the NV35 (and presumably higher) chips.

By offering multiple precisions, game developers won't need to use the highest precision all the time. They can drop down to lower precisions for dramatic performance improvements when the extra precision isn't needed (and, I believe, it often will not be needed).
 
By the way, the pic that Hiostu put up illustrates what can be obtained from using higher-quality source textures (specifically the bump map, in this case). Bump maps really need much more than 8 bits per component (I think the one on the right uses 16 bits per component, but I'm not certain). You can see the difference in the fact that the image on the left makes the car look almost as if it were made up of lots of little facets, rather than the smoothly-curving hood on the right.
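
A quick back-of-the-envelope sketch of why (NumPy again, and the numbers - a shallow 2D curve and an arbitrary light direction - are just assumptions for illustration):

Code:
import numpy as np

# Quantize the normals of a gently curving surface to 8 and 16 bits per
# component, then count how many distinct diffuse shading values survive.
x = np.linspace(-0.05, 0.05, 512)                         # a very shallow slope range
normals = np.stack([x, np.sqrt(1.0 - x * x)], axis=1)     # unit 2D normals

def quantize(n, bits):
    # Store components in [-1, 1] with the given bit depth, like a
    # signed normal-map texture would.
    levels = 2 ** bits - 1
    return np.round((n * 0.5 + 0.5) * levels) / levels * 2.0 - 1.0

light = np.array([0.3, 0.954])                            # roughly unit-length light direction
for bits in (8, 16):
    shading = quantize(normals, bits) @ light             # N.L diffuse term
    print(bits, "bits ->", len(np.unique(np.round(shading, 6))), "distinct N.L values")

# 8 bits collapses the curve into a handful of flat facets; 16 bits keeps
# it smooth.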
 
In the presentations from ATI regarding their demos, they mentioned that even 11 or 12 bit wasn't enough (though it would obviously be better than 8 bit). So it must be 16-bit integer or 32-bit float. I think it's the former.

I think we'll really start needing higher precision once we start doing vertex calculations in the pixel shader, such as with PS/VS 3.0 or with the proposed render-to-vertex-buffer / uber buffer ideas that are in the works. If you started doing physics with the pixel shader, 16-bit float could very easily run into problems.
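
For example (a toy sketch in NumPy, with an arbitrary position of 100 units and a 60 Hz timestep assumed purely for illustration):

Code:
import numpy as np

# Integrate a position over many small timesteps in fp16 versus fp32.
pos16 = np.float16(100.0)
pos32 = np.float32(100.0)
velocity, dt, steps = 1.0, 1.0 / 60.0, 600                # ten seconds at 60 Hz

for _ in range(steps):
    pos16 = np.float16(pos16 + np.float16(velocity * dt))
    pos32 = np.float32(pos32 + np.float32(velocity * dt))

# One fp16 step near 100.0 is 0.0625, larger than the ~0.017 increment,
# so the fp16 position never moves at all.
print("fp16 position:", float(pos16))                     # stays at 100.0
print("fp32 position:", float(pos32))                     # close to the expected 110.0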

Chalnoth is right in that the way games are using pixel shaders right now (and even in the short term future), precision is not much of an issue once you get to 16-bit.
 
Mintmaster said:
In the presentations from ATI regarding their demos, they mentioned that even 11 or 12 bit wasn't enough (though it would obviously be better than 8 bit). So it must be 16-bit integer or 32-bit float. I think it's the former.
By the way, there is no 12-bit storage format. I'm pretty sure the high-detail bumps shown were in 16-bit integer.

On the R300, all pixel shader calcs are done at 24-bit FP (Well, some may be done at higher precision, but the intermediate values are always stored at 24-bit), so it's the storage format that is important here.

I'm not entirely certain how well 16-bit FP would compare to 16-bit integer in those shots, but I can say that 12-bit int calculations should be enough for those bump map images. That is, 12-bit ints would, with proper programming, make available sixteen times as much accuracy as 8-bit. I'd say that'd be sharp enough for those bump maps, though if any overbright lighting were used on the reflections, a floating-point format would be necessary for that lighting. (To make proper use of the 12-bit int format you would still need a 16-bit bump map, since, as I said, there is no 12-bit storage format.)
 
Any scene that has a large contrast ratio will need it, such as a scene with both very bright and very dimly lit areas, unless of course you like your shadows to show colour banding gradients or to be completely black.
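
A tiny sketch of the shadow case (NumPy, with a made-up shadow gradient at around 1/1000th of full brightness):

Code:
import numpy as np

# A smooth gradient deep inside a shadow, stored in low-bit integer
# formats versus fp16 (illustrative numbers only).
shadow = np.linspace(0.0005, 0.002, 200)

for bits in (8, 12):
    q = np.round(shadow * (2 ** bits - 1)) / (2 ** bits - 1)
    print(f"distinct shadow values, {bits:2d}-bit int:", len(np.unique(q)))

# The integer formats leave only a handful of flat bands (or plain black)
# across the whole gradient; fp16's exponent keeps it intact.
print("distinct shadow values, fp16      :", len(np.unique(np.float16(shadow))))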
 
Interestingly enough, colour interpolation is limited to 12 bits (per channel, presumably) on the R3xx series.
 
Does the left use int and the right 24-bit FP?
If that is the case, I would rather see whether there would be any difference with 16-bit FP and/or 32-bit FP.
 
Ostsol said:
Interestingly enough, colour interpolation is limited to 12 bits (per channel, presumably) on the R3xx series.
Which would make sense for a texture sampler. There wouldn't be much point in higher precision there, since filtering of higher-precision textures is not supported yet.
 
overclocked said:
Does the left use int and the right 24-bit FP?
If that is the case, I would rather see whether there would be any difference with 16-bit FP and/or 32-bit FP.
Remember that the R3xx doesn't have any option for selecting different processing precisions. The images above are all about the source texture. I'm pretty certain that the difference is 8-bit int vs. 16-bit int in the storage format (for the normal map).
 
Chalnoth said:
Ostsol said:
Interestingly enough, colour interpolation is limited to 12 bits (per channel, presumably) on the R3xx series.
Which would make sense for a texture sampler. There wouldn't be much point in higher precision there, since filtering of higher-precision textures is not supported yet.
Colour interpolation, not texture filtering; the interpolation between the colours assigned to vertices. Of course, a higher precision can easily be forced by sending the colour to the fragment shader as a texture coordinate.
Chalnoth said:
overclocked said:
Does the left use int and the right 24-bit FP?
If that is the case, I would rather see whether there would be any difference with 16-bit FP and/or 32-bit FP.
Remember that the R3xx doesn't have any option for selecting different processing precisions. The images above are all about the source texture. I'm pretty certain that the difference is 8-bit int vs. 16-bit int in the storage format (for the normal map).
AFAIK, there's no such thing as a 16-bit integer format for textures. The high precision textures (16 and 32 bit) are all floating point formats.
 
Ostsol said:
AFAIK, there's no such thing as a 16-bit integer format for textures. The high precision textures (16 and 32 bit) are all floating point formats.

Well, you are quite wrong:

D3DFMT_A16B16G16R16 (36): 64-bit pixel format using 16 bits for each component.

vs.

D3DFMT_A16B16G16R16F (113): 64-bit float format using 16 bits for each channel (alpha, blue, green, red).
 
Chalnoth said:
Ostsol said:
Interestingly enough, colour interpolation is limited to 12 bits (per channel, presumably) on the R3xx series.
Which would make sense for a texture sampler. There wouldn't be much point in higher precision there, since filtering of higher-precision textures is not supported yet.

16-bit fixed-point textures can be filtered on the R300.
There's a penalty (twice the cycles, i.e. 2 cycles for simple bilinear), but it can be used.
 
Colourless said:
Ostsol said:
AFAIK, there's no such thing as a 16-bit integer format for textures. The high precision textures (16 and 32 bit) are all floating point formats.

Well, you are quite wrong:

D3DFMT_A16B16G16R16 (36): 64-bit pixel format using 16 bits for each component.

vs.

D3DFMT_A16B16G16R16F (113): 64-bit float format using 16 bits for each channel (alpha, blue, green, red).
Ah... point conceded! I was looking in the wrong place...
 