Grue said:
Not in all cases--16-bit half-precision floats are sufficient in many cases--but there will be some where they aren't . . . Consider dependent texture lookups.
I still think they'll be enough for color data, though. This is provided, of course, that there are methods of controlling accumulation errors.
Suppose that you have a 4096x1 texture that you're using as a 1-D lookup table. With a 16-bit float, there are only 10 bits for the mantissa, so you can't even address every texel after 1024. Bump up the mantissa to 16 bits. Now you can address all texels provided your texture coordinate doesn't exceed 16 (16 * 4096 = 65536 or 2^16, above which not all integers are representable exactly.)
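A quick check bears that out (ordinary Python here, with NumPy's float16 standing in for the GPU half format, and the texture coordinate normalized to [0, 1]):

    import numpy as np

    TEX_SIZE = 4096
    # Normalized coordinate of every texel centre, as it would be fed to a sampler.
    centres = (np.arange(TEX_SIZE) + 0.5) / TEX_SIZE

    # Round-trip the coordinates through 16-bit floats, then turn them back into
    # texel indices the way a nearest-neighbour lookup would.
    half = centres.astype(np.float16).astype(np.float64)
    indices = np.clip((half * TEX_SIZE).astype(int), 0, TEX_SIZE - 1)

    reachable = len(np.unique(indices))
    print(f"texels addressable through float16 coordinates: {reachable} of {TEX_SIZE}")

Between 0.5 and 1.0 the representable float16 values are spaced 2^-11 apart, twice the texel width of 2^-12, so a big chunk of the upper texels simply can't be hit.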
Would you actually want to use just one component as a lookup into a 2D texture? Yes, you've certainly shown that 16 bits per channel is not enough for textures larger than 1024 texels (and usually you can expect some error in the tail end of the mantissa... so it probably wouldn't be acceptable for anything much past a 256-texel texture).
I would think that for 2D or 3D texture lookups, you'd want to use the additional channels (i.e. use two of the four available floats for a lookup into a 2D texture...and it may be possible to just use 16-bit floats, with two combined for one lookup for additional accuracy).
Anyway, this should make 24-bit floats adequate for lookup tables into 2D or 3D textures, provided you use the different channels for different dimensions (and don't attempt to just use one channel for the entire lookup).
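Here's a rough sketch of that idea in plain Python (the 256x256 folding, the helper names, and the numbers are just made up for illustration): a 65536-entry table folded into a 256x256 texture only ever asks each coordinate channel to resolve 256 texels, which is comfortably within half-float precision.

    import numpy as np

    TABLE_SIZE = 65536        # logical 1-D lookup table
    SIDE = 256                # folded into a SIDE x SIDE 2-D texture

    def split_index(i):
        """Split a flat table index into one texel coordinate per channel."""
        u = (i % SIDE + 0.5) / SIDE       # fine part   -> one channel
        v = (i // SIDE + 0.5) / SIDE      # coarse part -> another channel
        return np.float16(u), np.float16(v)

    def lookup_index(u, v):
        """Recover the flat index from the two half-precision coordinates."""
        return int(np.float64(v) * SIDE) * SIDE + int(np.float64(u) * SIDE)

    # Every one of the 65536 entries survives the round trip through float16,
    # which a single half-precision coordinate could never manage on its own.
    ok = all(lookup_index(*split_index(i)) == i for i in range(TABLE_SIZE))
    print("all indices recoverable:", ok)

The same reasoning is why per-dimension channels work for genuinely 2-D or 3-D tables: each coordinate only has to resolve the texture's side length, never the full texel count.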
Or, consider a simple pixel shader that takes the x coordinate of the pixel in world space and computes the alpha as sin(x) . . . You'd like this to work without obvious artifacts from roundoff over a 'reasonable' range of world space coordinates. 16 bits and even 24 bits won't get you there.
I'm not entirely sure why this would be much of a problem, as long as the algorithm for computing the sine was accurate enough.
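Though I suppose the roundoff Grue means could be in x itself rather than in the sine. A quick check of how far sin(x) moves purely because x gets rounded on the way in (plain Python again, NumPy's float16 standing in for the half format):

    import numpy as np

    # World-space x over a fairly modest range, densely sampled.
    x = np.linspace(0.0, 1000.0, 200001)

    exact = np.sin(x)
    # The same sine, but fed an x that was first stored as a 16-bit float.
    stored = np.sin(x.astype(np.float16).astype(np.float64))

    err = np.abs(exact - stored).max()
    print(f"worst sin(x) error from 16-bit storage of x: {err:.3f}")
    print(f"that's roughly {err * 255:.0f} steps of an 8-bit alpha channel")

Near x = 1000 a half-precision float is only good to about +/-0.25, so even a perfect sine routine can end up dozens of 8-bit steps off. A 24-bit float with a 16-bit mantissa pushes that problem out to much larger coordinates rather than removing it, so whether it matters depends on how big the world gets.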
The list of examples goes on, but the idea is that there are a lot more places than you think where a lack of numerical precision will bite you, even when the end result is an 8-bit color component.
--Grue
Well, I do know bump mapping is one example where 8-bit integer values are far from adequate...but I don't see why 24-bit floats would be inadequate.
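For illustration, here's roughly what that looks like (Python; a sharp specular highlight driven by a normal whose components are stored as 8-bit values, no renormalization, with the exponent and angles invented for the sketch):

    import numpy as np

    # Normals tilted away from straight up by 0 to 10 degrees.
    theta = np.radians(np.linspace(0.0, 10.0, 2001))
    nz = np.cos(theta)                                 # the component driving the highlight
    nz_8bit = np.round((nz * 0.5 + 0.5) * 255.0) / 255.0 * 2.0 - 1.0   # stored in 8 bits

    # Sharp specular term against a half-vector pointing straight up.
    spec_exact = np.clip(nz, 0.0, 1.0) ** 64
    spec_8bit = np.clip(nz_8bit, 0.0, 1.0) ** 64

    worst = np.abs(spec_exact - spec_8bit).max() * 255.0
    print(f"worst specular error from 8-bit normals: about {worst:.0f} color steps")

The quantized normal throws the highlight off by tens of color steps in places, which is exactly the banding that makes 8 bits unusable there; the question is how much extra precision makes it go away.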
I also know that if you do a few recursive operations on a modern CPU, 32-bit floats, when expressed as decimals, will usually end up with errors from about the fourth or fifth decimal place on, a few bits short of what the mantissa size alone would seem to indicate.
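A toy version of that drift (plain Python; rotating a point by one degree over and over, the sort of multiply-add chain a shader runs constantly):

    import numpy as np

    def spin(dtype, steps=3600):
        """Rotate the point (1, 0) by one degree `steps` times at the given
        precision.  3600 steps is ten full turns, so it should land back on (1, 0)."""
        c = dtype(np.cos(np.pi / 180.0))
        s = dtype(np.sin(np.pi / 180.0))
        x, y = dtype(1.0), dtype(0.0)
        for _ in range(steps):
            x, y = c * x - s * y, s * x + c * y
        return float(x), float(y)

    print("float32:", spin(np.float32))   # visibly drifts off (1, 0)
    print("float64:", spin(np.float64))   # still (1, 0) to far more digits

Every individual step is 'accurate enough' on its own; it's the accumulation that eats the low digits.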
But there's little to no reason why this must be the case for GPUs. GPUs should use higher-precision internal calculation, as well as algorithms that keep errors centered around zero rather than always accumulating in one direction. It's certain that today's GPUs already do similar things; otherwise there would be a noticeable loss in color depth when enabling trilinear filtering, anisotropic filtering, and FSAA.
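That's easy to mock up. Here's a crude model (Python; the 64-sample blend and all the numbers are invented) of a long filtering chain that stores every intermediate result at the final 8-bit precision, versus one that accumulates in something wider and rounds once at the end:

    import numpy as np

    rng = np.random.default_rng(1)
    # 256 pixels along a smooth gradient; each final pixel blends 64 "subsamples",
    # roughly what trilinear + anisotropic filtering + FSAA can add up to.
    base = np.linspace(0.2, 0.8, 256)
    subsamples = base[:, None] + rng.uniform(-0.05, 0.05, (256, 64))

    def to8(x):
        """Store a value in an 8-bit channel (round to 1/255 steps)."""
        return np.round(np.clip(x, 0.0, 1.0) * 255.0) / 255.0

    # (a) keep the running average in an 8-bit register after every sample
    avg8 = to8(subsamples[:, 0])
    for i in range(1, 64):
        avg8 = to8(avg8 + (subsamples[:, i] - avg8) / (i + 1))

    # (b) accumulate in a wide float and round to 8 bits only once, at the end
    exact = subsamples.mean(axis=1)
    wide = to8(exact)

    err_a = np.abs(avg8 - exact).max() * 255.0
    err_b = np.abs(wide - exact).max() * 255.0
    print(f"8-bit intermediates: up to {err_a:.1f} color steps off")
    print(f"wide accumulator:    up to {err_b:.1f} color steps off")

Rounding every intermediate to 8 bits leaves the result a few color steps off in places, enough to band a smooth gradient, while the wide accumulator stays within half a step of the true average. That kind of internal headroom is presumably exactly what the hardware is doing.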