The layman's guide for laymen
FP16, FP24, and FP32 precision in the pixel (fragment) shaders has been pretty well covered by the 6800U reviews (and maybe the 3DMark05 reviews and B3D's TR:AoD screenshot analysis). The same principles apply to FP buffers versus the current FX8 buffers as they do to fragment precision: higher precision means fewer rounding errors, and as shaders get more complex, rounding errors become more prevalent (or at least easier to see). The greater range that typically goes hand in hand with greater precision (though perhaps not with FP10) is also important as it relates to FP buffers and HDR (High Dynamic Range): it allows for brighter whites and darker blacks without washing out intermediate colors.
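To make the precision point concrete, here's a minimal sketch (Python/numpy standing in for shader hardware, with a made-up per-texture contribution value) of how rounding error piles up faster in FP16 than FP32 when many small contributions are summed into one pixel, and how FP16's range lets HDR values above 1.0 survive where an 8-bit fixed buffer clamps:

```python
import numpy as np

# Hypothetical per-texture contribution; 20 textures summed into one pixel.
contrib = 0.0123
n = 20

acc16 = np.float16(0.0)
acc32 = np.float32(0.0)
for _ in range(n):
    # Round after every add, as a fixed-precision pipeline would.
    acc16 = np.float16(acc16 + np.float16(contrib))
    acc32 = np.float32(acc32 + np.float32(contrib))

exact = contrib * n
print(f"FP16 sum: {float(acc16):.6f}  error: {abs(float(acc16) - exact):.2e}")
print(f"FP32 sum: {float(acc32):.6f}  error: {abs(float(acc32) - exact):.2e}")

# Range matters too: an HDR intensity of 4.5 is representable in FP16,
# but an 8-bit fixed-point (FX8-style) buffer clamps it to 1.0.
hdr_value = 4.5
fx8_value = min(255, int(hdr_value * 255)) / 255
print(f"FP16 keeps {float(np.float16(hdr_value))}, FX8 clamps to {fx8_value}")
```

The FP16 error is orders of magnitude larger than the FP32 error after only 20 accumulations, which is why longer shaders make the precision gap visible on screen.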
As others have mentioned, just think of it the same way the 16-bit vs. Voodoo3 "22-bit" vs. 32-bit color argument played out in the days before 32-bit became universal and shaders took center stage. Higher precision costs more in terms of either transistors or performance, so initially there's a trade-off between better performance with lower precision/IQ and worse performance with higher precision/IQ. Then manufacturing advances make supporting the higher precision mundane, and we move on to the next advancement in GPU capability (and thus higher precision requirements to avoid artifacts).
Perfect, first post on a new page. I hope, in trying to save others some retyping, I'm not leading us too far astray.
Actually, this quote from JC sums it up quite succinctly:
We're moving from multi-pass textures to multi-pass shaders, and from low-range display buffers to high-range ones.

John Carmack said:
This wasn't much of an issue even a year ago, when we were happy to just cover the screen a couple times at a high frame rate, but real-time graphics is moving away from just "putting up wallpaper" to calculating complex illumination equations at each pixel. It is not at all unreasonable to consider having twenty textures contribute to the final value of a pixel. Range and precision matter.
Range and precision is what John Carmack wants. Range and precision is what the GeForceFX delivers.