There's a really good article at Digit-life about PS 2.0 that actually explains it in a way that a thicky like me can understand!
I'm actually starting to "get" the 16/24/32 FP thing now. GREAT article... a very clear explanation. (Although I'm still pondering the hexadecimal stuff; I got a bit lost there.)
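For anyone else puzzling over the hexadecimal bit: as far as I can tell, the article's tables are just printing the raw bit pattern of each shader output as hex. Here's a tiny C sketch of my own (not from the article) that does the same thing for a single 32-bit float, which is what finally made it click for me:

#include <stdio.h>
#include <string.h>
#include <stdint.h>

int main(void)
{
    float f = 0.1f;                      /* a value that can't be stored exactly in binary */
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);      /* reinterpret the float's raw bits as an integer */
    printf("%f -> 0x%08X\n", f, bits);   /* prints: 0.100000 -> 0x3DCCCCCD */
    return 0;
}

So when the article compares hex dumps from different chips, it's really comparing how many of those bits survive the calculation.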
One thing that I'd like to understand a little better is if I'm reading this bit right:
It's well known that ATi chips use 24-bit floating-point numbers internally in the R300 core, and this precision is not influenced by the partial precision modifier. But it's interesting that NVIDIA uses 16-bit floating-point numbers irrespective of the operation precision requested (!), even though the partial precision term was introduced at NVIDIA's request, the NV3x GPUs support 32-bit floating-point precision under the OpenGL NV_fragment_program extension, and NVIDIA advertised their new-generation video chips as capable of TRUE 32-bit floating-point rendering!
The NV35 demonstrates the most varied and the most correct behavior among NVIDIA's video chips. We can see that calculations are carried out with 32-bit precision in the standard mode, in line with the Microsoft specifications, but when partial precision is indicated, temporary and constant registers use 16-bit precision while texture registers use 32-bit precision, although according to the Microsoft specification texture registers could also use 16-bit precision.
Note that the NV3x results were obtained with WHQL-certified drivers, and it's a pity that Microsoft does not keep control over the implementation of its own DirectX specifications. Also note that the 16-bit floating-point format used by NVIDIA is identical to the one suggested by John Carmack in 2000.
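If I've read that right, the precision loss the article is measuring comes down to how many mantissa bits each format keeps. Here's a rough C sketch I put together (my own, not from the article; it handles normal numbers only and skips NaN/infinity/denormal cases and proper rounding) that packs a 32-bit float into the s10e5 half format the quote mentions and unpacks it again, so you can see what gets thrown away:

#include <stdio.h>
#include <string.h>
#include <stdint.h>

/* Pack a 32-bit float into the s10e5 half format (1 sign, 5 exponent,
 * 10 mantissa bits). Simplified: normals only, mantissa truncated. */
static uint16_t float_to_half(float f)
{
    uint32_t x;
    memcpy(&x, &f, sizeof x);
    uint16_t sign = (x >> 16) & 0x8000;
    int32_t  exp  = (int32_t)((x >> 23) & 0xFF) - 127 + 15; /* rebias 8-bit exponent to 5-bit */
    uint16_t man  = (x >> 13) & 0x03FF;                     /* keep only the top 10 mantissa bits */
    if (exp <= 0)  return sign;                              /* underflow -> signed zero */
    if (exp >= 31) return sign | 0x7C00;                     /* overflow  -> infinity   */
    return sign | (uint16_t)(exp << 10) | man;
}

/* Unpack s10e5 back to a 32-bit float (normals only). */
static float half_to_float(uint16_t h)
{
    uint32_t sign = (uint32_t)(h & 0x8000) << 16;
    uint32_t exp  = (h >> 10) & 0x1F;
    uint32_t man  = h & 0x03FF;
    uint32_t x    = sign | ((exp - 15 + 127) << 23) | (man << 13);
    float f;
    memcpy(&f, &x, sizeof f);
    return f;
}

int main(void)
{
    float a = 0.1f;
    float b = half_to_float(float_to_half(a));
    printf("fp32: %.7f   after fp16 round trip: %.7f\n", a, b);
    /* prints roughly: fp32: 0.1000000   after fp16 round trip: 0.0999756 */
    return 0;
}

That 0.0999756-style error is, I think, the sort of thing showing up in the article's hex dumps when temporary registers get run at 16 bit even though higher precision was asked for.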
Does that mean that the latest WHQL-certified nVidia drivers aren't actually WHQL compliant, or am I missing something?
(My thanks to Lucien1964 for his post over at nVnews that brought this to my attention.)
EDITED BITS: Formatting error.