Pixel Shader 2.0 article at Digit-life

digitalwanderer

There's a really good article at Digit-life about PS 2.0 that actually explains it in a way that a thicky like me can understand!

I'm actually starting to "get" the 16/24/32FP thing now, GREAT article....a very clear explanation. (Although I'm still pondering the hexadecimal stuff, I got a bit lost there. :rolleyes: )
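
For anyone else trying to keep the formats straight, here's my own back-of-the-envelope summary as a tiny C program. The sign/exponent/mantissa widths are the standard sMeE breakdowns for each format; the epsilon and max figures are just 2^-mantissa_bits and roughly 2^(exponent range), so treat it as a rough sketch, not gospel:

/* Rough comparison of the three shader float formats being argued about.
 * The bit widths are standard; everything else is back-of-envelope math.
 *   FP16 (NVIDIA partial precision): 1 sign + 5 exponent + 10 mantissa (s10e5)
 *   FP24 (ATI R300 pixel shader):    1 sign + 7 exponent + 16 mantissa (s16e7)
 *   FP32 (IEEE single precision):    1 sign + 8 exponent + 23 mantissa (s23e8)
 */
#include <math.h>
#include <stdio.h>

int main(void)
{
    const struct { const char *name; int mant, exp; } fmt[] = {
        { "FP16 (s10e5)", 10, 5 },
        { "FP24 (s16e7)", 16, 7 },
        { "FP32 (s23e8)", 23, 8 },
    };
    for (int i = 0; i < 3; i++) {
        /* Smallest step distinguishable from 1.0 is 2^-mantissa_bits;
         * the largest finite value is roughly 2^(2^(exp_bits - 1)). */
        printf("%s: epsilon ~ %g, max ~ 2^%d\n",
               fmt[i].name, pow(2.0, -fmt[i].mant), 1 << (fmt[i].exp - 1));
    }
    return 0;
}

That epsilon line is why FP16 banding shows up so quickly in long shaders: you only get about three decimal digits to play with, versus roughly five for FP24 and seven for FP32.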

One thing that I'd like to understand a little better is if I'm reading this bit right:
It's well known that ATi chips use 24-bit floating-point numbers internally in the R300 core, and this precision is not influenced by the partial precision modifier. But it's interesting that NVIDIA uses 16-bit floating-point numbers irrespective of the precision requested(!), even though the partial precision term was introduced at NVIDIA's request, NV3x GPUs support 32-bit floating-point precision under the OpenGL NV_fragment_program extension, and NVIDIA advertised their new-generation video chips as capable of TRUE 32-bit floating-point rendering!

The NV35 demonstrates the most varied, and the most correct, behavior among NVIDIA's video chips. We can see that calculations are performed with 32-bit precision in the standard mode, in line with the Microsoft specifications, but when partial precision is requested, temporary and constant registers use 16-bit precision and texture registers use 32-bit precision, though according to the Microsoft specification texture registers may also use 16-bit precision.

Note that the NV3x results were obtained with WHQL-certified drivers, and I'm very sorry that Microsoft does not keep control over the implementation of its own DirectX specifications. Also note that the 16-bit floating-point format used by NVIDIA is identical to the one suggested by John Carmack in 2000.

Does that mean that the latest WHQL compliant nVidia drivers aren't WHQL compliant or am I missing something? :oops:

(My thanks to Lucien1964 for his post that brought this to my attention over at nVnews. :) )

EDITED BITS: Formatting error. :rolleyes:
 
dw, I get some strange results with the latest drivers, check this out:

PixelShader 2.0 precision test. Version 1.3
Copyright (c) 2003 by ReactorCritical / iXBT.com
Questions, bug reports send to: clootie@ixbt.com

Device: RADEON 9500 PRO / 9700
Driver: ati2dvag.dll
Driver version: 6.14.10.6360

Registers precision:
Rxx = s0e3 (temporary registers)
Cxx = s0e3 (constant registers)
Txx = s0e3 (texture coordinates)

Registers precision in partial precision mode:
Rxx = s0e3 (temporary registers)
Cxx = s0e3 (constant registers)
Txx = s0e3 (texture coordinates)
 
My results, yeuemaimai...


====================================================================
PixelShader 2.0 precision test. Version 1.3
Copyright (c) 2003 by ReactorCritical / iXBT.com
Questions, bug reports send to: clootie@ixbt.com

Device: ALL-IN-WONDER 9700 SERIES
Driver: ati2dvag.dll
Driver version: 6.14.10.6360

Registers precision:
Rxx = s16e7 (temporary registers)
Cxx = s16e7 (constant registers)
Txx = s16e7 (texture coordinates)

Registers precision in partial precision mode:
Rxx = s16e7 (temporary registers)
Cxx = s16e7 (constant registers)
Txx = s16e7 (texture coordinates)

I dunno why your precision is showing up like that... is it a hacked 9500 series?
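
For what it's worth, here's how I read the sMeE notation: sign bit, M mantissa bits, E exponent bits, so s10e5 is FP16, s16e7 is FP24 and s23e8 is FP32, which makes an s0e3 readout look like a failed readback rather than any real format. I'm only guessing at how Clootie's tool works internally, but a typical precision probe just keeps adding smaller and smaller steps to 1.0 until they vanish. Here's that idea in plain C on the CPU (the real test would run it in a pixel shader and read the render target back):

#include <stdio.h>

/* Force the value through float storage so the compiler can't keep
 * extra precision in a wider register. */
static volatile float sink;

static int mantissa_bits(void)
{
    int n = 0;
    for (;;) {
        /* Try the step 2^-(n+1): if 1 + step still compares equal
         * to 1.0, the format ran out of mantissa bits at n. */
        sink = 1.0f + 1.0f / (float)(1 << (n + 1));
        if (sink == 1.0f)
            return n;
        n++;
    }
}

int main(void)
{
    /* On the CPU this prints 23 (IEEE single precision, s23e8);
     * run as a shader on an R300 it should come out as 16 (s16e7). */
    printf("effective mantissa bits: %d\n", mantissa_bits());
    return 0;
}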
 
digitalwanderer said:
Does that mean that the latest WHQL compliant nVidia drivers aren't WHQL compliant or am I missing something? :oops:
Yes, their translations from the Russian are sometimes a tad muddled, but this quote is fairly clear: apparently only the NV35 exhibits WHQL-correct DX9 precisions, using 32-bit when full precision is requested and 16-bit when partial precision is requested. The other NV3x's appear to use 16-bit at all times, meaning they're not (minimum FP24) DX9-compliant, meaning their drivers shouldn't be labelled DX9 WHQL-certified for anything but the 5900 series.

Rev and Dave have been hinting that the (eternally) upcoming "Det 50.xx" should fix precision and IQ "for the better," so maybe nVidia is just slow in coming to grips with the "twitchy" nature of the NV3x. I'll believe it when I see it, though.
 
I don't think WHQL means the DX9/DDI9 specs were followed at all: it just means the driver/device will function properly in a Windows environment & the Certification is OS dependent. I may be wrong, but that is what I understand it to mean.

Try this & the MS links provided: http://www.altsoftware.com/newsletters/whql.pdf

Maybe someone w/actual WHQL Certification experience can explain exactly what is involved. As you can see from that link: there are companies that do the Certification & submit the results to MS. MS isn't doing ALL the Certs. ;)

HTH,
 
Pete said:
Rev and Dave have been hinting that the (eternally) upcoming "Det 50.xx" should fix precision and IQ "for the better," so maybe nVidia is just slow in coming to grips with the "twitchy" nature of the NV3x. I'll believe it when I see it, though.
The 50 series apparently does 32-bit exclusively with an NV35. Dunno about the rest of the NV3x, and dunno about performance.

Rumored, of course ;)

Evildeus said:
Doesn't the R300 do 32-bit PS internally and 24-bit output?
According to sireric, all of the R300's pipe computations are done in full 32-bit IEEE floating point, except for the pixel shader core. The positions, matrix transforms, etc. are all vertex shader operations, computed in 32-bit float or higher precision. The R300 also supports taking 32-bit IEEE SPFP per-component textures as inputs to the shader, and it supports outputting in that format. However, the pixel shader core converts the external format to a 24-bit format internally, and converts it back before output. As you may know, up until DX9, pixel shader cores were either non-existent or limited to 8 bits per component. The colors coming into the pixel shaders in the past were converted down from, typically, 32-bit IEEE per component to 8-bit integer per component. The Parhelia and P10 parts convert it to 10-bit or 12-bit internally. The R300 converts it to a 24-bit FP format.
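
To make that conversion step concrete, here's a rough C sketch of what squeezing an IEEE single (s23e8, bias 127) into an s16e7 "FP24" value and back could look like. The actual R300 rules for rounding, denormals and infinities aren't public, so this is purely illustrative: truncate the mantissa, rebias the exponent, and clamp on over/underflow.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

static uint32_t fp32_to_fp24(float f)
{
    uint32_t u;
    memcpy(&u, &f, sizeof u);                 /* grab the raw FP32 bits */

    uint32_t sign = u >> 31;
    int32_t  exp  = (int32_t)((u >> 23) & 0xFF) - 127 + 63; /* rebias 8->7 bits */
    uint32_t mant = (u >> 7) & 0xFFFF;        /* keep the top 16 of 23 bits */

    if (exp <= 0)                             /* underflow: flush to zero */
        return sign << 23;
    if (exp >= 0x7F)                          /* overflow: clamp to max finite */
        return (sign << 23) | (0x7Eu << 16) | 0xFFFF;
    return (sign << 23) | ((uint32_t)exp << 16) | mant;
}

static float fp24_to_fp32(uint32_t p)
{
    uint32_t sign = (p >> 23) & 1;
    uint32_t ef   = (p >> 16) & 0x7F;
    uint32_t u;
    if (ef == 0) {                            /* zero (denormals were flushed) */
        u = sign << 31;
    } else {
        int32_t  exp  = (int32_t)ef - 63 + 127;   /* rebias back */
        uint32_t mant = (p & 0xFFFF) << 7;    /* low 7 mantissa bits are gone */
        u = (sign << 31) | ((uint32_t)exp << 23) | mant;
    }
    float f;
    memcpy(&f, &u, sizeof f);
    return f;
}

int main(void)
{
    float x = 3.14159274f;
    printf("%.9g -> %.9g after a round trip through FP24\n",
           x, fp24_to_fp32(fp32_to_fp24(x)));
    return 0;
}

Note how the low 7 mantissa bits simply disappear on the way in: that is exactly the precision loss being discussed.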
 
Ahh, thanks Rev. In future Ati can convert the core format to 32bit easily.
The R3xx is one hell of a flexible architecture.
 
K.I.L.E.R said:
Ahh, thanks Rev. In future Ati can convert the core format to 32bit easily.
The R3xx is one hell of a flexible architecture.
Not with the R3xx they can't ("convert the core format to 32bit easily", that is).
 
Reverend said:
K.I.L.E.R said:
Ahh, thanks Rev. In future Ati can convert the core format to 32bit easily.
The R3xx is one hell of a flexible architecture.
Not with the R3xx they can't ("convert the core format to 32bit easily", that is).

I meant in future cores based on the R300.
 
K.I.L.E.R said:
Reverend said:
K.I.L.E.R said:
Ahh, thanks Rev. In future Ati can convert the core format to 32bit easily.
The R3xx is one hell of a flexible architecture.
Not with the R3xx they can't ("convert the core format to 32bit easily", that is).

I meant in future cores based on the R300.
I thought we had pretty much reached the end of the R3xx's life-span, except for mebbe a budget part or two. :?:
 
YeuEmMaiMai said:
I did that already

still the same

all driver settings are DEFAULTS that means application preference........
Try enabling AA/AF and then disabling them again. If that fails, try reinstalling the control panel. I've seen cases where AA/AF got stuck on because the control panel was out of sync with the registry. This can happen when people use tweakers (not that I'm claiming you do, just one example).
 