Consensus on NV3x Floating-Point Pixel Shaders?

Which is closest to your opinion about NV3x FP pixel shaders, specifically relative to R3xx?

  • B) Apps that still show shaders slow are broken, or will be fixed by driver improvements
  • C) Slow due to architecture even on NV35, NVidia is cheating benchmarks to cover it up
  • D) Doesn't matter, since real games will be developed with NV3x shader architecture in mind
  • E) Doesn't matter, since all current cards will be obsolete when FP-shader dependent games appear

  • Total voters: 201
Xmas said:
Tridam said:
In the Cg specification, NVIDIA clearly says that Cg can use FX12 in ps_2_0 and ps_2_x but that HLSL can't.

Cg Language Specifications said:
half, fixed, and double data types are treated as float.
half data types can be used to specify partial precision hint for pixel shader
instructions.

"ps_2_0 and ps_2_x :

float/double : 24 bit floating point (minimum)
int : floating point clamped to integers
half : 16 bit floating point (minimum)
fixed : depends on compiler settings

Although the Cg compiler (cgc) and runtime support the fixed data type (and vector versions such as fixed3 and fixed4), Microsoft's HLSL compiler (fxc) does not."

I think that with DX9, the GeForce FX with Cg shaders compiled at runtime (not asm) can fully support FX12. With runtime Cg, NVIDIA can get around the ps_2_0 and ps_2_x rules.
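
To make the quoted type mapping concrete, here is a minimal Cg fragment program sketch (not from the post; the tex and tint parameter names are invented) with comments noting how the types are said to behave under the ps_2_0/ps_2_x profiles. It compiles with cgc only, since fxc rejects 'fixed'.

Code:
float4 main(float2 uv : TEXCOORD0,
            uniform sampler2D tex,
            uniform half4 tint) : COLOR
{
    // 'half': accepted by both cgc and fxc; under ps_2_0/ps_2_x it is only a
    // partial-precision hint (FP16 minimum), not a separate storage format.
    half4 c = tex2D(tex, uv);

    // 'fixed': accepted by cgc but rejected by Microsoft's fxc; its precision
    // "depends on compiler settings" (e.g. FX12 on NV3x hardware).
    fixed4 f = c * tint;

    // 'float': the full-precision path, at least FP24 under ps_2_0.
    return (float4)f;
}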
 
Tridam said:
"ps_2_0 and ps_2_x :

float/double : 24 bit floating point (minimum)
int : floating point clamped to integers
half : 16 bit floating point (minimum)
fixed : depends on compiler settings

Although the Cg compiler (cgc) and runtime support the fixed data type (and vector versions such as fixed3 and fixed4), Microsoft's HLSL compiler (fxc) does not."

I think that with DX9, the GeForce FX with Cg shaders compiled at runtime (not asm) can fully support FX12. With runtime Cg, NVIDIA can get around the ps_2_0 and ps_2_x rules.

Suddenly it becomes crystal clear (to me) why ATI refuses to develop a runtime Cg compiler. Without 3rd-party support, games aren't going to use runtime-compiled Cg, so Nvidia loses its chance at evading DX9's specs when it comes to shader precision. Or could a game easily use runtime Cg on Nvidia cards but pre-compiled (or hand-coded) PS2.0 for ATI et al.?

Incidentally, re: TWIMTBP, DeanoC's comments (read the whole thread) indicate that in his experience at least (as lead coder on a TWIMTBP game, Silent Hill 2, he knows more than the rest of us), TWIMTBP is entirely a cross-marketing deal negotiated with the publisher, and not at all a code-optimization deal having anything to do with the developer. He states that he was under absolutely no pressure to make his game run better on Nvidia cards, much less worse on competing cards, and that from his perspective being a TWIMTBP game didn't change his devrel relationship with Nvidia one bit.

Of course SH2 is a DX8 game, and DX8 matches much more nicely with Nvidia hardware than DX9 does. Not to mention that Nvidia seems to be a lot more desperate now than it was then. So it's possible Nvidia might start to use TWIMTBP as leverage to influence developers to support Cg, FX12, etc. But (STALKER aside) we don't have any evidence that that's the case, and we do have impeccable evidence that that's not how the program used to work. So I don't think it's fair to assume anything underhanded just yet.
 
Dave H said:
Incidentally, re: TWIMTBP, DeanoC's comments (read the whole thread) indicate that in his experience at least (as lead coder on a TWIMTBP game, Silent Hill 2, he knows more than the rest of us), TWIMTBP is entirely a cross-marketing deal negotiated with the publisher, and not at all a code-optimization deal having anything to do with the developer. He states that he was under absolutely no pressure to make his game run better on Nvidia cards, much less worse on competing cards, and that from his perspective being a TWIMTBP game didn't change his devrel relationship with Nvidia one bit.

That's not always the case, especially on shader-heavy titles - how much use of shaders does SH2 make?

At the D2D conference they did explain that their dev rel will drop in shader code optimised for NV hardware.
 
Tridam said:
"ps_2_0 and ps_2_x :

float/double : 24 bit floating point (minimum)
int : floating point clamped to integers
half : 16 bit floating point (minimum)
fixed : depends on compiler settings

Although the Cg compiler (cgc) and runtime support the fixed data type (and vector versions such as fixed3 and fixed4), Microsoft's HLSL compiler (fxc) does not."
Where's that from?

What I quoted can be found here on page 203.

I think that with DX9, the GeForce FX with Cg shaders compiled at runtime (not asm) can fully support FX12. With runtime Cg, NVIDIA can get around the ps_2_0 and ps_2_x rules.
I dug a bit into the Cg toolkit and it seems it could really support FX12. However, this means using the Cg runtime could yield different results than using precompiled shaders, which is something I would really hate. Maybe a little test could provide certainty.
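
For what it is worth, a little test along these lines might provide it. This is only a hypothetical sketch (the smallScale/bigScale uniforms are invented): the app sets smallScale = 1/1024 and bigScale = 1024 so the compiler cannot fold the two multiplies, and values that small survive FP16/FP24 storage but sit right at FX12's roughly 1/1024 step size, so silently dropping to FX12 should show up as banding in the output ramp.

Code:
float4 main(float2 uv : TEXCOORD0,
            uniform float smallScale,       // app sets this to 1.0/1024.0
            uniform float bigScale) : COLOR // app sets this to 1024.0
{
    // Push the coordinate down to the edge of FX12 resolution (~2^-10).
    half x = uv.x * smallScale;

    // Scaling back up should reproduce a smooth 0..1 ramp if x was kept in
    // FP16/FP24; FX12 storage would quantise x and produce visible bands.
    float g = saturate(x * bigScale);
    return float4(g, g, g, 1.0);
}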
 
Dave H said:
Suddenly it becomes crystal clear (to me) why ATI refuses to develop a runtime Cg compiler. Without 3rd-party support, games aren't going to use runtime-compiled Cg, so Nvidia loses its chance at evading DX9's specs when it comes to shader precision. Or could a game easily use runtime Cg on Nvidia cards but pre-compiled (or hand-coded) PS2.0 for ATI et al.?
A game could easily use runtime Cg on cards from any vendor.
 
I know I'm late to the party but
" I here by declare my Intel Pro Management 10/100 network adapter a DX 9 compatible graphics card. "!
 
indio said:
I know I'm late to the party but
" I here by declare my Intel Pro Management 10/100 network adapter a DX 9 compatible graphics card. "!

Heh, err, I don't know about that one, but... I figure if Nvidia can declare a DX7 card like the Geforce 4 MX to be a DX8 card, I ought to be able to at least declare my old Geforce DDR a DX8 card... oh wait, I have DX9 installed on my computer. I see how they do it now. ;)
 
Xmas said:
I think that with DX9, the GeForce FX with Cg shaders compiled at runtime (not asm) can fully support FX12. With runtime Cg, NVIDIA can get around the ps_2_0 and ps_2_x rules.
I dug a bit into the Cg toolkit and it seems it could really support FX12. However, this means using the Cg runtime could yield different results than using precompiled shaders, which is something I would really hate. Maybe a little test could provide certainty.
That would be... strange. Either way, Cg does compile to the assembly of the currently-used API. It may be non-trivial to get DirectX 9 to work with FX12 through runtime Cg.

But I certainly wouldn't oppose it. I've said for a long time that I want to see runtime compiling. This will free hardware vendors from having to create an architecture that is tied down by previous architectures, and reduce the need for conformity (without increasing developer headaches....much....there will always be compiler issues). If runtime compiling ends up being faster, so much the better. Depending on how Cg and future architectures evolve, it may allow future architectures to run even faster on games released in the near future using runtime Cg (from any vendor).

Also keep in mind that this is, apparently, the direction that OpenGL 2.0 is going, only it will be even more extreme. From what I've read, the shader will always compile at runtime, and always directly to machine language by the vendor's drivers (there may be a standard compiler to assembly for compatibility with a wider range of architectures, but I don't really know). I think that this will definitely be a good thing, and will allow for much greater compiler optimizations.
 
Sabastian said:
indio said:
I know I'm late to the party but
" I here by declare my Intel Pro Management 10/100 network adapter a DX 9 compatible graphics card. "!

Heh, err, I don't know about that one, but... I figure if Nvidia can declare a DX7 card like the Geforce 4 MX to be a DX8 card, I ought to be able to at least declare my old Geforce DDR a DX8 card... oh wait, I have DX9 installed on my computer. I see how they do it now. ;)

Hey, why not? It's not doing FP24 either... actually, it's not doing anything; it's just rendering video metaphysically faster than I can see it on my monitor. I just ran another test and got 10^10000 Nvmarks. I can't see it, so I'm not cheating. :rolleyes:
 
Chalnoth said:
But I certainly wouldn't oppose it. I've said for a long time that I want to see runtime compiling. This will free hardware vendors from having to create an architecture that is tied down by previous architectures, and reduce the need for conformity (without increasing developer headaches....much....there will always be compiler issues). If runtime compiling ends up being faster, so much the better. Depending on how Cg and future architectures evolve, it may allow future architectures to run even faster on games released in the near future using runtime Cg (from any vendor).

I don't think so :LOL:

Also keep in mind that this is, apparently, the direction that OpenGL 2.0 is going, only it will be even more extreme.


The difference is that HLSL is not controlled by an IHV; some day you will wake up and that little light bulb will come on above your head.
 
Slow due to architecture even on NV35, NVidia is cheating benchmarks to cover it up

but

it doesn't matter, since real games will be developed with NV3x shader architecture in mind

at least for now
 
tEd said:
Slow due to architecture even on NV35, NVidia is cheating benchmarks to cover it up

but

it doesn't matter, since real games will be developed with NV3x shader architecture in mind

at least for now

You really believe that games will be coded with NV3x in mind? I don't think there is a whole lot of market penetration for the NV3x. I suspect there are significantly more (read: many times as many) installed R3xx's at the moment than there are NV3x chips. Unless Nvidia is 'buying' a lot of software developers, I can't really see them lining up to code an NV3x path.

I guess the only way to make a game playable on the low-end Nvidia cards is to specifically code an NV3x path, and be thankful that the R3xx defaulting to FP24 is fast enough for them to play. If so, does that mean we will never see a truly DX9 title?
 
Chalnoth said:
Also keep in mind that this is, apparently, the direction that OpenGL 2.0 is going, only it will be even more extreme. From what I've read, the shader will always compile at runtime, and always directly to machine language by the vendor's drivers (there may be a standard compiler to assembly for compatibility with a wider range of architectures, but I don't really know). I think that this will definitely be a good thing, and will allow for much greater compiler optimizations.

What information is available to the OpenGL 2.0 runtime compilers when munching OpenGL 2.0 shaders that their DX cousins don't have when munching DX assembly? (Which is really an IL.)
 
Xmas said:
Tridam said:
"ps_2_0 and ps_2_x :

float/double : 24 bit floating point (minimum)
int : floating point clamped to integers
half : 16 bit floating point (minimum)
fixed : depends on compiler settings

Although the Cg compiler (cgc) and runtime support the fixed data type (and vector versions such as fixed3 and fixed4), Microsoft's HLSL compiler (fxc) does not."
Where's that from?

It comes from the Cg Tutorial.

Xmas said:
I think that with DX9, the GeForce FX with Cg shaders compiled at runtime (not asm) can fully support FX12. With runtime Cg, NVIDIA can get around the ps_2_0 and ps_2_x rules.
I dug a bit into the Cg toolkit and it seems it could really support FX12. However, this means using the Cg runtime could yield different results than using precompiled shaders, which is something I would really hate. Maybe a little test could provide certainty.

I don't know what to think about runtime shader compilation. Actually, I think it has many advantages, but I also think it could be dangerous :? But I'm sure about one thing: it can't be good if NVIDIA doesn't respect the DirectX rules with Cg.

I'm sure about another thing: I don't like the way NVIDIA handles Cg. Improving HLSL would be better. Cg is controlled by NVIDIA, so its main job is to help NVIDIA gain market share, not to help 3D rendering evolve.
 
Regarding the use of "FX12" and DX9.

DX9 itself does not support FX12; it does support partial-precision float (the _pp modifier enables 16-bit floats). So to use FX12 in DX, Cg must either be using a back door into their drivers or be inserting some kind of token into the compiled stream, e.g. a special comment format (although I don't think we see comments in the tokens that the driver gets; I will have to check). There are also some rules regarding the precision of legacy vertex colours that might allow NV to use FX12, although ultimately this could lead to accuracy problems.

So, FP16 yes (as long as the app asks for it!!), FX12 only by hacking.

John.
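
To illustrate John's point with a sketch (not anything from his post; the tex/tint names are invented): the ps_2_0 token stream that fxc produces only knows about the per-instruction _pp modifier, which HLSL reaches through the half type, so there is simply nothing in the format that could encode FX12.

Code:
// HLSL for fxc: 'half' only requests partial precision per instruction.
sampler2D tex;
half4 tint;

half4 main(float2 uv : TEXCOORD0) : COLOR
{
    half4 c = tex2D(tex, uv);
    return c * tint;
}

// fxc output would look roughly like this:
//   ps_2_0
//   dcl t0.xy
//   dcl_2d s0
//   texld_pp r0, t0, s0   // _pp: may be executed/stored at FP16
//   mul_pp r0, r0, c0
//   mov_pp oC0, r0
// There is no token for FX12; anything below FP16 would be the driver's doing.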
 
C) Slow due to architecture even on NV35, NVidia is cheating benchmarks to cover it up

Err, I don't like the 'cheating' word here as we only know that to be the case in 3DMark03, but I voted this because I'm getting the overall impression right now that nVidia is putting up a brave fight to get those darn INT12 units in action over the FP units at any given opportunity.

Besides, we still don't really know for sure whether the NV35 really has much improved FP performance.
 
DaveBaumann said:
At the D2D conference they did explain that their dev rel will drop in shader code optimised for NV hardware.

Was that offer made specifically in relation to TWIMTBP games? I was under the impression that ATI/Nvidia dev rel offer to do this sort of thing for all games, or at least those big enough to get their attention.

Or was it not an offer but a threat: "we will drop in our shader code...oh yes...BWAA HAA HAA HAA HAA"?
 