GeForce FX shaders

Nite_Hawk

Veteran
So despite problems (which are being discussed plentifully in other threads), it looks like NV30's shader capabilities should be pretty impressive. I personally know pretty much only what the spec sheets say (longer program length, more flexible in some cases). I was just wondering what some of the developers around here think about them.

How much more flexible than ATI's (or others') do you think Nvidia's will be? In another thread it was also mentioned that multipass can be used to basically get around any limitations that the 9700 Pro or other cards might have, but that it's not convenient. Does it seem like Cg is going to catch on, or will developers mostly target DX9? My first thought was that the advanced shaders are going to end up like PS/VS 1.4, but perhaps that won't be the case this time?
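(For illustration, a minimal sketch of the multipass idea in OpenGL terms: split the computation across two passes and accumulate the second with additive framebuffer blending. drawSceneWithPass() is a hypothetical placeholder, not real engine code.)

    // Sketch: two-pass rendering with additive blending. If a shader is too
    // long for one pass, do half the math per pass and sum in the framebuffer.
    #include <GL/gl.h>

    void drawSceneWithPass(int pass);  // hypothetical: binds that pass's state

    void renderInTwoPasses()
    {
        // Pass 1: write the first half of the computation.
        glDisable(GL_BLEND);
        glDepthFunc(GL_LESS);
        drawSceneWithPass(0);

        // Pass 2: redraw the same geometry, adding the second half on top.
        glEnable(GL_BLEND);
        glBlendFunc(GL_ONE, GL_ONE);   // additive: dst = src + dst
        glDepthFunc(GL_EQUAL);         // only touch pixels laid down in pass 1
        drawSceneWithPass(1);
    }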

tb
 
I'm not a professional developer, so I speak for myself.
The more features a card has, the better for the developer, but in the gaming industry it's more important to focus on the lowest common denominator, which could be the minimum DX9 specs, so the game runs on most DX9 cards (Nvidia, ATI, Kyro, S3). Has anyone seen a pixel shader 1.4 game? No. Why? All GF3/GF4/Matrox users couldn't play it...
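(A minimal sketch of what that lowest-common-denominator targeting looks like in practice under DX9: query the device caps and pick a fallback path. The helper name is made up; the D3D9 calls are real.)

    #include <d3d9.h>

    // True if the adapter's HAL device reports pixel shader 2.0 support;
    // otherwise the game would fall back to a ps_1_1 or fixed-function path.
    bool SupportsPs20(IDirect3D9* d3d)
    {
        D3DCAPS9 caps;
        if (FAILED(d3d->GetDeviceCaps(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, &caps)))
            return false;
        return caps.PixelShaderVersion >= D3DPS_VERSION(2, 0);
    }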

Under OpenGL it's a bit different, though, because a developer writes hardware-specific rendering paths. Doom 3 will show this.
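(Roughly how such path selection looks in code: check the extension string and pick a back end. The queries are standard OpenGL; the path names are hypothetical.)

    #include <GL/gl.h>
    #include <cstring>

    enum RenderPath { PATH_ARB_FP, PATH_NV_RC, PATH_FIXED };

    // Requires a current GL context; path names are made up for the sketch.
    RenderPath choosePath()
    {
        const char* ext = reinterpret_cast<const char*>(glGetString(GL_EXTENSIONS));
        if (std::strstr(ext, "GL_ARB_fragment_program"))
            return PATH_ARB_FP;   // DX9-class parts (R300, NV30)
        if (std::strstr(ext, "GL_NV_register_combiners"))
            return PATH_NV_RC;    // older NV-specific path
        return PATH_FIXED;        // fixed-function fallback
    }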

Thomas
 
Nvidia's current hardware is no revolution over ATI's architecture, but it is more generalized (general purpose) and flexible. The difference is not huge, but it is not tiny either. It is a pity, however, that this extra flexibility will probably never see use while the hardware is current, and that the card took so long to release.
 
Reverend
 
"How much more flexible" really depends on individual preferences and priorities of each developer.

In short, I'd consider the R300 and NV30 to be nothing more than "trial runs" for a concept that will only come to fruition in subsequent generations. Just like before.
 
Chalnoth
 
Reverend said:
"How much more flexible" really depends on individual preferences and priorities of each developer.

In short, I'd consider the R300 and NV30 to be nothing more than "trial runs" for a concept that will only come to fruition in subsequent generations. Just like before.
Hopefully, with the advent of HLSLs, it won't matter much anymore to the programmer which is more flexible. Unfortunately, HLSL evolution doesn't seem to have reached that point yet, but we seem to be approaching it. The primary benefit of more flexible hardware should mostly come down to performance in the end.
 
JF_Aidan_Pryde
 
Chalnoth said:
Reverend said:
"How much more flexible" really depends on individual preferences and priorities of each developer.

In short, I'd consider the R300 and NV30 to be nothing more than "trial runs" for a concept that will only come to fruition in subsequent generations. Just like before.
Hopefully, with the advent of HLSLs, it won't matter much anymore to the programmer which is more flexible. Unfortunately, HLSL evolution doesn't seem to have reached that point yet, but we seem to be approaching it. The primary benefit of more flexible hardware should mostly come down to performance in the end.

Yep, I agree. Carmack thinks it's either this generation or the next, but we are at the stage of hardware-independent languages. Good compilers are desperately needed if this is to work this generation. Carmack has gone on record saying that his next engine is going to be written in an HLSL, and that in the end, the compiler will just have to 'deal with it'. LOL.
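(A sketch of the idea, assuming the DX9 D3DX compiler: the same made-up HLSL snippet is handed to D3DXCompileShader with different target profiles, and the compiler has to 'deal with it'.)

    #include <d3dx9.h>
    #include <cstring>

    // A trivial made-up shader; the interesting part is that the *same*
    // source is fed to the compiler with different target profiles.
    const char* g_src =
        "float4 main(float2 uv : TEXCOORD0) : COLOR\n"
        "{ return float4(uv, 0, 1); }\n";

    ID3DXBuffer* CompileFor(const char* profile)  // e.g. "ps_1_4" or "ps_2_0"
    {
        ID3DXBuffer* code = 0;
        ID3DXBuffer* errors = 0;
        D3DXCompileShader(g_src, (UINT)std::strlen(g_src), 0, 0, "main",
                          profile, 0, &code, &errors, 0);
        if (errors) errors->Release();  // profile too weak? compiler complains
        return code;                    // null if the profile couldn't cope
    }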
 
tb said:
However under OpenGL its a bit different, because a developer writes hardware specific rendering paths.

Not necessarily. You do, for instance, have the GL_ARB_fragment_program extension, which should work on all DX9-level cards.
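(A minimal example, assuming the extension's entry points have already been resolved via the platform's GetProcAddress mechanism: the same trivial ARB program string should load on any board exposing GL_ARB_fragment_program.)

    #define GL_GLEXT_PROTOTYPES   // assume entry points are directly callable
    #include <GL/gl.h>
    #include <GL/glext.h>
    #include <cstring>

    // The same program string works on any card exposing the extension.
    const char* g_fp =
        "!!ARBfp1.0\n"
        "TEMP c;\n"
        "MUL c, fragment.color, program.env[0];\n"
        "MOV result.color, c;\n"
        "END\n";

    void loadFragmentProgram(GLuint* id)
    {
        glGenProgramsARB(1, id);
        glBindProgramARB(GL_FRAGMENT_PROGRAM_ARB, *id);
        glProgramStringARB(GL_FRAGMENT_PROGRAM_ARB, GL_PROGRAM_FORMAT_ASCII_ARB,
                           (GLsizei)std::strlen(g_fp), g_fp);
        glEnable(GL_FRAGMENT_PROGRAM_ARB);
    }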
 
JF_Aidan_Pryde said:
but we are at the stage of hardware-independent languages.

Am I missing something, or wasn't the idea behind OpenGL and DirectX to provide hardware-independent languages? :?:
 