NVidia Cg...now we know where the "Glide" rumors came from

Doomtrooper, you can't see how anything from NVIDIA is a good thing. Thankfully the world isn't dependent on your eyes :)
 
Crusher said:
Doomtrooper, you can't see how anything from NVIDIA is a good thing. Thankfully the world isn't dependent on your eyes :)

No, I think a single graphics card company should NOT have control of HLSL. I don't care if it's ATI, Matrox... no single graphics card company should be dictating what others will have to do.
With the ARB and the DirectX board there is at least a table where everyone's interests are represented, instead of some single company getting special treatment and optimizations on a game engine because it's compiled with a compiler optimized for ONE GPU.
:rolleyes:
 
If it is possible for ATI to write their own back-end for the compiler that will output PS 1.4 programs, then Cg is good.

ATI should not, at any time, have to add their own function calls. The whole point of higher-level languages is to make the hardware more transparent to the programmer (only in this way can runtime-compiled programs truly benefit from future hardware...).
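
Just to make that concrete, here is roughly what a hardware-transparent Cg fragment program looks like. This is a made-up minimal sketch, not from any shipping code; the entry point, semantics and sampler name are purely illustrative:

struct v2f {
    float4 color : COLOR0;
    float2 uv    : TEXCOORD0;
};

// The programmer never names registers or picks instructions; whichever
// backend is selected (an nv2x/nv3x profile today, or hypothetically a
// PS 1.4 backend written by ATI) does the instruction selection and
// register allocation for its own hardware.
float4 main(v2f IN, uniform sampler2D baseMap) : COLOR
{
    float4 base = tex2D(baseMap, IN.uv);   // plain texture lookup
    return base * IN.color;                // modulate by interpolated color
}

In principle a PS 1.4 backend could take that exact source and emit ATI-style code without the developer adding a single vendor-specific call, which is the scenario described above.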

Here's what I hope will become of Cg:

I hope that Cg becomes a bridge between OpenGL and Direct3D, tying the HLSLs of both together. If nVidia is very successful with Cg (which I hope...and which would require nVidia to be very open with other hardware manufacturers), then it would basically mean that nVidia wouldn't be able to control the direction of Cg on its own. Granted, they would have lots of influence, but other hardware vendors could easily introduce their own features through Direct3D or OpenGL, features that Cg would have to support to remain the standard HLSL.

Granted, there are a lot of what-ifs here, but I consider this a real possibility, and instead of outright complaining about Cg, we should be focusing on the things nVidia needs to do to make Cg truly work.

In a worst-case scenario, in my mind, Cg will simply be a stop-gap developer tool until Direct3D and OpenGL get their own HLSLs up and running.
 
NVIDIA is using different profiles for different generations of their own hardware (I don't exactly see how people call the language backwards compatible with a straight face, BTW)... so why should it be any different for other architectures? Cg is not useful as a high-level shading language at the moment; it automates register allocation and gives you slightly more readable control structures, and that's about it.
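
To put that in concrete terms: the profile decides which constructs are even legal, so source that compiles for one generation can be rejected outright for another. A rough sketch (my reading of how the current profiles behave, not gospel):

// Constant-bound loop over four lights. A capable fragment profile can
// unroll this; a DX8-class profile (ps.1.1-1.3 / nv20-level) will reject
// the construct or blow past its instruction limits. Same "language",
// different rules per profile, which is why calling it backwards
// compatible is a stretch.
float4 main(float3 normal    : TEXCOORD0,
            float3 lightDir  : TEXCOORD1,
            uniform float4 lightColor[4]) : COLOR
{
    float4 accum = float4(0, 0, 0, 0);
    for (int i = 0; i < 4; i++) {
        // the compiler, not the programmer, allocates the temporaries here
        accum += lightColor[i] * saturate(dot(normalize(normal), lightDir));
    }
    return accum;
}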
 
Because current hardware is not flexible enough to do any better. Hopefully next-generation hardware will indeed be flexible enough for full forward and backward compatibility.
 