Cg appears to be a superset of DX9 HLSL. That makes effectively two languages, Cg/HLSL and the coming OpenGL 2.0 HLSL, with the possibility that the ARB may adopt Cg or merge it with 3Dlabs' specs.

Doomtrooper said: More HLSLs means more confusion for developers, and more HLSLs means more money spent on training for, say, 5 different HLSLs. Having TWO HLSLs to MATCH the TWO APIs only makes good sense; more effort can be put towards the two instead of spreading it out over 5.
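To make the superset claim concrete, here is a minimal sketch (not from the thread) of a pixel shader written only with constructs the two languages share; the entry point and parameter names are placeholders. Source like this should be accepted by both the DX9 HLSL compiler and cgc:

    // Trivial diffuse-texture pixel shader. float4/float2, sampler2D, tex2D()
    // and the TEXCOORD0/COLOR0/COLOR semantics exist in both DX9 HLSL and Cg,
    // so the same source can be fed to either compiler.
    float4 main(float2 uv      : TEXCOORD0,
                float4 diffuse : COLOR0,
                uniform sampler2D baseMap) : COLOR
    {
        // Modulate the texture sample by the interpolated vertex colour.
        return tex2D(baseMap, uv) * diffuse;
    }

The differences between the two tend to show up in the surrounding toolchain (profiles, runtime API) more than in this core syntax.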
Chalnoth said: Well, there's a whole lot more to graphics programming than just games.
The P10, for instance, was designed pretty much exclusively for the professional market, as are 3Dlabs' proposed changes to OpenGL.
The way I see it, the only existing purpose for the continued evolution of OpenGL is John Carmack. All OpenGL software that matters right now, with respect to the latest advancements from all IHVs, comes down to what JC wants.
There are a couple of problems with that assumption. First, the current Cg compiler isn't going to be able to take advantage of any of the new DX9 functionality of the R300 (or even the 1.4 pixel shaders of the R200), because it only supports Nvidia's current hardware (GF4).

Sharkfood said: If Cg is producing D3D/OGL "standard" shader code, then there is no reason for any other vendor to provide a plug-in or back-end piece at all. If this is indeed the case, I'm wondering why the sample back-end is being included in the SDK, and I look forward to using Cg to write D3D/OGL code for the R300 the moment I get my hands on one.
µße®Lørà said: Second, Cg makes use of hardware profiles that can actually change the language to take advantage of a particular graphics chip. That means that even if you wait for the NV30-optimized version of the compiler, there's no guarantee it will generate code that runs on an R300, because it may contain commands unique to NV30.
So, essentially what you end up with is a binary that only works with the GPU it was compiled for. Let's also make the assumption that an NV30 binary wouldn't work on an NV20 card.
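As a hypothetical illustration of both points (Sharkfood's "standard" back ends and the profile problem), consider the fragment shader below, which performs a dependent texture read; the file and parameter names are invented for this sketch. Targets such as ps_2_0 (D3D), arbfp1 (the vendor-neutral ARB_fragment_program target) and NVIDIA's fp30 can express it, while DX8-class profiles such as ps_1_1/fp20 generally cannot, so the same Cg source may compile under one profile and be rejected under another:

    // dependent_read.cg -- invented example, not from the thread.
    // The second lookup uses coordinates computed from the first sample
    // (a dependent read). Roughly: cgc -profile arbfp1 dependent_read.cg
    // emits vendor-neutral OpenGL code, cgc -profile fp30 emits
    // NV30-specific code, and a DX8-class profile such as fp20 is
    // expected to fail because it has no general dependent read.
    float4 main(float2 uv : TEXCOORD0,
                uniform sampler2D offsetMap,
                uniform sampler2D baseMap) : COLOR
    {
        // Fetch a perturbation from the first texture...
        float2 offset = tex2D(offsetMap, uv).rg;
        // ...and use it to address the second texture.
        return tex2D(baseMap, uv + offset);
    }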
RussSchultz said: Ok, put up or shut up. I'm tired of hearing the incessant bleating of "Cg is optimized for NVIDIA hardware" without any proof other than little smiley faces with eyes that roll upward.
Let's hear some good TECHNICAL arguments as to how Cg is somehow only good for NVIDIA hardware and a detriment to others.
Moderators, please use a heavy hand in this thread and immediately delete any posts that are off topic. I don't want this thread turned into NV30 vs. R300, NVIDIA vs. ATI, my penis vs. yours. I want to discuss the merits or demerits of Cg as it relates to the field as a whole.
So, given that: concisely outline how Cg favors NVIDIA products while putting other products at a disadvantage.
Let's look at the performance of both shaders with some small tests:
- On a Radeon 9800 Pro your HLSL code is 25% faster than your Cg one.
- On a GeForce FX 5600, your HLSL code is 10% slower than your Cg one.
- On a GeForce FX 5600 with _pp modifier, your HLSL code is 7% faster than your Cg one.
With AA and AF enabled, your HLSL code shows an even bigger advantage: it is faster even on the GeForce FX 5600 without the _pp modifier (the _pp/half-precision hint is sketched after the numbers below).
Cg seems faster only on the GeForce FX, and only when the bottleneck comes from register usage.
(The Radeon 9800 Pro is roughly 10x faster than the GeForce FX 5600.)
Radeon 9800 Pro HLSL : 125 MPix/s
Radeon 9800 Pro Cg : 100 MPix/s
GeForce FX 5600 HLSL : 11.2 MPix/s
GeForce FX 5600 Cg : 12.4 MPix/s
GeForce FX 5600 HLSL_pp : 14.8 MPix/s
GeForce FX 5600 Cg_pp : 13.8 MPix/s
GeForce FX 5600 HLSL AA/AF : 7.0 MPix/s
GeForce FX 5600 Cg AA/AF : 6.1 MPix/s
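For context on the _pp numbers above, here is a hypothetical sketch (not the shader that was benchmarked) of how that hint is usually produced: declaring values as half lets the compiler emit partial-precision (_pp) instructions, which the GeForce FX can run in fp16 with lower register pressure, while the Radeon 9800 ignores the hint and always computes at its 24-bit precision.

    // Invented example -- not the benchmarked shader.
    // half intermediates map to partial-precision (_pp) instructions
    // on ps_2_0-class targets.
    float4 main(float2 uv : TEXCOORD0,
                uniform sampler2D baseMap,
                uniform half4 tint) : COLOR
    {
        half4 c   = tex2D(baseMap, uv);  // fp16 temporary on NV3x hardware
        half3 rgb = c.rgb * tint.rgb;    // fp16 math where full precision isn't needed
        return float4(rgb, c.a);         // final colour written back as float4
    }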
Lack of evidence one way does not prove the other.

RussSchultz said: How is Cg, the language or the idea, optimized for one platform vs. another?
The answer: it isn't.
The language and the back end are two separate items.