How Cg favors NVIDIA products (at the expense of others)

Yes, I will accept ARB and MS as "the standard", but I also champion the right of any company to ship any language, toolkit, API, or IDE they want.

Whether someone adopts a standard is dependent on market realities and internal development.
 
If I were NVIDIA, the ARB would not be my priority - I would spend more time and effort working with MS and discussing things with the other folks at Meltdowns and other relevant events. I would further assume that all developers are much more interested in the progress of the DX API than in OGL. It's a matter of business.

The way I see it, the only existing purpose for the continued evolution of OpenGL is John Carmack. All OpenGL software that matters right now, with respect to the latest advancements by all IHVs, comes down to what JC wants.

So, really, I personally care more about what happens with DX than OGL.

My opinion of course.
 
Well, there's a whole lot more to graphics programming than just games.

The P10, for instance, was designed pretty much exclusively for the professional market, as are 3Dlabs' proposed changes to OpenGL.
 
Hi DT,
Doomtrooper said:
More HLSLs means more confusion for developers; more HLSLs means more money spent on training for, say, 5 different HLSLs. Having TWO HLSLs to MATCH the TWO APIs only makes good sense - more effort can be put towards the two vs. spreading it out over 5.
Cg appears to be a super-set of DX9 HLSL. That effectively makes two languages, Cg/HLSL and the coming OpenGL2 HLSL, with the possibility that the ARB may adopt Cg or merge it with 3Dlabs' specs.

Also, Cg outputs to DX8 shaders and DX9 shaders, whereas HLSL only supports DX9. Even if we follow your argument of "one HLSL per API", you should embrace Cg on the grounds of your own statements.
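
To make that concrete, here's a rough sketch of what I mean - just an illustration based on my reading of the Cg toolkit docs, with a made-up shader string and entry name, not production code - feeding the same source (which is also legal DX9 HLSL) to a DX8-level profile and a DX9-level profile:

[code]
/* Sketch only: compile one Cg source (also valid DX9 HLSL syntax) to a
   DX8 profile and a DX9 profile with the core Cg runtime. The shader
   string and entry name are invented for the example; error handling
   is minimal. */
#include <stdio.h>
#include <Cg/cg.h>

static const char *src =
    "float4 main(float4 col : COLOR0) : COLOR { return col * 0.5; }";

static void compile_for(CGcontext ctx, CGprofile profile, const char *name)
{
    CGprogram prog = cgCreateProgram(ctx, CG_SOURCE, src, profile, "main", NULL);
    if (!prog) {
        printf("%s: %s\n", name, cgGetErrorString(cgGetError()));
        return;
    }
    /* The compiled output is plain DX shader assembly for that profile. */
    printf("--- %s ---\n%s\n", name,
           cgGetProgramString(prog, CG_COMPILED_PROGRAM));
    cgDestroyProgram(prog);
}

int main(void)
{
    CGcontext ctx = cgCreateContext();
    compile_for(ctx, CG_PROFILE_PS_1_1, "ps_1_1 (DX8)");
    compile_for(ctx, CG_PROFILE_PS_2_0, "ps_2_0 (DX9)");
    cgDestroyContext(ctx);
    return 0;
}
[/code]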

I am still not convinced that Cg has a technical preference for NV hardware per se. If your graphics board supports DX8 and/or DX9, the Cg->DX output will run. I had a nice discussion with a developer at a major US game studio about "optimising a D3D game strongly for a specific vendor while still making it run on other hardware," and he seemed rather sure that it's nigh impossible to do this while still having a sensible end product--you program for the API and a feature set, not for a technical implementation by a specific vendor.

Even so, if an IHV would rather have Cg support certain specific features or peculiarities of their hardware, they can add their own profiles and release their own Cg compilers that take advantage of architectural specialities. Ditto for OpenGL output.

Yes, I, too, would appreciate an open standard rather than something controlled by an IHV. But Cg leaves enough doors open for other companies to adapt the language to their needs. I say: let the developers decide what to use. It's their time and money.

ta,
-Sascha.rb
 
Javascript is evil (Microsoft has a Javascript engine too... must be evil)... Python is much better! Javascript is baaad! Nobody should use Javascript ever again!
Pascal (Delphi is essentially Pascal too) is evil, because we have C/C++ already... so anybody who writes another Pascal compiler is evil.
(No offence, Mr. Pascal... it just so happens that C and Pascal are the two programming languages I know that effectively fill the same purpose.)

Why should we have more than one scripting language? Or more than one system programming language, application programming language, high-level shading language?

Silly? No?

The point being, _choice is good_
Why should we have more than one OS? More confusion :p for end-users (and those are much easier to confuse than your software developers).
Linux is good, but I actually like having Windows around; MS is evil, but I'm not running around shouting "rm -rf \windows".
So, let Cg live in peace and harmony with the other HLSLs under the Sun.

Aye, btw, which other HLSLs have open-source compilers publicly available that actually produce working code that runs on at least some of the available hardware?
 
Chalnoth said:
Well, there's a whole lot more to graphics programming than just games.

The P10, for instance, was designed pretty much exclusively for the professional market, as are 3Dlabs' proposed changes to OpenGL.

Don't forget BSD-based operating systems :D
 
The way I see it, the only existing purpose for the continued evolution of OpenGL is John Carmack. All OpenGL software that matters right now, with respect to the latest advancements by all IHVs, comes down to what JC wants.

If they want to take this hardware anywhere other than the gaming market, which it appears they do, then they ain't gonna want to do it with DX! Also, if DX9 is stuck at PS/VS 2.0 then they are not going to get their flexibility exposed through MS's API until the next revision - their best shot at doing so is OpenGL with their NV30 extensions and/or OpenGL 2.0.
 
Well, what I think is interesting is that, according to the opinions shared on this message board, ATI, Matrox and 3DLabs don't need to provide a back-end compiler.

If Cg is producing D3D/OGL "standard" shader code then there is no reason to have any other vendor provide any plug-in or back end piece at all. If this is indeed the case, I'm wondering why the sample back-end is being included in the SDK, and look forward to using Cg to write D3D/OGL code for the R300 the moment I get my hands on one. :)
 
No, there's no need for a backend, which is why developers may yet use it.

However, those backends may not use all of the proprietary features of the various hardware manufacturers, and so more optimal code could be developed if each IHV made their own backend. You can bet that nVidia will make their own backends for each product they put out.
 
Sharkfood said:
If Cg is producing D3D/OGL "standard" shader code then there is no reason to have any other vendor provide any plug-in or back end piece at all. If this is indeed the case, I'm wondering why the sample back-end is being included in the SDK, and look forward to using Cg to write D3D/OGL code for the R300 the moment I get my hands on one. :)
There are a couple of problems with that assumption. First, the current Cg compiler isn't going to be able to take advantage of any of the new DX9 functionality of the R300 (or even the 1.4 pixel shaders of the R200), because it only supports Nvidia's current hardware (GF4).

Second, Cg makes use of hardware profiles that can actually change the language to take advantage of a particular graphics chip. That means that even if you wait for the NV30-optimized version of the compiler, there's no guarantee it will generate code that runs on an R300, because it may contain commands unique to NV30.

If Nvidia does release a generic back-end, then it's reasonable to assume that it wouldn't be optimized for any particular graphics chip. Of course, if you just wait for the DX9 or OGL2.0 HLSLs to be released, then you can be reasonably confident that Nvidia, ATI, 3DLabs, Matrox etc. will provide compilers optimized for their own hardware, and you won't have to worry about modifying your code.
 
GraphixViolence said:
There are a couple of problems with that assumption. First, the current Cg compiler isn't going to be able to take advantage of any of the new DX9 functionality of the R300 (or even the 1.4 pixel shaders of the R200), because it only supports Nvidia's current hardware (GF4).

Surely that's purely because DX9 isn't out yet; DX8-standard Cg code will run fine on the R300, RV250 & R200, as all are beyond the GF4's DX feature set.
 
GraphixViolence said:
Second, Cg makes use of hardware profiles that can actually change the language to take advantage of a particular graphics chip. That means that even if you wait for the NV30-optimized version of the compiler, there's no guarantee it will generate code that runs on an R300, because it may contain commands unique to NV30.


So, essentially what you end up with is a binary that only works with the hardware GPU it was compiled for. Let's also make the assumption that an NV30 binary wouldn't work on an NV20 card.

Hands up who thinks that's a really bad idea.


Possible scenario
#1 BitBoys release a card *gasp*
#2 80% of b3d rush out and buy it :p
#3 said card won't run games because there are no binaries that work on it
#4 BitBoys go under
 
µße®LørÐ said:
So, essentially what you end up with is a binary that only works with the hardware GPU it was compiled for. Let's also make the assumption that an NV30 binary wouldn't work on an NV20 card.

Cg's main purpose is to be compiled at runtime, meaning that not only can updated drivers (an updated compiler... I would assume that this would often be included in drivers) improve performance, but future hardware may be able to compile certain shaders more efficiently.

In other words, you don't just compile a program with Cg. You leave the uncompiled code in the program, and compile it at runtime to a specific profile. I'd have to look more into how this will be done, but here's an ideal scenario for a future version of Cg:

Drivers (for DX or GL) set default profile for current hardware.
Cg program compiles most optimal code available for current hardware.
Shader runs.

Currently, I believe the profile has to be set at compile time. What this means is that a game would either automatically detect the video card and choose an appropriate profile (or a fallback profile, such as PS 1.1 or something), or the profile target would be user-selectable.
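
For what it's worth, here's roughly what that ideal flow could look like with the Cg GL runtime - a sketch only, assuming a GL context is already current, using the toolkit calls as I understand them and a stand-in shader string:

[code]
/* Sketch of runtime compilation: ask the runtime for the best fragment
   profile the current driver/hardware exposes, compile the Cg source for
   it on the spot, then load and bind it. Assumes an OpenGL context has
   already been created elsewhere (GLUT, SDL, ...). */
#include <stdio.h>
#include <Cg/cg.h>
#include <Cg/cgGL.h>

static const char *frag_src =
    "float4 main(float4 col : COLOR0) : COLOR { return col; }";

int run_shader(void)
{
    CGcontext ctx = cgCreateContext();

    /* "Drivers set default profile": pick the latest fragment profile
       supported on this machine. */
    CGprofile profile = cgGLGetLatestProfile(CG_GL_FRAGMENT);
    cgGLSetOptimalOptions(profile);

    /* "Cg compiles the most optimal code available for current hardware". */
    CGprogram prog = cgCreateProgram(ctx, CG_SOURCE, frag_src,
                                     profile, "main", NULL);
    if (!prog) {
        fprintf(stderr, "%s\n", cgGetErrorString(cgGetError()));
        return -1;
    }

    /* "Shader runs": hand it to the driver and enable it for drawing. */
    cgGLLoadProgram(prog);
    cgGLEnableProfile(profile);
    cgGLBindProgram(prog);

    /* ... issue draw calls here ... */

    cgGLDisableProfile(profile);
    cgDestroyContext(ctx);
    return 0;
}
[/code]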
 
µße®LørÐ said:
Second, Cg makes use of hardware profiles that can actually change the language to take advantage of a particular graphics chip. That means that even if you wait for the NV30-optimized version of the compiler, there's no guarantee it will generate code that runs on an R300, because it may contain commands unique to NV30.


So, essentially what you end up with is a binary that only works with the hardware GPU it was compiled for. Let's also make the assumption that an NV30 binary wouldn't work on an NV20 card.

Hands up who thinks that's a really bad idea.


Possible scenario
#1 BitBoys release a card *gasp*
#2 80% of b3d rush out and buy it :p
#3 said card won't run games because there are no binaries that work on it
#4 BitBoys go under

Do you really think that, in a now more competitive GPU marketplace, game developers are not going to compile to standard DX targets rather than IHV-optimised binaries, and so kill their sales? Look at the outcry Dynamix received when they ignored their existing user base and Tribes 2 sucked on 3dfx cards at launch, only a couple of months after 3dfx's demise.

ATI have just announced major OEM deals for the 7200/7500 range, and Xabre is making inroads. Developers and publishers are not going to kill sales and expect to survive by only using nVidia-optimised compilers with Cg.
 
So, essentially what you end up with is a binary that only works with the hardware GPU it was compiled for. Let's also make the assumption that an NV30 binary wouldn't work on an NV20 card.

Not exactly.
What you end up with is a DX8 or DX9 shader binary, which is really bytecode and is compiled to native code by the video driver when the developer calls CreateShader.
FWIW, at least some drivers do a lot of work during the bytecode-to-native compilation, including instruction reordering, removal of dead code and register reassignment.
Obviously if you choose to create a DX9 shader binary then it won't work on an NV2X but it would work on any other DX9 card.
Cg also allows you to write multiple versions of the shader in the same file so that you can provide fallback functionality by selecting the appropriate profile at compile time.
Worst case, assuming NVidia did provide a specific NV30 profile which somehow circumvented the DX9 compiler (though I have no idea how they'd do this), developers would still include a generic DX9 fallback, and probably a DX8 one, which would work on other cards.
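
Something like this is what I have in mind - purely a sketch under my own assumptions (the caps flag, entry names and shader source are invented for the example): two versions of the shader in one Cg file, with the profile and entry point picked at compile time:

[code]
/* Sketch: two versions of a shader in one Cg source, with the profile and
   entry point chosen at compile time based on what the device supports.
   The has_ps20 flag stands in for a real caps check (e.g. the pixel shader
   version reported by the D3D device); names are illustrative. */
#include <Cg/cg.h>

static const char *shader_src =
    /* DX9-level version. */
    "float4 main_ps20(float2 uv : TEXCOORD0, uniform sampler2D tex) : COLOR\n"
    "{ float4 c = tex2D(tex, uv); return c * c.a; }\n"
    /* Simpler DX8-level fallback. */
    "float4 main_ps11(float2 uv : TEXCOORD0, uniform sampler2D tex) : COLOR\n"
    "{ return tex2D(tex, uv); }\n";

CGprogram create_pixel_shader(CGcontext ctx, int has_ps20)
{
    CGprofile profile = has_ps20 ? CG_PROFILE_PS_2_0 : CG_PROFILE_PS_1_1;
    const char *entry  = has_ps20 ? "main_ps20" : "main_ps11";

    /* The result is plain DX8 or DX9 shader output either way, which any
       card supporting that shader version can run. */
    return cgCreateProgram(ctx, CG_SOURCE, shader_src, profile, entry, NULL);
}
[/code]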
 
RussSchultz said:
Ok, put up or shut up. I'm tired of hearing the incessant bleating of "Cg is optimized for NVIDIA hardware" without any more proof than little smiley faces with eyes that roll upward.

Let's hear some good TECHNICAL arguments as to how Cg is somehow only good for NVIDIA hardware, and is a detriment to others.

Moderators, please use a heavy hand in this thread and immediately delete any posts that are off topic. I don't want this thread turned into NV30 vs. R300, NVIDIA vs. ATI, my penis vs. yours. I want to discuss the merits or de-merits of Cg as it relates to the field as a whole.

So, given that: concisely outline how Cg favors NVIDIA products while putting other products at a disadvantage.



http://www.beyond3d.com/forum/viewtopic.php?t=6864&start=20


Let's look at the performance of both shaders with some small tests:

- On a Radeon 9800 Pro your HLSL code is 25% faster than your Cg one.

- On a GeForce FX 5600, your HLSL code is 10% slower than your Cg one.

- On a GeForce FX 5600 with _pp modifier, your HLSL code is 7% faster than your Cg one.

With AA and AF enabled, your HLSL code shows a bigger advantage. It is faster even on a GeForce FX 5600 without the _pp modifier.

Cg seems faster only on the GeForce FX, and only when the bottleneck comes from register usage.

(The Radeon 9800 Pro is 10x faster than the GeForce FX 5600.)


Radeon 9800 Pro HLSL : 125 MPix/s
Radeon 9800 Pro Cg : 100 MPix/s

GeForce FX 5600 HLSL : 11.2 MPix/s
GeForce FX 5600 Cg : 12.4 MPix/s

GeForce FX 5600 HLSL_pp : 14.8 MPix/s
GeForce FX 5600 Cg_pp : 13.8 MPix/s

GeForce FX 5600 HLSL AA/AF : 7.0 MPix/s
GeForce FX 5600 Cg AA/AF : 6.1 MPix/s

I've been waiting for this day for a long time... there ya go, Russ :LOL:
 
The backend is optimized for the GF-FX, and that is a surprise how? It makes a conscious effort, apparently, to minimize register usage at the expense of instructions. Yes, this is optimized for the GF-FX.

How is Cg, the language or the idea, optimized for one platform vs. another?

The answer: it isn't.

The language and the back end are two separate items.
 
RussSchultz said:
How is Cg, the language or the idea, optimized for one platform vs. another?

The answer: it isn't.

The language and the back end are two separate items.
Lack of evidence one way does not prove the other.
You must provide some proof to support your conclusion, just as you demand from others.
 
I'm not going to retread these waters.

14 pages haven't brought up any architectural reasons why the language favors one architecture over another, but keep in mind:

-The language syntax is essentially identical to HLSL.
-The idea of pluggable backends is inherently "fair" to anybody who wants to make a backend.

Those two items lend heavy credence, for me, to the view that the language and the idea cannot favor one platform over another on technical terms.
 