How Cg favors NVIDIA products (at the expense of others)

Follow up:

http://www.cgshaders.org/articles/interview_nvidia-jul2002.php

Cg has been said to be compatible with DX9's HLSL, but does that imply that Cg is potentially a subset of DX9's HLSL, or are they effectively the same language?

Cg is not a subset of Microsoft's HLSL. NVIDIA and Microsoft have been working to create one language, but since we both started with products created separately, we're both moving toward that compatibility from slightly different directions. After the gold release, we expect that Cg programs and HLSL programs will be fully interchangeable.

So again, what exactly is the point of Cg if Cg and HLSL programs are fully interchangeable? Are MS and NVIDIA going to maintain two separate HLSLs, but somehow co-manage their evolution? What's the point?
 
[a phrase some may consider an expletive removed to try and prevent degrading the discussion], stop making labels like "Anti-Cg people", lumping all opposing viewpoints under that heading, and attributing every negative point raised by any individual to the whole group instead of addressing the directly pertinent points being presented!!

It is just as useless and unproductive as the fanboy rants.

If someone who isn't pissed off or throwing labels around has the time, I would appreciate answers to my questions and my scenario, as I think that would actually be more productive than losing your temper and ending the discussion. :-?

And if you have a problem with me, please use my moniker so I have a chance to address it, just in case, maybe, communication might by some mysterious circumstance resolve a misunderstanding or further a reasonable discussion.
 
m$ won't make the DX9 HLSL platform-independent, so someone else has to do it (m$ might also not make the language backward/forward compatible, which again is enough to warrant someone stepping in to correct the matter).
 
Why would the two need to exist if they are compatible / interchangeable, as nVidia has claimed?

For three reasons, which have been given multiple times already:

1) Compatibility and interchangeability do not imply inclusivity.

Cg allows NVIDIA to support things that will not be included in the DirectX API or DX HLSL.

2) DX HLSL is not available yet.

Cg is the only DirectX shading language developers have at their disposal right now.

3) DX HLSL does not support OpenGL.

Cg is the only OpenGL shading language developers have at their disposal right now.

Asking what good Cg is when HLSL will be available eventually is similar to asking a person who has no vehicle and lives 10 miles from their job why they're buying a car now, when they can buy a similar car in a few months. Do you think they should walk to work while they wait for the new model year?
 
MfA said:
m$ won't make the DX9 HLSL platform-independent, so someone else has to do it (m$ might also not make the language backward/forward compatible, which again is enough to warrant someone stepping in to correct the matter).
MfA, could you clarify a bit more?
Do you mean a software platform (DX, OpenGL)?
What could be a possible solution?
 
I find these "who needs another HLSL" comments simply ridiculous. Last time I checked, the graphics industry was an open market. Let the market (in this case, the developers these tools are targeted at) decide how many HLSLs we need. Why is it that we only need two, as some people would like to believe? Maybe we need only one. Maybe we don't need any. Whatever any of you think, unless you are a developer it will have no bearing on the outcome. It is up to developers to try these tools and pick the one that best suits their needs. If there is indeed no need for Cg, as some suggest, it will die a quick and painless death. If it becomes widely adopted, then NVIDIA did something right. But it's not for you to decide how many languages we need.

BTW, who needs more than two IHVs? We already have ATI and NVIDIA, so there is really no need for more, since the others' products are pretty similar to those two anyway. Multiple IHVs are bad for developers, since they have to spend time supporting them as well, instead of focusing on the top two. :rolleyes:
 
Joe DeFuria said:
Follow up:
So again, what exactly is the point of Cg if Cg and HLSL programs are fully interchangeable? Are MS and NVIDIA going to maintain two separate HLSLs, but somehow co-manage their evolution? What's the point?

What's the point of having GNU C++ and Visual C++? Both are the same base language, just different tools. Maybe only one ultra-super-duper C++ compiler should exist, open-sourced and controlled by ISO/ANSI?!?!?! That's crazy, just as it is crazy to have only a single HLSL compiler that supports only a single API. (OGL and DX9 have separate shading languages.)


Cg's language might be the same as DX9 HLSL, but DX9 HLSL won't produce code that works under OpenGL, and it isn't necessarily going to produce OPTIMAL vertex and pixel shaders for all platforms, in the same way that Intel and AMD both have their own C compilers separate from Microsoft's.

But neither Microsoft's nor Intel's nor AMD's C compilers are cross-compilers. GNU's C compiler, however, can produce code for IA-32 and about 20 other microprocessors. Microsoft's compiler can target either Intel or AMD optimizations, but Intel and AMD (the equivalents of NVIDIA and ATI here) each produce C compilers optimized for their own chips.


I have reiterated this OVER and OVER again in these forums, only for it to fall on deaf ears. Cg and DX9 HLSL will probably be the same exact programming language, except that NVIDIA's shipped Cg compiler will probably produce more optimal code for NVIDIA cards than Microsoft's generic compiler, and Cg's compiler will produce code that works under OpenGL as well.
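
To make that concrete, here is a trivial vertex shader that, going by the public specs, should be accepted as both Cg and DX9 HLSL source (the parameter and struct names are just placeholders for illustration):

struct VertOut {
    float4 position : POSITION;
    float4 color    : COLOR0;
};

// Transform the vertex by a single matrix and pass the color through.
// Nothing here is NVIDIA-specific; only the compiler back-end (the
// profile it is compiled against) differs between DirectX and OpenGL.
VertOut main(float4 position : POSITION,
             float4 color    : COLOR0,
             uniform float4x4 modelViewProj)
{
    VertOut OUT;
    OUT.position = mul(modelViewProj, position);
    OUT.color    = color;
    return OUT;
}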


It's the failure to divorce Cg (the language) from Cg (the compiler tool) from Cg (the runtime utility library) that causes confusion among the ranters in this forum. NVIDIA doesn't have to "own" the Cg language because there is nothing in that language that is specific to NVIDIA hardware. It is a generic C-like language with the normal range of conditionals, loops, and other constructs, plus first-class vector datatypes. There is no reason this language needs to be "owned" by NVIDIA or "evolved". The language syntax provides all you need. The area of evolution will be in what the *RUNTIME* provides, like the built-in functions (noise(), for example).
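
To put a face on "first-class vector datatypes": this is the sort of thing the language spec describes, sketched here with invented inputs (normal, lightDir, diffuseColor), nothing more:

// Per-fragment N.L diffuse term: just vector math, swizzles, and
// standard-library calls (normalize, dot, max) from the spec.
float4 main(float3 normal   : TEXCOORD0,
            float3 lightDir : TEXCOORD1,
            uniform float4 diffuseColor) : COLOR
{
    float NdotL = max(dot(normalize(normal), normalize(lightDir)), 0.0);
    return float4(diffuseColor.rgb * NdotL, diffuseColor.a);
}

The evolution question then reduces to which library calls (noise() and friends) a given runtime/profile actually provides, not to the syntax itself.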


There are numerous reasons besides cross-compilation for OpenGL and DX9 that probably forced NVIDIA to design a parallel implementation of the evolving DX9 HLSL. For example, the UNKNOWN AVAILABILITY DATE OF DX9 HLSL and OpenGL 2.0. Therefore, NVIDIA chose to produce a compiler based on Microsoft's language that could target both DX9 and OpenGL, giving them a working HLSL before DX9 / OGL2.0 is "shipped" (in OGL2.0's case, that's even farther off).

Another motivation was to address legacy DX8 hardware, namely the GF3 and GF4, in a way that doesn't kill performance with emulation. (Solution: subsetting profiles.) This move makes Cg useful right now for the large number of developers targeting DX8-class hardware for whom vertex and pixel shader assembly language is an annoyance.
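
As a rough picture of what a "subsetting profile" buys you (the ps_1_1 profile name below is my assumption of how the shipped toolkit labels it): the source language stays the same, but a DX8-class profile only accepts programs that fit that hardware's limits. Something this small should compile for DX8-class and DX9-class fragment profiles alike:

// One texture fetch modulated by the interpolated vertex color.
// Small enough that it should fit a DX8-class fragment profile
// (e.g. ps_1_1) as well as the roomier DX9-class ones.
float4 main(float4 color : COLOR0,
            float2 uv    : TEXCOORD0,
            uniform sampler2D baseMap) : COLOR
{
    return tex2D(baseMap, uv) * color;
}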



I could write an NVIDIA Cg-like language using BNF in probably 15 minutes. Inventing this stuff is trivial. The language is not hardware-specific; it's the compiler and runtime where all the work is.
 
It stuns me that so many of you can't see this for what it is...

NV30 goes beyond the DX9 spec... way beyond. A developer uses Cg on an NVIDIA-based development platform and writes nifty shader routines that look and run great on NV30. The developer then tests other DX9-compliant cards: ATI, SiS, S3, 3Dlabs... Uh oh... Those cards do run the shader, but have a much harder time doing it; they have to multipass, etc. Yet they ARE THE CURRENT STANDARD. Benchmarks and demos follow. Comments in forums, finger updates, interviews, etc. are made:

"well The HQ shader effects for Dx9 complient cards were written in Cg with the NV30 as the base... and are quite complex. The rest of the DX9 cards will run them but much slower... for best performance get an Nvidia card.."

Why is this scenario, which WILL HAPPEN, so hard for you people to grasp?? How is this fair, or good for the rest of the industry??? What about those who followed the DX9 spec and are now going to pay for it?? It also means that the further functionality of Cg will be dependent on the new generation of NVIDIA hardware and NVIDIA's whims, putting ALL other hardware makers at a disadvantage..

This is why a common shader language HAS to be developed and controlled by an outside, NEUTRAL party.
 
HellBinder, why don't you post examples of where Cg (the language) goes far beyond DX9 HLSL such that it would be very slow on ATI cards?

If you cannot produce a single example of how the public Cg language (you can grab the language spec off the web) goes way beyond DX9 HLSL such that it would run like crap on ATI but great on NV30, then I expect you to retract these assertions. As far as I know, DX9 HLSL can be used to write arbitrary-length shaders as well, includes loop constructs, and covers everything Cg does, WHICH WOULD BE NATURAL SINCE THEY ARE BOTH BASED ON THE C LANGUAGE.
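
For reference, this is the kind of loop construct both specs describe; whether a given profile unrolls it or executes it natively is a hardware/profile question, not a language one (the array size and parameter names here are invented for the example):

// Accumulate diffuse contributions from a fixed number of lights.
// A plain C-style for-loop; profiles without real branching can
// unroll it at compile time since the trip count is constant.
float4 main(float3 normal : TEXCOORD0,
            uniform float3 lightDir[4],
            uniform float3 lightColor[4]) : COLOR
{
    float3 result = float3(0, 0, 0);
    for (int i = 0; i < 4; i++) {
        result += lightColor[i] * max(dot(normal, lightDir[i]), 0.0);
    }
    return float4(result, 1.0);
}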

Have you ACTUALLY READ THE DAMN SPECIFICATION or are you pulling this stuff out of thin air?


Jesus christ, if I was making a similar negative argument about some technical feature, and I had the spec available, I would include EXAMPLES in my post.

The burden of proof is on those making the assertions.
 
Hellbinder[CE] said:
NV30 goes beyond the DX9 spec... way beyond. A developer uses Cg on an NVIDIA-based development platform and writes nifty shader routines that look and run great on NV30. The developer then tests other DX9-compliant cards: ATI, SiS, S3, 3Dlabs... Uh oh... Those cards do run the shader, but have a much harder time doing it; they have to multipass, etc. Yet they ARE THE CURRENT STANDARD. Benchmarks and demos follow. Comments in forums, finger updates, interviews, etc. are made...

First of all, it seems like you're bashing nVidia for producing an advanced piece of technology. Very silly.

Secondly, there is currently no auto-multipass implemented in Cg, so if a program compiles to too many instructions, it will simply fail to run.
 
All I can say, guys, is that sure, I may be wrong, but my gut tells me that predatory hardware companies do nothing with the true intent of helping out their rivals.

Time will tell.

First of all, it seems like you're bashing nVidia for producing an advanced piece of technology. Very silly.

Secondly, there is currently no auto-multipass implemented in Cg, so if a program compiles to too many instructions, it will simply fail to run.

No, not bashing, after a fashion. This is not the place to get into it about this. The intent of API standards is that hardware companies follow them so there is a set standard for each generation.

Multipass. I was simply trying to show an example of the kinds of things that will end up happening.
 
If you cannot produce a single example of how the public Cg language (you can grab the language spec off the web) goes way beyond DX9 HLSL such that it would run like crap on ATI but great on NV30, then I expect you to retract these assertions. As far as I know, DX9 HLSL can be used to write arbitrary-length shaders as well, includes loop constructs, and covers everything Cg does, WHICH WOULD BE NATURAL SINCE THEY ARE BOTH BASED ON THE C LANGUAGE.

The point I am making is not about the language itself, but about the language paired with the hardware... really, never mind...

I see that this is a useless endeavor on my part here. There are a few of you out there who can see the danger and do understand what I'm getting at. The rest of you never will. To you I am just another disgruntled anti-NVIDIA type..

All I can say is wait and see what develops... for now... I digress..
 
...writes nifty shader routines that look and run great on NV30. The developer then tests other DX9-compliant cards: ATI, SiS, S3, 3Dlabs... Uh oh... Those cards do run the shader, but have a much harder time doing it; they have to multipass, etc. Yet they ARE THE CURRENT STANDARD. Benchmarks and demos follow..

First -- what is wrong with a benchmark that actually indicates superior hardware? I thought that was the point?

Second -- so NV30 users get a nifty little special effect, or shader optimization, that R300 users don't. That's just the nature of the game. I'm sure if the roles were reversed, many of you would be arguing the same thing. In fact, I'm sure most of you have, with regard to the R200 -- John Carmack has a nifty surface shading model that runs in 1 pass on R200, but needs to be multi-passed on NV20/NV25.

Developers aren't dumb enough to require hardware that has an installed base that can be counted in the dozens. If they want to optimize/improve for better hardware, there are tools that will happily allow them to do just that. However, there will always be fallbacks provided due to marketplace concerns.

NVIDIA isn't the only for-profit company around. All of your game development houses are also in the business to make a profit, and that's not going to happen if they require features that 1 in 10,000 users has.
 
Joe DeFuria said:
Microsoft doesn't sell 3D hardware. It does not have a direct interest in seeing one IHV's hardware succeed over another.

MS does sell 3D hardware. It's called the XBOX. Nvidia makes the chips for XBOX.
 
RussSchultz said:
Let's hear some good TECHNICAL arguments as to how Cg is somehow only good for NVIDIA hardware, and is a detriment to others.

Regarding hardware: if ATI, Matrox, 3Dlabs, SiS, etc. want their hardware optimized, they can just build their own profiles. Right now they would rather not help nVidia promote Cg, but if Cg picks up amongst developers as an important shader tool, it would be really, really stupid of them not to support their own products [via profiles].

And they will because if the market accepts Cg, the market will also drive the other hardware vendors to support Cg either with profiles or their own implementation of the compiler. Politics aside, it's really that simple.

But on the other hand: if the market doesn't accept Cg, well, then all this paranoia is really all in vain...

Regarding the use of Cg over DX9 HLSL, OpenGL 2.0, etc., I can see one big advantage for developers (although fanboys won't benefit from it): you learn one language to write shaders for all hardware and all APIs that support vertex and fragment/pixel shaders. When DX10 HLSL and OpenGL 2.5 come out, you build upon already-known blocks of code and just extend them (and when R400 and NV40 come out, you build upon already-known profiles and extend them).

I cannot think of a better way to speed up the development and introduction of shaders in games. You can easily write the effects you want and "export" them to multiple profiles, thus supporting a large range of hardware. This might even mean that we'll see much more support for PS 1.4!
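
To sketch that "write once, export to multiple profiles" workflow (the compiler switch and profile names below are from memory of the beta toolkit, so treat them as assumptions rather than gospel):

// lambert.cg: one source file, several back-end targets.
//
// Hypothetical build steps, one compiler invocation per profile:
//   cgc -profile vs_1_1 lambert.cg    (DirectX vertex shader assembly)
//   cgc -profile vp20   lambert.cg    (OpenGL NV_vertex_program output)
//
// The shader source never changes between targets; only the -profile
// switch, and therefore the generated assembly, does.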
 
Hellbinder[CE] said:
It stuns me that so many of you can't see this for what it is...

NV30 goes beyond the DX9 spec... way beyond... How is this fair, or good for the rest of the industry??? Putting ALL other hardware makers at a disadvantage..

Forget Cg for a second.

Is it good for the industry when IHVs add new features to their chips? Or should IHVs limit the features on their chips to whatever the current standard is, so as not to put other chips at a disadvantage?
 
Just what are the negative aspects of having a second choice of HLSL?
I was under the impression that choices were good. And what's the point of bashing it? Or doubting it? Or whatevering it? If it's good, it will be embraced by developers and we all get better games. If it's bad, and everyone hates it (which isn't the case), then NVIDIA wasted their own money.

WTF is the problem?
 
Here's a hypothetical:

1. nvidia opens Cg, but retains full copyright.
2. Matrox, ATI, SiS, etc. develop Cg compilers to join in with nvidia.
3. Games developers, encouraged by the wide support, start making heavy use of Cg.
4. 2 years down the line, just as the games start to ship in bulk, nvidia exercise their copyright and say 'Right, now it's being used, pay us a licence fee of $50 for every chip you've shipped that can use Cg applications, or stop using Cg'. Boom - every game only runs on nvidia hardware optimally.

This would then have to be challenged as antitrust in court - and look what happens when you do that...


I'm not saying this will happen - I have hopes that nvidia aren't in fact that cynical / predatory, and not knowing the legal complexities I don't even know if this would be possible (there is the issue of the 'de facto standard' argument, that prevents the closure or restrictive licensing of a spec previously declared open if it has become the standard), but given what I know of the Java debacle and the recent discussions on 'All our algorithms are belong to Microsoft'....


On Russ' original request: I don't see any particular technical reasons that Cg gives anyone an advantage. It's either 1) all political and about controlling the market or 2) genuine altruism from nvidia to forward the 3D market.

It really could be the latter, and nobody should discount it. SGI proved with OpenGL that developing an open standard is in the interests of the industry.

But I'd also be wary of discounting the former - yet.
 