How Cg favors NVIDIA products (at the expense of others)

RussSchultz

Ok, put up or shut up. I'm tired of hearing the incessant bleating of "Cg is optimized for NVIDIA hardware" without any more proof than little smiley faces with eyes that roll upward.

Let's hear some good TECHNICAL arguments as to how Cg is somehow only good for NVIDIA hardware, and a detriment to others.

Moderators, please use a heavy hand in this thread and immediately delete any posts that are off topic. I don't want this thread turned into NV30 vs. R300, NVIDIA vs. ATI, my penis vs. yours. I want to discuss the merits or demerits of Cg as it relates to the field as a whole.

So, given that: concisely outline how Cg favors NVIDIA products while putting other products at a disadvantage.
 
Facts

Four simple arguments

1) Who owns Cg (a competing graphics company, not the maker of the operating system the games will run on), and who has the final say on what is supported?

A: Nvidia

2) Who invented a 'C' compiler that is supposed to be neutral and platform independent, yet doesn't support Pixel Shader 1.4, a DX 8.1 feature?

A: Nvidia

3) Who, along with MS, is blocking the progression of OpenGL 2.0?

A: Nvidia

Q: Why

A: To further force Cg down everyone's throat; what other reason is there?

4) Cg can expose Nvidia optimizations (more powerful register combiners), and since only a small portion of the back end is open source, who knows what else... driver reordering?
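
For reference, the mechanism point 4 refers to is Cg's "profile" system: one source language, multiple hardware-specific compile targets, with all the vendor-specific code generation living in the back end. A rough sketch (profile names as documented for NVIDIA's compiler; the exact set varies by release):

    // The vendor-specific part of Cg lives in its compile "profiles",
    // not in the language itself. Fragment-side examples:
    //   ps_1_1 -- DirectX 8 pixel shader target (generic)
    //   fp20   -- NVIDIA's register combiner / texture shader path
    // A PS 1.4 back end would be one more profile in this list.
    float4 main(uniform sampler2D tex, float2 uv : TEXCOORD0) : COLOR
    {
        return tex2D(tex, uv); // same source, different output per profile
    }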
 
Doomtrooper, how are these technical arguments? Are you 100% certain that Cg will make it impossible to use Pixel Shader 1.4?

2) Who invented a 'C' compiler that is supposed to be neutral and platform independent, yet doesn't support Pixel Shader 1.4, a DX 8.1 feature?

...and who will release another version of Cg that is DX9 compliant and will also include Pixel Shader 1.4?

Nvidia.
 
Wait a minute, Cg is platform independent... why must we wait for Nvidia to catch up to support this supposedly open-source 'C' language? PS 1.4 is part of DX8 and should be included if they were trying to keep this platform-independent image.
Nvidia is looking after Nvidia.
 
Wait a minute, Cg is platform independent... why must we wait for Nvidia to catch up to support this supposedly open-source 'C' language?

You mean wait to support DX9? Of course the GeForce 3 is only DX8.0 compliant, so they supported that first; but it's not like they're flat-out refusing to support PS 1.4 at all, considering it will be supported in the next revision of Cg along with DX9 support.

Still, this isn't a technical answer. I think what he wanted to see is something more along the lines of, "it's not possible to change the supported features of Cg through changes in the open-source compiler..."

PS 1.4 is part of DX8 and should be included if they were trying to keep this platform-independent image.

PS 1.4 is part of DX 8.1, not 8.0.

Nvidia is looking after Nvidia.

Of course they're looking after themselves, DUH. As if ATI wasn't looking after its own interests when it pushed for DX 8.1? Right.
 
Why must we wait for Nvidia to catch up to support this supposedly open-source 'C' language? PS 1.4 is part of DX8 and should be included if they were trying to keep this platform-independent image.

Because obviously there is an infinite amount of manpower in Santa Clara, and everything can be implemented immediately, so anything that's not present is a deliberate attempt to stall progress.

There's quite a bit of work that still needs to be put into Cg, and for debugging purposes there are more NVIDIA graphics cards at NVIDIA than ATI cards. Developers are more interested in PS 1.1 DX8 hardware than PS 1.4 DX8 hardware, so that's where the DX8-targeted effort is going initially.
 
PS 1.4 is part of DX 8.1, not 8.0.

Incorrect. PS 1.1 is now part of DX 8.1; DX 8.1 compliance covers Pixel Shader 1.1 through 1.4 or anything in between. That backwards compatibility is something Microsoft is famous for.
 
Doomtrooper, Nvidia didn't even support their own OpenGL extensions for fragment shaders in the first version! Sheesh! Give them time. The first version wasn't optimized for anything except DX8.0.


None of your arguments are technical. By technical argument, we meant something like "This feature of the Cg programming language can only be implemented on NV30 optimally, and not on any other DX9 compliant hardware."

I'm waiting.
 
The thing I find most ironic is the fact that people don't seem to have a problem with a company like Microsoft dictating things... a company that has already been ruled a monopoly...

Does anybody get that warm/fuzzy kind of feeling that the issues surrounding the OpenGL patents are just the beginning?

OK, not to totally segue from the topic at hand... My point is that, although nVidia is obviously interested in making $$, the company is really full of people who have as much interest in moving things forward as you or I. Whether you want to accept this reality or not is a different story.

Having said that, the positives in/around Cg far outweigh the negatives, IMHO. Let's face it... nVidia has a lot of pull with the development community. If this is the thing that finally accelerates the time in which new technology makes it into games... I'm all for it. Quite frankly, it's just really sad how underutilized current 3D hardware is in today's games.
 
Is CG open source or not?

Is it copyrighted?

If it is not open source and is copyrighted, then how can anyone else (ATI, Matrox, etc.) optimize it for their hardware? => It is obviously optimized for Nvidia hardware.

Did the graphics chip makers sit together and design a new language based on standards, or was it based on Nvidia hardware?

This is all so obvious to me that I am not even sure why anyone would question that Cg is meant to benefit Nvidia more than its competitors. Everything else is politics, smoke and mirrors.
 
It is patently obvious to anyone with even an introductory knowledge of computer science that Cg is a pretty generic shading language. It has variables, it has loops, it has conditionals, and everything else you've seen in 1001 other languages.

Cg is NOT assembly language. It is abstract. There is no specific mapping in the language to NV30 hardware, any more than RenderMan shaders map to the NV30. The engineers at NVidia probably started out designing the language the way anyone designs a programming language: they need variables, datatypes, loops, conditionals, procedures, etc. Then they work backwards and start REMOVING things that don't work.

You see folks, even the simplest computer language is usually Turing-complete, which means it is equal in computational power to every other Turing-complete language. LISP, whose language spec can fit on a single sheet of paper, is currently more powerful than Cg and RenderMan. In all likelihood, Cg was designed by looking at RenderMan and slimming it down so that it would work on DX9-class hardware, not the other way around (starting with DX9 and adding more until it exceeded DX9-class hardware).
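
To make that concrete, here is a deliberately generic Cg sketch (hypothetical function and parameter names): ordinary variables, a loop and a conditional, with nothing in it tied to any one vendor's hardware:

    // Generic C-style constructs; on profiles without real branching,
    // a constant-bound loop like this is simply unrolled by the compiler.
    float4 shade(float3 N, float3 L[2], float4 lightColor[2])
    {
        float4 result = float4(0, 0, 0, 1);
        for (int i = 0; i < 2; i++) {       // C-style loop
            float d = dot(N, L[i]);         // standard library intrinsic
            if (d > 0)                      // C-style conditional
                result += d * lightColor[i];
        }
        return result;
    }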


If you have evidence to the contrary about Cg's NV30-specific *language features*, put up or shut up.
 
The thing I find most ironic is the fact that people don't seem to have a problem with a company like Microsoft dictating things... a company that has already been ruled a monopoly...

Microsoft doesn't sell 3D hardware. It does not have a direct interest in seeing one IHV's hardware succeed over another's.

Do you have a problem with how Microsoft has evolved Direct3D? They seem to have done a pretty damn good job to me! Seems to me Direct3D has gone from the bastard child to actually superseding OpenGL (with DX9), in a relatively short period of time.

OK, not to totally segue from the topic at hand... My point is that, although nVidia is obviously interested in making $$, the company is really full of people who have as much interest in moving things forward as you or I.

Same with Microsoft. Microsoft wants their OS running the software. And the kicker is, MS doesn't care if it's nVidia's GPU, ATI's, Matrox....

Quite frankly, it's just really sad how underutilized current 3D hardware is in today's games.

Yes, but that's not the fault of the API as far as I can tell; it's simply a consequence of developers targeting an installed base of hardware.
 
noko said:
Is CG open source or not?
Yes, several significant parts of it actually are. Others unfortunately aren't, yet...

noko said:
Is it copyrighted?
Yes. Now I have a question: is that necessarily a bad thing? I know a great number of useful tools, programs, etc. that are copyrighted and yet are still quite useful...

noko said:
If it is not open source and is copyrighted, then how can anyone else (ATI, Matrox, etc.) optimize it for their hardware? => It is obviously optimized for Nvidia hardware.
There is no proof of any significant optimization for Nvidia hardware specifically, beyond the current lack of PS 1.4 support. Cg is still in beta, though, and that has been announced to change in future versions. Of course it is reasonable to assume that Cg's current compiler is probably trying to optimize for Nvidia hardware, but with the documentation and open source of Cg available, it would probably be possible for other vendors to create their own optimizations, if they chose to invest the resources in it. Someone more experienced with compilers would need to comment on this, though...

noko said:
Did the graphics chip makers sit together and design a new language based on standards, or was it based on Nvidia hardware?
I must have missed the moment when going through a committee of some sort suddenly became the only acceptable way to release a product relating to real-time 3D. <cough>
Have the greatest technological innovations of the past (not intending to put Cg amongst their ranks) all been created by commissions, committees or the like? No.

noko said:
This is all so obvious to me that I am not even sure why anyone would question that Cg is meant to benefit Nvidia more than its competitors. Everything else is politics, smoke and mirrors.
There we have the popular "it's so obvious, why can't everyone else see the underlying truth" statement again. Please accept that others might not be as bright as you are... <louder cough>
 
So far, I haven't seen a single reason why Cg is being developed at the expense of any other company.

As far as lack of support for PS 1.4:
nVidia has no graphics cards that currently support PS 1.4, and why compile to 1.4 on an NV30? If ATI wants it, let them design their own modification of the compiler. After all, nVidia has open-sourced everything now...

There are only two things that nVidia could do to make other manufacturers' hardware less effective with it:

1. Prevent the exposure of new features that other IHVs support and nVidia doesn't. I don't know how nVidia could do this, personally, as the whole thing's been open-sourced. And from an ATI vs. nVidia standpoint, there's nothing that the R300 supports that the NV30 doesn't, according to what we currently know about the shaders of the two products.

2. Somehow design the language to be more optimizable on nVidia hardware. This seems very far-fetched to me, as the whole purpose of an HLSL is to separate the developer from the hardware... which limits the chances for hardware-specific optimization at the language-specification level. Still, I would be very interested in some specific examples of how this could be done, if anybody can come up with some; the sketch below shows the flavor of what such an example would have to look like.
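
Purely as an illustration of that flavor (not a claim that it is decisive either way): Cg's type system defines reduced-precision types alongside float, and how cheap those are is a property of the hardware rather than of the language specification.

    // Cg defines float (32-bit), half (16-bit float) and fixed
    // (low-precision fixed-point) types. Shaders written around
    // half/fixed only pay off on hardware with native reduced-precision
    // paths; elsewhere the compiler must promote them to full floats.
    half4 tintedSample(sampler2D tex, float2 uv)
    {
        half4 base = tex2D(tex, uv);              // narrowed to 16-bit
        fixed4 tint = fixed4(1.0, 0.5, 0.5, 1.0); // low-precision constant
        return base * tint;
    }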

Anyway, all that we need for Cg to come into widespread usage is for some other hardware vendor to create their own compiler that compiles to their own shader profiles. Still, I really think that the only way for this to happen, realistically speaking, is for Cg to converge with DX's HLSL and OpenGL's HLSL. That is, I think, the greatest promise of Cg... that it could possibly merge GL and DX shader programs.
 
DemoCoder said:
It is patently obvious to anyone with even an introductory knowledge of computer science that Cg is a pretty generic shading language. [...] In all likelihood, Cg was designed by looking at RenderMan and slimming it down so that it would work on DX9-class hardware, not the other way around.

If you have evidence to the contrary about Cg's NV30-specific *language features*, put up or shut up.


I agree completely. If any optimizations are made for specific hardware, it would be when the code is transformed into hardware assembly... and it seems ATi could write their own Cg compiler that optimizes for the R300.
 
Let me make my viewpoint clear.

It's not that Cg in its current form is impossible for other IHVs to use, or even unacceptably unfair to them, in the sense that nothing stops them from writing their own compilers.

The concern is... WHY would other companies do this when there are IHV-"independent" HLSLs to write compilers for? What could Cg offer ATI or Matrox that DX9 HLSL or OpenGL 2.0 won't?

I've said it in other threads: I don't have any problem with what nVidia did... I just don't see it becoming an industry-accepted standard unless nVidia relinquishes control over the specification. And if it has little chance of being accepted as an industry standard, then I'd just as soon the industry not be bothered with it.

I would have much rather seen nVidia release something like: "Here's the current beta specification for DX9 HLSL, and here are the other Cg components, compiler, etc..." The route they took instead has a better chance of dragging the industry through a more complicated "Shading Language War" than there needs to be.
 
DemoCoder said:
If you have evidence to the contrary about Cg's NV30-specific *language features*, put up or shut up.

Show us the source code or ditto :p
 
I think asking for proof of how Cg will benefit nVidia at the expense of others is a fallacy. We have no idea where other vendors will advance technologically... the problem isn't Cg as it is now, which seems to be identical to DX9 HLSL by all reports, but that any future changes, or lack of changes, will be determined solely by a party with a specific interest in including changes that benefit itself and excluding changes that benefit others.

Someone brings up Microsoft...

Well, hypothetically, could the current situation be considered similar to offering a free and feature-rich application that generally adheres to an accepted standard for content delivery?

Follow me for a "What if?" session.

What if they proceeded, as time went on, to specify particular alterations, adding functionality as they saw fit?
Hmm... what if a suite of development tools tuned to producing output for the specification in question evolved, but only along the path of this altered, vendor-centric specification, and became popular among developers?
What would then happen to the possibility of someone revising and enhancing the specification to suit the interests of the consumer or the vast majority of vendors, even though it might hurt one particular vendor's competitiveness by allowing competitors an opportunity to excel?
Well, it seems like such an initiative to evolve the specification couldn't succeed if the vendor that might lose its competitive edge is the one who controls this popular altered specification.
That vendor would then have a rather heavy influence on the success or failure of any initiative to modify the specification toward an evolving standard or new direction that someone else might come up with and the rest of the industry might wish to follow.
It seems they could then leverage this influence for financial gain, and/or they could delay a new specification that might benefit the consumer or other competitors until such a time as this one vendor has the desired tools in place to ensure it profits from the change... assuming it doesn't prefer to offer its own version of the change, tuned to its particular goals and toolset, whenever that might be.


My question is: why risk the above? What is Cg giving us that DX9 HLSL isn't? And OpenGL 2.0's HLSL? I don't count the support for their enhanced vertex shader lengths and instructions, as a good HLSL shouldn't be so close to the metal that it excludes other hardware from benefiting; the likely NV30 superiority over basic Pixel/Vertex Shader 2.0 lies in the published "assembly" language specifications, not the HLSL.

Runtime compiling?
Hmmm... won't DX9 offer that? I mean, if DX9 offers that at the driver/API level, what exactly is the point of Cg? It seems like there is no point as far as DX goes, unless it is to co-opt the initiative away from DX9 HLSL by an earlier release... or perhaps DX9 HLSL doesn't offer runtime compiling?

OpenGL? Hmm... what about OpenGL 2.0? The capabilities seem similar... is it that OpenGL 2.0's HLSL won't support runtime compiling?

Maybe downward compatibility, as has been mentioned? Compiling at run time provides an abstraction that allows effects to be designed to a high target and still be supported in some form by cards that can't handle the full effects load. This does assume that it is the only runtime solution, however... if it is, perhaps this is a clear benefit.
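
For what "runtime compiling" looks like in practice, here is a minimal C sketch against NVIDIA's published Cg runtime (error handling omitted; "main" is assumed to be the shader's entry point): the application asks which fragment profile the installed card supports best, and compiles the same source against it on the fly.

    #include <Cg/cg.h>
    #include <Cg/cgGL.h>

    /* Compile one Cg source string at run time for whatever fragment
       profile the installed hardware handles best. */
    void loadFragmentShader(const char *source)
    {
        CGcontext ctx  = cgCreateContext();
        CGprofile best = cgGLGetLatestProfile(CG_GL_FRAGMENT);
        CGprogram prog = cgCreateProgram(ctx, CG_SOURCE, source,
                                         best, "main", NULL);
        cgGLLoadProgram(prog);      /* hand the compiled code to the driver */
        cgGLEnableProfile(best);
        cgGLBindProgram(prog);
    }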

Or, perhaps, is it the appeal of one HLSL for both DirectX and OpenGL? This also seems a useful goal, but it likewise magnifies the negative possibilities noted above.

What if this altered specification were adopted and shipped on the next most popular system for content delivery?
Hmm... well, what a nice tool for this other popular system. Wouldn't it be too bad if the vendor tried to leverage that into directing the evolution of this other system, once its altered specification for content delivery became popular for it, and hence dependent upon the vendor? Would it be possible that in the meantime, by assuring a wider adoption (perhaps even a monopoly) of its altered specification, they would enjoy an even stronger position of leverage and control, which they could then use to suit the market to their profitability, even at the expense of competition?



Note, the only reason it is not bad that Microsoft specifies DirectX is because they don't make any video hardware (give them time :-? ).

It just seems to me that the one flaw apparent in Cg is an overwhelming one for a rapidly growing and evolving industry.
 
How Cg favors NVIDIA products (at the expense of others)
I agree with Joe. The answer to the question is not technical but strategic and legal. Whoever has the control has the market.

I am a consumer; I would rather have an open standard, not Cg or RenderMonkey or M$ HLSL.
 