How Cg favors NVIDIA products (at the expense of others)

Deflection said:
This is what I was getting at. SIS, ATI and all the rest FOLLOWED the OpenGL 2.0 and DX9 specs, just as every card from every company has, going back several years. However, this has all changed now. The NV30 went far beyond the DX9 spec. Thus you can write a shader routine that runs FASTER on the NV30 than on the R300, even though the game says DX9 on the box.

That paragraph is just asinine. Why shouldn't developers have the chance to take advantage of that if Nvidia provides it to them?

The Cg language is open-sourced.

1) To me it seems clear that the NVIDIA CG COMPILER will, in the future, be able to optimize for an NVIDIA card. Great, faster game for Nvidia!

2) If developers like coding in Cg, what is to prevent ATI from writing their own optimized Cg compiler that links ATI-specific libraries or generates routines optimized for ATI? Great, faster game for ATI!

3) Why shouldn't a company be able to optimize for their own card? There will still be base Cg shader code that a developer could write that will work on both platforms and will compile under both compilers. It is the developer's call what to code. In fact, the developer could presumably include both optimized versions, or neither.

4) (Possible reason for Cg) Could NVIDIA be promoting Cg so that they can create an optimized compiler? Something they could not do with the DX or GL HLSLs? (i.e., there will be one and only one compiler that Microsoft will make for DX HLSL. MS will not allow other companies to create an optimizing compiler.)

Again, the developer will decide whether to support these optimized compilers. What is wrong with that? Do you want your gaming experience limited to the lowest common denominator (which, unfortunately, developers will choose most of the time anyway)?

Hi deflection

There is nothing wrong with any company trying to make money. In fact, one good side of Cg is that it pushes the technology forward and counterbalances the M$ position. In the long term, though, I would rather see a non-proprietary standard HLSL and basic tools used by many, many different hardware and software companies (SIS, S3, ATI, IMG, Intel, AMD, Nvidia, M$, Linux, software tools, etc.).

My guess is that the lowest common denominator is determined by the hardware/software installed base. My supposition is that a common open standard shared by the many companies above could leverage competition, speed the deployment of a large and effective installed base, and enhance the consumer's experience with his/her applications.

In my view, the quality of what I will experience will depend more on this installed base than on what I can buy.

Summing up, I will be happy when many people are happy :)

To Basic, thanks again ;)
 
Sharkfood said:
Unless now you are going to tell me NVIDIA is certifying its back-end compiler will produce working R200/R300 code...
If you're targeting DirectX, then yes, it will.
OpenGL is another story as there's still no common ground for shaders.
 
Sharkfood:

If you assume that the output is Output or vertout, then the shader code compiles to:

dp4 r0.x, c0, v0
dp4 r0.y, c1, v0
dp4 r0.z, c2, v0
dp4 r0.w, c3, v0
mov oPos, r0

And I see no problem with compiling to DX9 code if there is a profile in place to produce DX9 assembly. The only time you can support more than the profile allows is by overriding functions within the profile or by creating a new profile to compile to.
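
For reference, the kind of Cg vertex program being talked about here would look roughly like the sketch below. Aside from vertout, which is mentioned above, the struct and parameter names are illustrative rather than taken from anyone's actual code; under a DX8-class vertex shader profile, source like this compiles down to the four dp4 instructions in the listing.

struct appin {
    float4 position : POSITION;   // object-space vertex position
};

struct vertout {
    float4 HPosition : POSITION;  // clip-space position for the rasterizer
};

vertout main(appin IN, uniform float4x4 ModelViewProj)
{
    vertout OUT;
    // One matrix-vector multiply: each row of ModelViewProj (c0-c3 above)
    // is dotted with the input position, i.e. the four dp4s in the listing.
    OUT.HPosition = mul(ModelViewProj, IN.position);
    return OUT;
}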
 
Xmas said:
If you're targeting DirectX, then yes, it will.
OpenGL is another story as there's still no common ground for shaders.

According to the information we have now, Cg is designed to output for standard pixel/vertex shader extensions (i.e. ARB extensions). Unfortunately, I don't believe those exist right now. We'll see how it develops in the future...

Update:
I checked the OpenGL extension registry, and it appears that a version of nVidia's NV_vertex_program extension has made it through the ARB (as ARB_vertex_program):

IP Status

NVIDIA claims to own intellectual property related to this extension, and has signed an ARB Contributor License agreement licensing this intellectual property.

Microsoft claims to own intellectual property related to this extension.

I believe that what this means is that nVidia will allow any company to use this extension through OpenGL, but Microsoft still appears to have issues.

The extension was approved 6/19/02.

It is not currently supported in nVidia's drivers (not in version 30.30), but I'm sure that will change soon.

If nVidia fails to include the ARB extension in both the final release of Cg and nVidia's next driver set, I think we all have the right to be very disappointed (...and I don't generally expect to be disappointed by nVidia :).
 
So I take it Cg is partially open-source, right?

I also take it that Cg is copyrighted.

Now, can ATI, Matrox, or whoever take that copyrighted code, change it, and use it however they please to make their cards perform their best, or are they limited to changing only the back end?

Just simple questions that would help me better understand the position or power Nvidia has over the Cg language. Can ATI or Matrox change Cg itself?
 
noko said:
Now, can ATI, Matrox, or whoever take that copyrighted code, change it, and use it however they please to make their cards perform their best, or are they limited to changing only the back end?

From what nVidia is saying right now, the front-end is completely open-source, meaning that all of the language parsing and stuff like that can be modified by anybody however they see fit.

The backend is proprietary, but nVidia will be providing help for others to write their own proprietary backends.

In other words, the language is supposed to be standard. It's up to each hardware developer to write the code to compile to their specific hardware.
 
As it stands now, Cg is beneficial to Nvidia due to the fact that the back end was optimized for Nvidia-designed chips. This has been said over and over again throughout this thread.

Since others haven't had the opportunity to code the so-called back end and experiment with the new language, Cg isn't set up for optimum output on the other GPU/VPU designs, such as PS 1.4 doing six textures in one pass.
If other companies don't support Cg, then Cg will never be optimized to support those companies, making the language beneficial only to Nvidia and those that optimize a compiler around their hardware. Right?

So is Cg public domain??? If I can do whatever I want to it, then it has to be public domain, right? What limitations does Nvidia place on their code, and will they have a legal right to enforce stricter standards in the future? I guess someone really has to read the fine print.
 
Sharkfood said:
Also, pretty wrong. I doubt your example will compile AT ALL. NVidia's current generic backend could easily produce DirectX8 vertex shader code for this, or ARB_vertex_program. The translation would look something like (non-optimal, by hand)...

Everyone, Democoder included, seemed to miss the point of my example.

Developer A downloads the Cg kit and bangs out some code. There is no compilation path for anything outside of an NVxx chipset at this time. There is no ATI compiler back end, no P10, nada.

No, you seemed to miss the point. You post a Cg program that is a bog-standard transform, and then claim it won't work on other platforms, when in fact, YOU ARE WRONG. Cg's current version includes a profile for generic DirectX8 output, which means your example would compile and run on the R300 and R200.

In fact, your example would compile and run on a VOODOO1 under DirectX8 with software vertex processing.


Try coming up with some facts next time.
 
noko said:
Since others haven't had the opportunity to code the so-called back end and experiment with the new language, Cg isn't set up for optimum output on the other GPU/VPU designs, such as PS 1.4 doing six textures in one pass.
If other companies don't support Cg, then Cg will never be optimized to support those companies, making the language beneficial only to Nvidia and those that optimize a compiler around their hardware. Right?

There is no auto-multipass in Cg as has been mentioned over and over. If you code something that won't work on PS1.4, it won't work on a GF4 or GF3 either.

Cg's real target is DX9/OGL2.0 level hardware.

Cg's compiler will spit out generic DX9/DX8 code if it detects no specific back-end. It will just use a generic one.
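
To make the no-auto-multipass point concrete, here is a minimal Cg fragment-shader sketch (the struct and sampler names are illustrative, not from any post in this thread). Something this simple should fit comfortably within a PS 1.x-class profile; a shader that needs more texture stages or instructions than the chosen profile allows simply fails to compile rather than being split into multiple passes.

struct fragin {
    float4 color    : COLOR0;     // interpolated vertex color
    float2 texcoord : TEXCOORD0;  // diffuse texture coordinates
};

float4 main(fragin IN, uniform sampler2D diffuseMap) : COLOR
{
    // One texture fetch modulated by the interpolated color.
    return tex2D(diffuseMap, IN.texcoord) * IN.color;
}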

Moreover, Cg is going to be identical to DirectX9's built-in HLSL. Therefore, if you are an IHV and you DON'T PRODUCE A BACKEND FOR DX9 HLSL, YOUR CARD WILL DEFAULT TO THE GENERIC DX9 HLSL COMPILER IN THE D3DX library.

As I have said 100 times already, Cg will just be an NVidia implementation of the language that will be in DirectX9, and NVidia's open-source compiler is pluggable and can generate code for multiple APIs (OpenGL) and multiple profiles of those APIs (PS 1.1, 1.2, 1.3, 1.4, ARB_vertex_program vs. NV_vertex_program, etc.).

NVidia has already done this in the past. NVidia ships an ASSEMBLER for DX8 vertex and pixel shaders called NVasm, plus NVparse (not technically an assembler, but similar). All these tools do is the same thing as the BUILT-IN DIRECTX tools, except they are somewhat nicer to use, and NVidia's tool can take DirectX PIXEL SHADERS and compile them into OpenGL FRAGMENT SHADERS; likewise, it takes DX8 VERTEX SHADERS and turns them into OpenGL VERTEX PROGRAMS.

In other words, NVidia took MICROSOFT's LANGUAGE, added a few extensions for OpenGL, and increased the command line tool functionality so that you could generate OpenGL code from DirectX shaders! They made DX8 shaders portable to OpenGL.


This thread is freakin' pissing me off, because 4 months from now no one will even be talking about Cg as a language. They will be talking about Cg as a tool to compile standard DirectX9 HLSL programs for OpenGL. It's also freakin' pissing me off because there are a bunch of know-nothings like Doomtrooper and HellBinder making comments about the development process that are outright wrong. Sharkfood is a runner-up for posting a Cg "shader" (that does nothing) and claiming that it isn't runnable on other hardware, when anyone can see instantly (because of its simplicity) that it would run, and run optimally, on ANY hardware.


Can we start a new Forum on Beyond3D that only allows a select group of people to post? :)
 
DemoCoder said:
Moreover, Cg is going to be identical to DirectX9's built-in HLSL.

It will? I didn't read the discussion about licensing that way...has this changed, or did I misread? My understanding is that it was identical to the specification of the DX9 HLSL at some point and that nVidia made a vague market-speak assurance (this isn't a criticism, but my candid description of the phrasing) that they would attempt to maintain adherence to that standard in the future. My understanding is also that it is nVidia that will specify this adherence and no one else. Really, if my understanding is in error please provide the info, so I can stop worrying.

As I have said 100 times already, Cg will just be an NVidia implementation of the language that will be in DirectX9,

I didn't see this "100 times" ;) , though I do think I could have missed a few in your responses to Doom and Hell. Did you provide the info there supporting this?

In other words, NVidia took MICROSOFT's LANGUAGE, added a few extensions for OpenGL, and increased the command line tool functionality so that you could generate OpenGL code from DirectX shaders! They made DX8 shaders portable to OpenGL.

If all your statements are true, that is terrific news. Could you please provide links to the info showing it is exactly as you describe? It would be a solid answer to the questions I posed before in that post I can't get anyone to quote and reply to point by point. ;)
 
DemoCoder said:
...also freakin' pissing me off because there are a bunch of know-nothings like Doomtrooper and HellBinder making comments about the development process that are outright wrong. Sharkfood is a runner-up for posting a Cg "shader" (that does nothing) and claiming that it isn't runnable on other hardware, when anyone can see instantly (because of its simplicity) that it would run, and run optimally, on ANY hardware.


Can we start a new Forum on Beyond3D that only allows a select group of people to post? :)

Sure here ya go...

http://www.cgshaders.org/forums/ ...there's your forum, where you, Derek Smart and the other self-proclaimed geniuses of the industry can sit down and talk code.

I've got some words for you to ponder: 'ATI WILL NOT SUPPORT CG', so you can forget this 'profile BS' as ATI is not going down that road... maybe after that is driven into your skull twenty times you will see that only Nvidia will be using this ;)

I thought RenderMonkey would have made this obvious...
I wish Humus was around.
 
I've got some words for you to ponder: 'ATI WILL NOT SUPPORT CG', so you can forget this 'profile BS' as ATI is not going down that road... maybe after that is driven into your skull twenty times you will see that only Nvidia will be using this.

That's OK... When you consider nVidia's market presence, coupled with their awesome developer relations, I like their chances of succeeding in this arena.

...and should this thing get off the ground, and games take advantage of this capability, I'm sure you'll be even more pissed off because you would feel like somebody was cheating you out of something you can't run.

BTW, why do you wish Humus was around?
 
I will say this again for the last time: DirectX 9 HLSL is coming... something ATI, along with the other members, has developed, along with the OGL 2.0 HLSL. To aid in performing shading effects, ATI developed RenderMonkey to 'PLUG IN' to those TWO languages.
ATI card owners will not be missing anything...

Do you really think the next EA game is gonna say.."Requires Nvidia CG 3D Accelerator" vs. "Requires DX9 3D Accelerator"...

I don't think so :p
 
[RenderMonkey screenshots: rm1.jpg, rm2.jpg, rm3.jpg]
 
Doomtrooper said:
Do you really think the next EA game is gonna say.."Requires Nvidia CG 3D Accelerator" vs. "Requires DX9 3D Accelerator"...

I don't think so :p

Ugh... Cg is a language. DX9 is an API. Apples vs. oranges.
 
What I think is kind of funny is that it looks like Cg and Rendermonkey could actually play together nicely.
 
Doomtrooper said:
Do you really think the next EA game is gonna say.."Requires Nvidia CG 3D Accelerator" vs. "Requires DX9 3D Accelerator"...

I don't think so :p

And if game developers start using Cg, and the code generated is less than optimal for ATI's hardware, who do you think is going to get blamed by most people? If Cg gets wide acceptance, you can bet that ATI will pretty much be forced to provide their own support.
 
Chalnoth, like they are going to code a game that the R300 couldn't handle :LOL: ... and we still have not even got more than a handful of DX8 games on the market now, with DX9 releasing in a couple of months...
So you think Cg HLSL will be more popular than Microsoft's official DX9 HLSL? I don't.
 