How Cg favors NVIDIA products (at the expense of others)

Doomtrooper said:
http://cgshaders.org/forums/viewtopic.php?t=11&start=0&sid=c01c451a8694093316876520e722d610
Thanks for that link, DT. I gather you're one of the Guests? At least some of the arguments and posts remind me strongly of your style. Anyway.

I think the two most "interesting" postings in said thread are these:

By Ilfirin:
cG Facts:

- Works on all cards
- Compiles to DX8 vertex/pixel shaders
- Compiles to DX9 vertex/pixel shaders
- Compiles to OGL 1.3 vertex/fragment programs
- Compiles to OGL 1.4 vertex/fragment programs
- Works at a higher level than API-specific shaders, which means it has nothing to do with OGL2 or DX9 shading

nVidia said they will incorporate OGL2's language once the ARB starts approving some of it (all of OGL2 means nothing until it is approved by the ARB).

Why it has nothing to do with OGL2 or DX9 shading: it is another layer of abstraction. You can write your shaders in cG and just select the DX9 profile if you want DX9 shaders to be the output. If you want OGL2 shaders when they incorporate that into the compiler, just select the OGL2 profile and compile.

Arguing about this is so stupid, it is all preference. If you want to continue writing everything in Assembly go for it, if you want to write everything in cG go for it - the output is the same shader file. If you want to play with your thumbs until the shader languages of DX9 and OGL2 come out go for it.

By Inane_Dork (of Redmond--conspiracy, conspiracy ;) ):
Why is it that only the guests have these major problems?

You can program in DX9 HLSL, to be sure (when it comes out). But Cg is a superset of that. Hence, if you're making an OpenGL game, or an OpenGL AND Direct3D game, Cg is a better option.

You can whine all you want about proprietary stuff, but I dare you to find a better alternative.

I, too, fail to see the problem behind the origins of Cg. It's a language, not the second coming of Christ being marshalled by Satan incarnate. It's a TOOL to be used by DEVELOPERS. This reminds me strongly of the old Borland C vs. Microsoft C discussions. If developers can work better with one tool than with another, so? More power to them. If it doesn't do what you need it to do, drop it and use something else. Alternatives are good. Choices are good. It's not as if anybody were forced to use Cg, especially not with DX9's HLSL and RenderMonkey combined.

It's a tool, folks.

ta,
-Sascha.rb

P.S. Russ, I'm sorry to contribute to the further detour your well-intended thread has taken. But as nobody seems able to produce an example or argument regarding your original question, I figured so what. ;) -.rb
 
nggalai said:
P.S. Russ, I'm sorry to contribute to the further detour your well-intended thread has taken. But as nobody seems able to produce an example or argument regarding your original question, I figured so what. ;) -.rb

Ha. Of course, I'll provide one, expounding on one that Democoder brought up.

Swizzling is apparently a native operation on the NV30, but requires multiple instructions on the R300 (or so it has been claimed), so it will run slower on the R300. Is this NVIDIA manipulating the language to put the R300 at a disadvantage, or just adding something to the language that is natural?

Barrel shifting would be another operation that might be added to the language (e.g. output = input<<5). It might be more advantageous on one architecture than on another (for example, the DSP my company uses can only shift one bit per clock cycle, so it would take 24 clock cycles to shift 24 bits--unless you use a fractional multiply, but I digress).

HOWEVER, most of these basic numeric operations are already included in Cg, so this isn't an issue, but it does present at least a point where, if one company were planning on adding barrel shifting to their new chip, they'd be reliant on NVIDIA to add barrel shifting as an intrinsic and would tip their hand.

HOWEVER, just to contradict my own argument (fun to argue with yourself), this new operator could always be expressed with a function:

output = barrel_shift(input,5);

And the backend would generate whatever assembly is required. (I think you can even have functions with asm statements in them.) This would allow the IHV to add the functionality to the language without tipping their hand to NVIDIA. Once it gained widespread use, it might be added to the language as another intrinsic that is invoked via << instead of explicitly.
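Something like this, purely as a sketch (barrel_shift here is a made-up, profile-supplied function, not anything in the actual Cg standard library, and the exact declaration syntax is my guess):

// Hypothetical profile-supplied function: a profile whose hardware has a
// native barrel shifter would emit that instruction here, while other
// profiles could expand it into a multiply by a power of two.
float barrel_shift(float value, int bits);

float4 main(float4 input : COLOR) : COLOR
{
    float shifted = barrel_shift(input.r, 5);   // same intent as input.r << 5
    return float4(shifted, input.g, input.b, input.a);
}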

So, is Cg functionally complete so that any extra operations could be coded to provide any missing operators?

Actually, arguing with myself again: couldn't the "<<" operator be added in the profile? Or is this a parsing issue? Sigh, I really should read up on Cg and know what it can/can't do. :)

So, another question: am I mistaken, or do the R300 and NV30 have different mnemonics for their opcodes? I remember looking at some slides and seeing different names for the opcodes. Does this suggest they might also have different syntax, etc.?
 
Thanks. I know my blood boils sometimes and I might sound like a know-it-all, condescending asshole, but my 3D knowledge is far below that of others like Simon, Mfa, ERP, Fafalada, et al. However, I know enough to spot a phony or a liar.

I see, so now I'm a phony or a liar... right... You took the gloves off?? Well, now so am I. Seeing that NO ONE in charge here scolded you for getting into name-calling and insulting people, we'll see if the same rules apply to me... I did DEMONSTRATE my position with an EXAMPLE. I DID propose a THEORY and then BACKED it up. The problem is clearly that most of you are TOO DAMN BLIND to see the NOSE ON YOUR FACE!!!! How's that...

Let me try it this way..

IT DOESN'T F*ING MATTER IF CG IS OPEN-SOURCED. IT STILL HAS THE NVIDIA LABEL... GET IT?? This is not conspiracy, it is THE WAY BUSINESS WORKS. GO BACK TO SCHOOL AND TAKE A BUSINESS CLASS.....

Would Nvidia use Cg if it was developed by ATi??? NO, clearly NOT. Why??? Because every game that used Cg would now get a label on it that says *Designed with ATI Cg*. Nvidia is no more going to support something like that than fly to the moon. What you are going to get is this on many a box: *optimized for Nvidia cards with Nvidia's Cg* or *Effects done by Nvidia's Cg*... It makes me ILL that you supposedly *intelligent* people here can't see the forest for the trees... I mean PLEASE!!!!! Do you think you will EVER see a game with *optimized for Radeon with Nvidia Cg*???? Come on people, WAKE UP AND SMELL THE COFFEE!!!!!.....

This is my ONLY point, the one those here CAN'T SEEM TO GRASP: all industry-standard APIs, HLSLs, LANGUAGES, etc. HAVE TO BE CONTROLLED and DEVELOPED by INDEPENDENT parties with direct input from all. Just like the OpenGL ARB.

If you think for even one second that my examples above are just deranged imagination or something that is not going to happen... then you are a *fool*.
 
Implementing the shift as a profile-specific function is easy, since each profile is responsible for linking library functions. It would be no different than having different functions for fetching textures.
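To illustrate the analogy (just a sketch; the commented-out vendor function below is hypothetical): tex2D is already a library function whose implementation each profile supplies, so a vendor-specific operation could be exposed and linked the same way, only by the profiles that support it.

// tex2D is a standard Cg library call that each profile resolves to its
// own texture-fetch mechanism; a vendor-specific function could be
// declared and linked the same way by that vendor's profile.
float4 main(float2 uv : TEXCOORD0,
            uniform sampler2D tex) : COLOR
{
    float4 c = tex2D(tex, uv);
    // c.r = barrel_shift(c.r, 5);   // hypothetical profile-supplied function
    return c;
}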

Creating an operator for barrel-shift would require changing the grammar, though, which would be a change to the language specification. Since the front-end is open-sourced, somebody (and preferably a 3rd party, rather than another IHV, just to avoid any possible arguments) could create a derivative Cg front-end that supports this operator; however, the derivative front-end couldn't be called Cg until NVIDIA decides to add it to the core Cg spec.

So, is Cg functionally complete so that any extra operations could be coded to provide any missing operators?

Pointers could get really ugly, since there wouldn't be any type-checking at all, but they can be done with profile-specific functions.
 
Hellbinder,

If you could manage to keep concepts like "If you don't see this you are a fool" and "no one here seems to be able to grasp" out of your posts, perhaps it would help.


As it stands, you come across with posts full of emotion, insults, and swearing, and anything you may be trying to support along with it gets drowned out.

You never replied, or modified your behavior, when I made this observation on Rage3d, but it has been a month or two I think and maybe I'll have better luck this time.
 
Re: Wishful thinking

demalion said:
I think it is a mistake to predicate a defense of Cg based on what you think the industry will have to do because it makes sense to you (and me). There is no reason to presume that developers won't stick to Cg even if it disadvantages or fails to expose the full featureset of other hardware (look at glide)...you give them too much credit as a collective group.

IIRC, Glide grew in popularity simply because there were no other decent options in the marketplace. Once D3D became a viable solution, I recall a mass migration away from Glide, and widespread industry agreement that proprietary tools were a Bad Thing. The developer community quickly rallied to the cry of "one API for all hardware is a Good Thing". It was a big deal. What percentage of 3D games in the V4/V5 days had native Glide support? Couldn't have been many. Glide had all but disappeared well before 3DFX did.

I foresee a similar reaction if the developer community perceives that Cg somehow benefits one vendor over another. Other viable HLSLs will be available, so there will be no reason for developers to stand by an HLSL implementation that doesn't provide the same level of support for all hardware.

All they will do is target Cg, and settle for however that falls out for other hardware vendors. They have no reason to care if a specific hardware part stumbles because of Cg implementation decision, since with Cg being the target, no one will buy that hardware....it doesn't affect their sales as they see it. Whether that is best for the direction of the 3d industry, you'll end up depending on one vendor, and one vendor alone, to determine, which, as I state, is undesirable.

This is a chicken/egg argument. Will games not sell because they run slow on certain hardware, or will hardware not sell because it runs certain games slowly? I argue the former. Ultimately, I believe developers target 3D games to hardware, not to languages or APIs. How it performs on the hardware is all that matters in the end, because that is all that matters to the consumer. You just can't release a game that runs 60 fps on Nvidia's latest hardware, and 20 fps on ATI's latest (similarly-powerful) hardware. The gaming community would crucify you.

Developers have hardware from all of the major players, and test on each configuration during development. I think that if Cg disadvantages one piece of hardware, this would be identified during development and corrected (i.e. through the use of a different code path using a different HLSL). And the developer wouldn't be very happy with Nvidia and Cg.

I maintain that Nvidia has to keep developers happy to ensure the success of Cg (developers are, after all, the target market), and to do this, they must be careful to not disadvantage other vendors' hardware. If Nvidia tried to pull a trick like that, developers would see through it instantly and move on to greener pastures.

P.S. Doh! I just realized how to access PMs, and saw your original message. Never used that feature of the site before.
 
RussSchultz said:
Swizzling is apparently a native operation on the NV30, but requires multiple instructions on the R300 (or so it has been claimed) so it will run slower on the R300.

Swizzling of what? The input/output channel components? Component swaps of intermediate data?
 
swizzling of the different components of the quartet. (Making RGBA to BGRA).

Why, I don't know, but hey...somebody mentioned it. :)
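
For clarity, in shader-language terms that's just component swizzle syntax, e.g. (a trivial sketch, not tied to any particular profile):

// Reorder the components of a float4 in a single expression:
// RGBA in, BGRA out.
float4 main(float4 color : COLOR) : COLOR
{
    return color.bgra;
}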
 
Hellbinder[CE] said:
Would Nvidia use Cg if it was developed by ATi??? NO, clearly NOT. Why??? Because every game that used Cg would now get a label on it that says *Designed with ATI Cg*. Nvidia is no more going to support something like that than fly to the moon. What you are going to get is this on many a box: *optimized for Nvidia cards with Nvidia's Cg* or *Effects done by Nvidia's Cg*... It makes me ILL that you supposedly *intelligent* people here can't see the forest for the trees... I mean PLEASE!!!!!

Wait a second here, this is a completely different argument. You are no longer claiming that Cg is in any way technically deficient or provides a technical advantage to Nvidia, you are simply stating that widespread adoption of Cg would be a marketing coup for Nvidia. I wholeheartedly agree! In fact, I pointed this out several posts ago, and I believe this is Nvidia's primary motivation.

My prediction: Cg will be successful because developers will like it and it is proven to provide no technical advantage to Nvidia (it supports all hardware in a fair and equitable manner). And Nvidia will benefit as game boxes include little buzz-phrases like "designed with Nvidia Cg".

You may think this is just as anti-competitive and "evil" as using Cg to gain a technical advantage. But it is a completely different argument.
 
RussSchultz said:
swizzling of the different components of the quartet. (Making RGBA to BGRA).

Heh, interesting. No, it doesn't take multiple instructions. I guess you can say it is native as well. What you heard was FUD.
 
hax said:
RussSchultz said:
swizzling of the different components of the quartet. (Making RGBA to BGRA).

Heh, interesting. No, it doesn't take multiple instructions. I guess you can say it is native as well. What you heard was FUD.

Unimportant, it was only an example of something that might be used.
 
SteveG said:
My prediction: Cg will be successful because developers will like it and it is proven to provide no technical advantage to Nvidia (it supports all hardware in a fair and equitable manner). And Nvidia will benefit as game boxes include little buzz-phrases like "designed with Nvidia Cg".
The real question is, will developers prefer Cg to other upcoming shading languages that are optimized for all platforms? Especially when those other shading languages enjoy broad industry support and promotion, rather than just that of a single company.

I think the answer will be no... and that's why Nvidia is considering merging Cg with these other languages. As I stated earlier, it will probably all come down to whether they are willing to give up getting their logos everywhere in order to advance the state of the industry.
 
Re: Wishful thinking

SteveG said:
demalion said:
I think it is a mistake to predicate a defense of Cg based on what you think the industry will have to do because it makes sense to you (and me). There is no reason to presume that developers won't stick to Cg even if it disadvantages or fails to expose the full featureset of other hardware (look at glide)...you give them too much credit as a collective group.

IIRC, Glide grew in popularity simply because there were no other decent options in the marketplace.

At the time of Cg introduction, the same can be said.

Once D3D became a viable solution, I recall a mass migration away from Glide, and widespread industry agreement that proprietary tools were a Bad Thing.

Well, Unreal Tournament was largely affected by being tied to a prior investment in Glide usage, wasn't it? Also, Glide didn't try to layer on top of OpenGL and Direct3D, so it wasn't a case where fully supporting Glide could be done while still offering perhaps reduced functionality on all other hardware.

The developer community quickly rallied to the cry of "one API for all hardware is a Good Thing". It was a big deal.

Wasn't the quick rally because glide was completely unsupported on other hardware? My concern isn't complete lack of support, but reduced functionality that is "good enough" for developers, and only fully exposed performance/image quality-wise for those consumers who use nVidia hardware (now, and in the future). The seduction of being familiar with glide (Cg) will still be there, but not the concern from the developer's standpoint of losing sales. It could serve to lock nVidia into the 3dfx spot more effectively than 3dfx could manage...is that a good thing?

What percentage of 3D games in the V4/V5 days had native Glide support? Couldn't have been many. Glide had all but disappeared well before 3DFX did.

See above.

I foresee a similar reaction if the developer community perceives that Cg somehow benefits one vendor over another. Other viable HLSLs will be available, so there will be no reason for developers to stand by an HLSL implementation that doesn't provide the same level of support for all hardware.

Then what is the reason for Cg to exist now? See prior posts for the full context of this question. And I doubt developers, collectively, care if one vendor benefits over another, as long as their sales aren't hurt.

All they will do is target Cg, and settle for however that falls out for other hardware vendors. They have no reason to care if a specific hardware part stumbles because of Cg implementation decision, since with Cg being the target, no one will buy that hardware....it doesn't affect their sales as they see it. Whether that is best for the direction of the 3d industry, you'll end up depending on one vendor, and one vendor alone, to determine, which, as I state, is undesirable.

This is a chicken/egg argument. Will games not sell because they run slow on certain hardware, or will hardware not sell because it runs certain games slowly? I argue the former.

? The game will still run on all hardware, just look ugly. If users complain, they can just be told to upgrade to nVidia hardware. BTW...I'm not making this scenario up. :-/

Ultimately, I believe developers target 3D games to hardware, not to languages or APIs.

I think you confuse Carmack with the vast majority of developers.

How it performs on the hardware is all that matters in the end, because that is all that matters to the consumer. You just can't release a game that runs 60 fps on Nvidia's latest hardware, and 20 fps on ATI's latest (similarly-powerful) hardware. The gaming community would crucify you.

Laf...*sigh*, well, the fps doesn't have to be so black and white; it could be 60 fps versus 20 fps to achieve a certain level of quality. And it is not ATi I'm so much concerned about in the near future, but some future competitor with some exciting new focus (or, now that nVidia has done exactly that, is it ok for it not to be possible in the future because control over the API used for all cards, not just your own, has been locked down?). Btw, that laugh and sigh is because of the game forums I've been visiting recently, not a laugh and sigh at you.

Developers have hardware from all of the major players, and test on each configuration during development. I think that if Cg disadvantages one piece of hardware, this would be identified during development and corrected (i.e. through the use of a different code path using a different HLSL). And the developer wouldn't be very happy with Nvidia and Cg.

I don't think they would care. Again, I laugh and sigh based on game forums I've been visiting recently. I think they care much more about meeting release dates (again, collectively as a group). And the scenario I envision is more like "20 fps on nVidia, 60 fps on something else" that nVidia would have no interest in facilitating.

I maintain that Nvidia has to keep developers happy to ensure the success of Cg (developers are, after all, the target market), and to do this, they must be careful to not disadvantage other vendors' hardware. If Nvidia tried to pull a trick like that, developers would see through it instantly and move on to greener pastures.

How happy, and for how long? Developers might be happy with the "program once" feature of Cg, and I say that isn't a bad thing, but why would they care about all the other concerns I have? I know gamers only care if it works for them. And if Cg leads to a scenario where a vendor has to charge less for a product because a Cg-targeted game doesn't perform better than certain nVidia hardware, even though a slight change to Cg or different targeting by developers would allow it to, well, I'm pretty sure nVidia, and therefore developers, won't be supporting that tweak, and most gamers will end up having nVidia hardware. The implication has been made that nVidia can't stop vendors from offering the tweak (I can believe this) and can't stop developers from adopting this (I don't believe this), and it would save me a lot of typing if that implication were substantiated. ;)

To be clear, I don't mind most gamers having nVidia hardware (well I do, because my hardware so happens to be more capable ;) ), I'm just concerned that this will remain true not because of competitiveness but because of the holds nVidia will have on the standard HLSL if Cg (as I understand it will be implemented) gains popularity. I do recognize that Cg will allow a situation where the capabilities of my card (PS 1.4) can transparently benefit, but that is because Cg aims above it, and Cg aims above it because nVidia hardware will be capable of that higher target in the near future. My concern is for when that last is not the case.
 
Russ,
Someone claimed that Cg and the NV30 break DX9 compatibility because they removed the shifting operations from pixel shaders and added arbitrary swizzle to pixel shaders, which the R300 doesn't have.

I responded by stating that both DX9 pixel shaders and 3DLabs' HLSL *require* arbitrary swizzle, and if the R300 doesn't have it, it is not DX9 PS 2.0 compliant. (I think the R300 does have pixel shader swizzle.)

I then posted some code showing different techniques for how it can be done via PS 1.4. It can also be done on PS 1.2/1.3.
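
For illustration only (not the actual code from that post), the general idea of emulating an arbitrary swizzle on hardware without it is to use dot products against constant masks, which map onto the dp3/dp4-style instructions PS 1.x-class hardware does have:

// Emulate an RGBA -> BGRA swizzle with dot products against constant
// masks; each dot() corresponds to a dp4-style instruction writing
// one output channel.
float4 swizzle_bgra(float4 c)
{
    return float4(dot(c, float4(0, 0, 1, 0)),   // B
                  dot(c, float4(0, 1, 0, 0)),   // G
                  dot(c, float4(1, 0, 0, 0)),   // R
                  dot(c, float4(0, 0, 0, 1)));  // A
}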
 
Demalion - I guess we'll just have to agree to disagree. :) I can see the possibility of the scenarios you describe, I just can't agree on the likelihood. I just believe Nvidia will be fair and open in the Cg implementation, because that is what the market will demand. I guess only time will tell.
 
Developers writing games now and in the foreseeable future aren't going to write shaders that only run on NV30.

The vast majority of hardware is DX7 and below. Developers will optimize for that first. Then we have DX8 hardware, which, if they have extra resources, they will support. DX9 isn't even released yet and there are no "value" DX9 cards, so it will be years before this is even a concern.

Don't you people remember a thread started recently, "Pixel shaders, 1 year later", bemoaning the fact that no one is using them? Now we have a theory that requires developers to use NV30-specific Cg features when no hardware is even available, and won't be significantly available for a year.


I will say that most developers won't even use Cg for pixel shaders or write pixel shaders at all. Most will stick to multitexture blending modes and use the TSS pipeline. However, Cg for vertex shaders will be used, and there it can be converted to run on the CPU with software processing quite easily.

But such shaders will run well on normal CPUs before developers will make sure that a 600Mhz GF2MX, TNT, or Rage128 can run them.

Again, this Cg conspiracy stuff looks ridiculous. Developers are laughing at it. They know the reality of the market they must support. Frigging DX9 isn't out yet. No hardware is in the hands of the consumers. By the time any reasonable number of DX9 cards are in consumer hands, it will be a year from now, and by then, who knows what will be the case with OpenGL HLSL or DX9 HLSL. There might even be one single standard.

Mindless conspiracy speculation.


I am done commenting in these threads. It is a stupid waste of time. Over the last few days I've logged way too many hours. In the end, Cg will succeed or fall on the backs of the developers and the market. And even then, Cg may just be absorbed by MS specs. One thing is for sure, it isn't going to be decided by fanATIcs like DT posting conspiracy theories nor by our responses.
 
Nice closing statement...I'll let you get the last cheap shot in
 
It will not be decided by nvidiots either :)

If we have so much time before any real use of an HLSL, and the developers will want to support all hardware available, then those are two more reasons to have a non-proprietary, open-standard HLSL and tools from the beginning (do the right thing).

In the end I hope the CONSUMERS will win :)

edited: DT has been more educated than you, Democoder.
 
demalion,

If I were to put myself in the shoes of a developer, I cannot imagine using a tool that would effectively make my software Nvidia-only. Although the Glide parallel is used quite extensively (by me as well), let us not forget that Glide was born out of necessity - other APIs were poorly suited to the needs of most developers. To these developers, the choice was simple: provide 3D acceleration through Glide, or have no 3D acceleration at all (I know there are exceptions, but there were very few of them). Because Glide was an API that was easily supported, many developers overlooked its proprietary nature, preferring to provide 3D support via a proprietary API rather than no support at all.

Cg is hardly in the same situation - unlike Glide in its time, it is NOT the only viable HLSL option for most people. In order to see widespread adoption among developers, it needs to offer them something to differentiate itself from other HLSLs, while not interfering with their number one goal: sell as many copies as possible. Should Cg fail in that regard, it will be abandoned in favor of a viable (and fundamentally similar) alternative - something game developers could not do back in 96-97.

I will, however, draw a more general parallel with Glide to demonstrate that the industry constantly re-evaluates its priorities and re-adjusts accordingly, a process well illustrated by the rise and fall of Glide.

Looking back to 96-97, when consumer 3D accelerators began to rise, Glide had its positives and negatives:
+ Easy to use compared to the alternatives
+ Caters to the most powerful hardware available
- Proprietary

To developers eager to cash in on new technology they were largely unfamiliar with, the positives clearly outweighed the negatives. During the 99-00 period, however, things changed dramatically. Glide was no longer night-and-day easier to use than the alternatives. 3dfx hardware was no longer clearly superior. But it still remained proprietary (by the time it was open-sourced it was, for all intents and purposes, dead). As a result, Glide was swiftly abandoned.

I contend that should the fears some have in regard to Cg materialize, it will follow in Glide's footsteps in no time.

Most of the more drastic speculation in this thread regarding the harm Cg can cause is based upon the assumption that developers would make an uninformed choice in regard to HLSLs; I think they deserve a little more credit than that. I said it 3 times already, and I will say it again: the last time I checked, the graphics industry was still free-market based. In the end, the marketplace will determine how many HLSLs we need and what form they need to take. Some of Cg's most vocal opponents argue that we don't need 3 HLSLs - well, if they really think so, why are they so worried? If Cg is not needed, it will die, end of story - no need for doomsday scenarios.

I apologize in advance for multiple typos you have no doubt encountered.
 