How Cg favors NVIDIA products (at the expense of others)

We will revisit this thread again (say 6 months from now)...some people on the forum also have contacts with companies or work for them...and mine happen to be ATI.
ATI doesn't like Cg...end of story.

We will see if the title of this thread shakes out to be exactly that...I would bet $$ on it from what I'm hearing.
 
Hmm...am I an "ATi supporter"? If you would assert so, I'd direct you to do a search on my name at Rage3D, with perhaps the keywords for some hotly contested topic regarding nVidia versus ATi. I'd point out that you'll find ATi criticism along with the "support". You'll also find me doing things like attacking many of my "fellow" "ATi supporters" over the arguments they present (probably for the same reasons you would), and defending some of my "opponents" in the "nVidia supporter" camp.

I don't think the points I raise about Cg are as simple as a black and white pro/anti nVidia stance, though it might be appealing to simplify and dismiss them.
 
Joe DeFuria said:
I'm still questioning what about the language specification would make it possible to co-opt to make it vendor specific?

There seems to be two conflicting arguments going on surrounding this:

Argument 1) nVidia introduced the Cg language partially because they need to expose hardware or capabilities that the other HLSLs won't. This implies that the Cg HLSL enables nVidia-specific functionality...one HLSL is more "suitable" for nVidia hardware compared to another. Thus, by nVidia controlling the Cg language spec, they have been able to specifically tailor it to an extent for their hardware.

Argument 2) HLSLs aren't "implementation" specific. So it doesn't matter who controls them.

How can those arguments be resolved? (Or are those arguments not correct?)

Don't bother...common sense seems to be lacking here on a grand scale.
 
Joe DeFuria said:
I'm still questioning what about the language specification would make it possible to co-opt to make it vendor specific?

There seems to be two conflicting arguments going on surrounding this:

Argument 1) nVidia introduced the Cg language partially because they need to expose hardware or capabilities that the other HLSLs won't. This implies that the Cg HLSL enables nVidia-specific functionality...one HLSL is more "suitable" for nVidia hardware compared to another. Thus, by nVidia controlling the Cg language spec, they have been able to specifically tailor it to an extent for their hardware.

Argument 2) HLSLs aren't "implementation" specific. So it doesn't matter who controls them.

How can those arguments be resolved? (Or are those arguments not correct?)

Each argument has been proposed by both "sides", with differing evaluations of what it means. They are indeed contradictory, and, again I mention my previous post, which poses questions that try to resolve this contradiction.
 
The only thing is, the current incarnation of Cg simply cannot expose any functionality that the other HLSLs can't, because it doesn't compile directly to machine language, but instead to pre-defined assembly shader languages in DX or GL.

Anyway, I think the main things that we need to pay attention to are the ability for other vendors to add their own custom functions, and the compatibility between Cg and the GL/DX HLSLs.
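To make that first point concrete, here's a rough sketch of what the current toolkit does, going by my reading of the Cg runtime (so treat the exact function names and profile as illustrative, not gospel): you hand the runtime Cg source plus a target profile, and what comes back is assembly text for that existing DX or GL shader target, which the driver's own assembler then consumes.

Code:
/* Minimal sketch: Cg output today is assembly for an existing shader
 * target (ps_x_x or ARB_fragment_program), not GPU machine code.
 * Assumes the standard Cg runtime headers. */
#include <stdio.h>
#include <Cg/cg.h>

static const char *src =
    "float4 main(float2 uv : TEXCOORD0,\n"
    "            uniform sampler2D tex) : COLOR\n"
    "{ return tex2D(tex, uv); }\n";

int main(void)
{
    CGcontext ctx = cgCreateContext();

    /* Compile against a GL fragment profile; a DX profile would work
     * the same way, it would just emit ps_x_x assembly instead. */
    CGprogram prog = cgCreateProgram(ctx, CG_SOURCE, src,
                                     CG_PROFILE_ARBFP1, "main", NULL);
    if (prog == NULL) {
        const char *listing = cgGetLastListing(ctx);
        fprintf(stderr, "compile failed: %s\n", listing ? listing : "");
        return 1;
    }

    /* The "compiled" program is just shader assembly text for the
     * chosen DX or GL target. */
    puts(cgGetProgramString(prog, CG_COMPILED_PROGRAM));

    cgDestroyContext(ctx);
    return 0;
}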
 
My opinion is it doesn't matter much because:

1) Few games are going to use DX9 features for a while

2) Even fewer are going to go beyond DX9 for a long while

3) If they do go beyond DX9, it's only going to be in a few cases, and it's likely going to be really slow, regardless of the hardware. Even being within the specs is going to be slow, if you are pushing the limits.

4) Most games will be made using Direct3D anyway, so they won't be able to go beyond the specs. It very much seems that, for most games, OpenGL support is only an afterthought for cross-platform porting. The ARB is being too slow with advancement. The reasons for this are not important (in this discussion).
 
demalion said:
They are indeed contradictory, and, again I mention my previous post, which poses questions that try to resolve this contradiction.
I'm sorry, I've re-read your post many times, and it still keeps coming down to the co-optability issue, and essentially asking: why risk the co-optability?

Is there some other issue you're addressing that I'm just missing?
 
Colourless said:
4) Most games will be made using Direct3D anyway, so they won't be able to go beyond the specs. It very much seems that, for most games, OpenGL support is only an afterthought for cross-platform porting. The ARB is being too slow with advancement. The reasons for this are not important (in this discussion).

Except id engine games, and the engines used by Bioware/Black Isle. Though that may not be many, they are, if you ask me, major development houses in their own ways (FPS and RPG, respectively).
 
Chalnoth said:
Except id engine games, and the engines used by Bioware/Black Isle. Though that may not be many, they are, if you ask me, major development houses in their own ways (FPS and RPG, respectively).

That is partly my point: few developers use OpenGL.

Doom 3 from id will not be going beyond even DX8 specs. Cg will not be used there. Maybe the next engine will be different, but no one has any idea when that will come out. The best guess would be 3+ years, and both ATI and Nvidia will have at least 4 new cards out by then. The Radeon 9700 will be of little importance by then.

Bioware's next game engine will be anyone's guess. It's going to be years away at least, so the same as I just said about id could be said about Bioware as well.
 
Surely id will switch to DX when it does what JC wants; he isn't going to stick to OGL purely for Linux support, is he? Or is OGL 2.0 going to be better than DX9/10?
 
Colourless said:
Doom 3 from id will not be going beyond even DX8 specs. Cg will not be used there. Maybe the next engine will be different, but no one has any idea when that will come out. The best guess would be 3+ years, and both ATI and Nvidia will have at least 4 new cards out by then. The Radeon 9700 will be of little importance by then.

Bioware's next game engine will be anyone's guess. It's going to be years away at least, so the same as I just said about id could be said about Bioware as well.

Of course. It will take quite a while for any HLSLs to come into major usage in most games. It would require at least DX8 hardware as the minspec. That's still a little ways off. After all, only now are we starting to see games come out that require the original GeForce as the minspec, four years after its release.

But, once HLSLs start to be used, they should dramatically accelerate the usage of new features.
 
RussSchultz said:
demalion said:
They are indeed contradictory, and, again I mention my previous post, which poses questions that try to resolve this contradiction.
I'm sorry, I've re-read your post many times, and it still keeps coming down to the co-optability issue, and essentially asking: why risk the co-optability?

Is there some other issue you're addressing that I'm just missing?

I think asking for proof of how Cg will benefit nVidia at the expense of others is a fallacy. We have no idea where other vendors will advance technologically...the problem isn't Cg as it is now, which seems to be identical to DX9 HLSL by all reports, but the fact that any future changes, or lack of changes, will be determined solely by a party that has a specific interest in including changes that benefit itself and excluding changes that benefit others.

In this thread, I'm arguing that it seems inappropriate to place the burden of proof on showing how Cg WILL be co-opted; the burden should be on showing how it will NOT be. In your other thread, you are more specific about what you are discussing, so I won't be addressing that point of dispute there.

Aside from that, I'm seeking an answer to my "What If" analogy (which, while it may not be a technical answer to the thread's posed question, does seem to me to be a logical one) and to these questions:

My question is: why risk the above? What is Cg giving us that DX9 HLSL isn't? And OpenGL 2.0/HLSL? I don't see the answer being enhanced support for nVidia's extended vertex shader lengths and instructions, as a good HLSL shouldn't be so close to the metal as to exclude benefiting from the NV30's likely superiority to basic Pixel/Vertex Shader 2.0...the specifications that have been listed are for the "assembly" language, not the HLSL.

Runtime compiling? (There's a rough sketch of what this looks like in Cg's current toolkit further down.)
Hmmm...won't DX9 offer that? I mean, if DX9 offers that at the driver/API level, what exactly is the point of Cg? It seems like there is no point as far as DX goes, unless it is to co-opt the initiative away from DX9 HLSL by releasing earlier...or perhaps DX9 HLSL doesn't offer runtime compiling?

OpenGL? Hmm...what about OpenGL 2.0? The capabilities seem similar...is it that OpenGL 2.0/HLSL won't support runtime compiling?

Maybe downwards compatibility, as has been mentioned? Compiling at runtime provides an abstraction that allows effects to be designed to a high target and still be supported in some form by cards that can't handle the full effect load. This does assume that it is the only runtime solution, however...if it is, perhaps this can be a clear benefit.

None of which I think have been specifically addressed. At this point I don't see clearly what the consumer and the industry are gaining from Cg as opposed to the alternatives.

If nothing, then it is just a learning tool and should be supplanted by, rather than used in place of, something that will grow as dictated by the industry or by someone who has a vested interest in more than one specific vendor's hardware.

If something, what? Specify that exactly, and we can begin a more productive discussion instead of going back and forth as the thread currently is doing. I propose some possible answers and my evaluations of them.
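As a side note on the "runtime compiling" item above, here is roughly what it looks like through nVidia's current GL runtime, as I understand the toolkit (take the exact function names as my best reading, not verified code):

Code:
/* Sketch: compiling and binding a Cg shader at run time through the
 * cgGL runtime, assuming a GL context is already current. */
#include <Cg/cg.h>
#include <Cg/cgGL.h>

extern const char *shader_src;   /* Cg source, e.g. loaded from disk */

void load_and_bind_fragment_shader(CGcontext ctx)
{
    /* Ask the runtime for the best fragment profile the driver exposes. */
    CGprofile prof = cgGLGetLatestProfile(CG_GL_FRAGMENT);
    cgGLSetOptimalOptions(prof);

    /* Compile from source now, at run time, rather than shipping
     * pre-built shader assembly with the game. */
    CGprogram prog = cgCreateProgram(ctx, CG_SOURCE, shader_src,
                                     prof, "main", NULL);

    cgGLLoadProgram(prog);      /* hand the generated assembly to GL */
    cgGLEnableProfile(prof);    /* enable the matching GL extension  */
    cgGLBindProgram(prog);      /* make this the active shader       */
}

If DX9's HLSL runtime ends up offering the equivalent of this at the API level, then this particular benefit is a wash, which is exactly the question.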

EDIT: in case this post is replied to instead of my original (which I would not prefer), I'll add this for the sake of context:

Note, the only reason it is not bad that Microsoft specifies DirectX is because they don't make any video hardware (give them time :-? ).

It just seems to me that the one flaw apparent in Cg is an overwhelming one for a rapidly growing and evolving industry.
 
Chalnoth said:
The only thing is, the current incarnation of Cg simply cannot expose any functionality that the other HLSLs can't, because it doesn't compile directly to machine language, but instead to pre-defined assembly shader languages in DX or GL.

Anyway, I think the main things that we need to pay attention to are the ability for other vendors to add their own custom functions, and the compatibility between Cg and the GL/DX HLSLs.
It doesn't have a compiler to the metal now. Will it have one in the future? My guess is there is no technical reason we cannot have a fast, advanced, optimized-to-the-metal Cg compiler in the Windows environment later.

I am not an nVidia basher; in fact I have a GF3 Ti200, and probably (I hope) in one month I will have an nForce mobo (a story for another thread) :)
 
Chalnoth said:
But, once HLSLs start to be used, they should dramatically accelerate the usage of new features.

I'd like to explore this further.

How will these HLSLs accelerate the usage of new features? Granted, they might make some effects prettier. But how will they make new features easier to adopt or more widespread?

The issue I have is graceful degradation. Suppose DX10 comes out, and with it two new high-end boards (from SiS and Trident, of course).

Unless the HLSL has some built-in graceful degradation for these new features (which, historically, DX doesn't seem to have), what incentive will developers have to target these new high-end boards?
 
RussSchultz said:
Chalnoth said:
But, once HLSLs start to be used, they should dramatically accelerate the usage of new features.

I'd like to explore this further.

How will these HLSLs accelerate the usage of new features? Granted, they might make some effects prettier. But how will they make new features easier to adopt or more widespread?

The issue I have is graceful degradation. Suppose DX10 comes out, and with it two new high-end boards (from SiS and Trident, of course).

Unless the HLSL has some built-in graceful degradation for these new features (which, historically, DX doesn't seem to have), what incentive will developers have to target these new high-end boards?

Don't you know you should choose a side and refrain from bringing up scenarios that might serve to weaken another argument of yours? Silly Russ... ;) :LOL:

Hmm...how would it be possible to target the high-end boards in your scenario using Cg without clearing it with nVidia? You can write your own backend, presumably, but wouldn't nVidia be able to dictate whether or not it became part of the Cg standard, and hence what developers target? Your scenario would seem to answer your own question, unless I misunderstand.
 
I'm not talking about Cg at all, just HLSL in general.

If I write a shader/neat effect for a DX10 board, it seems I also have to write ones that attempt to approximate the same effect on DX9, DX8, DX7, etc.

Or do the HLSLs out there take care of this in a meaningful manner?

If not, I think that HLSLs will make coding effects easier, but not so much that they will be a panacea for the slow adoption of new features.
 
Or do the HLSLs out there take care of this in a meaningful manner?

Would the language itself care? AFAIK this would be up to the compiler to break the effect up into as many passes as necessary - then it's up to the developer and/or user to determine whether the speed and quality of the output are up to snuff.

However, I believe Cg compilers don't presently support multipass (??).
 
When extensions to the DirectX spec are brought into use, do they pass through the HAL or the HEL? Or do they bypass the whole process entirely?
 
RussSchultz said:
I'm not talking about Cg at all, just HLSL in general.

My point is that other HLSLs seem to have a clear road of evolution to adapt to this, whereas Cg would evolve only as it suits nVidia's interests and their ability to offer the feature, and not before.

If I write a shader/neat effect for a DX10 board, it seems I also have to write ones that attempt to approximate the same effect on DX9, DX8, DX7, etc.

Or do the HLSLs out there take care of this in a meaningful manner?

If not, I think that HLSLs will make coding effects easier, but not so much that they will be a panacea for the slow adoption of new features.

My understanding is that profile overloading, when it is implemented, will allow a custom profile (synonymous with "backend"?) to ignore parts of the effect or reduce its precision (see the sketch at the end of this post). On the whole, an elegant principle, I think. The problem arises if the maximum specification is inadequate to expose a feature...nVidia will have absolute say in determining whether or not to adapt the language specification to it.

If developers and the industry cannot freely adapt Cg to higher specifications and support such enhancements in all available tools (which tools aren't "open source"?) without nVidia's permission, that is a drawback of Cg. The nature of my comments is that it seems more reasonable for the burden of proof to be on showing that developers and the industry CAN freely adapt Cg, or that they will definitely not need to. I don't think the latter can be stated or proved, and it is therefore a significant problem with Cg unless the former is true. If it is, that would be productive to demonstrate, but my understanding is that this is exactly what nVidia doesn't intend to allow at this time.

It is not that the Cg language itself can't allow this, or that the specification is technically flawed at present; it is that, if a limitation is found in the future, why wouldn't we have been better off following DX9 HLSL (which it seems would have the same flexibility, but will have an incentive to evolve without regard to one vendor's specific hardware) or OpenGL 2.0/HLSL (which might actually have fewer limitations, and seems even more likely to adapt and grow)? Again, this reiterates my other post, which would be more suitable to use for a reply.
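To make the profile mechanism above concrete, here is a rough sketch of how I understand the fallback working in practice, going by the Cg runtime (the function names are my best reading, and fancy_src/basic_src are hypothetical placeholders): the runtime can report which profile the board supports, but the developer still has to supply the cut-down shader for older targets.

Code:
/* Sketch: the developer still supplies the fallback shader; the tool
 * chain only lets you compile whichever version you chose against the
 * profile the board actually supports. */
#include <Cg/cg.h>
#include <Cg/cgGL.h>

extern const char *fancy_src;   /* hypothetical full-featured effect  */
extern const char *basic_src;   /* hypothetical hand-written fallback */

CGprogram pick_shader(CGcontext ctx)
{
    /* Does the driver expose an advanced fragment profile at all? */
    if (cgGLIsProfileSupported(CG_PROFILE_ARBFP1)) {
        cgGLSetOptimalOptions(CG_PROFILE_ARBFP1);
        return cgCreateProgram(ctx, CG_SOURCE, fancy_src,
                               CG_PROFILE_ARBFP1, "main", NULL);
    }

    /* Older board: compile the cut-down version against a DX8-class
     * profile instead (fp20 on GL, ps_1_1 on DX). */
    cgGLSetOptimalOptions(CG_PROFILE_FP20);
    return cgCreateProgram(ctx, CG_SOURCE, basic_src,
                           CG_PROFILE_FP20, "main", NULL);
}

If the language spec itself has to grow to expose a feature that this kind of fallback can't paper over, that is exactly the point where nVidia's sole control matters.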
 