Cg released

DemoCoder

Veteran
From the docs, it looks like it is compatible with Microsoft's D3DX HLSL and that NVidia and MS collaborated on it. That is, Cg is an implementation of HLSL, but instead of just generating VS and PS code, it can generate OpenGL code as well. There are a few incompatibilities right now because HLSL is still evolving, but NVidia says that in the end they will merge and be one and the same.

This seems similar to the NVASM tool they released: you can use MS's tools to compile vertex/pixel shaders, or NVidia's.

Of course, the main difference is that Cg will probably output better code for NVidia cards than D3DX's HLSL compiler. This means that if you want an optimal code path for NVidia, you use Cg to generate the NVidia code and the D3DX compiler to generate the default case.

If ATI is smart, they will release their own HLSL compiler that generates optimal code for the R200/R300.

The only negative is that NVidia doesn't appear to have a pluggable compiler, so ATI et al. can't reuse the Cg front-end but must write their own from scratch if they want optimal output. NVidia should release Cg under the LGPL with source; that would encourage far more optimal implementations of HLSL. And of course, if the NV30 is really powerful, NVidia would do much better if people were writing very intensive shaders, as that would show off the NV30 while other cards run slower or need fallbacks.
 
Well, that's a bit of a letdown.

From this article ( http://zdnet.com.com/2100-1104-935626.html ) it sounds like NVIDIA will allow competitors to write profiles; whether that gives them a level playing field as far as optimization is concerned, I don't know.

The Cg Toolkit, which includes programming libraries and a compiler for writing specific instructions for Nvidia chips, is available now for developers to download. Seitz said Nvidia will make the basic Cg code available for other makers of graphics chips to write their own compilers.

"We're going to keep our secret sauce, but we'll let other hardware vendors plug their stuff in," he said.

(I'm assuming ZDNet's interpretation is wrong; plug-ins sound more like profiles than completely separate compilers.)

Marco

PS. Hmm, Nick Triantos replied about optimization for other architectures on the cgshaders.org forum, and he seems to imply that they won't supply competitors with anything but the language spec ... of course, the left hand might not know what the right one is doing.
 
The other problem is that the compiler isn't smart enough to deal with underlying hardware capability differences. When you write vertex or pixel shaders using Cg, you still have to be aware of the target (PS 1.0, 1.1, 1.3, 1.4, etc.). The compiler will simply generate an error if you use an unsupported instruction or exceed some resource limit. For all intents and purposes, at least for pixel shaders, the C-style syntax is just syntactic sugar. You still can't use IF, FOR, procedure calls, arrays, etc., and you have to know you have only 8 instructions to work with.
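
To make that concrete, here's about all a DX8-class pixel shader in Cg can be (a minimal sketch of my own, not from NVidia's examples; the names are made up):

// ps_1_1-class pixel shader sketch: each statement maps to roughly one
// blend-stage instruction, and the compiler errors out rather than
// splitting the work across passes if you exceed the budget.
float4 main(float2 uv      : TEXCOORD0,
            float4 diffuse : COLOR0,
            uniform sampler2D baseMap) : COLOR
{
    float4 texel = tex2D(baseMap, uv);  // one texture fetch
    return texel * diffuse;             // one modulate
}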

I was hoping for something that generates multi-pass OpenGL/DX code to emulate functionality not in the base texture pipeline, like Stanford's RTSL does.

Now, realistically, NVidia has probably done the right thing, because performance and quality are poor with RTSL right now. Maybe they attempted to build more powerful compiler profiles, found out that they sucked, and punted. Perhaps they are just waiting for DX9-capable hardware, and the DX9/OGL2.0 profiles will be more powerful.

On the other hand, Cg doesn't solve the biggest problem, which is to have a single language for shaders that can gracefully degrade on older hardware and generate good code on newer hardware.
 
DemoCoder said:
From the docs, it looks like it is compatible with Microsoft's D3DX HLSL and that NVidia and MS collaborated on it. That is, Cg is an implementation of HLSL but instead of just generating VS and PS code, it can generate OpenGL code as well.

I just wanted to note that Nick Triantos (NV) posted this reply on www.cgshaders.org/forums:

NVIDIA has evolved Cg in close collaboration with Microsoft, and we've also had the language reviewed by many software developers from game companies, rendering companies, tool providers, etc. Both NVIDIA and Microsoft have made changes to our respective languages, so that the high-level languages are completely compatible. This is one of our fundamental goals with Cg, we want to make sure that developers only need to learn one shading language, but that they can use our Cg Toolkit, or Microsoft's D3DX module, or any other vendors' Cg implementations, if and when others decide to implement Cg compilers.

These other Cg compilers will probably be crucial for ATI, 3Dlabs/Creative and Matrox if Cg turns out to be a big hit amongst developers. But would we rather not have Cg at all? If the other hardware vendors lose on this, it will be because game developers are too lazy to use anything other than straight Cg, and because the [other] hardware vendors are too lazy to fight for good compilers.

Another interesting thing to note: while OpenGL 2.0 might be all fine and dandy, it seems that MS's D3DX high-level language and the upcoming next-gen hardware are what's really driving game development. This might be the reason nVidia decided to "kiss up" to MS again.
 
I thought the most interesting part of the announcement was the support from the various 3D graphics tool vendors (Maya, 3DS, etc.).

I know that right now we have issues trying to explain to artists exactly how an in-tool variable will map to an in-game shader. If the support is at the level of being able to run Cg code inside of, say, Maya, so my artists have a good solid feedback loop and could possibly even write their own shaders, then I'm all for it.

I do agree that it's a pity it doesn't support multipass to emulate more capable hardware. But I can also see the issues in doing this: partly it's performance, and partly it's the limited precision of the intermediate surfaces.
 
I can't see much difference between this and the OpenGL 2.0 shading language.

So what's the point of it? Other vendors might as well just supply OGLSL compilers instead of Cg compilers. I feel like I'm missing something here...

Serge
 
The advantage is that, because the profiles let you subset the language, the only real layer of abstraction is that you use variables instead of registers, plus some complex control structures. (But since you have to be aware of how the control structures are actually implemented, and of their limitations, just to use them, that doesn't buy you much beyond saving some typing.) So there is still an extremely straightforward mapping to vertex/pixel shaders, without any need for multipass etc. ... so instead of a high level of abstraction we have just about the lowest level of abstraction possible while still being (moderately) useful :/
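
To show how thin that layer really is, here is a minimal Cg vertex program for a DX8-class profile (my own sketch, not from the toolkit):

// You get named variables instead of v0/c0/oPos, but each statement
// still maps almost 1:1 onto vs_1_1-style instructions.
struct appdata {
    float4 position : POSITION;
    float4 color    : COLOR0;
};
struct vfconn {
    float4 hpos : POSITION;
    float4 col  : COLOR0;
};
vfconn main(appdata IN, uniform float4x4 modelViewProj)
{
    vfconn OUT;
    OUT.hpos = mul(modelViewProj, IN.position); // four dp4s in the assembly
    OUT.col  = IN.color;                        // a single mov
    return OUT;
}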

As I said, a bit of a letdown.
 
I think the fundamental problem with these shading languages is that if you don't have a scene graph, the compiler cannot efficiently generate multipass code when compiling high-level expressions. If the compiler is to generate efficient code, it must be able to group geometry with similar shaders into the same pass; otherwise you get a combinatorial explosion of passes.

Imagine you have two shaders, SA and SB. SA applies to a bunch of objects, call them OA; SB applies to a bunch of objects, OB. There is a third shader, SC, which can be done in a single pass and applies to OA and OB, plus some additional objects OC.

SA is decomposed into two shaders, one for each pass: SA1 (which includes SC) and SA2. Likewise, SB is decomposed into SB1 and SB2.

The optimal way would be to do something like this:

Pass 1:
Set shader SC, draw all objects OC
Set shader SA1, which combines part of SA with SC, draw all OA
Set shader SB1, which likewise combines part of SB with SC, draw all OB

Pass 2:
Set shader SA2, draw all OA
Set shader SB2, draw all OB

And of course, you can disable Z, stencil, and other features as required, if you are doing stencil/alpha/z test tricks.

If you don't have a scene graph, you end up doing two passes over each object, which causes too many state changes, extra setup, API overhead, etc. You also can't disable Z writes, because the Z buffer hasn't been completely filled yet. Let's not even get into handling transparency automatically. :)

In the future, I think the 3D industry needs both something like RTSL and some kind of scene graph representation.
 
I think the concept of trying to retrofit complex shading routines onto incompatible hardware is a bit nonsensical. Older hardware doesn't have enough raw fillrate to handle all of the passes that would be generated at even a remotely interactive framerate, so trying to force fancy shading features onto non-fancy hardware would result in unplayable games.

The final Cg toolkit will include a mechanism for having a program intelligently switch between different profiles (so that a game can support DX7-DX9) without additional application-level programming.

So there is still an extremely straightforward mapping to vertex/pixel shaders, without any need for multipass etc.

As shaders get longer (think hundreds to thousands of assembly instructions), debugging, modifying, and developing them becomes nightmarishly difficult without logical abstractions. Since the current Cg release only supports DX8, there isn't a huge advantage to using Cg versus hand-writing the assembly.
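
For instance (a made-up sketch of mine, assuming a future profile with a bigger instruction budget): factoring out helper functions costs nothing at runtime, since the compiler inlines them, but it keeps a long shader readable:

// Hypothetical helper: inlined by the compiler, so it is free at runtime.
half3 diffuseTerm(half3 n, half3 l, half3 albedo)
{
    return albedo * max(dot(n, l), 0);
}

half4 main(half3 normal   : TEXCOORD0,
           half3 lightDir : TEXCOORD1,
           uniform half3 albedo,
           uniform half3 lightCol) : COLOR
{
    // Reads like the lighting equation instead of a register schedule.
    return half4(lightCol * diffuseTerm(normal, lightDir, albedo), 1);
}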

I can't see much difference between this and the OpenGL 2.0 shading language.

OGL 2.0 implementations aren't available yet, nor are they likely to be for a while.

And, more importantly, OGLSL isn't supported anywhere but OpenGL 2.0.
 
I wanted to get some opinions from some of you guys here, since it seems you've all delved into the tools and the language. I've got a few comments in the comments section of my preview post on 3DGPU, and was curious whether what these people say is true:

This one person said:

Cg is an Nvidia-proprietary shader language which is limited to the capabilities of Nvidia processors (of course other GPUs can run the code too, insofar as they're more sophisticated on the shader side). But it is impossible to expose specific features not available on Nvidia hardware, even with a custom-designed compiler. That's not good. Nvidia is holding the industry back at its own lower level, as the language itself is not open.

In addition to this, the Cg shading language is simply redundant. There will be a DX9 high-level shading language as well as OpenGL 2.0's. We do not really need a third high-level shading language, and certainly not one limited to the hardware capabilities of a single vendor.

Another said this:

Looks promising for Nvidia's competition. Sounds like a pretty good bonehead move to further restrict game development for an attempt at monopolizing game development to one manufacturer. Make a smaller pasture and fewer horses will be able to graze. As if PC gaming wasn't already hurting enough.

Does anyone have any thoughts on these statements? I'm wondering whether what the first person said is right. Any insights would be really great.
 
The Cg language could be extended to fully support other architectures; the question is to what extent NVIDIA will support that. Will they give other manufacturers the information needed to generate profiles which can be used with NVIDIA's tools, for instance? (Otherwise the automatic switching of profiles in the runtime would be a bit useless; developers would still have to add switching code for other architectures.)

Or will they just let other manufacturers use the Cg trademark for their own extended implementations?

Or???

There is vague innuendo on the matter, but no solid statements AFAICS.

PS. Oops, I failed to notice that Cg is actually nothing more than NVIDIA's implementation of m$'s future shading language ... ATI will likely just present its own subset/profile of that, tailored to PS 1.4, and call it something else entirely. This whole thing seems like just a PR stunt.
 
And, more importantly, OGLSL isn't supported anywhere but OpenGL 2.0.

OpenGL is by definition cross-platform, and, well, open... As for availability and "support" (not sure what you meant), I don't see why you would need an OpenGL 2.0 implementation to write a shader compiler for OGLSL... AFAICS such a compiler is a component of an OpenGL 2.0 implementation, which each chip-maker will have to write anyway.

3dlabs provide source to an OGLSL compiler front-end here :
http://www.3dlabs.com/support/developer/ogl2/sourcecode/index.htm

So my point is basically that this doesn't seem to bring anything to the table. Why another shading language, when OpenGL and Microsoft have their own (this one supposedly being 100% syntax-compatible with Microsoft's)?

 
I don't remember OGLSL having anything like profiles, or the ability to specify multiple profile-specific shaders for the same function (with the runtime deciding which one to use), which makes it slightly less useful in the short term.
 
PS. Oops, I failed to notice that Cg is actually nothing more than NVIDIA's implementation of m$'s future shading language ... ATI will likely just present its own subset/profile of that, tailored to PS 1.4, and call it something else entirely. This whole thing seems like just a PR stunt.

So basically Cg is just an extension of a language that Microsoft developed with NVIDIA's participation? NVIDIA made it sound like the other way around ... that they developed it and Microsoft participated. If that's the case, that's a whole different, twisted perspective from the one I had in mind this whole time. :-?
 
Cg is simply Nvidia's toolkit and compiler implementation of Microsoft's standard DX9 HLSL (High Level Shader Language). They just announced their implementation before the other 3D vendors did and called it Cg. Of course they (and the other 3D vendors) played a key role in its development, but it is an open DX9 standard.

The purpose of the high-level language is to isolate developers from the details of current and future hardware shader implementations (PS 1.1, 1.4, 2.0, etc.). By defining a hardware-independent high-level language, a single shader function can run efficiently on any shader-enabled card, regardless of the shader version it supports.

This actually benefits ATI and the other 3D vendors as much as or more than Nvidia, since it means that almost all developers will automatically make full use of shaders such as PS 1.4 (once ATI ships their own compiler, of course). It also gives the hardware manufacturers more flexibility in their future shader designs.

The high-level shader language has purposely been kept reasonably close to the current low-level shader specifications to avoid poor performance. As future hardware shader implementations improve, so should the language.
 
nVidia claims to have developed this language with oversight by Microsoft. It seems probable that the DX9 HLSL *is* Cg. It seems kind of silly to believe that both companies would each work hard creating their own versions of HLSL and have them turn up identical. I don't see why Microsoft would bother with HLSL when a company like nVidia was going to do it anyway....
 
MfA,

You're right, OGLSL doesn't support profiles (AFAIK) - but it is aimed at a fairly high level of hardware (as it should be IMO). And I suppose the similarity between it and HLSL will allow a common compiler for both, or at least allow much code to be shared between compilers for the respective languages...

I guess it'll be a while before gfx hardware is general enough for a really nice high-level language. And I figure incremental updates to the language will occur as hardware improves (which IMO means it's a little early to be talking about a high-level language)...

Just out of curiosity, imagine you had your ideal chip in your garage (say a really souped up Imagine stream processor with improvements). What would your high-level programming language look like?

Regards,
Serge
 
I think that all three recently announced shader languages (NVIDIA's Cg, MS's HLSL, and OGL 2.0's shader language) were inspired both by RenderMan and by the recent Stanford University work on real-time shading languages.
 
To me it seems that Cg code could be downward-compatible with MS's HLSL, but NOT the other way around, as MS's high-level language will have more capabilities than Cg. So as soon as a developer commits to Cg, he gives up the full possibilities of DX.

There are more concerns about the concept itself. Cg compilers don't automatically generate multipass code the way Stanford's HL shading language does. That makes writing shaders painful, as you still have to deal with the limited instruction count on DX8 hardware, and with C syntax it is even harder to see where the limit is. I thought Nvidia/Microsoft were a bit more forward-looking. Now we will have to change languages once more in a year or so.
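
For example (a made-up case of mine): the source below looks like a few harmless lines of C, but the pow() alone expands into a string of multiplies, and a ps_1_1-class profile will simply refuse to compile it once the instruction budget runs out. Nothing in the syntax warns you beforehand.

// Looks tiny in C syntax, but expands well past a DX8 pixel shader's
// instruction budget (illustrative only).
float4 main(float3 n : TEXCOORD0,
            float3 h : TEXCOORD1,
            uniform float4 lightCol) : COLOR
{
    float spec = pow(max(dot(normalize(n), normalize(h)), 0), 32);
    return lightCol * spec;
}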
 