Cg released

Currently, multiple passes are not supported. In fact, an error is not even generated at compile time when the program is too long (it is generated at execution, and this is among the things to fix in the Cg compiler). Multiple passes support would likely require a different interface between the program and the Cg compiler, as well as developer support for multiple passes.
 
I should think that ATI would need access to the code generation in order to optimize Cg for their video cards.

Without revealing too much, the code generation happens in the profiles, and the compiler back-end will be given the entire DAG to handle and optimize however it wants.

Given the compiler front-end, ATI can develop a profile that targets their hardware and at worst provide it as a separate compiler executable. However, drivers already optimize DX8 assembly shaders for whatever hardware is installed, so there's no reason to assume that ATI would need to create their own DX9 profile, and once a standard OpenGL fragment extension is available, it should work similarly.
 
gking said:
I should think that ATI would need access to the code generation in order to optimize Cg for their video cards.

Without revealing too much, the code generation happens in the profiles, and the compiler back-end will be given the entire DAG to handle and optimize however it wants.

Given the compiler front-end, ATI can develop a profile that targets their hardware and at worst provide it as a separate compiler executable. However, drivers already optimize DX8 assembly shaders for whatever hardware is installed, so there's no reason to assume that ATI would need to create their own DX9 profile, and once a standard OpenGL fragment extension is available, it should work similarly.

Show me how to create a profile? Where is that in the docs, or where is the source code to add the profile to? Where can I see the front end?

As far as I can see, all that NVIDIA has given is a definition of the syntax, over which they have full control (so don't count on seeing functionality that NVIDIA's hardware does not support).

From posts on the cgshaders.org forums I get the impression that Cg is a subset of something bigger, namely the Microsoft High-Level Shading Language, which will, I assume, be introduced at some point. This Microsoft language will not be NVIDIA controlled; the syntax will be Microsoft controlled, meaning there is a bigger chance of getting new functionality in, no matter who might or might not support it. So Cg seems to be NVIDIA's specific version of this. If that is indeed how it is, why push developers to support the subset of functionality that Cg represents? Developers should support the bigger thing. Granted, Cg is being released first, which gives developers something to play and experiment with, but it is definitely not something they should stick to. Given this "subset" situation, I doubt that any vendor will bother to write a compiler for Cg unless it is easy to do thanks to the similarity with the Microsoft language, or unless enough developers fall for it and use this Cg.

Suggesting that ATI should automatically collapse multipass into a single pass is funny; I guess he also thinks that PowerVR should automatically collapse multipass rendering so that it can use the 8-layer multitexturing it supports. To collapse multipass you need to recognise that the geometry data is exactly the same (position, lighting, fog, texture coordinates, ...), just with different texture layers and blend modes. Doing this requires buffering and a lot of analysis (is it all really the same? Also remember that multipass geometry is not submitted immediately back to back; you can do pass A, some other stuff, then pass B to get better re-use of states, and try recognising in an efficient way that A and B belong together in a single pass). There is no quick or efficient way to do that.

As suggested before, it is much more practical to do things the other way around: start from the high level and build down to lower levels of capability (which in essence is what Cg tries to do, or should do). Code for 8-layer multitexture and get the driver/API/compiler to generate multipass if needed (this is fairly easy to recognise; generating the new blend operations, however, might not be that easy or even possible). What NVIDIA seems to propose is Cg -> NVIDIA shader code -> ATI compiler -> ATI shader code. Sounds like a way to slow down the runtime drivers...

Cg is a good principle; it is something that can be very powerful and something the industry needs. However, the industry does not need it coming from one hardware vendor who holds control over the syntax and compiler possibilities. OpenGL 2.0 from the ARB seems a much more open, true standard, and similarly for the Microsoft High Level Language (although that one might also be controlled by one company, at least the compiler possibilities will be there, and MS at least listens to all parties).

Can anyone point out where it says in the docs or on the websites that competitors are invited to write compilers, how they should proceed, and how they can integrate with the whole that is NVIDIA's Cg?

K~
 
Kristof, it would be nice if everyone could use OpenGL2.0 or DX9, but the reality is, they aren't here, and neither is any DX9/OGL2.0 hardware.

And even if OGL2.0 arrives in 2003, and even if hardware that can run OGL2.0 arrives in 2003, the vast majority of developers will still be working with DX8 hardware. Thus, developers need a tool *NOW*, not 18 months from now, to write games targeting today's hardware. If they wait until OpenGL2.0 arrives with compliant HW, it will be 2005 before the first OpenGL2.0-optimized game hits the shelves.


Frankly, I don't care what other vendors think about Cg. Fact is, if I was writing a game, I would use whatever 3rd party high level tools that are out there to assist in production. NVidia is shipping such a tool to create DX8 vertex/pixel shaders *TODAY*, one that has bindings for the most popular 3D modeler programs on the market for artists.

If I had to hand-tweak shaders for ATI, I'd do that by hand later after benchmarking. But as Knuth opined in the Art of Computer Programming, premature optimization is the root of all evil. I want good, maintainable code, first. I want code that works with 3rd party 3D Studio tools. I can hand-tweak the pixel shaders for 1.4 long after the artists have done their job.


It's great to talk about the potential of vaporware APIs and tools, but in the meantime people have to write games and can't wait until some future standard is approved and adopted by everyone.
 
It's great to talk about the potential of vaporware APIs and tools, but in the meantime people have to write games and can't wait until some future standard is approved and adopted by everyone.

Carmack seems to be doing just fine without Cg (i.e. Doom 3), so I assume the current API standards are not that limiting.
:-?
 
http://www.codeplay.com/press/cg_rel.html

LONDON, UK - 14th June, 2002 - NVIDIA® produce capable graphics hardware that, since the introduction of the GeForce 3, contains a degree of programmability. It is programmable in the powerful but low-level pixel and vertex shader level, these being hardware-specific features of their products.

NVIDIA® are now advocating a simplified cut-down language, known as Cg, which allows programming of those low level pixel and vertex shaders. They are promoting this as an open standard, being suitable to program all 'GPUs' or graphics processor units.

However, not all GPUs are created equal, and differences will become ever greater. With the PlayStation®2, SCEI demonstrated a programmable graphics pipeline from higher up in the rendering process, which has different demands of a graphics programming language.

In the future, graphics hardware will incorporate both low level pixel and vertex shaders as demonstrated by NVIDIA® and higher-level general programmability as demonstrated in the PlayStation®2. The Cg language is not sufficiently well specified for such hardware, particularly with reference to:

No break, continue, goto, switch, case, default. These are useful features that can be used without penalty on other vector processors.

No pointers. This is Cg's most serious omission. Pointers are necessary for storing scene graphs, so this will quickly become a serious omission for vector processors that can store and process the entire scene or even sections of it.

No integers. This may be appropriate to NVIDIA®, but is not a universal design decision.

Arrays use float indices. This is an odd design decision, relevant to DirectX 8 and NVIDIA® only.

NVIDIA® have introduced Cg as a standard, fully backward- and forward-compatible. However, the existence of reserved keywords (such as break and continue, mentioned above) is a clear indication that these features will be added when NVIDIA® hardware supports them. This is not conducive to future compatibility.

Codeplay believes that Cg is inadequate for some current, and more future, 'GPUs'. Most importantly, standard rendering code needs to move onto graphics processors, and Cg is not sufficiently flexible for this type of code.

Codeplay Director Andrew Richards says, “Overall, Cg is not a generic language for graphics programming; it is a language for NVIDIA®’s current generation of graphics card. It has a different set of goals from what’s required to design computer graphics at the heart of the successful computer games of tomorrow.”
 
DemoCoder said:
It's great to talk about the potential of vaporware APIs and tools, but in the meantime people have to write games and can't wait until some future standard is approved and adopted by everyone.

I just don't know what to think about it...

Current availability is probably one of the only things that plays in Cg's favour, which is why I hope it is only going to be used as a kind of precompiler to generate shaders that are then modified/tweaked by hand as the developer sees fit. The alternative of writing the shaders by hand in assembler is still there; for years developers have asked for more flexibility and control, and the low level gives them full control. For games, where performance is often the key point, hand-written and tweaked shaders will remain "the" thing to use for quite a while.

Most shading that you'll see in games is not going to be so high-end that it requires a high-level language; we have not quite reached the level of performance where shaders are so complex that you lose control or spend hours and hours getting the shader written (it's actually much easier to write a shader than to create a multistage blend setup as used by DX7). The only thing that's tricky these days with shaders, and especially pixel shaders, is fitting what you want to do into the limited number of instructions, and somehow I don't think the compiler will perform miracles (I am sure there are shaders that can be created by hand within the instruction count limit but that the compiler will fail to generate).

If you write a shader by hand today, it's not going to turn into rubbish next year (assuming that DX maintains its assembly language). It's not a lost resource, and most game engines probably already contain code to manage shaders in libraries, possibly even with some levels of abstraction.

Don't get me wrong: Cg is a great idea and it's the way forward; it's the way it is being introduced and hyped that is just wrong. This is not something we want to be stuck with just because it's the first thing out of the door.

Cg, IMHO: play with it, use it to generate some shaders for you, and use the results as you see fit. Don't rely on it being the future of writing shaders, since that is hopefully where true standards will come in, and those just take longer precisely because they are standards on which several parties have to agree (there are always pros and cons to each approach).

I think the Codeplay PR sums up why, most likely, no other hardware vendor will like Cg, why it is not a standard, and why developers should approach it with caution. After all, do you want to rewrite all your shaders into another high-level language when NVIDIA decides that Cg did not work out as they hoped and dumps it, or when you discover that the other languages have better compilers?

Note that I am not talking as PowerVR, just approaching this from my point of view as a developer (software/hardware) and technical editor. So, Democoder, where you would jump on it and use it (?), I would play with it but not really use it in my game engine, although I get the impression that that is also more or less what you suggest...

K~
 
Doomtrooper said:
It's great to talk about the potential of vaporware APIs and tools, but in the meantime people have to write games and can't wait until some future standard is approved and adopted by everyone.

Carmack seems to be doing just fine without Cg (i.e. Doom 3), so I assume the current API standards are not that limiting.
:-?

Limiting, no. Complex, yes.

Remember, Carmack isn't your run-of-the-mill software engineer who'd rather re-use than innovate. He doesn't have deadlines or bosses; he seems to enjoy pushing things to the limit and has the luxury of being allowed to do that.

But anyway, that isn't what's important. Remember, Cg doesn't output DX C code or OGL code snippets. It outputs vertex shader or fragment shader assembly code which runs entirely on the GPU. I guarantee that your average software engineer would much rather write these in C than in assembly, and the engineer's efficiency is greatly enhanced by doing so.
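
For a sense of what that looks like, here is a minimal Cg-style vertex program, just an illustrative sketch (the struct, entry point and parameter names are invented, not taken from NVIDIA's examples); the compiler turns it into the equivalent DX8 vertex shader assembly that would otherwise be written and maintained by hand:

// Illustrative sketch only: a trivial Cg vertex program that transforms a
// vertex and passes its colour through. All identifiers are invented for
// the example.
struct VertexOutput {
    float4 position : POSITION;
    float4 color    : COLOR0;
};

VertexOutput main(float4 position : POSITION,
                  float4 color    : COLOR0,
                  uniform float4x4 modelViewProj)
{
    VertexOutput OUT;
    OUT.position = mul(modelViewProj, position);  // emitted as a short dp4 sequence in the vs_1_1 profile
    OUT.color    = color;
    return OUT;
}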

Additionally, if games use this technology and use the run-time version of it, games actually could improve with age as either the backends get optimized, or new hardware that can more accurately/quickly implement the desired effect shows up.

The advance of software engineering is always going to be toward more abstraction, and I think this, or something like this, is the natural progression. I don't know about the upcoming Microsoft high-level language, or OGL2.0, etc. I don't care about the politics involved. All I know is this CONCEPT should improve the quality of games and accelerate the adoption of new hardware features.
 
Well, the issue is this, IMO... I am a game developer and I'm about to start coding the engine.
By the year 2003 we will have three standards, from what I understand:

1) NVIDIA Cg
2) Microsoft's HLSL
3) OpenGL 2.0 HLSL

There is a problem here... no?
 
When evaluating Codeplay's response to Cg, remember that they write compilers for the PS2. Might they have their own high-level compiler upcoming?
 
Show me how to create a profile?

Sorry - can't answer that question.

From posts on the cgshaders.org forums I get the impression that Cg is a subset of something bigger, namely the Microsoft High-Level Shading Language, which will

Nobody who has any actual knowledge of the future of Cg and HLSL has said anything to this effect, since both are under NDA. Just because a bunch of kids make a bunch of negative posts doesn't make something true.

the syntax will be Microsoft controlled, meaning there is a bigger chance of getting new functionality in

It's worth pointing out that "adding functionality" to DX often takes months after hardware is available because it *is* Microsoft controlled, and even then not all functionality is added, as Microsoft plays political games with hardware manufacturers. The register combiners in NV1x and NV2x series chips are far more powerful than what Microsoft exposed with texture environments and pixel shaders.

The Cg specification is a nice, general-purpose language. Specific GPU functionality is added in a different way (all the tex lookup instructions for the NV2x profile are part of the NV20 standard library). This is why it only seems like a small, relatively inconsequential language, now.

Carmack seems to be doing just fine without Cg (i.e. Doom 3), so I assume the current API standards are not that limiting.

Yep, you can always write shaders in assembly code, and optimize them individually for different platforms. In fact, why write programs in C at all? We can do everything we need in assembly, and custom-tailor each version to its host processor.

Just because Carmack can do something doesn't mean Joe Developer can (or has the budget to, or is willing to spend the time to) do it, too. Besides, Carmack creates his own "shading languages" for his games. You do realize that Quake III shaders were a (very) rudimentary high-level shading language, right?

I think the Codeplay PR sums up why, most likely, no other hardware vendor will like Cg, why it is not a standard, and why developers should approach it with caution.

The Codeplay PR is questionable, at best, and seems to miss the point of Cg. Three of the four problem areas are reserved for future use (break/switch/goto, integers, and indexing into arrays using floats, the last being a requirement of the NV20 profile, not of the language), and the ability to traverse scene graphs doesn't really belong in a shading language anyway. For ray-tracing GPUs, better functionality would be achieved by adding a "trace()" function (similar to RenderMan SL) to whatever profile supports ray tracing, and letting a highly optimized ray server (presumably another section of the GPU, or another chip entirely) handle the intersection test. Developers shouldn't be forced to traverse the scene graph themselves.
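
To make that concrete, here is a purely hypothetical sketch of what such a fragment shader could look like; neither trace() nor any ray-tracing profile exists in the released Cg toolkit, and every name below is invented for illustration:

// Hypothetical sketch only: trace() and the ray-tracing profile it would
// belong to do not exist in the current Cg toolkit.
float4 main(float3 position : TEXCOORD0,
            float3 normal   : TEXCOORD1,
            uniform float3 eyePos) : COLOR
{
    float3 incident   = normalize(position - eyePos);
    float3 reflectDir = reflect(incident, normalize(normal));

    // The shader only asks for the shaded result of the reflection ray; the
    // GPU's ray server would handle scene traversal and intersection
    // internally, so the shading language never touches the scene graph.
    return trace(position, reflectDir);
}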
 
It's worth pointing out that "adding functionality" to DX often takes months after hardware is available because it *is* Microsoft controlled, and even then not all functionality is added, as Microsoft plays political games with hardware manufacturers. The register combiners in NV1x and NV2x series chips are far more powerful than what Microsoft exposed with texture environments and pixel shaders.

This is exactly what I was afraid of: exposing an advantage by using a compiler optimized for a specific brand of GPU. So other cards, be it Parhelia, R300 or P-10, would not run a game at the same speed/with the same effects if it was compiled with Cg... correct? :-?

So now it's not Microsoft controlled but NVIDIA controlled... the difference is that Microsoft doesn't manufacture and/or design video cards :-?
 
How would trace() work without a scenegraph? Building an internal scenegraph for each frame from the input might be acceptable for Renderman renderers, but it is both more difficult (because of the multitude of ways vertex shaders can treat geometry) and less acceptable for Cg.

Developers shouldn't be forced to traverse the scene graph themselves.

So? The question is not whether they should be forced to do so (the only way to stop forcing them to do it themselves would be to give them a scenegraph API, which Cg is not), but whether they should be able to do it on the graphics hardware through some supposedly general-purpose language.

Another example which shows the usefulness of a true general-purpose language: even if you don't do full-blown scenegraph traversal on the hardware, hardware occlusion culling through bounding-volume tests would be a lot more useful if you could tell the hardware itself to perform:

if draw(bounding_volume) do ...
 
How would trace() work without a scenegraph?

You can still have a scenegraph on the GPU; however, you don't need it available in the shading language. The shading language is just that -- how light interacts with surfaces. The ray server hardware would already be necessary elsewhere in the pipeline (i.e., the initial ray traversal), so duplicating that functionality by expanding the shading language to also handle ray-surface intersection tests would just make everything needlessly complicated.

exposing an advantage by using a compiler optimized for a specific brand of GPU.
Why should progress be tempered because Microsoft wants to play video card manufacturers off each other? If one GPU has a collection of advanced features that aren't present in competitors' chips, and they aren't difficult to use (if exposed), why not expose them in the best way possible for developers?
 
How would you have a scenegraph on the GPU without a scenegraph API? The only possibility is to build one each and every frame from post-transform vertex data.

As for scenegraph APIs, I like 'em and all... but you would think that someone who is so heavily promoting a shading language built on the premise that one size does not fit all (which is what the profiles are for) would see why some would think that the scenegraph's structure and traversal should be left up to the developers themselves.
 
Why should progress be tempered because Microsoft wants to play video card manufacturers off each other? If one GPU has a collection of advanced features that aren't present in competitors' chips, and they aren't difficult to use (if exposed), why not expose them in the best way possible for developers?


The entire idea behind OpenGL and DirectX was to give developers an even playing field across all hardware. So what you are saying is ATI should develop their own compilers, and now we have:

1) NVIDIA Cg
2) ATI Cg
3) PowerVR Cg
4) Matrox Cg
5) 3Dlabs Cg
6) Microsoft Cg
7) OpenGL 2.0 Cg

I don't think developers are going to support this trend ??

What I'm saying is that a graphics card company is trying to own the developer community using code optimized for their GPU, and I don't want to be limited to one video card selection.
I can see it now on the back of games... Optimized for NVIDIA Hardware, Cg enhanced...
 
gking said:
From posts on the cgshaders.org forums I get the impression that Cg is a subset of something bigger, namely the Microsoft High-Level Shading Language, which will
Nobody who has any actual knowledge of the future of Cg and HLSL has said anything to this effect, since both are under NDA. Just because a bunch of kids make a bunch of negative posts doesn't make something true.

I will quote this (from CGShaders.org forum):

My name is Nick Triantos. One of my responsibilities at NVIDIA is managing the development of many parts of the Cg Toolkit, including the language specification, and the compiler. I wanted to address one of the points made on this thread:

Anonymous wrote:
Why do you insist on calling Cg a standard? It's created by one company without any input from others. NVIDIA makes it sound like they worked with MS on this, but all they did was check with MS to make sure that the code that is generated is actually compatible with the MS pixel/vertex shader instructions.


Actually, you're not correct. NVIDIA has evolved Cg in close collaboration with Microsoft, and we've also had the language reviewed by many software developers from game companies, rendering companies, tool providers, etc. Both NVIDIA and Microsoft have made changes to our respective languages, so that the high-level languages are completely compatible. This is one of our fundamental goals with Cg, we want to make sure that developers only need to learn one shading language, but that they can use our Cg Toolkit, or Microsoft's D3DX module, or any other vendors' Cg implementations, if and when others decide to implement Cg compilers.

The current Public Beta of Cg Toolkit isn't quite 100% compatible, since there were some last-minute differences we found, but the spec is reasonably up-to-date, and should accurately reflect the fact that Cg is compatible with Microsoft's HLSL.

I thought it was worth me trying to clarify this a bit for everyone.

Happy Cg'ing,
-Nick

I would say that NVIDIA's Nick is contradicting what gking says. Or would you call Nick a kid making negative posts? ;)

Or am I mis-reading/understanding things ?

K~
 
As for scenegraph APIs, I like 'em and all... but you would think that someone who is so heavily promoting a shading language built on the premise that one size does not fit all

And Cg is not trying to be a scenegraph API -- it's a shading language, nothing more, nothing less.

am I mis-reading/understanding things ?

Yep, pretty much. HLSL syntax and Cg syntax will be 100% compatible in the final releases. Depending on profiles used in Cg, the supported features may be a subset _or_ a superset of DX9 HLSL.
 
Regarding the use of multiple compilers for a common high level shader language.

This is actually a much better situation than what exists now. As it is now, developers must manually optimize for each type of 3D card. This is simply too expensive. As a result, they optimize only for a few of the most popular cards (and believe me, that is still a lot of work). Allowing developers to write their code once in a common high-level shader language, and having the hardware vendors provide their own optimizations via compilers, means that most new software will be fully optimized for all current 3D hardware by simply recompiling for the appropriate vendor as needed.
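
As a rough illustration of the idea (a sketch only: it assumes the cgc command-line compiler's -profile/-entry options and the DX8 vs_1_1 profile name, while "xyzvp1" is a made-up vendor profile and all other identifiers are invented):

// One Cg source file, retargeted purely by recompilation; the source never
// changes, only the profile handed to the compiler (or to the runtime) does:
//
//   cgc -profile vs_1_1 -entry main diffuse.cg    (DirectX 8 vertex shader assembly)
//   cgc -profile xyzvp1 -entry main diffuse.cg    (hypothetical vendor-supplied profile)
//
// Simple per-vertex diffuse lighting.
float4 main(float4 position : POSITION,
            float3 normal   : NORMAL,
            uniform float4x4 modelViewProj,
            uniform float3   lightDir,       // assumed normalized, in object space
            uniform float4   diffuseColor,
            out float4 clipPosition : POSITION) : COLOR0
{
    clipPosition = mul(modelViewProj, position);
    float ndotl  = max(dot(normal, lightDir), 0.0);
    return diffuseColor * ndotl;
}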
 