Cg released

Is it possible that, because the compiler is currently in beta, the final release will allow you to write multipass shaders? Or is this limitation inherent to the language itself and not something that can be implemented?

I guess I'm still confused about how Cg relates to Microsoft's HLSL for DX and what role Cg is trying to fulfill in the grand scheme of things, from a programmer's point of view, that is.
 
With C syntax, it's even harder to see the limits. I thought Nvidia/Microsoft were a bit more forward-looking.

Looking forward to hardware that's been available for over a year? That doesn't make much sense...

Now we'll have to change the language once more in a year or so.

No, that's exactly the opposite of the case. New profiles will need to be created, but the language won't need to change. Profiles implement subsets of a very general programming language (the full Cg spec is functionally equivalent to C). Future profiles will expose additional capabilities of the language (e.g., variable loops, longer maximum programs), but a new language will not be necessary.
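
To make the "subset" idea concrete, here is a rough sketch of my own (not from the Cg spec, and the exact profile behaviour may differ in the shipping compiler): both functions below are legal Cg, but which one compiles depends on the profile you target.

Code:
// Both functions are valid Cg source; the profile decides what actually compiles.

// Constant trip count -- a current profile can simply unroll this loop.
float sum4(float4 v)
{
    float s = 0;
    for (int i = 0; i < 4; i++)
        s += v[i];
    return s;
}

// Data-dependent trip count -- would need a future profile with true variable loops.
float sumN(float4 v, int n)
{
    float s = 0;
    for (int i = 0; i < n; i++)
        s += v[i];
    return s;
}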

and what role Cg is trying to fulfill in the grand scheme of things, from a programmer's point of view,

Write an 80-instruction vertex shader and bind a bunch of incoming vertex data channels to the appropriate attribute registers. The current model is a pain in the ass (especially for optimizing and debugging longer shaders). Now, imagine that instead of 80 lines of vertex shader assembly, you had 200+ lines of fragment shader assembly, and in addition to input data such as texture coordinates and colors, you also had a ton of textures to use.
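
As a rough illustration of what the high-level version buys you (a sketch of my own with made-up names, not code from NVIDIA's examples): the binding semantics such as POSITION and NORMAL take the place of the manual mapping of vertex data channels to attribute registers, and the compiler handles instruction selection and register allocation.

Code:
// Minimal Cg-style vertex program sketch; the uniforms and names are illustrative.
struct VertIn {
    float4 position : POSITION;   // bound by semantic, not by attribute register number
    float3 normal   : NORMAL;
    float2 uv       : TEXCOORD0;
};

struct VertOut {
    float4 position : POSITION;
    float4 color    : COLOR0;
    float2 uv       : TEXCOORD0;
};

VertOut main(VertIn IN,
             uniform float4x4 modelViewProj,
             uniform float3   lightDir)
{
    VertOut OUT;
    OUT.position = mul(modelViewProj, IN.position);
    float diffuse = max(dot(IN.normal, lightDir), 0.0);
    OUT.color    = float4(diffuse, diffuse, diffuse, 1.0);
    OUT.uv       = IN.uv;
    return OUT;
}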

Using cryptic hardware-centric notation (such as glActiveTextureUnit, or DirectX 8's vertex shader header strings) becomes an extreme chore in these cases -- enough to make developers avoid including such shaders in their programs (especially lower-budget programs).

In addition, art flow for shaders has been poorly managed, as the tools (Maya, 3DS Max, etc.) don't map to hardware well at all, and re-mapping Maya materials onto DX8 pixel shaders takes a bit of programmer work for every model, creating a large stall in the pipeline. Putting the same shader technology in the content creation packages as in the game code lets the artist tweak effects to his/her heart's content without needing much, if any, programmer involvement.

the final release will allow you to write multipass,

As shown in the slideshow, CgFX (the toolkit 2.0 release) will simplify the process of creating multipass effects; however, each pass will probably need a unique shader written for it, rather than one long shader being automatically compiled into smaller passes.
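
For anyone wondering what that looks like, here is a rough CgFX-style sketch (the state names, compile targets, and entry points are my own guesses, not from the toolkit 2.0 documentation): each pass binds its own shaders, and the second pass blends additively over the first, rather than one long shader being split automatically.

Code:
// Hypothetical two-pass effect; diffuseVS/diffusePS/specularVS/specularPS are
// assumed Cg entry points defined elsewhere in the effect file.
technique TwoPassLighting
{
    pass DiffusePass
    {
        VertexShader = compile vs_1_1 diffuseVS(modelViewProj);
        PixelShader  = compile ps_1_1 diffusePS();
    }
    pass SpecularPass
    {
        // Additively blend the specular term over the diffuse pass.
        AlphaBlendEnable = true;
        SrcBlend         = One;
        DestBlend        = One;

        VertexShader = compile vs_1_1 specularVS(modelViewProj, eyePosition);
        PixelShader  = compile ps_1_1 specularPS();
    }
}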
 
I saw this over at extremetech.

This is an interview with Kurt Akeley, one of the co-founders of OpenGL and Silicon Graphics!

ET: When using Nvidia's Cg compiler, if you are doing a run-time compile running on "brand-X GPUs", is the compiled code that is generated essentially generic DX8 or DX9 code?

KA: The beta Nvidia Cg compiler will target DX8 and Nvidia vertex programming extensions under OpenGL 1.3, though it will also support standard OpenGL 1.4 vertex programs when OpenGL 1.4 ships. The parsing is open source and the code generation is not. There is sufficient public information for other vendors to create their own compilers given the open-source parser code and the Cg language specification details that will be made public.

Could someone please explain what this means exactly?
"The parsing is open source and the code generation is not"

So, obviously it's not completely open source, but is it open enough to keep other hardware vendors happy?

Fuz
 
Matt Burris said:
Is it possible that, because the compiler is currently in beta, the final release will allow you to write multipass shaders? Or is this limitation inherent to the language itself and not something that can be implemented?

I read what another poster said somewhere (I forget who and where, unfortunately), and it makes sense.

Apparently, automatic abstraction to multipass will not happen with current or any future HLSL unless scene graphs are used. The reason is simply that if you want to do more than one pass on a single triangle before moving on to the next, you end up with tons of state changes. It is far, far more efficient to do one pass over the entire scene, then the second pass, and so on. It is impossible for HLSLs to do this properly, as they still only have the data for one fragment at a time to work with.

In the future, scene graphs may allow for automatic abstraction to an arbitrary number of passes, as the scene graph can manage all of the data for the entire scene at once.

Hrm, now that I think of it, there may be another possibility as well. For the programmers among you, imagine something like this (pseudocode):

compiledprog = compile(program)
for j = 1 to compiledprog.numpasses
    setprogram(compiledprog.prog[j])
    //draw objects
next j

In this way, instead of always outputting one assembly-language program, the compiler could output multiple programs, allowing the application to handle them however it sees fit.
 
Fuz said:
I saw this over at extremetech.

This is an interview with Kurt Akeley, one of the co-founders of OpenGL and Silicon Graphics!

ET: When using Nvidia's Cg compiler, if you are doing a run-time compile running on "brand-X GPUs", is the compiled code that is generated essentially generic DX8 or DX9 code?

KA: The beta Nvidia Cg compiler will target DX8 and Nvidia vertex programming extensions under OpenGL 1.3, though it will also support standard OpenGL 1.4 vertex programs when OpenGL 1.4 ships. The parsing is open source and the code generation is not. There is sufficient public information for other vendors to create their own compilers given the open-source parser code and the Cg language specification details that will be made public.

Could someone please explain what this means exactly?
"The parsing is open source and the code generation is not"

So, obviously it's not completely open source, but is it open enough to keep other hardware vendors happy?

Fuz

Hmmmm I may not have to eat my words.... :LOL:
 
Matt Burris said:
PS. Oops, I failed to notice Cg is actually nothing more than NVIDIA's implementation of m$'s future shading language ... ATI will likely just present its own subset/profile of that, tailored to PS 1.4, and call it something else entirely. This whole thing seems like just a PR stunt.

So basically Cg is just an extension of a language that Microsoft developed and NVIDIA participated in? NVIDIA made it sound like the other way around ... they developed it, and Microsoft participated. If that's the case, that puts a whole different twist on the perspective I had in mind this whole time. :-?

No, it's a "subset", meaning that it is a partial implementation of something bigger and better. Or at least that's the impression I get. And if Cg is just a partial, NVIDIA-specific implementation of something bigger and more general, then why bother with Cg in the first place? Is this some trick to get developers to use NVIDIA-optimised code without realising it, because they think they're using an industry standard? Or is NVIDIA offering this so developers can have early access, as a preview of the final high-level language that MS will launch? If that's the case, then NVIDIA is definitely PRing in a weird and hyped way...

Shrug...

K~
 
Parsing is the process of turning Cg (or C, or C++, etc.) source code into a directed acyclic graph that represents how program execution flows, with resources created for every symbol (function, variable, etc.).

Code generation is the process of converting the DAG into DX8, NV30, or DX9 assembly code, verifying that there are no statements in the DAG that the selected profile can't support, and optimizing/pruning the DAG to improve performance of the generated code.

Essentially, the compiler front-end is open source. The back end is proprietary.
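
As a purely illustrative example of the split (the assembly in the comment is a rough guess, not actual compiler output): the open front end parses a Cg program like the one below into a DAG, and the proprietary back end then turns that DAG into profile-specific assembly.

Code:
struct VertOut {
    float4 position : POSITION;
    float4 color    : COLOR0;
};

VertOut main(float4 position : POSITION,
             float4 color    : COLOR0,
             uniform float4x4 modelViewProj)
{
    VertOut OUT;
    OUT.position = mul(modelViewProj, position);  // front end: DAG nodes for the multiply and assignment
    OUT.color    = color;
    return OUT;
}

// A DX8 vertex shader (vs.1.1) back end might emit something roughly like:
//   dp4 oPos.x, v0, c0
//   dp4 oPos.y, v0, c1
//   dp4 oPos.z, v0, c2
//   dp4 oPos.w, v0, c3
//   mov oD0, v1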

[ Removed to avoid any potential NDA issues ]
 
Fuz said:
Could someone please explain what this means exactly?
"The parsing is open source and the code generation is not"

Well, without getting too technical, compilers work in stages. A typical compiler could look like this:

Code:
           Lexer
             \/
           Parser
             \/
        Type checker
             \/
Intermediate code generation
             \/
    Register allocation
             \/
  Machine code generation
             \/
    Assembly and linking

The OGL and D3D output would probably come in where I wrote "Intermediate code generation". If I understand Cg correctly (I've only just read this thread), it means that all the "juicy" stuff is closed. Lexing, parsing and type checking (which might be combined into one phase in the Cg compiler) are just where things like comments are removed and the syntax is parsed. It is not until code generation that something interesting happens.

Does that answer your question or just cloud it more? ;)
 
gking said:
Parsing is the process of turning Cg (or C, or C++, etc.) source code into a directed acyclic graph that represents how program execution flows, with resources created for every symbol (function, variable, etc.).

Code generation is the process of converting the DAG into DX8, NV30, or DX9 assembly code, verifying that there are no statements in the DAG that the selected profile can't support, and optimizing/pruning the DAG to improve performance of the generated code.

Essentially, the compiler front-end is open source. The back end is proprietary.

[ Removed to avoid any potential NDA issues ]

Excellently described. I have always been a moron at presenting my knowledge. :( Not good at oral exams.
 
Thanks guys.

As usual, the more you learn, the more you realise how little you know.
And of course, the less you know, the more you think you know... err, something like that.

Anyway, I have a lot to learn. :(
 
That seems kind of backwards, then. I should think that ATI would need access to the code generation in order to optimize Cg for their video cards.

Of course, it may be possible that nVidia just doesn't want ATI's code generation to be based on theirs. That would make a fair amount of sense (since code generation is obviously more closely tied to the hardware).

I do hope that it is possible with Cg for ATI to write their own code generation algorithm that is used whenever PS 1.4 is used as the target, or ATI's OpenGL equivalent (as well as future ATI PS). Otherwise Cg would always be best-optimized for nVidia hardware only, and thus ATI would not want to use it.
 
Just a thought...

Like someone said before, this might be where all the "nVidia making their own glide for NV30!"-rumors came from. It wasn't a new API but a high(er) level language to generate code for different APIs...

...But what if this opens the door for them to actually create a new API? One they'd have full control over and could optimize like hell for their own hardware? Developers could just (not really, but anyway :) ) recompile for this new API and get an instant speed bump!

How much more work would it be for a game developer to add support for an additional API? I suppose not that many would actually abandon DX and OpenGL completely, they'd lose too many potential customers. Unless they were paid by nVidia to make it an exclusive (a la the console industry) of course :-?

Or am I just clueless?


Regards / ushac
 
That would only be the action of a monopoly, and nVidia is far from a monopoly. Besides, as we've seen in the past, tons of other companies in the industry would cry foul (and some pretty big and influential companies, I might add). ATI is also too big right now for nVidia to succeed at anything that ATI could not also use effectively. If nVidia wants to attempt to make Cg proprietary, they will fail.
 
I didn't mean that they'd make Cg proprietary. I mean they could make a compiler that can generate not only DX or OpenGL code but also NV-Glide code (or whatever it would be called). That way they could optimize every link of the chain from application to screen - you could use Cg to generate code for any API/hardware, but, for instance, NV30/NV-Glide could give better performance than NV30/DX9 because they can optimize NV-Glide for the NV30 in a way they can't with DX9. Of course, ATI or anyone else could respond in kind and say you get better performance with R300/ATI-Glide than R300/DX9 :p

I don't see why that would be "the action of a Monopoly".

Regards / ushac
 
I do hope that it is possible with Cg for ATI to write their own code generation algorithm that is used whenever PS 1.4 is used as the target, or ATI's OpenGL equivalent (as well as future ATI PS). Otherwise Cg would always be best-optimized for nVidia hardware only, and thus ATI would not want to use it.

Nope, they won’t be allowing that, or at least that’s pretty much what David Kirk just told me.

The other problem is that nVIDIA won’t be releasing a PS1.4 compiler at all either. Their view is “PS1.4 doesn’t allow any new features to be enabled, so it’s not necessary, especially given that nobody is using it” (is that the case, ATi?). It also seems that nVIDIA believes ATi’s driver should be collapsing multipass PS1.1 operations into one pass automatically.
 
I can't be bothered with grammar either, but Dave... in your message it's so atrocious it makes the thing uninterpretable :)
 
Bah. Tired. Spent the last two days with 'a vendor' for, well, a piss-up (!) and then spent most of the afternoon/evening trying to fight through London traffic to talk to Mr Kirk.
 
Dave,

It also seems that nVIDIA believes that ATi's driver should be collapsing multipass PS1.1 operations into one PS1.4 pass automatically.

Is Nvidia serious? One would think it would make far more sense for nVidia's drivers to convert PS 1.4 operations into multi-pass PS 1.1 operations, rather than the other way around.
 
Is Nvidia serious? One would think it would make far more sense for nVidia's drivers to convert PS 1.4 operations into multi-pass PS 1.1 operations, rather than the other way around.

Well, essentially it does that anyway. The point of Cg is that the developer doesn't have to care what the hardware is doing at all, or how to do it - the developer just says "I want this effect" and the compiler works out how to implement it in DX/OGL. Now, if that effect happens to need multiple passes under PS1.1, then the compiler will generate it as such - however, if the compiler supported both PS1.4 and PS1.1, it could choose whichever code path is optimal for the hardware. In that case it wouldn't be in nVIDIA's interest to support PS1.4, because ATi's hardware would be doing the effect in fewer passes automatically.
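
A sketch of what that choice could look like at the effect level (hypothetical, since nVIDIA has said they won't ship a PS1.4 profile, and the syntax is only indicative): the same effect could carry a single-pass PS1.4 technique and a two-pass PS1.1 fallback, with the runtime picking whichever the hardware handles best.

Code:
// Hypothetical CgFX-style effect; basePS/detailPS/combinedPS are assumed entry points.
technique SinglePass_PS14        // one pass on PS1.4-class hardware (no shipping Cg profile today)
{
    pass p0
    {
        PixelShader = compile ps_1_4 combinedPS();
    }
}

technique TwoPass_PS11           // fallback: two passes on PS1.1-class hardware
{
    pass p0
    {
        PixelShader = compile ps_1_1 basePS();
    }
    pass p1
    {
        AlphaBlendEnable = true;
        SrcBlend         = One;
        DestBlend        = One;
        PixelShader      = compile ps_1_1 detailPS();
    }
}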
 