NVidia Cg...now we know where the "Glide" rumors came from

http://pc.ign.com/articles/361/361823p1.html

Not much there to really go on, but I presume this is what I've been hearing nVidia will "announce" on Thursday, so the details should come out then.

IF this is something that will compete with DirectX and OpenGL 2.0 (which, as we know, are both moving to "C-like" programming interfaces), then this is a "bad thing."
 
I guess all those engineers they've been stealing/buying have to do something. nV is trying to head MS off at the pass, it seems.

Meh, I'm too busy anticipating future 3D hardware to worry about a budding rivalry in 3D API's. ;)
 
The part of that article that concerns me is:

The difference comes in the abstraction of the code to a specific GPU or API. Currently assembly language code is specific to distinct computer architecture. Cg's compiler performs all of this "heavy lifting" itself by abstracting the code and optimizing and tuning it to a particular GPU.
 
By the sound of it, this is not nearly as low-level as OpenGL 2.0; it sounds more like a production-ready version of Stanford's shading system.
 
Yeah, me too. Although there's also this:

The software is compatible with almost anything. Since the Cg compiler supports the same language used by Direct X's compiler, compatibility won't be an issue. Moreover, the output to Direct X or OpenGL shaders makes Cg friendlier to other platforms like the Xbox, Mac or Linux.

From the brief article, it's not clear to me if this is some full-blown competing API, or a tool that is supposed to work in "conjunction" with the APIs. (Or both?) If I'm reading this right, one could code in the "Cg shading language," then compile it with Cg, and use the results in DX / OpenGL? (Compiling to the DX "assembler" interface, or to OpenGL extensions?) (Or could this be a "wrapper" on top of GL or DirectX?)
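For what it's worth, here's the workflow I'm imagining, sketched in C++. Everything below is guesswork until Thursday; cgCompile, the target names, and the output strings are all made up:

    #include <string>

    // Imagined interface only -- nVidia hasn't published the real one.
    enum Target { DX8_VERTEX_ASM, GL_NV_VERTEX_PROGRAM_TEXT };

    // Stand-in for the real compiler: parse the C-like source and emit
    // whatever the API underneath actually accepts.
    std::string cgCompile(const std::string& source, Target target) {
        return target == DX8_VERTEX_ASM
            ? "vs.1.1 ..."    // DX8 vertex shader assembler
            : "!!VP1.0 ...";  // NV_vertex_program program text
    }

    int main() {
        // One C-like shader, written once:
        const std::string src =
            "float4 main(float3 n, float3 l) {"
            "  float d = max(dot(n, l), 0);"
            "  return float4(d, d, d, 1);"
            "}";
        std::string dxOut = cgCompile(src, DX8_VERTEX_ASM);
        std::string glOut = cgCompile(src, GL_NV_VERTEX_PROGRAM_TEXT);
        return 0;
    }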

Sounds like an unnecessary layer to me, making problems that much more difficult to debug. DirectX is supposed to have its own high-level language, and 3DLabs has been DILIGENTLY working on the OpenGL 2.0 language. It's not that the "C-like interface" isn't coming... it's just that nVidia wants to control it?

Then again, we don't know what kind of licensing and development model (open source?) Cg will have, etc. I'm trying not to "worry" myself until Thursday, at which time we should hopefully have enough details to praise nVidia for its innovation, or damn it for trying to bully its way into more direct control of 3D...
 
What could the motivation for this be? They've always seemed concerned about protecting their IP, so maybe they think they can get around disclosing their chip architecture to others?
 
By the sound of it, this is not nearly as low-level as OpenGL 2.0; it sounds more like a production-ready version of Stanford's shading system.

3DLabs briefly touched on Stanford's system in relation to the proposed GL shading language in their shading language whitepaper:

http://www.3dlabs.com/support/developer/ogl2/whitepapers/OGL2_Shading_Language_1.2.pdf

(See page 86).

Interestingly, as a foot-note:
...The Stanford group acknowledge that the language is pragmatic and not something worthy of proposing as a standard.

So, if Cg is a "production ready" version of the Stanford language, that means it's considerably different than the "non-production" version. ;)
 
More abstract than assembler, yes. Where do they say it's more than just a vertex/pixel-C to vertex/pixel-asm compiler?

Isn't Stanford's shading system supposed to be more than that?
 
Guys, guys. This sounds like nothing more than a tool + utility library, like D3DX or GLUT, for programmers to use on top of DirectX or OpenGL. It is not replacing DirectX or OpenGL; it is encapsulating them at a higher level, in the grand tradition of computer science. It is no different from the plethora of high-level DX/OpenGL libraries floating around that help with the drudgery of setting up scenes, programming effects, etc., and it bears no comparison with 3dfx's Glide, since it is not a low-level API replacing the standard immediate-mode APIs, but a layer on top of them.


OpenGL2.0 or DirectX9 will still be needed because, presumably, CG's compiler outputs code to take advantage of whatever shaders are exposed by the lower-level hardware APIs. If you have < DX8 hardware, CG probably renders the same effects, but using multipass with DX7/OGL 1.x, which results in a slowdown and artifacts. If you have DX9/OGL2, presumably CG will have a compiler backend which generates efficient DX9/OGL2 shaders.
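Something like this, maybe (pure speculation; the capability query and pass scheduling are invented for illustration):

    // If the hardware can run the shader in one pass, do so; otherwise
    // schedule the same effect as several fixed-function blending passes.
    struct Caps { int pixelShaderVersion; };  // 0 = fixed function only

    enum Strategy { SINGLE_PASS_SHADER, MULTIPASS_FIXED_FUNCTION };

    Strategy pickStrategy(const Caps& caps) {
        if (caps.pixelShaderVersion >= 1)
            return SINGLE_PASS_SHADER;     // DX8-class part: exact and fast
        return MULTIPASS_FIXED_FUNCTION;   // DX7-class part: slower, can band/clamp
    }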

Of course, if NVidia releases it as open source, or gives it a pluggable API, then any vendor (ATI, Matrox, etc.) can write a backend generator that outputs code for a hidden proprietary extension. They could do this until OpenGL2.0 is ready, at which point, without rewriting a single line of code in your game engine, you could probably recompile with output to OGL2.0 instead of the "proprietary hidden extension."
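In other words, something shaped like this (all names invented; this is just the pluggable-backend idea, not anything nVidia has announced):

    #include <map>
    #include <string>

    // A backend turns the parsed shader into target-specific output.
    class Backend {
    public:
        virtual ~Backend() {}
        virtual std::string emit(const std::string& shaderSource) = 0;
    };

    // Vendors register backends, even for undisclosed proprietary extensions.
    std::map<std::string, Backend*> g_backends;

    void registerBackend(const std::string& target, Backend* b) {
        g_backends[target] = b;
    }

    // Switching a game from "vendor_proprietary" to "ogl2" later would then
    // be a recompile of the shaders, not a rewrite of the engine.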


Think of it as Java for GPUs. It's a language/framework for abstracting the underlying hardware and APIs, with a compiler that generates code for each particular piece of hardware. Except for the fact that NVidia owns it, this type of library is exactly what's needed to abstract over the rapid featuritis that is creeping into graphics APIs.

Now, rather than writing different code for DX7, PS1.0, PS1.1, PS1.4, OGL, OGL2, OGL + (insert extension here), you write it once, and let the compiler deal with the rest.
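In compiler terms (sketch only; the profile names echo the list above, and compileFor is a made-up stand-in):

    #include <string>
    #include <vector>

    // Stub standing in for the real compiler:
    std::string compileFor(const std::string& src, const std::string& profile) {
        return "[" + profile + " code]";
    }

    int main() {
        const std::string shader = "...one C-like shader, written once...";
        const char* profiles[] = { "dx7", "ps_1_0", "ps_1_1", "ps_1_4",
                                   "ogl", "ogl2" };
        std::vector<std::string> outputs;
        for (size_t i = 0; i < sizeof(profiles) / sizeof(profiles[0]); ++i)
            outputs.push_back(compileFor(shader, profiles[i]));  // one source, N targets
        return 0;
    }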
 
Basic: that's what I meant; 3DLabs acknowledges Stanford's system has a higher degree of abstraction (in section 9.1).
 
Now, rather than writing different code for DX7, PS1.0, PS1.1, PS1.4, OGL, OGL2, OGL + (insert extension here), you write it once, and let the compiler deal with the rest.

That sounds very much like what's trying to be achieved with OpenGL2.
 
I think there are some differences. OpenGL 2.0 is still an API, with a C-like shader language. It looks like Cg abstracts the API layer, not just the shader language.
 
DaveBaumann said:
Now, rather than writing different code for DX7, PS1.0, PS1.1, PS1.4, OGL, OGL2, OGL + (insert extension here), you write it once, and let the compiler deal with the rest.

That sounds very much like what's trying to be achieved with OpenGL2.

Except that:

1) OGL is a hardware level API and is designed to match up closely with what the underlying hardware has to implement.

2) There is always a need for higher level abstractions, and OGL2 sounds like it is lower level than Stanford's real time shading language. Remember, we want Renderman Shader-style flexibility.

3) Even if OGL2.0 achieves what it sets out to do, and I hope they do, it doesn't help the developers who want to target multiple APIs and multiple platforms NOW. How soon will there be *drivers* and *hardware* that implement the complete OGL2.0 functionality, available from all vendors and for Mac, PlayStation2, X-Box, GameCube, Linux, etc.?

Many developers want to start writing complex shaders right now, but can't wait 2 years for OGL2.0 to mature. CG's higher-level position in the API stack means that you can write to its language/API now, and take advantage of both today's hardware and OGL2.0 when it becomes available.

4) DirectX is still a force to be reckoned with. What if you want to support a card that doesn't support OpenGL?


5) Would you argue that GLUT's functionality be moved into the base OpenGL API? Would you advocate that everyone program in assembly language!? You don't always want a monolithic API. What you want is a low-level API that exposes *powerful* general-purpose hardware, and then higher-level APIs that make it easy to code effects.

CG sounds very high level. If CG were called "OpenCG" and were a higher-level code-generation tool for generating OGL2.0 shaders, endorsed by OpenGL, I suspect this discussion over competing APIs or duplicate functionality wouldn't exist, as people would see that it's just a developer's tool, not a device-driver API that talks directly to hardware.
 
I do have to wonder what kind of recourse nVidia has for people breaking NDAs. My first guess would be that the website would no longer get the chance to enter an NDA with nVidia, though I wonder if other legal action can be taken?
 
1) OGL is a hardware level API and is designed to match up closely with what the underlying hardware has to implement.

2) There is always a need for higher level abstractions, and OGL2 sounds like it is lower level than Stanford's real time shading language. Remember, we want Renderman Shader-style flexibility.

Again, from what 3DLabs were saying, OpenGL2 has been designed at a higher level than the functionality that will be supported in hardware for the next two years, and it's up to the vendors' compilers to sift out what can be handled in hardware and what can't.

3) Even if OGL2.0 achieves what it sets out to do, and I hope they do, it doesn't help the developers who want to target multiple APIs and multiple platforms NOW. How soon will there be *drivers* and *hardware* that implement the complete OGL2.0 functionality, available from all vendors and for Mac, PlayStation2, X-Box, GameCube, Linux, etc.?

How is that different from Cg? It may operate outside of current APIs, but the other vendors will have to provide compiler support for it as well. For the other vendors, is it better to focus initially on established APIs with shader languages, or on this?
 
[quote="DaveBaumannHow is that different from Cg? It may operate outside of current API’s, but the other vendors will have to provide compiler support for it as well – for the others what’s it better to focus on initially established API’s with Shaders languages or this?[/quote]

The difference is, with OpenGL2.0, you have to write your own ICD implementing everything. Presumably, CG functions like a typical compiler with a backend, wherein it is relatively easy to change it to generate code for other architectures. Presumably NVidia will release it with code generators for DX8, 8.1, 9, and OpenGL, and if the API is pluggable, independent people will write generators for PS1.4, etc.

With OpenGL2.0, it is unlikely that some student somewhere is going to write an ICD to handle translating OpenGL2.0 shaders to some non-supported platform, like, say, Kyro.
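To make the contrast concrete: an ICD means implementing the entire API, while a backend for that hypothetical student could be as small as one class. The interface names are invented, same shape as the sketch earlier in the thread:

    #include <string>

    class Backend {
    public:
        virtual ~Backend() {}
        virtual std::string emit(const std::string& shaderSource) = 0;
    };

    // A hypothetical Kyro backend: one class, one method -- not a whole driver.
    class KyroBackend : public Backend {
    public:
        std::string emit(const std::string& shaderSource) {
            // Map the shader onto Kyro's blending capabilities here,
            // multipassing where a single pass won't do.
            return "...kyro passes...";
        }
    };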


Secondly, when will OpenGL2 be available? Will it ship this fall?

Third, why do you feel this is a threat? OGL2.0 will proceed as usual, and when it arrives, developers will have the choice of using it directly, or using a wrapper API. Many developers today don't write pure DirectX or OpenGL code, but use wrapper APIs and utility libraries, or low-level rendering engines that they buy from middleware providers. If I code using PowerRender, I don't need to write much DX or OGL code, but that's not a threat to OGL. Likewise, I can code using CG *NOW*, and later on mix and match CG and OGL routines as I wish.
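The mix-and-match part might look like this. cgCompile is still imaginary, but glGenProgramsNV/glBindProgramNV/glLoadProgramNV are the real NV_vertex_program entry points (assume <GL/gl.h> and glext.h are included and the extension pointers are already resolved):

    // Compile with CG, load through a real GL extension, then carry on
    // with ordinary OpenGL calls in the same frame.
    void setupShaderAndDraw(const std::string& src) {
        std::string asmText = cgCompile(src, GL_NV_VERTEX_PROGRAM_TEXT);

        GLuint prog;
        glGenProgramsNV(1, &prog);
        glBindProgramNV(GL_VERTEX_PROGRAM_NV, prog);
        glLoadProgramNV(GL_VERTEX_PROGRAM_NV, prog,
                        (GLsizei)asmText.size(),
                        (const GLubyte*)asmText.c_str());
        glEnable(GL_VERTEX_PROGRAM_NV);

        // ...back to plain immediate-mode OpenGL for everything else:
        glBegin(GL_TRIANGLES);
        /* vertices */
        glEnd();
    }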

It's not the EITHER/OR situation you are making it out to be.
 