3DLabs Cg Rebuttal

pascal

Veteran
Here is the link (found via nvnews): http://www.extremetech.com/article2/0,3973,183940,00.asp
3Dlabs Letter to the Editor

Contrary to Nvidia's claim, the specification work for OpenGL 2.0 is well along. This week, 3Dlabs has provided the OpenGL Architecture Review Board with specifications for the OpenGL Shading Language and three extension specifications that implement support for vertex shaders and fragment shaders that use this high level shading language. The original OpenGL 2.0 white papers were published nine months ago, and 3Dlabs has been refining those white papers, taking input from public reviewers - including other ARB members - and is now turning them into specification documents.
Contrary to Nvidia's claim, OpenGL 2.0 WILL be 100% backward compatible with existing OpenGL levels. This has been stated in every presentation on OpenGL 2.0 since the beginning.

Contrary to Nvidia's claim, developers WILL have access to low-level hardware features from the assembly level if they so desire. Each hardware vendor will have the choice of supporting their own hardware-specific assembly language or the more common ARB_vertex_program assembly language extension, as they desire. These assembly level interfaces will work seamlessly with the OpenGL 2.0 high level shading language.

Contrary to their implied positioning, Nvidia is not planning to offer Cg to the OpenGL Architecture Review Board for consideration as a standard of any type. Rather, they have stated that they fully intend to control the specification and implementation. Other graphics hardware vendors would be offered the ability to implement this Nvidia-specified language, under Nvidia licensing terms, for their own hardware.

In contrast, 3Dlabs has diligently worked to move the OpenGL 2.0 effort forward in an open forum, and we have made source code for the high-level shading language available on our web site since April.

In short, 3Dlabs is intent on creating a standard, hardware-independent, high level shading language that will promote widespread application availability and encourage competition among hardware vendors. We have been presenting our ideas to the OpenGL Architecture Review Board and to the OpenGL community for almost a year. Feedback has been overwhelmingly positive.
-John Schimpf, Director of Developer Relations for 3Dlabs

I hope OpenGL 2.0 gets a lot of support.
 
So more Nvidia bullshit spreading, tsk tsk.

Rather, they have stated that they fully intend to control the specification and implementation. Other graphics hardware vendors would be offered the ability to implement this Nvidia-specified language, under Nvidia licensing terms, for their own hardware.


Jesus christ, they'll be trying to take over the world next!!


Hehe, Nvidia licensing terms, as in: if we think you can compete with us, then you don't get to support the latest OpenGL standards, everyone will know it, and you'll fail because of it.
 
rant

All in all, from what we have heard about the Cg language, it is very likely that

Cg is not a sign of strength but a sign of weakness on Nvidia's part.


They have lost control over DX9 (10?) and they have less influence over OpenGL than their competitors due to their tactics. Now they have to introduce some proprietary language in order to show the strength of their next-gen parts. It doesn't look good for them.


Manfred
 
Re: rant

mboeller said:
Cg is not a sign of strength but a sign of weakness on Nvidia's part.

Actually, for good or bad, I think the opposite is true. You can only introduce and gain acceptance for something like Cg if you are an industry leader; otherwise no one would take notice. The number of developers supposedly signing up is rather impressive.

Even if it is the result of frustration with Microsoft/OpenGL, it shows their strength that they can go it alone and get a considerable amount of buy-in from developers.

Guenther
 
Re: rant

mboeller said:
All in all, from what we have heard about the Cg language, it is very likely that

Cg is not a sign of strength but a sign of weakness on Nvidia's part.


They have lost control over DX9 (10?) and they have less influence over OpenGL than their competitors due to their tactics. Now they have to introduce some proprietary language in order to show the strength of their next-gen parts. It doesn't look good for them.


Manfred

Yes and no.

Yes because I think you're right that nVidia decided to develop Cg at a time (last spring/summer) when they were losing influence over DX9 and OpenGL, due in part to the clash with Microsoft about the IP of DX8 shaders and in part to the clash with the other OpenGL ARB members about the IP of nVidia's vertex shader implementations. (We all remember how ATI got the open seat on the OpenGL "council".)

No because these problems are partly over, since nVidia decided to kiss up to Microsoft and hence could make Cg in accordance with the DX9 high-level language. They are also the dominant (like it or not) power in the consumer 3D world right now, so they carry a lot of momentum - just like Microsoft (like them or not).

Anyway, I get your point.
 
Re: rant

maguen said:
The number of developers supposedly signing up is rather impressive.

What did they do by 'signing up'? Did they buy a multi-$k SDK, sign a contract for exclusive Cg development, or what? As for the fact that Cg stirs up interest among developers - that's natural; Cg is the first of its kind. Expect tools with similar functionality from MS this autumn.
 
And to think I just bought a GeForce4 Ti 4200. It's an awesome card though, a huge step up from the Riva 128 I had.

Regarding nVidia wanting to control things with Cg, my personal opinion is that they want to control the 3D graphics chip market. But I doubt they would ever have done this if they had anticipated Matrox coming back and SiS entering the market. It seems nVidia is certainly trying to take the industry where it thinks it should go, but the industry may well force nVidia to support other architectures. Someone at nVidia needs to be fired; they need smarter business decisions.

Sonic
 
I don't quite see why anyone would license it; in its present state it's hardly state of the art... ATI could knock off a compiler and a profile of the shading language themselves in a pretty short time - you can't patent a language. Especially not if it's the same language M$ is going to be using.
 
How come no-one seems to get it?

It's about helping developers.

The Cg toolkit helps them leverage current hardware, and future hardware, which is expected to be even more flexible, programmable and powerful than, say, the P10, Parhelia, or R300.

Cg, CgFX and the forthcoming MAX, Maya, SoftImage and LightWave support will help developers NOW, on DX8- and DX9-class hardware, and even more on future DX9 and DX10 hardware.

Regardless of anyone's opinion on the OpenGL 2.0 proposal, it solves zero developer problems - it is just a spec and part of a parser at this point.

Cg, on the other hand, is done. More profiles will be released with more capabilities going forward, but the language is finished, and the compiler is working. Some developers have already included it in their engines and tools.
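
For anyone who hasn't actually looked at it yet: a Cg program is just C-style code with binding semantics, compiled through a profile into ordinary shader assembly. Here's a minimal sketch of my own (the struct and parameter names are made up, not taken from any NVIDIA sample):

// Minimal Cg vertex shader sketch: transform the vertex into clip space
// and pass the per-vertex colour through. Identifiers are illustrative only.
struct VertIn {
    float4 position : POSITION;   // object-space position from the app
    float4 color    : COLOR0;     // per-vertex colour
};

struct VertOut {
    float4 hpos  : POSITION;      // clip-space position for the rasterizer
    float4 color : COLOR0;
};

VertOut main(VertIn IN, uniform float4x4 modelViewProj)
{
    VertOut OUT;
    OUT.hpos  = mul(modelViewProj, IN.position);  // 4x4 matrix transform
    OUT.color = IN.color;
    return OUT;
}

Run that through a DX8 vertex profile (vs_1_1, for instance) or the OpenGL profile and you get plain shader assembly that the runtime loads the same way it always has.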
 
It helps create more demand for GPUs

The nvidia back-end spits out standard DX or OGL (soon) assembly. It's up to the driver to rearrange the assembly for the specific hardware.

The DX8 and OGL assembly languages are NOT run directly on the hardware; they act like Java bytecodes, in the sense that they are rearranged and interpreted by the IHV's drivers.

Even if the nvidia back-end made one set of assembly optimizations for nvidia cards, the ATI driver would just rearrange the assembly to suit the ATI cards.
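
To make that concrete, here is a toy Cg sketch of my own. The commented instructions are only an illustration of the kind of generic output a DX8-style vertex profile might produce for the transform - not actual compiler output - and it's that generic stream the driver is free to reschedule for its own hardware:

// Illustration only: the commented assembly is roughly what a DX8-style
// vertex profile might emit for this transform, not real compiler output.
float4 main(float4 position : POSITION,
            uniform float4x4 modelViewProj) : POSITION
{
    // roughly:  dp4 oPos.x, v0, c0
    //           dp4 oPos.y, v0, c1
    //           dp4 oPos.z, v0, c2
    //           dp4 oPos.w, v0, c3
    return mul(modelViewProj, position);
}

The driver never sees the Cg at all; it only sees those dp4s, and it can reorder or re-pair them however its hardware prefers.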
 
Doomtrooper said:
It's about getting developers to buy into optimizing their projects for specific hardware; come on, what's in it for Nvidia to help ATI or 3Dlabs? :LOL:

Are you implying they need assistance? :)

Contrary to what you believe, there is a motive for providing basic support for competitors' products, as well as allowing them to add specific support through their own compilers: it encourages developers to use their product.
 
How the hell can you optimise vertex or pixel shaders in DirectX 8 for specific hardware? There is ONLY ONE rule in DX8 -> instruction count is all that counts! There is NO way you could access any hardware-specific features in DX.
OpenGL is a little different. The OpenGL back end of Cg currently outputs NVidia-SPECIFIC code using NVidia-SPECIFIC extensions. If ATI does not make their own back end that outputs ATI-SPECIFIC code using ATI-SPECIFIC extensions, then Cg will not be able to compile for ATI, and you cannot expect NVidia-SPECIFIC code to run on ATI cards at all. If you compile for a STANDARD extension, it will run on any hardware supporting that STANDARD extension, and it will be up to the DRIVER to compile the STANDARD code into VENDOR-SPECIFIC code.

If you "know" how to optimise DX8 vertex and pixel shaders for specific vendor please enlighten me :rolleyes:.
 
I have a hard time believing that

Are you talking about the driver optimizing the vertex shader assembly?

Believe it or not, that is exactly what happens. There are a lot of specific details about instruction dispatching, parallel execution, etc. that are purposely left out of the vertex shader definition, since
A) it's a lot of detail for a programmer to deal with
B) it's different between vendors
C) it will change from GPU to GPU from the same vendor

The driver analyzes vertex shaders at assemble time (loadProgram/AssembleShader) to reorder opcodes to best make use of the hardware.

Since it's really hard to optimize for one set of hardware (say, NVIDIA) while deoptimizing for another (say, ATI), it's unlikely that any shader compiled into a common assembly language (DX8, ARB_vertex_program) would be noticeably less optimized on one architecture than another, due to post-compilation driver optimization.
 
MDolenc said:
If you "know" how to optimise DX8 vertex and pixel shaders for specific vendor please enlighten me :rolleyes:.

Two words: instruction ordering.

Cg->DX8->NVIDIA HW

Cg->DX8->ATI driver reordering the instructions->ATI HW

See the extra bit there?

Try playing with dependent instructions and the order of instructions in the vertex shader... you'll find differences in speed (don't be surprised to see double the speed).
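
A contrived Cg sketch of what I mean (my own toy code, hypothetical parameters; what ultimately matters is the ordering of the generated assembly, but you can see the dependencies in the source):

// Version 1: one long serial chain - every operation waits on the previous
// result, so there is nothing for the hardware to overlap.
float4 chained(float4 p : POSITION,
               uniform float4x4 mvp,
               uniform float4 scale,
               uniform float4 bias) : POSITION
{
    float4 a = mul(mvp, p);   // four dot products
    float4 b = a * scale;     // must wait for a
    float4 c = b + bias;      // must wait for b
    return c;
}

// Version 2: two independent calculations (transform plus a simple lighting
// term) whose instructions can be interleaved to hide latency - exactly the
// kind of rearranging a scheduler, in the compiler or the driver, wants to do.
float4 interleaved(float4 p : POSITION,
                   float3 n : NORMAL,
                   uniform float4x4 mvp,
                   uniform float3 lightDir,
                   out float4 col : COLOR0) : POSITION
{
    float4 hpos  = mul(mvp, p);
    float  ndotl = max(dot(n, lightDir), 0.0);
    col = float4(ndotl, ndotl, ndotl, 1.0);
    return hpos;
}

Time the two orderings on different hardware and you'll see why the driver's reordering pass matters so much.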

The DX8 PS1.1 pixel shader is so limited in terms of instructions that there is very little to optimise... PS1.4 on the other hand, and PS2.0...

K~
 
Kristof said:
The DX8 PS1.1 pixel shader is so limited in terms of instructions that there is very little to optimise... PS1.4 on the other hand, and PS2.0...
I was under the impression that Cg only supported PS 1.1 in its current form.
 