3DLabs Cg Rebuttal

Kristof,

Drivers already reorder instructions, so that "extra" step is going to happen regardless of IHV.

How well the driver does this varies, but it's safe to assume that data hazards, conditional writes, and division/exponentials should be avoided as much as possible on all hardware. Even if one vendor's hardware handles some of these more efficiently than another's, you still want to avoid them wherever possible: the goal of the compiler is ideal shader performance, and introducing unnecessary data hazards works against that.
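To make that concrete, here's a rough sketch in Cg/HLSL-style syntax (the identifiers are invented for the example, not anyone's real shader). The commented-out form does a per-pixel divide and a conditional write; the live form multiplies by a reciprocal the app computes once and turns the branch into a straight-line select.

[code]
// Illustrative Cg/HLSL-style fragment; all identifiers (lightPos,
// invLightRange, litColor, ambient) are made up for the example.
float4 shade(float3 worldPos : TEXCOORD0,
             uniform float3 lightPos,
             uniform float  invLightRange,   // 1.0 / lightRange, computed once on the CPU
             uniform float4 litColor,
             uniform float4 ambient) : COLOR
{
    // Per-pixel divide plus an 'if' that conditionally writes the result:
    // float atten = 1.0 - distance(worldPos, lightPos) / lightRange;
    // float4 result = ambient;
    // if (atten > 0) result = litColor * atten;

    // Same math with the divide hoisted out and the branch turned into a select:
    float atten = 1.0 - distance(worldPos, lightPos) * invLightRange;
    return lerp(ambient, litColor * atten, step(0.0, atten));
}
[/code]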
 
Kristof: And how the hell can the compiler know what hardware the shader will run on? Let's say Cg can "predict" that it will be run on NVidia hardware, but how can HLSL in DX9 know what it will run on? Look into the registry to see what card is installed (since everything that "compiles" is in D3DX anyway)? If you look at it this way then Cg will be optimised for NVidia, DX9 HLSL will be unoptimised, and if ATI throws something out then it will work best on ATI cards.
And if that happens you can forget about developers using high level languages.
 
Right now, each vendor's DirectX drivers contain "compilers" that take vertex shaders and pixel shaders and "compile them" optimally for the internal hardware they are running on. There is an NVidia compiler and an ATI compiler. They are hidden behind DX8.

DirectX9 contains a "generic" High Level Shading Language compiler that produces vertex and pixel shaders, but it is not optimized on a per-vendor basis right now; it's just a D3DX utility.

Cg is NVidia's HLSL compiler. It produces VS and PS for older hardware as well as OGL. The only caveat is that the length of the shaders and the instructions they can use are a subset of HLSL.
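For example (a sketch only, the struct and parameter names are made up), the same Cg source gets fed to whichever profile you target, with something along the lines of "cgc -profile vs_1_1 shader.cg" for DX8-class vertex shaders, and the profile simply rejects anything its instruction set or length limits can't express:

[code]
// Hypothetical Cg vertex program; 'appin', 'vertout' and 'modelViewProj'
// are illustrative names. Simple enough to fit a vs_1_1-class profile;
// an OpenGL profile would target the equivalent vertex-program extension.
struct appin   { float4 position : POSITION; float4 color : COLOR0; };
struct vertout { float4 position : POSITION; float4 color : COLOR0; };

vertout main(appin IN, uniform float4x4 modelViewProj)
{
    vertout OUT;
    OUT.position = mul(modelViewProj, IN.position);
    OUT.color    = IN.color;
    return OUT;
}
[/code]

Anything outside the target profile's limits just fails to compile for that profile.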

Each and every vendor (ATI, 3dlabs, Matrox) should write their own compilers for HLSL to optimize output for their platform.

Ideally, DX9 would have a "Compiler driver" interface where you could install the compiler for each vendor, and everything would happen at runtime through one API.

Regardless, what NVidia has done is produce the first third party HLSL compiler. If they donate the compiler front-end parser as open source, even better for the other vendors, since all they need to do is concentrate on producing backend generators.

I wish ATI had done this first, so I could see the hypocrisy flow from the fanboys' mouths.
 
MDolenc said:
Let's say Cg can "predict" that it will be run on NVidia hardware, but how can HLSL in DX9 know what it will run on? Look into the registry to see what card is installed (since everything that "compiles" is in D3DX anyway)? If you look at it this way then Cg will be optimised for NVidia, DX9 HLSL will be unoptimised, and if ATI throws something out then it will work best on ATI cards.

That could be achieved through runtime compilers. If DX releases ship with a 'generic' runtime compiler then as the game loads the DX assembly code can be compiled from the HLSL - this would be the 'catch all' for everyone's hardware with no optimisations for anyone. The hardware vendors could then be free to optionally provide their own optimised runtime compilers that will optimise the generated DX assembly calls for their own hardware specifications if need be.

Anything wrong with that?
 
The video driver is the final "assembler" of the shader assembly that would be output.

You'd assume that the IHVs would make it a smart, optimizing, instruction-reordering, parallelizing assembler. Of course, this would be the 'last ditch' optimizer. You'd hope the compiler (whether it be Cg, MS HLSL, hand assembled, etc.) made good code to start off with.

It may be, however, that the current DX8.1 profile generates assembly code that already matches the internal engine of the NVIDIA hardware (so the driver wouldn't need to do any work trying to optimize it), but on other platforms it might not match the hardware so well, so the driver is the last-ditch optimizer.

Just as the same instructions on AMD and Intel platforms have different performance characteristics and call for different optimizations, so it is likely to be with pixel and vertex shaders.

And, for a final bit of anecdotal information, the facking DSP I work with on a daily basis has parallel operations, where you can add and move (for example) during the same instruction cycle. The parallel operations are just a pain to organize with lots of nitpicky rules, so most people don't write assembly that takes advantage of that. Luckily, our assembler will reorder instructions and 'parallelize' stuff because it knows all the rules to keep the pipelines happy.

This is what I expect the driver level vertex/pixel shader assemblers to do with the generic output of the Cg compiler.
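In shader terms (just an illustration in Cg/HLSL-style source, not real compiler or driver output), the difference between code a scheduling assembler can overlap and code it can't looks something like this:

[code]
// Illustrative sketch only.
// In form (a) each operation depends on the previous result, so a
// scheduler has nothing to overlap. In form (b) the three products are
// independent, so an instruction-reordering driver can pair or interleave
// them with other work while the pipeline fills.
float dependent_chain(float3 a, float3 b)
{
    float d = a.x * b.x;     // (a) serial chain: each line waits on the last
    d = d + a.y * b.y;
    d = d + a.z * b.z;
    return d;
}

float independent_terms(float3 a, float3 b)
{
    float tx = a.x * b.x;    // (b) three independent multiplies...
    float ty = a.y * b.y;
    float tz = a.z * b.z;
    return tx + ty + tz;     // ...summed at the end (same result as dot(a, b))
}
[/code]

A good driver-level assembler would turn the first form into something like the second on its own, just like the DSP assembler above.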
 
[quote="DaveBaumann] The hardware vendors could then be free to optionally provide their own optimsied runtime compilers that will optimise the generated DX assembly calls for their own hardware specifications if need be.

Anything wrong with that?[/quote]

Nope, nothing. Except that the interface doesn't exist in D3DX's compiler API right now for vendors to plug into the back end, at least from what I've read.

Again, in the meantime, if you want to use HLSL but deploy to OpenGL, what's your alternative? Don't speak to me of OGL2.0 shaders; we all know DX9 HLSL will be a mainstream product way before OpenGL2.0 ships, and they have made no provisions in the OGL2.0 spec for "profiles" for older hardware. There will never be a game written for OpenGL2.0's shading language that runs on a GF4 or Radeon 8500.


Again, here's the current dilemma:

1) You are building a game engine and need to target near term hardware that's in the marketplace *TODAY*

2) You don't want to write assembly code

What do you do? Use a third party tool or hand-craft your own? Because NVidia is evil, of course, no developer should ever take anything from them. So where's the alternative tool for doing this? You cannot write a game using a vapourware future API, folks.
 
Doomtrooper said:
ATI doesn't need to do that, they are doing it the proper way...through Direct X and Opengl...now all of a sudden Nvidia wants to break away from both...if you can't see the irony and the skepticism coming not just from 'FANBOYS' :rolleyes: ..(I know what camp you are in Democoder) but other members of the ARB.
Again if this is such a good thing why no John Carmack, why haven't ATI, MATROX and 3DLABS made a press release saying that CG is the second coming...I can tell you why...they don't like it, need it or want it ;)

Point me to ANY of the major 3D players in the industry saying CG is good for them, then maybe you will have an argument!!

Good for whom? Good for ATI, Matrox, ImgTec, etc? Of course not. Good for developers, possibly yes. It's just another tool for them. Developers need not follow NVidia's call mindlessly. As someone else mentioned before, if it helps developers, then it will be used, if not, then dev teams will ignore it.

As to why Matrox, ATI, et al. haven't released a press release praising CG, can you point me to one where ATI praised NVidia for releasing the first T&L GPU? Surely that was a 'good thing'. Why didn't ATI praise it? Because ATI wasn't first to market with it, that's why. Standard PR for EVERY company is to hype up what you have and downplay what you don't.
 
Point me to ANY of the major 3D players in the industry saying CG is good for them, then maybe you will have an argument!!

Well, you got Kurt Akeley praising it, and even though he is a part-time employee of NVIDIA, he is also a co-founder of SGI and of OpenGL, which should say a lot right there. Then you have these guys, the guys who are responsible for making Cg either fail or be successful. There's a lot of major 3D players in there, if you ask me.
 
Doomtrooper said:
My point is not with developers, my point is 'how easy' Democoder and Gking are trying to make CG sound for ATI and Matrox and 3Dlabs to implement. Yet again, for the fifth time: there has been no press release stating so, in fact there has been a Rebuttal :rolleyes:

I stand by my opinion that when Opengl 2.0 and DX9 HLSL are released, CG will be just a memory.

Oh well, then in that case... you won't find any major "competitors" hyping Cg, for obvious reasons. Cg is all about the developers though, which I think is the most important part. I do see what you're getting at: if developers adopt it and NVIDIA benefits solely from it, it may be anti-competitive in the long run, which wouldn't be a good thing.

Would the developers be so excited about Cg if it was just going to be a memory when OGL 2.0 and DX9 comes out? You'd think they wouldn't be so excited and saying the things they are saying if Cg was going to make a quick exit. I guess you could always counter that NVIDIA paid them all to say positive things. :p
 
my point is 'how easy' Democoder and Gking are trying to make CG sound for ATI and Matrox and 3Dlabs to implement. Yet again, for the fifth time: there has been no press release stating so, in fact there has been a Rebuttal

Seeing as how the front-end is going to be closed until SIGGRAPH, you're not likely to see many legitimate arguments one way or the other until then. It's not as if 3DLabs doesn't have an agenda of its own, and the Codeplay comments were largely irrelevant, as they were almost exclusively complaints about the NV20 profiles in Cg.
 
LOL, I know what you meant. Just had to take a quick cheap shot. ;) Sorry, I ain't betting, I'm not feeling that confident, I just think all those developers are praising Cg for a reason, and their words rate higher on the list than some anonymous person on the web stating his opinion, because Cg = for developers.
 