'CineFX' Conference Call

That is what I thought: the first Cg-coded program won't work on hardware other than Nvidia GF3/4. Obviously it is not an industry standard but an Nvidia standard as it sits now. I doubt it will progress beyond Nvidia, mainly due to Nvidia themselves.
I haven't had a look at SpeedTree, but is this because of missing vertex extensions on the 8500/8500 drivers, or because of what you said above?

The SpeedTree site does say that it "uses a number of relatively new extensions in order to render different special effects. v1.0 of this demo requires that those extensions be present."

So, a missing extension on an 8500, or because of Cg?
 
It's more than one missing extension...


[attachment: speedtree.jpg]


So definitely nVidia-specific at this time...
 
jandar said:
It's more than one missing extension...


[attachment: speedtree.jpg]


So definitely nVidia-specific at this time...

Those extensions are used by many OpenGL apps, not just Cg. More than likely, if they weren't using Cg but plain old vertex shaders, you'd get the same error. It's their fault for not coding a fallback and for requiring them. NV_fence, for example, has nothing to do with vertex or pixel shaders; it is just a synchronization call to help the GPU and CPU better manage vertex buffer writes and reads.

Moreover, the Cg compiler (if you look at the source) is still far from complete, and no doubt it will be some time before it is debugged on all chipsets.
 
The SpeedTree demo doesn't work on my computer :-?
OpenGL simply stops working and a Windows error report is generated for M$.

GF3ti200 Asus 29.80 drivers/WinXP/P3-S/256MB
Any idea? thanks
 
You might want to use the official 29.42 drivers, which I use in W2K, and it works flawlessly.

SpeedTree doesn't have multiple paths for other hardware; it is Nvidia or the highway as it stands.

Has Nvidia gone to ATI, Matrox, SiS, S3, and whoever else and said: here is all the code, it is optimized for us, but please take it for free and do what you must to make it work well on your hardware? Let's get together, or send your optimized code to us and we will publish everything in one package, while you can update the language as necessary to support your hardware with an optimized backend compiler.

Legally, no one can do anything with Cg to modify its code until it is licensed from Nvidia. Now, is that an industry standard? Only a self-proclaimed industry standard at best.
 
Anyone can take the Cg front end and implement their own back end. Building an optimizing compiler backend is a very sophisticated undertaking, and no vendor is going to just give it away. The backend is the "crown jewels" as far as IP is concerned.

NVidia will likely release a generic backend that generates DX8/DX9 shader code but performs only trivial peephole optimizations (e.g. no dataflow analysis, etc.).
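For illustration, a trivial peephole pass of the kind described above might look like the following toy sketch. The instruction encoding and the single rewrite rule are made up for the example; this is not NVIDIA's actual backend.

```python
# Toy peephole pass over DX8-style shader assembly (illustrative only).
# Instructions are (opcode, dst, src) tuples; the sole rewrite rule drops
# a "mov b, a" that immediately follows "mov a, b" (the value is already
# in place, so the second mov is redundant).
def peephole(instructions):
    out = []
    for ins in instructions:
        op, dst, src = ins
        if out:
            prev_op, prev_dst, prev_src = out[-1]
            if (op == "mov" and prev_op == "mov"
                    and dst == prev_src and src == prev_dst):
                continue  # redundant mov: skip it
        out.append(ins)
    return out

shader = [
    ("mov", "r0", "v0"),
    ("mov", "v0", "r0"),   # redundant: v0 already holds this value
    ("add", "r1", "r0"),
]
print(peephole(shader))    # the redundant mov is gone
```

A real backend would run many such rules over a proper intermediate representation; this shows only the window-matching idea.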
 
Anyone can take the Cg front end and implement their own back end.

Why would they? And why would, let's say, 3DLabs do that and be stuck with an HLSL controlled by their competitor? Microsoft will have something similar; why not just go with someone more neutral whose best interest is to support the whole industry?

If it is so easy, why doesn't Nvidia release their backend to the industry? Sounds like it isn't really that easy to optimize code for your hardware overnight. Cg will die in the industry as fast as its proclaimers lifted it up.
 
I will sum up this 5-page thread:

"Any vendor could..."

"why should they?"

Repeat ad nauseam.

Now, could we stop repeating, at least until more information comes out?

[edit typo so I don't look like idiot]
 
noko said:
Anyone can take the Cg front end and implement their own back end.

Why would they? And why would, let's say, 3DLabs do that and be stuck with an HLSL controlled by their competitor? Microsoft will have something similar; why not just go with someone more neutral whose best interest is to support the whole industry?

If it is so easy, why doesn't Nvidia release their backend to the industry? Sounds like it isn't really that easy to optimize code for your hardware overnight. Cg will die in the industry as fast as its proclaimers lifted it up.

Let's see. ATI and 3DLabs must implement a compiler for DirectX9 HLSL. That means they need to implement a lexical scanner, a grammar, an abstract syntax tree, semantic analysis, a symbol table, perhaps a translator to intermediate representation, build control flow graphs, dominator trees, register coloring/allocation, use-def/def-use chain structures, dataflow equations, ad nauseam.
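To make the first few stages in that list concrete, here is a toy sketch of a lexer and recursive-descent parser producing an AST for a single shader-like expression. It is purely illustrative and has nothing to do with the actual Cg source; the grammar and identifiers are made up.

```python
# Toy frontend: lexical scanner -> parser -> abstract syntax tree,
# for a made-up grammar:  expr := term ('+' term)* ; term := atom ('*' atom)*
import re

def lex(src):
    # lexical scanner: break the source string into tokens
    return re.findall(r"[A-Za-z_]\w*|\d+|[*+()]", src)

def parse(tokens):
    def expr(i):
        node, i = term(i)
        while i < len(tokens) and tokens[i] == "+":
            rhs, i = term(i + 1)
            node = ("+", node, rhs)      # AST node: (op, lhs, rhs)
        return node, i
    def term(i):
        node, i = atom(i)
        while i < len(tokens) and tokens[i] == "*":
            rhs, i = atom(i + 1)
            node = ("*", node, rhs)
        return node, i
    def atom(i):
        return tokens[i], i + 1          # identifiers/numbers are leaves
    tree, _ = expr(0)
    return tree

# '*' binds tighter than '+' because term() is nested inside expr()
ast = parse(lex("diffuse + specular * atten"))
print(ast)   # ('+', 'diffuse', ('*', 'specular', 'atten'))
```

The later stages (semantic analysis, IR, dataflow, register allocation) are each far larger than this, which is exactly the point about reusing an existing frontend.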

It just so happens that NVidia released everything up to the translator to IR free to the industry as open source. Those stages are not really Cg-specific (except for the lexer/grammar stage) and can be reused by anyone, free. So why not use them?

ATI and 3DLabs will have to do this ANYWAY for DX9 HLSL and they will have to implement a backend/optimizer for DX9 HLSL.

They have two choices:

1) write an entire compiler from scratch (like writing an ICD from scratch)
2) reuse open source compiler frontends and middleends (Cg or GNU C RTL)


You guys just don't get it. Cg is just NVidia's trademark on the DX9 HLSL language. I have seen the latest DX9 HLSL beta grammar, and it is essentially identical to Cg. Cg is not going to die. Rather, Cg will become an open-source implementation of DirectX9 HLSL the same way GNU C is an open-source implementation of C. This situation is no different than MS calling Java "J++" and adding a few pragmas. "Cg" will live on as meaning "NVidia's third-party compiler, toolset, extensions, and runtime library", but essentially the language will be DX9 HLSL and (if the stars are in alignment) OpenGL 2.0 HLSL as well.


If ATI wants to rewrite the lexer/parser/AST/et al., so be it, but there is a compelling reason to use open-source stuff when it is available and works (e.g. witness GNU C's RTL architecture being used for every language under the sun). Because Cg is open source, the compiler front end is going to be integrated into a lot of IDEs.

NVidia has delivered on everything they promised they would do with Cg (release source, propose it as a standard, support DX8 and OGL ARB extensions). But the one thing you guys won't accept (and which NVidia claims) is that Cg is 100% compatible with DX9 HLSL.

When the final DX9 specs become public, and you see how wrong you have been, I hope there are going to be some serious mea culpas.
 
ATI and 3DLabs will have to do this ANYWAY for DX9 HLSL and they will have to implement a backend/optimizer for DX9 HLSL.

I was under the impression that MS would do the DX9 HLSL compiler, since it would just be compiling to generic DX9 assembly. IIRC, I already mentioned what you are saying here, and the idea was there in principle, but I don't think that was how it was going to work.
 
DaveBaumann said:
ATI and 3DLabs will have to do this ANYWAY for DX9 HLSL and they will have to implement a backend/optimizer for DX9 HLSL.

I was under the impression that MS would do the DX9 HLSL compiler, since it would just be compiling to generic DX9 assembly. IIRC, I already mentioned what you are saying here, and the idea was there in principle, but I don't think that was how it was going to work.


MS will provide generic DX9 assembly output, but that won't get you optimized performance on all platforms. C compilers can do "generic 80x86" output, but with widely varying performance hits on different architectures (486, 586, P3, P4, AMD, etc.). Modern compilers are sensitive to the number of pipelines, cache architecture, scheduling constraints (e.g. the CPU can't do a DIV and a MUL at the same time, but can do a MUL and an ADD, so rearrange instructions to maximize ILP), branch prediction, etc.

Same goes for GPUs. Different vertex shaders will execute with different performance characteristics. For example, it may be that you can do a texture instruction in parallel with a color op, or that a RCP or SIN/COS function operates more efficiently on some platforms than others. Perhaps RCP eats up extra resources and you need to insert scalar ops in any open slots in the schedule.
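As a toy illustration of that kind of scheduling, the sketch below greedily pairs adjacent independent instructions into dual-issue slots, under the hypothetical constraint that a MUL can co-issue with an ADD but nothing else. The instruction format and pairing rules are invented for the example and don't describe any real GPU.

```python
# Toy dual-issue scheduler. Instructions are (opcode, dst, *srcs) tuples;
# CAN_PAIR is a made-up co-issue rule (MUL+ADD only, never DIV+anything).
CAN_PAIR = {("mul", "add"), ("add", "mul")}

def depends(a, b):
    # b depends on a if b reads a's destination register
    return a[1] in b[2:]

def schedule(instructions):
    """Greedily pair adjacent independent instructions into one cycle."""
    cycles, i = [], 0
    while i < len(instructions):
        a = instructions[i]
        if (i + 1 < len(instructions)
                and (a[0], instructions[i + 1][0]) in CAN_PAIR
                and not depends(a, instructions[i + 1])):
            cycles.append([a, instructions[i + 1]])  # co-issue both
            i += 2
        else:
            cycles.append([a])                       # issue alone
            i += 1
    return cycles

prog = [
    ("mul", "r0", "v0", "c0"),
    ("add", "r1", "v1", "c1"),   # independent of the mul: co-issues
    ("div", "r2", "r0", "r1"),   # depends on both: issues alone
]
print(len(schedule(prog)))       # 2 cycles instead of 3
```

A compiler that knows these pairing rules can also *reorder* instructions to create such opportunities, which is exactly what a generic "catch-all" backend can't do for every chip.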

A given vertex shader can be optimized somewhat by the device driver, but there are limitations, since the shader is not a good intermediate representation of the original semantics.

If ATI or 3DLabs really wanted the most optimal output, they'd take the HLSL and compile it themselves, performing high-level optimizations for the known characteristics of their hardware.


My position has always been:

1) There will be a "catch all" generic compiler for all hardware. (both Cg and MS D3DX will provide this)
2) "Catch all" won't provide best performance
3) Vendors wishing to eke out better performance are going to implement their own HLSL compiler
4) NVidia has done so for NV30 (to the disdain of the ati *boy consortium)
5) If ATI/3DLabs/et al want top performance, they should do so as well
6) Reinventing the wheel is a bad thing. That's why "unified drivers" exist: a lot of the significant work in writing an OpenGL ICD is reusable. Same goes for a compiler.
7) There exist open-source compilers these companies can use to start themselves off (GNU's arch, NVidia's Cg, and many more)


As GPUs become general-purpose computation devices, the SAME trends we saw with CPUs and compilers will be applied to GPUs. Just like the first CPUs, we started with fixed-function devices (ENIAC, Colossus, etc.), they graduated to assembly language (punch cards), and now we are getting to our first high-level languages for GPUs. Over the next decade, you will see many, many HLSLs for GPUs, not just one. And you will see many, many compilers and IDEs.

Those hoping for a single programming language for the GPU are going to be in for a rude awakening.
 
If ATI or 3DLabs really wanted the most optimal output, they'd take the HLSL and compile it themselves, performing high-level optimizations for the known characteristics of their hardware.

I'm sure they will, if they can. The question is, will MS allow it?
 
DemoCoder said:
You guys just don't get it. Cg is just NVidia's trademark on the DX9 HLSL language. I have seen the latest DX9 HLSL beta grammar, and it is essentially identical to Cg. Cg is not going to die. Rather, Cg will become an open-source implementation of DirectX9 HLSL the same way GNU C is an open-source implementation of C. This situation is no different than MS calling Java "J++" and adding a few pragmas. "Cg" will live on as meaning "NVidia's third-party compiler, toolset, extensions, and runtime library", but essentially the language will be DX9 HLSL and (if the stars are in alignment) OpenGL 2.0 HLSL as well.

I'd like to highlight this, and point to past threads that this echoes. All my arguments are there; no need to repeat them, I just refer to them to save rehashing the same things over again. As Russ says, we are waiting on more info and clarification, and we'll just be rehashing everything over again at this rate.
 
Call me crazy, but A PLUGIN that works with DirectX 9 HLSL will already do these optimizations, without writing another HLSL. This has been talked to death before, but I find it amusing that no one even acknowledges that ATI has 'covered its bases', so to speak.

Right Hemisphere, an ATI strategic partner, aided with the design of RenderMonkey. If you see what they do there, then you can see RenderMonkey will do these optimizations without introducing another HLSL.

http://www.righthemisphere.com/
 
Doomtrooper said:
So where does RenderMonkey fit into this scenario, Coder, or did you just overlook that little tidbit as if it doesn't exist?

RenderMonkey, as far as I know, is not a compiler but a code generator. RenderMonkey generates HLSL and/or glues together pre-fab shader code, but it doesn't compile a procedural language to assembly per se, as is the case with most languages.

IIRC, RenderMonkey relies on invoking external tools for compilation.
 
Democoder

I will not say mea culpa, because I see no guilt in being conservative and waiting to see. I have my head in the clouds but my feet are on the ground. Like an old mouse, I am very suspicious of any free cheese. Is the cat around?

Also, what about the single license noko posted?

It is nice to see that you are enthusiastic about Cg and want to work with it (if you are not already), but we really need standards for many real-world reasons (competitiveness, productivity, etc.). At the same time, proprietary has value too.

One of the most standardized things I know is the Internet, and it has real value for people because of the high volumes (the exponential network effect). Productivity is key, and good standards help productivity, just as specific products help relieve some localized pressure points. You can use OSPF on the Internet and EIGRP in the intranet. Do you see the point?

Regards
Pascal (tired and going to sleep) :)
 
Did you guys even try to download the code? It comes with its own license; the legal info on the website is NOT click-through. There is no implicit or explicit agreement needed to download the compiler. It only clarifies your rights for material without a license, which indeed basically come down to none.

The important part of the actual license:
In consideration of your agreement to abide by the following terms, and subject to these terms, NVIDIA grants you a personal, non-exclusive license, under NVIDIA's copyrights in this original NVIDIA software (the "NVIDIA Software"), to use, reproduce, modify and redistribute the NVIDIA Software, with or without modifications, in source and/or binary forms; provided that if you redistribute the NVIDIA Software, you must retain the copyright notice of NVIDIA, this notice and the following text and disclaimers in all such redistributions of the NVIDIA Software. Neither the name, trademarks, service marks nor logos of NVIDIA Corporation may be used to endorse or promote products derived from the NVIDIA Software without specific prior written permission from NVIDIA. Except as expressly stated in this notice, no other rights or licenses express or implied, are granted by NVIDIA herein, including but not limited to any patent rights that may be infringed by your derivative works or by other works in which the NVIDIA Software may be incorporated. No hardware is licensed hereunder.
 