NVIDIA Open Sources Cg

Using the DX9 HLSL as a cross-platform language is the most logical approach, and a good move I hope others will follow. If you concede it's a good approach, I don't see what the big deal is about borrowing NVIDIA's mechanism for determining how you specify the appropriate language subset and function library; it's hardly overly restrictive ... and as long as you implement your own compiler (or if NVIDIA's license ends up more open than feared) you can always go your separate way when necessary.
 
Well, now that nVidia has fully open-sourced all of the code for Cg under a nonrestrictive license, hopefully we will see support from other places.

For example, what I'd like to see is the merging of Cg and the HLSLs in DX and OpenGL. This is where Cg is potentially good: it could unify, in one aspect at least, programming between DX and OpenGL.

But, if Cg ends up as nVidia-only, then I don't see much point in people using it. After all, we will have other languages that will work on different hardware.

In other words, I really feel that the future of Cg depends on what happens with DX9 and GL2.
 
MfA said:
But the ARB is too slow, and m$ doesn't allow extensions ... so what's a company with hardware ahead of what DX exposes to do?

Why is the ARB too slow? Read the OpenGL meeting notes and tell me which company constantly hampers progress, then, along with M$, blocks ARB extensions....


I'm only stating facts here.
 
One good example:

For at least six months to a year after the release of the GeForce3 and Radeon 8500 cards, there was no industry-standard pixel/vertex shader interface for either.
 
Boy, this is sad. Not one of the Cg detractors can point out materially how Cg differs from DX9 HLSL. It's speculation in the absence of information.

The only publicly available info about a concrete difference is the introduction of "profiles", which NVidia added for backwards compatibility with DX8 and OpenGL fragment shaders. Microsoft might pick up these features and roll them back into DX9 HLSL proper, which would mean Cg is simply NVidia's implementation of DX9 HLSL.

No one in this forum can point out any NV30 specific features in Cg that don't already exist in DX9 HLSL or will exist in the final version.


If Cg ends up being NVidia's trademark on their DX9 HLSL compiler and toolset, who the hell cares? As I've explained ample times in the past, if I am a developer I have the following choices:

1) Using Visual Studio, I can write, by hand, vertex and pixel shaders for DX9
2) I can choose to use DX9 HLSL instead and use the tools provided by MS to do the compilation/generation step
3) I can choose not to use MS's tools, but prefer NVidia's, since they might be easier to use or generate more optimal code
4) I might want to use DX9 HLSL, but generate code for OpenGL. I use NVidia's tool.
5) I use RenderMonkey instead of MS's tool to compile DX9 HLSL
6) If I am targeting both NV30 and R300, then I will use Cg to generate optimal code from DX9 HLSL for the NV30 code path, and ATI's compiler to generate code for the R300 optimally. I will then use MS's "generic" compiler to handle everything else.


This issue is being blown way out of proportion. Even if NVidia invented a completely different language, I might still want to use their tool, the same way that many developers avoid talking to DirectX directly and instead use higher-level third-party libraries like RenderWare.


Finally, RenderMonkey appears to have its own proprietary XML syntax for describing shaders and shader parameters, so should we be aggressively going after them as well, since they are not going through the ARB to standardize it?
 
Chalnoth said:
One good example:

For at least six months to a year after the release of the GeForce3 and Radeon 8500 cards, there was no industry-standard pixel/vertex shader interface for either.

I wonder why, when one major player was asking for licensing fees in OpenGL:

OpenGL Programmable Shading
David Kirk reaffirmed NVIDIA's commitment to extending OpenGL for the good of the whole community. They got off on a bad foot with some of the legal positioning on NV_vertex_program, and would now like to offer the extension to the ARB free of any encumbrances or conditions, in contrast to the previous position.

Bimal asked if the previous constraint on adopting the extension only without changes still applied. David says that as we move away from features to programmability, NVIDIA is focusing on the processor instruction set. They'd resist changes which made the extension incapable of running on their platform, but would be more receptive to other changes. Going forward, use will be made less restrictive; IP does underlie the extension, but

http://www.opengl.org/developers/about/arb/notes/meeting_note_2001-09-11.html
 
The only problem is, at least as far as I can see, RenderMonkey and Cg are fundamentally different. RenderMonkey uses the Renderman language, while Cg uses an HLSL based on C.

I don't know what DX uses, but I do know that the OpenGL HLSL is also based on C, and that it is not impossible for the two to converge before release (I just don't know whether or not that will happen).
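To make the "based on C" point concrete, here is a minimal Cg-style fragment program. This is an illustrative sketch only — the struct name and the decal sampler are invented for the example — but the vector types (float4, float2), binding semantics (COLOR0, TEXCOORD0), and the tex2D standard-library call follow Cg's documented conventions, and the same C-like structure carries over to DX9 HLSL almost unchanged:

```cg
// Illustrative sketch: modulate a decal texture by the interpolated
// vertex color. Struct and parameter names are hypothetical.
struct FragIn {
    float4 color : COLOR0;     // interpolated vertex color
    float2 uv    : TEXCOORD0;  // texture coordinates
};

float4 main(FragIn input, uniform sampler2D decal) : COLOR
{
    // C-like expression syntax with built-in vector types;
    // tex2D is part of the Cg standard library.
    return input.color * tex2D(decal, input.uv);
}
```

The point of contention in this thread is not this source syntax, which Cg and DX9 HLSL essentially share, but which compiler and profile mechanism turns it into hardware-specific code.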
 
Chalnoth said:
The only problem is, at least as far as I can see, RenderMonkey and Cg are fundamentally different. RenderMonkey uses the Renderman language, while Cg uses an HLSL based on C.

RenderMonkey uses DX9 HLSL (C-like) by default, with Renderman or the OGL 2.0 HLSL to come in the future. It is plugin-based and can work with ANY language.
 
Mephisto said:
RenderMonkey uses DX9 HLSL (C-like) by default, with Renderman or the OGL 2.0 HLSL to come in the future. It is plugin-based and can work with ANY language.

Well, then, I guess RenderMonkey couldn't be considered a language, but a development tool, similar to nVidia's NVEffectsBrowser, I imagine (though it does appear more capable).
 