SIGGRAPH RenderMonkey Details and more....

Doom, there's a big difference between being able to do something and wanting to do it.

btw, a quote from www.cgshaders.org:
Have other vendors expressed interest in writing compiler profiles?

We are aware of several projects currently underway to create additional compiler backends.


When will the source code to the 'open' portions of Cg be available, and what exactly are those portions?

NVIDIA will be releasing source code to the "front-end" of the compiler, and a simple back-end. This code will contain the parser and the basic non-back-end-specific parts of the compiler, and the back-end we create will walk through the parsed program and print out some human-readable output. We will release it as soon as the code is ready; there have been some tweaks to the grammar, to be more compatible with Microsoft's HLSL, that we wanted to get in before releasing the source code.

And I don't see RenderMonkey and Cg as competing things.
 
Doomtrooper said:
Mephisto, if it's democratic how can MS and Nvidia block OpenGL ARB extensions without majority votes??

Because the majority of voting ARB members voted against the ratification of some ARB extensions out of fear of patent lawsuits from MS.
 
Xmas, it is still not 100% open source, not to mention Nvidia NEEDS Cg to expose the NV30... DX9 won't.

I'm sorry, I just don't think Cg is needed. It would have been MUCH less confusing if Nvidia had gone the way of the plugin, concentrated on DX9 and OpenGL 2.0, and done something similar to what ATI did... I think ATI's approach is the correct one. :-?
 
Mephisto said:
Doomtrooper said:
Mephisto, if it's democratic how can MS and Nvidia block OpenGL ARB extensions without majority votes??

Because the majority of voting ARB members voted against the ratification of some ARB extensions out of fear of patent lawsuits from MS.

Ahhh ok.. :rolleyes:
 
but would NVIDIA allow him to extend the Cg language specification to match his profile, e.g. to allow texture sampling in the vertex shader stage or to introduce new data types?

Texture sampling in the vertex shader stage is a feature of the profile, not part of the language specification.

If a vendor created hardware with writable scratch memory, there are a number of ways it could be implemented without changing the language spec.

What new data types would be added?
Integers? Possibly -- there isn't a strongly compelling reason to, though.

ASCII data? Probably not.

Non-constant samplers? Probably not. They would introduce way too many data hazards to be hardware-efficient.

Pointers? Not hugely useful at the surface shading level, especially with limited (linear) constant memory. Arrays are already part of the language spec, and with data-dependent indexing they cover much of what pointers would otherwise be used for.
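For illustration, a minimal Cg-style sketch of that data-dependent indexing point (the function and parameter names and the four-entry matrix palette are my own hypothetical choices, and whether run-time indices are accepted depends on the profile):

    // Hypothetical Cg vertex program: the bone index arrives as per-vertex
    // data and selects a matrix from an array, no pointers required.
    void skin(float4 position  : POSITION,
              float  boneIndex : TEXCOORD0,
              uniform float4x4 bones[4],   // small palette in constant memory
              out float4 oPosition : POSITION)
    {
        // Data-dependent array indexing in place of pointer arithmetic.
        oPosition = mul(bones[(int)boneIndex], position);
    }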
 
MfA said:
Copyrights apply to the implementation, not the interface. If you make your own Cg derivative/compiler/profile/whatever without infringing on patents, IP laws do not apply as long as you don't actually call it Cg (which I assume is a trademark).

I don't agree. Microsoft wasn't allowed to extend Java, so why should, e.g., Matrox be allowed to extend Cg to expose special features of their hardware?
 
That contradicts the .pdf.

And what has been explained to me.

Not really. The SIGGRAPH RenderMonkey PDF pretty much directly says that RenderMonkey is an IDE that can support any language via a plug-in architecture.

RenderMonkey isn't a language that plugs in to Maya, PRMan, or DX9; it is a tool that can accept Maya SL, RenderMan SL, or DX9 HLSL if the plug-in exists. If such plug-ins existed, I'd have to imagine that at least one slide would demonstrate something other than pixel shader assembly code.
 
Mephisto said:
Well, of course somebody "could" define a profile, but would NVIDIA allow him to extend the Cg language specification to match his profile, e.g. to allow texture sampling in the vertex shader stage or to introduce new data types? I highly doubt this.
Please download the Cg language documentation and actually read it. In fact, both texture sampling in the vertex shader stage and new data types are possible.
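For instance, a hedged sketch of what vertex-stage sampling could look like (all names here are hypothetical, and tex2D in a vertex program is only legal under a profile that actually exposes vertex texture fetch):

    // Hypothetical Cg vertex program doing displacement mapping with a
    // vertex-stage texture fetch; valid only under a profile that allows it.
    void displace(float4 position : POSITION,
                  float2 uv       : TEXCOORD0,
                  float3 normal   : NORMAL,
                  uniform sampler2D heightMap,
                  uniform float4x4 modelViewProj,
                  out float4 oPosition : POSITION)
    {
        float height = tex2D(heightMap, uv).x;   // profile-dependent fetch
        oPosition = mul(modelViewProj, position + float4(normal * height, 0.0));
    }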
 
First of all, we now have 3 different high level languages (forget about RenderMan, etc. for now): DX9 HLSL, OpenGL 2.0 HLSL, and Cg.
DX9 HLSL can only be used in DX and OpenGL 2.0 HLSL can only be used in OpenGL. Cg can be used both in OpenGL and DX (good thing).
RenderMonkey is not really competition to Cg. RenderMonkey is just an IDE; it eats many high level languages (and if it can eat DX9 HLSL it can also eat Cg) and outputs DX assembly code (is OpenGL included?).

One of the really great things about Cg is that it can be used at runtime (you can only use RenderMonkey at compile time). Now if you have a game that will support both OpenGL and Direct3D you need to:
learn DX9 HLSL and OpenGL 2.0 HLSL and write shaders for both APIs,
or use RenderMonkey, use DX9 HLSL (for example) and compile for both APIs (separate code for OpenGL and separate code for D3D),
or simply learn Cg and ship the game with shaders directly in HLSL format. Using Cg as a runtime compiler is really powerful (more powerful than using DX9 HLSL). You could for example write a Cg shader aimed at pixel shaders 2.0, and when pixel shader 3.0 came out the Cg compiler would be able to use it and optimise for it (if a ps.3.0 backend is available on the machine). You could do this with DX9 HLSL only if you recompiled the complete game engine (or the whole game in the worst case).
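To sketch that "one source, many back-ends" point: the same Cg file can be compiled for different profiles, offline with nVidia's cgc compiler or at run time through the Cg runtime (the shader below is illustrative, and the exact set of profile names depends on the release):

    // diffuse.cg - a single source targeting multiple profiles, e.g.:
    //   cgc -entry diffuse -profile ps_2_0 diffuse.cg   (DX9 pixel shader asm)
    //   cgc -entry diffuse -profile arbfp1 diffuse.cg   (OpenGL ARB fragment program)
    // The Cg runtime can perform the same compilation at run time and pick
    // the best profile the installed driver supports.
    float4 diffuse(float3 normal   : TEXCOORD0,
                   float3 lightDir : TEXCOORD1,
                   uniform float4 lightColor) : COLOR
    {
        return lightColor * max(dot(normalize(normal), normalize(lightDir)), 0.0);
    }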

Cg is really powerful. It's a good thing that ATI does not need to do anything to be supported by Cg. Cg will be able to compile for all DX pixel and vertex shaders (and they will probably include support for ps.1.4). Even in OpenGL we will be able to run Cg on ATI cards as soon as there are standard "vertex shader" and "pixel shader" extensions (and ATI will "have to" support them). The only way to get rid of Cg is to make DX9 HLSL and OpenGL 2.0 HLSL 100% compatible :LOL:.
Cg is a language; RenderMonkey is an IDE. RenderMonkey can compete with Cg plugins for 3D Studio MAX, Maya, ... but not with Cg itself (can Visual C++ 6.0 compete with C/C++? No!).

End of story!
 
Hmm, this previously posted quote...

Cg" discussion
NVIDIA wanted to discuss their goals with Cg (although they are not offering Cg to the ARB)

Principal goals are to enable apps to use all the features of NVIDIA hardware, do rapid prototyping, and make efficient use of the underlying architecture. "Profiles" allow the compiler to control what it accepts as valid programs.

Goal is that there not need to be multiple, very similar shading languages for different underlying APIs and platforms. Thus Cg is syntactically and semantically compatible with HLSL in DX9; may not be true forever, but it is today. This is a constraint on evolving the language; had to get feedback from developers and work with Microsoft.

Bimal asked about NVIDIA's interest in incorporating a high-level language directly into OpenGL, a la 3Dlabs' design. Nick says what's important is solving developers' problems, and they want a single shading language. If the market wants an integrated language, NVIDIA will support that too - but they don't think that's what developers want, and they're less excited about it. What's critical to making Cg work is having the right low-level interfaces to the underlying API, and that's where they want to concentrate their efforts.

Discussion followed on the relative effort the ARB should put into low and high level language working groups. ARB_vertex_program now represents nearly 2-year-old technology; low level interfaces need to look at branching, looping, etc. constructs.

Cg backend receives a partially optimized DAG representation of the shader. Must figure out how to map onto what the backend supports.

Compatibility with DX is very important, but they're willing to weigh advantages of changes vs. cost. Cg and the proposed "OpenGL 2.0" language are similar but not identical; both look like C, and both target multiple underlying execution units. Interfaces, communication between nodes, and accessing state across nodes are different. It's very late, but not too late to contemplate merging the two languages.

This should answer any accusations that those of us against Cg are nothing but mindless fanboys. The issue here is that the intent of the language is to favor Nvidia hardware... and as they clearly indicate, their cooperation with everyone else can end any time they choose. Likely it will, if things do not play out to their advantage.

This is why a language like Cg needs to be developed by a neutral party who genuinely DOES care about the entire industry, not about lining their own coffers... which was the intent of the ARB and DX in the first place.

Anyone who can't see that... is, well... in need of new glasses.
 
MDolenc said:
DX9 HLSL can only be used in DX and OpenGL 2.0 HLSL can only be used in OpenGL. Cg can be used both in OpenGL and DX (good thing).

3Dlabs asked the ARB to write compilers which can compile OGL HLSL shaders to DX8/9 assembler.
 
MDolenc said:
The only way to get rid of Cg is to make DX9 HLSL and OpenGL 2.0 HLSL 100% compatible :LOL:.

And that's where it all kind of falls apart really, as I was alluding to in my previous post.

If Cg is offered to the ARB to be adopted as the OGL2 HLSL standard, can you see the rest of the ARB adopting it without the ARB having control over its specification? And if that occurs, can you then see it remaining 100% compatible with DX? I'm not sure I see that happening, and you end up back at square one (although with languages that are similar).
 
Let's try a different approach....

WHY would nVidia create their own HLSL that is targeted to be "compatible" with Microsoft's HLSL for DX9? What purpose would that serve?

There are only 2 possibilities IMO.

1) To get it out sooner than DX9 HLSL.
2) To expose NV30 functionality that DX9 HLSL won't.

Option one does not seem the primary motivator IMO. If that were the case, and nVidia was working "with" Microsoft as they claim... why not just release the DX9 HLSL specification early, with nVidia's Cg compiler and other tools as the Cg suite?

Option 2 is the pretty obvious motivator.

Now, there's nothing wrong with nVidia doing this, any more than there is anything wrong with IHVs writing proprietary extensions to GL to expose functionality.

But that doesn't make the Cg language any more potentially useful than nVidia-specific extensions. (That is, useful to target nVidia hardware specifically, but not any more useful than DX9 HLSL for other hardware.) And to expect or believe other companies will support the Cg language over, or in addition to, other HLSLs given this doesn't seem likely.
 
I just wanted to add my thoughts as to why I believe that Cg is of benefit to developers. It largely has to do with wide adoption by tool ISVs.

What is desperately needed is a way to work with shaders within a tool like Maya (or 3DS or Softimage) and be able to export the shader along with the attribute data to a game (model) format.

For this to really work, the shader language has to be pretty much platform-agnostic, i.e. it doesn't care if the tool is written in OpenGL or DX9 or whatever, and doesn't care what API the game is targeted at.

The core Cg specification is extremely open-ended and general, and probably won't require significant modification in the next few years. I like the idea of the profiles to target specific subgroups of functionality.

I also believe that if the language is to grow, the language specification has to be controlled by a single entity (although personally I don't care who), preferably one that is looking to create a usable language rather than one that is trying to push its own agenda.
In the near future we'll reach the point where any new GPU will be able to run any shader, so the only real issue should be the syntax and semantics of the language, and while the language is maturing a single entity needs to hold the "vision" of what that is.
 
Reread MDolenc's post, Joe; it catches the point succinctly IMO.

'Why would you want to use Cg?'

B/c it goes to both DX9 AND OGL2.0 in HLSL mode.

Assume for argument's sake that Cg is much more powerful than DX9 HLSL and 3Dlabs' HLSL.

If the language is that much better, one could write 5 lines of code instead of 20 for DX and 7 for OGL 2 (5 instead of 27 total), with the ability to recompile against one standard if the developer wants future additions and the backend is available.

For developers the only drawback is that, presumably (unproven), Cg won't have optimization paths for other IHVs as complete as Microsoft's and OGL's offerings.

The acceptability of Cg thus rests, in my opinion, on the first premise: whether or not it's really that much better.
 
B/c it goes to both DX9 AND OGL2.0 in HLSL mode.

That's basically a compiler option. Anyone should be able to write a compiler that takes DX9 HLSL, and creates OGL 2.0 code from it as well. That's not an advantage of the Cg language, it's an advantage of nVidia's compiler.

Assume for argument's sake that Cg is much more powerful than DX9 HLSL and 3Dlabs' HLSL... The acceptability of Cg thus rests, in my opinion, on the first premise: whether or not it's really that much better.

Since Cg is supposed to be fully compatible with DX9 HLSL, I don't see how it could really be that much more powerful. Though I do agree that if the language itself is so much more robust than any of the "standards", that would be a reason why developers could adopt it....which could "force" IHVs to support it.
 
Mephisto said:
Xmas said:
... new data types are possible.
And how?
By using profiles that introduce those data types :rolleyes:


Another small excerpt from the Cg documentation:
"Some implementations may be unable to support some data types, especially in fragment-program profiles. If so, sub-setting of the types will be necessary, and additional types may be introduced. However, implementations must always support the cfloat and cint types. Fully conformant implementations must also support the float type. If float is not supported, implementations may have to modify the promotion rules for cfloat and cint, especially in function overloading."
 