Siggraph Rendermonkey Details and more....

Wait a second here, RenderMonkey will do exactly what Cg is doing.. forget the tech talk here for a second.
The idea behind Cg is to allow easy coding of shaders; RenderMonkey does the exact same THING with plugins on existing rendering software... there is no need for Cg at all.

Now I see people wanting to replace the OGL 2.0 HLSL with Cg... I thought Cg was just a tool, folks; now we see the truth :rolleyes:

Cg may export code, but it will export code optimized for Nvidia hardware, since Nvidia has taken matters into its own hands and gone above the DX9 standard. I am totally shocked people are falling for this load of BS...
What's the purpose of having a damn DX9 standard if one IHV can basically thumb their nose at it, start their own HLSL, and build hardware that exposes features above the rest of the players that ARE following the standard...
My, I can't believe the double standard here.. now we have people wanting Cg to take over the OGL 2.0 HLSL with Nvidia optimizations, and on top of it condoning IHVs starting their own standards and ignoring the DX9 spec... Now I've seen it all :rolleyes:
 
Toilet paper and Ex-Lax both make it easier for me to sit on my throne and contemplate.

That doesn't mean they do the exact same thing, or are equally useful.

If you'd get off your offended high horse and let people discuss the technical merits of RenderMonkey vs. Cg without all this hand-wringing and exasperated name-calling, maybe we'd all get a little education as to what the future holds.

By the way, my understanding of RenderMonkey is that it's not a language (per se), and not an IDE, but that it is attempting to be the GCC of render languages.

And when I say GCC, I don't mean the GNU C Compiler. GCC is now much more than that. It is a compiler that isn't tied to any single 'language'. It has a configurable front end and back end: the front end defines the language constructs and transforms the code into an internal tree/flow representation, and the back end transforms that internal tree/flow representation into assembly. It accepts Java, C, embedded C, C++, Fortran, etc., and transforms them into assembly code for over 20 different targets.
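To make the front end/back end split concrete, here is a toy sketch in C. Everything in it (the IRNode type, the one-op "stack machine" emitter) is invented for this post to illustrate the idea; it has nothing to do with GCC's actual internals:

    #include <stdio.h>

    /* A toy intermediate representation: an expression tree node.
     * A real front end would parse source code into trees like this. */
    typedef struct IRNode {
        char op;                   /* '+' for addition, 'v' for a literal */
        int value;                 /* used when op == 'v' */
        struct IRNode *lhs, *rhs;  /* children for binary ops */
    } IRNode;

    /* A "back end" is just a routine that walks the IR and prints
     * assembly for one target; each target supplies its own emitter. */
    typedef void (*BackEnd)(const IRNode *n);

    static void emit_stack_machine(const IRNode *n)
    {
        if (n->op == 'v') { printf("  push %d\n", n->value); return; }
        emit_stack_machine(n->lhs);
        emit_stack_machine(n->rhs);
        printf("  add\n");         /* only '+' exists in this toy */
    }

    int main(void)
    {
        /* Front end output for the source "2 + 3", built by hand here. */
        IRNode two   = { 'v', 2, NULL, NULL };
        IRNode three = { 'v', 3, NULL, NULL };
        IRNode sum   = { '+', 0, &two, &three };

        BackEnd target = emit_stack_machine;  /* swap per target */
        target(&sum);
        return 0;
    }

Swap the emitter and the same tree comes out as a different target's assembly; swap the parser and a different language feeds the same tree. That's the architecture being described, and what RenderMonkey would be doing for shader languages.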

RenderMonkey does not perform the same task as Cg (defining a high-level shader language); Cg does not perform the same task as RenderMonkey (providing a tool/architecture to transform multiple high-level shader languages to different targets).

However, it does seem that there is some overlap.
 
Cg is about platform independence; it's also a C-level language which allows hardware restrictions to be explicitly exposed to the developer (I find calling it a high-level language slightly misleading; it's a lot lower-level than most of its kin). Whether you think that's a good idea or not, there is really no other shading language which provides a 1:1 alternative. RenderMonkey doesn't solve all the problems Cg seeks to solve, and vice versa.
 
MfA said:
Cg is about platform independence; it's also a C-level language which allows hardware restrictions to be explicitly exposed to the developer (I find calling it a high-level language slightly misleading; it's a lot lower-level than most of its kin). Whether you think that's a good idea or not, there is really no other shading language which provides a 1:1 alternative. RenderMonkey doesn't solve all the problems Cg seeks to solve, and vice versa.

Cg is not about platform independence; it's exposing features actually above the DirectX 9 SPEC.
Cg is not 100% open source; from the links, it's the front end and a small portion of the back end. Nvidia still controls the source, and if anyone here is naive enough to think there isn't something in it for Nvidia (NV30 is just the start of it)... then the earth is flat :LOL:
 
Doomtrooper said:
Wait a second here, RenderMonkey will do exactly what Cg is doing.. forget the tech talk here for a second.
The idea behind Cg is to allow easy coding of shaders; RenderMonkey does the exact same THING with plugins on existing rendering software... there is no need for Cg at all.

Now I see people wanting to replace the OGL 2.0 HLSL with Cg... I thought Cg was just a tool, folks; now we see the truth :rolleyes:

Cg may export code, but it will export code optimized for Nvidia hardware, since Nvidia has taken matters into its own hands and gone above the DX9 standard. I am totally shocked people are falling for this load of BS...
What's the purpose of having a damn DX9 standard if one IHV can basically thumb their nose at it, start their own HLSL, and build hardware that exposes features above the rest of the players that ARE following the standard...
My, I can't believe the double standard here.. now we have people wanting Cg to take over the OGL 2.0 HLSL with Nvidia optimizations, and on top of it condoning IHVs starting their own standards and ignoring the DX9 spec... Now I've seen it all :rolleyes:

Seems like a case of paranoia...

1. Cg is first and foremost a shading language (and NVidia is the first to provide tools to use this language). RenderMonkey is a language-independent tool for efficient shader programming. That's nowhere near "the exact same THING".
RenderMonkey needs a language for shader representation to be useful. Assembly shaders aren't exactly cool, so developers should choose whatever language fits their needs best. Depending on their preferences, this might be Cg (if anyone cares to write a Cg plugin for RenderMonkey).

2. The NVidia Cg compiler may well output code optimized for NVidia cards. If any company decides to make their own Cg compiler, they can easily optimize for their own architecture. Remember, NVidia will open-source the compiler front-end (and a simple back-end).
And: Cg does NOT bypass DX or OpenGL! It is simply not true that NVidia uses Cg to expose hardware features that could not be accessed in another way.
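To put some code behind that: going by the C runtime API published with the Cg toolkit (cgCreateContext, cgCreateProgram, cgGLGetLatestProfile, cgGLLoadProgram), using Cg from an OpenGL app looks roughly like the sketch below. Treat it as a from-memory sketch, not tested code; the point is that the shader is compiled at run time for whatever profile the driver reports, and loaded through the API rather than around it:

    #include <Cg/cg.h>
    #include <Cg/cgGL.h>

    /* Compile a Cg vertex shader at run time for the best profile the
     * installed hardware/driver supports, then hand it to OpenGL.
     * 'source' is the Cg program text; its entry function is 'main'. */
    CGprogram load_vertex_shader(CGcontext ctx, const char *source)
    {
        /* Ask the runtime for the best vertex profile on this GPU --
         * this is exactly where per-vendor back-ends would plug in. */
        CGprofile profile = cgGLGetLatestProfile(CG_GL_VERTEX);

        CGprogram prog = cgCreateProgram(ctx, CG_SOURCE, source,
                                         profile, "main", NULL);
        cgGLLoadProgram(prog);  /* goes through OpenGL, not around it */
        return prog;
    }

A compiler back-end from another IHV would simply report (and emit code for) its own best profile here; nothing about the calling code changes.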

I don't know which of the upcoming shading languages is "better", but I have the patience to delay my judgement until there has been some real experience, and I won't cry out "Cg is bad" or "Cg rulez" based on some whitepapers.
One thing's for sure: assembly shaders are becoming obsolete.
 
The idea behind Cg is to allow easy coding of shaders; RenderMonkey does the exact same THING with plugins on existing rendering software... there is no need for Cg at all.

When I see a good RenderMan SL -> DX9 PS converter for any but the most trivial SL shaders (one that does as good a job of combining light, transformation, atmosphere, and surface shaders as I can do manually), I'll be impressed. It's not an easy task, and it probably won't be done particularly well for quite some time. RenderMan SL was designed for very different hardware than modern GPUs, and there have been plenty of reasons given why it doesn't map particularly well to real-time hardware.

Then there's the whole issue of Cg allowing real-time previewing in Maya and MAX, integrating with the workflow better than RenderMonkey will (or, for that matter, Maya SL/RenderMan SL).

I like Russ' GCC analogy for the RenderMonkey compiler.

Cg may export code, but it will export code optimized for Nvidia hardware, since Nvidia has taken matters into its own hands and gone above the DX9 standard.

You've defended PS1.4 in the past. I fail to see how ATI "ignoring" DX8 specs and going above standards is any different... at all.

Here's how the actual DX specs are developed:
1. Hardware vendors research advanced techniques and architectures.
2. Hardware vendors decide what to pursue for next-gen hardware.
3. Hardware vendors meet with Microsoft to debate the new DX spec.
4. Microsoft chooses the "best" lowest common denominator for DX.
5. Hardware vendors may or may not add features to their hardware to meet this spec.

In this case, the "best" lowest common denominator was decided to be R300. NVIDIA is not going to throw away the millions of man-hours of work invested into NV30 design in order to only follow what Microsoft is allowing. Instead, they open up advanced features in OpenGL.

That's the way it has always worked, and that's the way it's going to continue to work. In this case, a full HLSL and toolset (allowing real-time and rendered views of the HLSL shaders in popular modeling packages) are provided, since a good alternative didn't exist. Writing 1000-line shaders in assembly is an exercise in pain.

Now, here's the better question -- what would you be saying if Microsoft decided that NV30 was a better baseline for DX9? ATI obviously wouldn't have had time to revamp their pipeline to support everything in NV30 and still release this year (or possibly even next year).
 
Xmas, you can call me skeptical rather than paranoid... I work for a large corporation and know how it works. There is always fine print, and there is lots for Nvidia to gain here by controlling the source ;).
Yes, RenderMonkey does require a shader language, and ATI has been working with Microsoft on the DX9 HLSL and on the OGL 2.0 HLSL, so there are now two languages waiting (OGL progress is of course being blocked by Nvidia and MS, and looking at the latest ARB notes it looks like Q3 2003 before OGL 2.0 would be ready :rolleyes:).
 
Xmas said:
And: Cg does NOT bypass DX or OpenGL! It is simply not true that NVidia uses Cg to expose hardware features that could not be accessed in another way.

Cg 2.0 will compile directly to NVIDIA's proprietary NV30 extensions: functionality that probably won't be available through DX9, and that isn't available to other ISVs through ARB extensions, as NVIDIA's Cg alliance partner Microsoft is blocking OpenGL right now.

So yes, you're right, an NVIDIA tech demo coder can access the features another way, through the extensions. But that way you still bypass "OpenGL", as you only use the proprietary part of OpenGL, namely the proprietary OpenGL extensions of a certain vendor.

So for the next few months, Cg is only API-independent for NVIDIA hardware. In the future, it may only be API-independent for a small subset of the features, namely the ones available in the ARB extensions. API independence isn't really an advantage under these circumstances, making Cg basically completely obsolete.
 

You've defended PS1.4 in the past. I fail to see how ATI "ignoring" DX8 specs and going above standards is any different... at all.

I defend PS 1.4 as it is a STANDARD <------ show me Nvidia's STANDARD... what baseline do other companies have now, when one can simply thumb their nose at the current PS 2.0 standard with its 96 instructions?
There is a difference here... if Nvidia had wanted to put PS 1.4 into the GeForce 4 it easily could have.. the information is readily available.
If ATI wants to put 1024 instructions on its next part, i.e. the R350, how can ATI do so? What standard is there... the new Nvidia-controlled API?
It's getting out of hand here; there are no rules anymore.
 
Mephisto said:
Xmas said:
And: Cg does NOT bypass DX or OpenGL! It is simply not true that NVidia uses Cg to expose hardware features that could not be accessed in another way.

Cg 2.0 will compile directly to NVIDIA's proprietary NV30 extensions: functionality that probably won't be available through DX9, and that isn't available to other ISVs through ARB extensions, as NVIDIA's Cg alliance partner Microsoft is blocking OpenGL right now.
Still, what I wrote is true. That's what extensions have always been about, exposing features that are not in the OpenGL base specs (which certainly is not a bad thing, although it has some drawbacks). With DX it's different.
It's not like Cg opens up some backdoor for NVidia while keeping out the whole competition.
 
I defend PS 1.4 as it is a STANDARD

So your definition of STANDARD is anything that Microsoft has approved for exposing hardware features in DirectX. That's a pretty weak definition.

PS1.4 is no more a standard than NVIDIA's OpenGL extensions -- it's a hardware-specific path implemented by only one company. In this case, NV_register_combiners2 is a more legitimate standard, since the extension is also supported by 3DLabs (unlike PS1.4, which is only implemented by ATI).
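Incidentally, both vendor paths are detected the same way at run time: query the extension string and pick a code path. A quick C sketch using the standard glGetString(GL_EXTENSIONS) query (assumes a current GL context; a plain strstr can false-positive on extension-name prefixes, hence the whole-token check):

    #include <string.h>
    #include <GL/gl.h>

    /* Return non-zero if the driver advertises the named extension.
     * Matches whole space-delimited tokens, not substrings. */
    static int has_extension(const char *name)
    {
        const char *all = (const char *) glGetString(GL_EXTENSIONS);
        const char *p = all;
        size_t len = strlen(name);
        while (p && (p = strstr(p, name)) != NULL) {
            if ((p == all || p[-1] == ' ') &&
                (p[len] == ' ' || p[len] == '\0'))
                return 1;
            p += len;
        }
        return 0;
    }

    /* e.g. choose a fragment path per vendor:
     *   if (has_extension("GL_NV_register_combiners2")) ...
     *   else if (has_extension("GL_ATI_fragment_shader")) ... */

Either way it's a hardware-specific path behind a runtime check; neither one is more "in" the API than the other.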

Besides -- R200's fragment shader was finished long before Microsoft decided that they should add enhanced support for it into DirectX. You're just ignoring history if you believe that DX9.1, with enhanced NV30 support, isn't a possibility.

And you continue to ignore the fact that the hardware design was FINISHED long before the DirectX spec.

If ATI wants to put 1024 instructions on its next part, i.e. the R350, how can ATI do so?
ATI_fragment_program. The same way NVIDIA is providing access to all the features of NV30.
 
Weak? Hardly.. you show me the technical docs on Microsoft's webpage on how to implement Nvidia's enhanced instruction set (and what's it called: DX 9.1, 9.5, NV9.1?) and you have an argument. What's the purpose of having a standard if no one abides by it? What's the purpose...
The PURPOSE of DX is to have a LEVEL playing field for all hardware manufacturers... so a title can be developed on DX and run approximately the same on all cards.
OpenGL also stood for that at one time, but of course we know what happened there. o_O
 
"So your definition of STANDARD is anything that Microsoft has approved for exposing hardware features in DirectX. That's a pretty weak definition.'

Huh? Wouldn't the company defining the features to be supported within its own API be considered the one setting the standard that hardware manufacturers should target?

If MS says that PS1.4 is a standard feature of DirectX, then yes, I would say a hardware manufacturer producing a chip that supports these features is offering a standards-based hardware implementation.

Maybe Nvidia's NV30 architecture does support a post-DX9.0 standard; we simply do not know that just yet.
 