Does Cg make sense?

CMKRNL

Newcomer
So far I've been keeping out of the Cg debate simply because I don't know enough about the language or its limitations to form an educated opinion on whether it's a good or bad thing for the industry. However, I have been giving some thought to where nVidia is perhaps hoping to position the NV30 (as a high end renderfarm accelerator) -- and how Cg fits into that environment.

I'm hoping to get some clarification from people here who are familiar with writing shaders in other HLSLs such as Renderman or the Maya Shading Language. I really don't know much about these, but I know that some people here (DemoCoder?) have done extensive Renderman coding. My question is: how difficult/time-consuming is it to convert existing Renderman or Maya programs to Cg?

Given the existing investment in Renderman shaders, why would a high-end developer want to learn a new language and port thousands of lines of code to Cg? Does Cg offer benefits that Renderman or Maya SL don't? Wouldn't it make more sense for nVidia to push their own version of a tool like RenderMonkey that translates these existing, established languages into something that will compile for their HW?
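To make the porting question concrete, here's the sort of thing I mean: RenderMan's standard "matte" surface shader next to my naive guess at a hand-written Cg equivalent. (The Cg side is entirely my own sketch, so people who actually know the language should correct me. The point is that RenderMan's implicit globals like N, Cs, and Ci, and its light-loop calls like ambient() and diffuse(), have no direct Cg counterparts, so even a trivial shader isn't a line-for-line translation.)

    // RenderMan original (the standard matte shader), for reference:
    //   surface matte(float Ka = 1; float Kd = 1;) {
    //       normal Nf = faceforward(normalize(N), I);
    //       Oi = Os;
    //       Ci = Os * Cs * (Ka * ambient() + Kd * diffuse(Nf));
    //   }

    // My guess at a hand-ported Cg fragment program. The globals and
    // light loops disappear: everything is passed in explicitly, and
    // we're down to a single hardwired light.
    float4 matteFrag(float3 N : TEXCOORD0,        // interpolated normal
                     float3 L : TEXCOORD1,        // direction to the light
                     uniform float4 Cs,           // surface color
                     uniform float4 ambientLight, // ambient light term
                     uniform float  Ka,
                     uniform float  Kd) : COLOR
    {
        float diff = max(dot(normalize(N), normalize(L)), 0);
        return Cs * (Ka * ambientLight + Kd * diff);
    }

Now multiply that gap by thousands of lines and arbitrary numbers of lights, and you can see why I'm asking.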

So from a high-end perspective, I don't see why you would want to go with Cg. From the low-end, game-developer perspective, Richard Huddy's comments (as well as nVidia's) make it sound very much like DX9 HLSL is either equivalent to Cg or is Cg itself. If so, why wouldn't vendors just go with the MS-sanctioned version?
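For illustration, the published snippets suggest the two overlap to the point where a simple vertex program (this one is my own toy example, not taken from either SDK) should compile unchanged under both Cg and DX9 HLSL:

    struct VertOut {
        float4 pos   : POSITION;
        float4 color : COLOR0;
    };

    VertOut main(float4 pos : POSITION,
                 float4 color : COLOR0,
                 uniform float4x4 modelViewProj)
    {
        VertOut o;
        o.pos   = mul(modelViewProj, pos); // transform to clip space
        o.color = color;                   // pass the vertex color through
        return o;
    }

If that holds in general, the "why not just use the MS version" question gets even sharper.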

I guess what I'm asking is: does it make sense for an nVidia-specified HLSL to even exist? The one big pro Cg has going for it is that it potentially offers a unified shading language across DX and GL. What other benefits does it bring to the table that an existing HLSL wouldn't? Additionally, if Cg is in fact a subset/superset of DX9 HLSL, wouldn't MS have issues with nVidia offering it to the ARB?

I'm hoping we can discuss the merits and weaknesses of Cg and other HLSLs without this thread degenerating into a "nVidia is evil and wants to control the world" or "Cg is the best thing since sliced bread" slugfest.
 
I don't think anyone outside of nVidia, and perhaps a few developers, really knows enough about Cg at this point to give qualified answers to these questions, and of course the nVidia folks can only give you the PR-approved BS.

First of all, I don't think the comparisons with Renderman are really applicable, at least from a game development perspective. Sure, you could take Renderman SL and do what you want with it, but it's mainly aimed at offline rendering, so I'm not sure who these "high-end developers" are who will have to "port thousands of lines of code."

As for the "low-end developers," I think it's safe to assume they'll go with the lowest common denominator, or possibly whatever is easiest to implement. Cg being an extension or subset of DX9 HLSL should simplify support. I would expect developer support and sheer ubiquity to eventually drive convergence on the standard DX HLSL, though maybe not until DX10 or 11. MS getting its way usually sounds like a bad thing, but in this case I think any standard not established by the chip makers would be a step in the right direction.

Also, I don't think you can analyze Cg without seeing the business side of it, where it's at least a semi-proprietary standard.
 

First of all, I don't think the comparisons with Renderman are really applicable, at least from a game development perspective.


Right, I don't see game developers necessarily writing their shaders in Renderman. That's why I kind of broke it up into two separate camps.


Sure, you could take Renderman SL and do what you want with it, but it's mainly aimed at offline rendering, so I'm not sure who these "high-end developers" are who will have to "port thousands of lines of code."


Given nVidia's recent acquisition, as well as their focus on the NV30 pixel shader implementation, it seems to me that one of the things they are doing is targeting the offline rendering market. These guys already have established HLSLs (Maya SL or Renderman or whatever), and they already have complex shaders written for various custom effects. The point is that these shaders can run into thousands of lines of code. That's where my question comes in: if nVidia is targeting their HW at accelerating offline rendering, shouldn't they also have an automated tool for compiling existing Renderman shaders to NV30-specific code? Why bring Cg into the picture? What benefit does it offer to the developers of these shaders?
 
I think one of the purported benefits is one language to target two real-time rendering systems (DirectX and OpenGL).
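Concretely (and assuming nVidia's cgc command-line compiler behaves the way the Cg Toolkit docs describe; the exact profile names vary by release), that means one source file and two back ends:

    # Same simple.cg source, two targets:
    cgc -profile vs_1_1 -entry main simple.cg -o simple.vsh   # DirectX vertex shader assembly
    cgc -profile arbvp1 -entry main simple.cg -o simple.vp    # OpenGL ARB_vertex_program assembly

That's the pitch, anyway; how well it works in practice is another matter.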

I'm not exactly sure how Renderman fits into this, either.

(p.s. Best of luck not having this thread degenerate.)
 