So far I've been keeping out of the Cg debate simply because I don't know enough about the language or its limitations to form an educated opinion on whether it's a good or bad thing for the industry. However, I have been giving some thought to where nVidia is perhaps hoping to position the NV30 (as a high-end renderfarm accelerator) -- and how Cg fits into that environment.
I'm hoping to get some clarification from people here who are familiar with writing shaders in other HLSLs such as RenderMan or the Maya shading language. I really don't know much about these, but I know that some people here (DemoCoder?) have done extensive RenderMan coding. My question is: how difficult or time-consuming is it to convert existing RenderMan or Maya shaders to Cg?
Given the existing investment in RenderMan shaders, why would a high-end developer want to learn a new language and port thousands of lines of code to Cg? Does Cg offer benefits that RenderMan or Maya's shading language don't? Wouldn't it make more sense for nVidia to push their own version of a tool like RenderMonkey, one that converts these established languages into something that compiles on their hardware?
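Just to make the question concrete, here's my (possibly naive) sketch of what the gap looks like; the parameter names and the single passed-in light direction are made up for illustration, not taken from any real shader. In RSL the whole body of a diffuse surface shader can be a one-liner built on globals and built-ins, while a Cg fragment program has to have everything handed to it explicitly:

// Hypothetical Cg fragment program approximating a one-line RSL diffuse shader.
// In RSL the entire body might read:
//   Ci = Cs * Kd * diffuse(faceforward(normalize(N), I));
// Cg has no Ci/Cs/N globals and no diffuse()/illuminance built-ins, so the
// equivalent has to be wired up by hand (all names below are illustrative).
struct FragIn {
    float3 normal   : TEXCOORD0;   // interpolated surface normal
    float3 lightDir : TEXCOORD1;   // direction to a single light
};

float4 main(FragIn IN,
            uniform float4 surfColor,   // stands in for Cs
            uniform float  Kd) : COLOR
{
    float3 Nf = normalize(IN.normal);
    float3 L  = normalize(IN.lightDir);
    float  d  = max(dot(Nf, L), 0.0);   // Lambertian term for one light
    return surfColor * Kd * d;          // RSL's diffuse() loops over all lights
}

Even for something this trivial the structure changes: a real port has to decide how to split work between vertex and fragment programs and how to replace RSL's implicit light loop, which is why I'd like to hear from people who have actually tried it.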
So from a high-end perspective, I don't see why you would want to go with Cg. From the low-end, game-developer perspective, Richard Huddy's comments (as well as nVidia's) make it sound very much like DX9 HLSL is either equivalent to Cg or is Cg itself. If that's the case, why wouldn't vendors just go with the MS-sanctioned version?
I guess what I'm asking is: does it make sense for an nVidia-specified HLSL to even exist? The one big pro going for Cg is that it potentially offers a unified shading language across DX and GL. What other benefits does it bring to the table that an existing HLSL wouldn't? Additionally, if Cg really is a subset/superset of DX9 HLSL, wouldn't MS have issues with nVidia offering it to the ARB?
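On the "is Cg just DX9 HLSL" point, my impression (and I'd happily be corrected) is that the two share syntax closely enough that a trivial pixel-level function like the one above looks legal in both, give or take profiles and compilation details. A hedged example, with made-up names:

// Sketch only: as far as I can tell this is accepted both by the Cg compiler
// (cgc, fragment profiles) and by the DX9 HLSL compiler (fxc, ps_2_0).
// I haven't verified every profile corner case.
float4 litColor(float3 N : TEXCOORD0,
                float3 L : TEXCOORD1,
                uniform float4 surfColor,
                uniform float  Kd) : COLOR
{
    return surfColor * Kd * max(dot(normalize(N), normalize(L)), 0.0);
}

If they really are that close, then the practical question isn't which language you write in but whose compiler and runtime you depend on, which is exactly why I'm puzzled about where an nVidia-specified HLSL fits.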
I'm hoping we can discuss the merits and weaknesses of Cg and other HLSL's without this thread degenerating into a "nVidia is evil and wants to control the world" or "Cg is the best thing since sliced bread" slugfest.