'CineFX' Conference Call

1. Cg does not bypass APIs. Period. Any other implication is false. This has never been part of the plan.

Please tell that to, or ask, Geoff Ballow, NV30's product manager - read the first post in this thread, as this is what Geoff told me.
 
Wouldn't it be more optimal to compile the HLSL not at load time but when installing an application (such as a game), optimizing the game for the hardware found at installation? In other words, it only gets compiled once, for your hardware, when the program is installed. This would eliminate having numerous .exe files to sort through (NV30 or R300 or NV25 or NV20 or R200 or SiS or Matrox or 3DLabs) just so the vertex shader programs work right.

Wouldn't it also be better if the compiler were part of DX9 itself, where the HLSL would be compiled? Any program could use this compiler on the fly when installing a new program, so as to get optimal code for whatever hardware is available. That would allow for easy updates as time goes on and a progression of the language, and it would allow the shader programs in a game to be recompiled when a more efficient compiler is released by MS (say, via the DX Diagnostic tool) that supports your new hardware better.
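
To illustrate, here is a rough sketch of what such an install-time compile step could look like. None of these helper names are a real DX or Cg API - they are made-up stand-ins just to show the flow: detect the card once, pick a target, compile once.

    // Hypothetical install-time compile step. DetectGpu(), PickProfile() and
    // CompileToFile() are invented stand-ins, not real API calls.
    #include <cstdio>
    #include <string>

    enum Gpu { GPU_NV20, GPU_NV25, GPU_NV30, GPU_R200, GPU_R300, GPU_UNKNOWN };

    Gpu DetectGpu() { return GPU_UNKNOWN; }        // stand-in: query driver / PCI ID
    std::string PickProfile(Gpu g)                 // stand-in: map card -> target profile
    {
        return (g == GPU_NV30 || g == GPU_R300) ? "vs_2_0" : "vs_1_1";
    }
    bool CompileToFile(const std::string& src, const std::string& profile,
                       const std::string& outPath)
    {
        // stand-in: call whatever HLSL/Cg compiler is installed on the system
        std::printf("compiling %s for %s -> %s\n",
                    src.c_str(), profile.c_str(), outPath.c_str());
        return true;
    }

    // Run once by the installer; the game then ships no per-vendor executables.
    int main()
    {
        const char* sources[] = { "water.cg", "skin.cg", "trees.cg" };
        std::string profile = PickProfile(DetectGpu());
        for (int i = 0; i < 3; ++i)
            CompileToFile(sources[i], profile, std::string(sources[i]) + ".vso");
        return 0;
    }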

This really belongs with the operating system and the API rather than in a separate entity. The game developer uses a standard shader language in his game and lets the API (DX9 in this case) compile the shader programs optimally for the given hardware on hand, while also allowing each IHV to incorporate new syntax as needed.

Is Cg public domain or industry domain, such that anyone can add functions and libraries to it as they see fit, or is the language locked by Nvidia, which maintains it as a subset of Microsoft's HLSL?
 
More Facts

Dave, Geoff B is wrong, as has been corrected.

Guest, you forget the 'profile' aspect of Cg and HLSL. This is the key to the compiler vs. language point.

Cg, the language, has the reserved keywords and syntactic rules to support primitive programs, full per-pixel object-oriented branching, programmable frame-buffer blending, application callbacks, etc.

The currently shipping "profiles" for the Cg compiler, however, support dx8 & ogl vertex & pixel shaders. The compiler source was shipped yesterday ( www.nvidia.com/developer ), so any IHV or ISV that wants something the current compiler "profiles" don't support - displacement maps, for instance - can add it, although that one really should be added to the dx9 vertex profile.
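
To make the profile point concrete, here is a rough sketch of pushing one source file through two different back-end targets. It assumes the toolkit's command-line compiler is invoked as cgc with a -profile switch; the exact profile names available vary by release, so treat them as placeholders.

    // Sketch of what a "profile" means in practice: one Cg source file,
    // several back-end targets, selected at compile time.
    #include <cstdlib>
    #include <string>

    bool CompileForProfile(const std::string& src, const std::string& profile)
    {
        // e.g. "cgc -profile vs_1_1 -o shader.cg.vs_1_1.asm shader.cg"
        std::string cmd = "cgc -profile " + profile +
                          " -o " + src + "." + profile + ".asm " + src;
        return std::system(cmd.c_str()) == 0;
    }

    int main()
    {
        // Same language; each profile reserves a different level of capability.
        CompileForProfile("shader.cg", "vs_1_1");  // dx8-class vertex target
        CompileForProfile("shader.cg", "fp20");    // ogl register-combiner pixel target
        return 0;
    }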

The current shipping Cg compiler is sub-optimal for vertex shaders, but beats 95% of coders at dx8/ogl pixel shaders ( through nvparse ). The version shipping in October is supposed to be much better at this.

Also, at Siggraph, 500 developers attended free Cg labs, where each had a computer and actually coded shaders using it. Probably 150 other developers from the game, tools and off-line communities worked hands-on with an earlier version of Cg. To think that NVIDIA came up with this behind closed doors with no feedback is crazy. Microsoft and NVIDIA had numerous developers give feedback, from syntax to general philosophy ( "make it like C!" ), for months. Just because it wasn't uploaded publicly does not imply that feedback wasn't accepted from many respected graphics companies for months.

Another point: programs can be compiled at runtime ( using d3dx in dx9, or NVIDIA's Cg compiler ) or off-line. I think 90% of developers will go off-line, but on-line is a nice way to assemble shaders on the fly for certain effects.
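
For instance, a minimal sketch of the run-time path via d3dx in dx9 - the off-line path is essentially the same call made by a tool that writes the resulting bytes to disk. The D3DXCompileShader signature here is as I recall it from the DX9 SDK, so verify against d3dx9shader.h.

    // Run-time HLSL compile through D3DX (DX9). Check the signature against
    // your SDK version; treat the details as an assumption.
    #include <d3dx9.h>
    #include <string.h>

    bool CompileAtLoadTime(const char* hlsl, const char* entry,
                           const char* profile,   // e.g. "vs_1_1" or "vs_2_0"
                           LPD3DXBUFFER* byteCode)
    {
        LPD3DXBUFFER errors = NULL;
        HRESULT hr = D3DXCompileShader(hlsl, (UINT)strlen(hlsl),
                                       NULL, NULL,     // no macros, no #include handler
                                       entry, profile, 0,
                                       byteCode, &errors, NULL);
        if (FAILED(hr) && errors)
            OutputDebugStringA((const char*)errors->GetBufferPointer());
        if (errors) errors->Release();
        return SUCCEEDED(hr);
    }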

As far as tool integration goes, NVIDIA and tool employees busted their asses to get this ready and give this away for free to the community - and it works. Certainly this effort could be duplicated for other languages, and thanks to the initial work it will be much easier for others to do so.

It's not important that Cg become the only way to write high-level shaders for current & future hardware. The point is that Cg is a standard now, like it or not, and it's here and it works, and it's improving.
 
As far as tool integration goes, NVIDIA and tool employees busted their asses to get this ready and give this away for free to the community - and it works.

That is not what Nvidia says:
NVIDIA maintains the Cg Language Specification and will continue to work with Microsoft to maintain compatibility with the DirectX High Level Shading Language (HLSL).

The Cg Language Specification is published and is open in the sense that other vendors may implement products based on it. Sounds like Nvidia pretty much has total control over Cg.
http://www.nvidia.com/view.asp?IO=cg_faq

This is not free and given away. That's OK; the more Nvidia tries to control, the more isolated they will become. I predict ATI will make major inroads in market share on new computers sold, beating out Nvidia. What does this have to do with Cg? Well, it needs to be totally open, not controlled by one company with its own interests at stake - unless Cg is controlled by a neutral company or a manufacturer committee. If Microsoft controlled the language and compiler, it would be in their own interest to support all manufacturers, whereas Nvidia has no such interest in supporting its competitors. The idea around Cg is good; the implementation coming from Nvidia is lacking, as far as I see it.
 
Ran the SpeedTree demo, which used Cg to speed up getting the application out. Now, will this demo run on any other card besides GF3s and GF4s? Please let us know. It is way cool to look at - very realistic trees indeed, with a mech shooting missiles at something :).

http://www.idvinc.com/speedtree/

Now if it doesn't run on other hardware, wouldn't you think that would be a big problem? Anyone?
 
Re: More Facts

MrNiceGuy said:
Dave, Geoff B is wrong as has been corrected.

Forgive me - I don't want to be accused of spreading misinformation, but I have double-checked the tape I took of the call, and I am correct that this is what he said.

However, how are we supposed to believe that what you're saying is correct? I'm sure that idea didn't just pop into his head, and I'd assume the product manager of NV30 has a reasonable level of knowledge of what's occurring.

Now, perhaps I'll ask you the same question as I asked him: if you are compiling to DX, does this mean that you are going to be limited to DX9 capabilities even though CineFX may be beyond those?
 
Re: More Facts

DaveBaumann said:
Now, perhaps I'll ask you the same question as I asked him: if you are compiling to DX, does this mean that you are going to be limited to DX9 capabilities even though CineFX may be beyond those?

Since you would always compile with a profile "plugged" into the compiler, it would be up to that specific profile - DX8, DX9 or DX9revNV30 - to define what the (underlying) capabilities are, as far as I understand. So with the DX9revNV30 profile you would get CineFX capabilities available through the DX9 HLSL (although it's uncertain to me how the DX9 API will allow this).

But this leads to another question for MrNiceGuy: let's say the developer decides to compile the shaders off-line - would I be right to assume that you could plug a number of profiles for different hardware into the compiler, compile them one after another, and make them all available for the game engine to choose amongst at init?
 
...

Dave, he meant what he said, but he was wrong. Geoff is not the 'Cg product manager'.

LeStoffer, last I heard dx9 will have cap bits to expose most of CineFX. There will have to be one or more profiles for this - probably initially one for generic dx9 with minimal functionality ( r300 ) and another for longer shaders, derivative calculation, etc. ( nv30 ).

And yes, a developer will probably compile one profile for the generic dx8 or dx9 level of functionality, then one or more special profiles for fallbacks for older hardware and one or two for more advanced hardware. This is easy to do using .fx or CgFX.
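
Roughly, the engine side of that pattern could look like the following sketch. SupportsProfile() is a hypothetical caps query, and the profile/file names are illustrative, not any real API.

    // Ship several precompiled variants; at init, pick the best one the
    // hardware supports. All names here are illustrative stand-ins.
    #include <string>

    struct ShaderVariant { const char* profile; const char* file; };

    bool SupportsProfile(const std::string&) { return true; }  // stand-in: check dx caps bits

    // Variants ordered best-first; the last entry is the lowest common fallback.
    const char* PickVariant(const ShaderVariant* v, int count)
    {
        for (int i = 0; i < count; ++i)
            if (SupportsProfile(v[i].profile))
                return v[i].file;
        return v[count - 1].file;
    }

    // Usage:
    //   ShaderVariant water[] = { { "dx9_nv30", "water.nv30" },
    //                             { "vs_2_0",   "water.vs20" },
    //                             { "vs_1_1",   "water.vs11" } };
    //   const char* file = PickVariant(water, 3);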
 
MrNiceGuy:
And yes, a developer will probably compile one profile for the generic dx8 or dx9 level of functionality, then one or more special profiles for fallbacks for older hardware and one or two for more advanced hardware. This is easy to do using .fx or CgFX

Wouldn't it be more efficient to compile the shader programs upon installation of the game, based on the hardware available? This would allow (see the sketch after this list):
  1. Updated compilers for particular hardware - an R300-optimized compiler, a Matrox Parhelia-optimized compiler, etc. - to be used as they become available, instead of being locked into whatever shipped on the disc, which hinders the GPUs/VPUs it wasn't optimized for.

  2. Updates and optimized versions later on, without the game developer constantly having to patch their games when hardware more advanced than NV30 (a.k.a. R400) becomes available.

  3. A recompile, possibly with different configurations, by the end user to troubleshoot problems encountered when installing games.

  4. Better feedback to all companies involved for tracing and correcting errors in their Cg compilers.
What you said really doesn't sound too useful, and it's not in the best interest of the industry as a whole. My opinion.
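
To illustrate points 1-3, here is a tiny sketch of the bookkeeping an installer could keep so it knows when a recompile is worthwhile. All names here are invented for illustration.

    // Record which compiler built the cached shaders; offer a recompile when
    // a newer compiler or a different card shows up. Hypothetical names.
    struct ShaderCacheHeader {
        unsigned compilerVersion;  // compiler release that produced the cache
        unsigned deviceId;         // PCI device ID the cache was built for
    };

    bool NeedRecompile(const ShaderCacheHeader& cache,
                       unsigned installedCompilerVersion, unsigned currentDeviceId)
    {
        return installedCompilerVersion > cache.compilerVersion  // better compiler available
            || currentDeviceId != cache.deviceId;                // user swapped cards
    }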
 
Please, this hilarious Cg thing is full of marketing for CONSUMERS :LOL:

Most certainly IHVs will use the M$ and ARB HLSL.
 
About Cg - I don't think this is something I read. If the developer decided to compile offline for all the different IHVs, would they then need to ship a bunch of different executables, with which one gets installed depending on what video card you have? Could someone clarify this a little for me?
 
About Cg - I don't think this is something I read. If the developer decided to compile offline for all the different IHVs

No, what he is saying is that Cg will compile to *generic* standard DX9 or DX8 as well as more complex routines. It seems DX9 itself is now going to be released with full support for CineFX.
 
Well, generic as in using pixel shader 1.1 instead of pixel shader 1.4 on the Radeon 8500 would be counterproductive, to say the least, in getting the most out of the card. Generic code for the P10 and the Parhelia would be a complete waste of the available hardware. Three ways I can see it done (a sketch of #3 follows the list):
  1. Compile the code into the game with multiple paths, limited to whichever hardware-specific Cg compilers exist when the game is built. Con: your hardware may not be among those optimized for, resulting in less than optimal results.

  2. Compile at run time for whatever hardware is present. Over time this could improve your game's performance as more optimized compilers become available for your hardware, with no hassle on your part, and game developers won't have to continually update their executables to support new hardware down the road. Con: an extra burden is placed on loading each shader program, causing a slowdown or pause while playing.

  3. Compile during installation, with the option to recompile with updated compiler technology. This seems the best path, since there is no overhead during gameplay.

Worst case to me seems like #1 above, best is #3.
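
Here is a rough sketch of #3, with #2 as the first-run fallback: compile at install or first launch, cache the bytecode on disk, and just load the cache on every later run so there is no pause during gameplay. Compile() is a stand-in for whatever compiler the system provides.

    // Disk-cached shader compile: pay the compile cost once, never mid-game.
    #include <cstdio>
    #include <string>
    #include <vector>

    std::vector<char> Compile(const std::string& source)
    {
        return std::vector<char>(source.begin(), source.end());  // stand-in
    }

    std::vector<char> LoadOrCompile(const std::string& source,
                                    const std::string& cachePath)
    {
        if (FILE* f = std::fopen(cachePath.c_str(), "rb")) {      // cache hit: no compile
            std::fseek(f, 0, SEEK_END);
            std::vector<char> bytes((size_t)std::ftell(f));
            std::fseek(f, 0, SEEK_SET);
            if (!bytes.empty())
                std::fread(&bytes[0], 1, bytes.size(), f);
            std::fclose(f);
            return bytes;
        }
        std::vector<char> bytes = Compile(source);                // install / first run
        if (FILE* f = std::fopen(cachePath.c_str(), "wb")) {
            if (!bytes.empty())
                std::fwrite(&bytes[0], 1, bytes.size(), f);
            std::fclose(f);
        }
        return bytes;
    }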
 
noko,

Well, generic as in using pixel shader 1.1 instead of pixel shader 1.4 on the Radeon 8500 would be counterproductive, to say the least, in getting the most out of the card. Generic code for the P10 and the Parhelia would be a complete waste of the available hardware. Three ways I can see it done:

This is why all these people are saying that each IHV will have to customize for their own hardware. Which we know is NOT going to happen. Thus again proving my point that the final code is going to be kicked out favoring whatever flavor of DX Nvidia is currently mainlining: PS 1.1 for their DX8 hardware and PS 2.0+ for the NV30.

This is again why I have stated from the beginning that Cg is a Good idea that needs to come from a totally NEUTRAL source. How many times does it have to be said that other IHVs are not going to support Cg?
 
How many times does it have to be said that other IHVs are not going to support Cg?

Yes, that is likely, but Microsoft's HLSL will probably be supported by most out of survival, as will the ARB HLSL for OpenGL. I see this as a necessary step to get the more advanced hardware used to its potential. Support for Cg is still limited, mainly because Nvidia imposes some interesting restrictions on its use :LOL: - read below:
SINGLE COPY LICENSE
The materials at this Site are copyrighted and any unauthorized use of any materials at this Site may violate copyright, trademark, and other laws. You may download one copy of the information or software ("Materials") found on NVIDIA sites on a single computer for your personal, non-commercial internal use only unless specifically licensed to do otherwise by NVIDIA in writing or as allowed by any license terms which accompany or are provided with individual Materials. This is a license, not a transfer of title, and is subject to the following restrictions: you may not: (a) modify the Materials or use them for any commercial purpose, or any public display, performance, sale or rental; (b) decompile, reverse engineer, or disassemble software Materials except and only to the extent permitted by applicable law or unless specifically licensed to do otherwise by NVIDIA in writing or as allowed by any license terms which accompany or are provided with individual Materials; (c) remove any copyright or other proprietary notices from the Materials; (d) transfer the Materials to any other person or entity. You agree to prevent any unauthorized copying of the Materials.
http://developer.nvidia.com/view.asp?IO=legal_info

Yep, ATI can pick one computer out of many and have all their software engineers huddle around it to do what? Not a thing :cry:.

What are the licensing fees for Cg?
 
noko said:
Ran the SpeedTree demo, which used Cg to speed up getting the application out. Now, will this demo run on any other card besides GF3s and GF4s? Please let us know. It is way cool to look at - very realistic trees indeed, with a mech shooting missiles at something :).

http://www.idvinc.com/speedtree/

Now if it doesn't run on other hardware, wouldn't you think that would be a big problem? Anyone?

Doesn't work on a Gigabyte 8500 with the latest Catalysts....

gives all kinds of errors like

Missing Vertex Extension...

so it looks like it's streamlined just for nVidia cards right now
 
Doesn't work on a Gigabyte 8500 with the latest Catalysts....

gives all kinds of errors like

Missing Vertex Extension...

so it looks like it's streamlined just for nVidia cards right now

Thanks for taking the time to test it out.

That is what I thought: the first Cg-coded program won't work on hardware other than Nvidia GF3/4. Obviously it is not an industry standard but an Nvidia standard as it sits now. I doubt it will progress beyond Nvidia, mainly due to Nvidia themselves.

I hope Microsoft or someone else will pick up where Nvidia fails to lead and support the industry, not just themselves... hmmm, Microsoft - that sounds pretty contradictory.

Just think: if the SpeedTree program is used in games using Cg, there will be a lot of angry non-Nvidia card owners returning a program that will not run as it sits. This is not making a very good impression so far.
 
...5. Cg is integrated directly into the upcoming versions of Max, Maya, XSI and other tools. That alone is a HUGE boost to the entire realtime 3d community - an artist can view & tweak hardware shaders in real time, in the tool they are already familiar with. The exact same shader can run in the tool & in the game engine. This was demonstrated at Siggraph and is currently in beta. ...

To concur, this will be a huge advantage to artists/game architects. I hope we see the fruits of this level of utility soon...
 