'CineFX' Conference Call

Dave Baumann

Sorry to those who don't want to see another NV30 thread, although technically this isn't about NV30 as a product but about the CineFX architecture. As you know, NV have been giving a few conference calls, and I was on the same one NVMax was yesterday. I just wasn't sure about the embargo/NDA details until now, and I've been given the go-ahead to talk about it.

I'm not going to reiterate the mainstay of the presentation at this time, as it's the same as we've heard from other people who've been on a CC, and much of it is already public information from SIGGRAPH - I'll save it for the actual NV30 preview/review. However, I asked a number of questions at the end (actually, I was the only one asking questions, so I hope the others were eager to get off!) and there were a few details that I feel were new.

I knew they weren't going to speak about product information or detailed specs, but I did question their HOS methodology. If we remember the presentation on the web a few days ago, which showed the 'CineFX' architecture to be programmable all the way through, they had a '+Displacement Mapping' unit in front of the Vertex Shader. I asked if this (as it was green, indicating programmable) could relate to a 'Primitive Processor'; Geoff Ballew said that he wouldn't talk specifically about it, but that, as I had pointed out, this stage is programmable, indicating it may be used for more than just the HOS that's exposed in DX9.

Going on to DX9, there were a couple of references to how CineFX goes beyond 'pure DX9'. The way they were talking about this and R300 in relation to it indicates that they feel R300 is a 'pure DX9' part, so I don't think there will be concerns over MS shifting the spec so much that R300 will not be a DX9 part by the time DX9 is released.

However, when talking about DX9 and its limitations in relation to CineFX, I learnt an interesting detail about Cg. I asked that even though CineFX goes beyond DX9, because Cg creates (DX/OGL) assembly, if DX is used then isn't the developer going to be limited by this? Geoff explained that this may not in fact be the case - if the Cg run-time compiler is used it doesn't necessarily compile to DX/OGL assembly but compiles directly to the hardware, thus circumventing any potential limitations in the current APIs. I did a bit of a double take there, since this is new information on the capabilities of Cg. If the runtime compiler is used, the Cg compiler will compile directly to the capabilities of the underlying NV hardware; if it's used on a non-NV board (assuming a compiler is in place) then it will default to generating DX9 (or OGL) assembly. If the non-runtime compiler is used (i.e. the game code ships with the compiled assembly) then only DX or OGL assembly will be used.
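
The compile-path decision described above could be sketched roughly like this (a hypothetical illustration - `Backend` and `choose_backend` are my own names, not anything from the actual Cg toolkit):

```cpp
// Hypothetical sketch of the Cg compile-path decision as described in the
// call: runtime compile on NV hardware targets the chip directly; anything
// else falls back to standard API assembly.
enum class Backend { NativeNV, DX9Assembly, OGLAssembly };

Backend choose_backend(bool runtime_compile, bool nv_hardware, bool using_dx) {
    // Runtime compiler on NV hardware: skip the API's assembly format
    // and compile to the capabilities of the underlying chip.
    if (runtime_compile && nv_hardware)
        return Backend::NativeNV;
    // Offline compile, or a non-NV board (assuming a compiler is in place):
    // default to generating standard DX9 or OGL assembly.
    return using_dx ? Backend::DX9Assembly : Backend::OGLAssembly;
}
```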

I also asked about their submission of Cg to the ARB for consideration as OGL2's HLSL. My thought was that with Cg being more or less DX9 HLSL, if it's submitted to the ARB then wouldn't the ARB want to control it for future updates? Hence this could represent a slight fragmentation between the two (OGL and DX Cg), because two different bodies would be attempting to control it. Geoff agreed that this could happen; however, NV would attempt to make it high-level enough that it could fit both with few changes.

One last thing we talked about was the use of specific graphics hardware for high-end rendering. In the presentation Geoff said that 'offline' rendering will always be faster - the point being that you can easily scale offline rendering by throwing more CPUs/boxes at it. I asked Geoff "but what if we scale the graphics processors?", and he agreed that there could be economies of scale/performance from scaling multiple graphics processors as opposed to throwing more CPU power at it...
 
DaveBaumann said:
if the Cg run-time compiler is used it doesn't necessarily compile to DX/OGL assembly but compiles directly to the hardware, thus circumventing any potential limitations in the current APIs.

Seems to me that this is something that was bound to happen sooner or later. And now it seems to be sooner rather than later :)

I mean, what do we need the extra API layer for in this case ?
 
DaveBaumann said:
However, when talking about DX9 and its limitations in relation to CineFX, I learnt an interesting detail about Cg. I asked that even though CineFX goes beyond DX9, because Cg creates (DX/OGL) assembly, if DX is used then isn't the developer going to be limited by this? Geoff explained that this may not in fact be the case - if the Cg run-time compiler is used it doesn't necessarily compile to DX/OGL assembly but compiles directly to the hardware, thus circumventing any potential limitations in the current APIs. I did a bit of a double take there, since this is new information on the capabilities of Cg. If the runtime compiler is used, the Cg compiler will compile directly to the capabilities of the underlying NV hardware; if it's used on a non-NV board (assuming a compiler is in place) then it will default to generating DX9 (or OGL) assembly. If the non-runtime compiler is used (i.e. the game code ships with the compiled assembly) then only DX or OGL assembly will be used.

I talked about this possibility some threads ago and people did not believe it :rolleyes:

edited: This is one very good reason for the other IHVs not to use the Cg compiler. See this thread http://www.beyond3d.com/forum/viewtopic.php?t=1764&start=0. Try to filter the noise.
 
This is where things get a little convoluted as far as I can see.

IHVs can ship their own compilers, I believe, but it may be in their interests not to. The runtime compiler must be part of the driver package (it's not going to ship with the OS), so if you don't have an NV card and the IHV doesn't supply a compiler, how is the code going to be compiled?

However, the comments that Cg = DX9 HLSL could bring in another factor - exactly how interoperable are they? I.e., could Cg code be used with the DX9 compiler, meaning that people with NV cards have the code compiled via NV's compiler directly to the hardware, while for people without NV cards the code is compiled via DX9 into DX assembly?
 
DaveBaumann said:
meaning that people with NV cards have the code compiled via NV's compiler directly to the hardware, while for people without NV cards the code is compiled via DX9 into DX assembly?

The next question is then (and I think this has been mentioned here a couple of times): can they implement their own DX9 HLSL compiler that overrides the generic one (runtime compiler)?

Say that DX9 ships with a generic runtime compiler, and the DX9 driver that comes with the graphics card overrides it.
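
The override idea might look something like this - a minimal sketch, assuming the runtime exposes a compiler interface that a driver can replace (all the names here, `IShaderCompiler`, `register_driver_compiler`, etc., are hypothetical, not real DX9 or Cg API):

```cpp
#include <memory>
#include <string>

// The API defines an abstract compiler interface.
struct IShaderCompiler {
    virtual ~IShaderCompiler() = default;
    virtual std::string compile(const std::string& hlsl) = 0;
};

// The runtime ships a generic implementation emitting standard assembly.
struct GenericCompiler : IShaderCompiler {
    std::string compile(const std::string&) override { return "dx9-asm"; }
};

// A vendor driver supplies one that targets its hardware directly.
struct VendorCompiler : IShaderCompiler {
    std::string compile(const std::string&) override { return "native-isa"; }
};

// The runtime holds one active compiler; installing a driver swaps it.
std::unique_ptr<IShaderCompiler> g_compiler = std::make_unique<GenericCompiler>();

void register_driver_compiler(std::unique_ptr<IShaderCompiler> c) {
    g_compiler = std::move(c);  // driver-supplied compiler overrides the generic one
}
```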
 
That is a terribly good question to pose to them:

"If the runtime compiler can circumvent DX/OGL, is this available to other IHVs also?"

I'd be disappointed (and surprised) if the answer was no.
 
Bjorn said:
pascal said:
edited: This is one very good reason for the other IHVs not to use the Cg compiler. See this thread http://www.beyond3d.com/forum/viewtopic.php?t=1764&start=0. Try to filter the noise.
I agree, I wouldn't use Nvidia's Cg compiler if I were them.
But they're free to implement their own Cg compiler, aren't they?
Why should any IHV use it? The language is still proprietary (evolution controlled by nvidia). They would have to develop the compiler themselves and try to keep up with nvidia. The real work is in the multimillion dollars they put into hardware & algorithms R&D. Many people can write their own C version of some HLSL language :rolleyes:

edited: just because someone writes a few pages of BNF of a modified C, should any IHV give their market to nvidia? :LOL:

For M$, DX9 and the HLSL are all about control, market share, etc...

Suppose OpenGL has its own open HLSL - do we need the low-level API? Will we need DX9?

edited: IMHO we need an open, non-proprietary standard HLSL.
 
Well, if everyone starts using their own compiler, what's in it for MS? They'll just end up developing an API that gets bypassed!
 
Let's start out by assuming that the Cg language is identical to DX9 HLSL as nVidia keeps saying.

Let us further assume that DX9 has a runtime compiler method HLSL_Compile();

Finally, let's assume a developer writes a game that uses DX9 HLSL shaders and calls HLSL_Compile() at runtime to compile them.


Then we have the following situation:

1) on non-NV30, DX9 HLSL_Compile() is called and it uses Microsoft's generic compiler. The output is standard DX9 vertex/pixel shaders.


2) on NV30, NVidia "hooks" HLSL_Compile() and uses Cg's more optimized compiler to do the compilation. The output is not standard DX9 vertex/pixel shaders, but a proprietary vertex/pixel shader format that maps closely to the NV30 hardware. Finally, NV hooks the DX9 vertex/pixel shader assembler calls and compiles these special instructions.


In this case, the usage of Cg will be transparent to the developer, and Cg's compiler will accelerate all DX9 games whether or not the developer actually used the "Cg language" (can it really be called separate if it is, in fact, the same grammar as DX9 HLSL?).

Scenario #2:

(more likely)
o Developer writes standard DX9 HLSL shaders.
o Developer invokes generic MS compiler and generates vertex/pixel shaders
o Developer invokes CG compiler and generates NV30 specific code
o Developer adds switch to his code

if (NV30)
    bind_nv30_specific_assembly_code()
else
    bind_generic_dx9_assembly_code()


Think of this situation like C. One language, but multiple compilers. You choose MSVC, Intel's compiler, GNU C, etc based on the target platform.

As I always said, Cg is just a tool, no different than NVidia's NVasm/NVparse.
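
DemoCoder's scenario #2 switch could be made concrete with a minimal sketch like the following (the blob contents and the `is_nv30` flag are stand-ins for real compiler output and a real device query):

```cpp
#include <string>

// The developer ships both precompiled outputs with the game...
struct ShaderBlobs {
    std::string nv30_specific;   // output of the Cg/NV30 compiler
    std::string generic_dx9;     // output of the generic MS compiler
};

// ...and binds one of them at startup, exactly as in the pseudocode:
//   if(NV30) bind_nv30_specific_assembly_code()
//   else     bind_generic_dx9_assembly_code()
const std::string& bind_shader(const ShaderBlobs& blobs, bool is_nv30) {
    return is_nv30 ? blobs.nv30_specific : blobs.generic_dx9;
}
```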
 
DemoCoder said:
if (NV30)
    bind_nv30_specific_assembly_code()
else
    bind_generic_dx9_assembly_code()

I'm guessing that the compiler is a COM object, so there's no explicit "if(NV30)".
 
RussSchultz said:
That is a terribly good question to pose to them:

"If the runtime compiler can circumvent DX/OGL, is this available to other IHVs also?"

I'd be disappointed (and surprised) if the answer was no.

Why would you be "surprised" if the answer was no? I mean, it'd be nice if Nvidia allowed other IHVs to do that, but I don't see any real reason they would.
 
Nagorak said:
Why would you be "surprised" if the answer was no? I mean, it'd be nice if Nvidia allowed other IHVs to do that, but I don't see any real reason they would.

Actually, I don't really see how Nvidia can stop other IHVs from doing that.

First of all, other IHVs can make their own Cg compilers, so nothing stops them from doing the same thing as NVidia supposedly is doing: going directly to the hardware.

Now take the R9700 as an example:

Add a flag for standard compiling (optimized for the R9700) and for compiling directly for the R9700 (to use DemoCoder's example: generic_r9700_assembly). And of course, if you use the compiler at runtime, then always compile to generic_r9700_assembly.

Now, the only thing that Nvidia can now stop them from doing is using this compiler at runtime.

"Hey, you can make your own compiler, but don't you dare use it at runtime, or else..."

Hmm, kinda doubt that.

DemoCoder said:
Think of this situation like C. One language, but multiple compilers. You choose MSVC, Intel's compiler, GNU C, etc based on the target platform.

Agree completely.
 
I'm with Bjorn.

Anyway, we have been over the Cg feature of a run-time compile directly to hardware before, since it's a key part of Cg (profiles).

I happen to think that it's a brilliant move to be able to utilize features on almost brand-new hardware instead of having to wait for game developers to update and tweak their engines for better shader performance as new hardware comes out.

Please note that Cg also allows for an install-time compile, where the application figures out the capabilities of the platform (profile) and compiles the shaders accordingly. This is a great feature IMO.
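
The install-time idea could be sketched as a capability check that picks a profile before compiling (a hypothetical illustration - the shader-model thresholds and profile strings here are my own stand-ins, not Cg's actual profile-selection logic):

```cpp
#include <string>

// Sketch: at install time the application queries the platform's shader
// support and picks a compilation profile accordingly, then compiles the
// shaders once with that profile.
std::string pick_profile(int vs_version, int ps_version) {
    if (vs_version >= 2 && ps_version >= 2)
        return "vs_2_x/ps_2_x";            // DX9-class hardware
    if (vs_version >= 1 && ps_version >= 1)
        return "vs_1_1/ps_1_1";            // DX8-class hardware
    return "fixed-function fallback";      // no programmable shaders
}
```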

I'm well aware that this won't work if nobody writes top-notch profiles and/or compilers for hardware other than nVidia's, but as we have discussed before, I think it'll all come down to whether or not developers will embrace Cg and promote it to a true standard.

If it turns out to be a standard, it would be crazy of ATI and other vendors to ignore it. Simple as that.

I'm sorry for being a bit pro-Cg, but it seems like the best toolkit right now to promote faster implementation of shaders in games.

Edit: nVidia just released the source code for the Cg compiler with a "generic" profile.
 
I'm a bit out of my depth here, so please bear with me.

Are we really talking about compiling HLSL on demand during an application? For Mickey Mouse HLSL this won't be a problem, but for anything serious, won't this be a severe overhead? Surely all this stuff should be pre-compiled?
 
You can compile the shaders at runtime but in your 3d engine init phase, so I believe the overhead will be negligible.
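
The init-phase approach could be sketched like this (a hypothetical illustration - `compile_stub` stands in for the real runtime compiler, and the cache layout is my own):

```cpp
#include <map>
#include <string>

// Stand-in for the real runtime shader compiler.
std::string compile_stub(const std::string& src) { return "asm:" + src; }

// Compiled shaders, keyed by name, filled once at engine init.
std::map<std::string, std::string> g_shader_cache;

// Compile every shader once during the init phase; per-frame code then
// only does cache lookups, so the compile cost never hits the frame loop.
void init_shaders(const std::map<std::string, std::string>& sources) {
    for (const auto& [name, src] : sources)
        g_shader_cache[name] = compile_stub(src);  // one-time cost at init
}
```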

ciao,
Marco
 
pascal said:
Bjorn said:
I agree, I wouldn't use Nvidia's Cg compiler if I were them.
But they're free to implement their own Cg compiler, aren't they?
Why should any IHV use it? The language is still proprietary (evolution controlled by nvidia).

Let's cross that bridge when we get to it. At the moment, as far as we know, it's a pretty straight knock-off of the DX9 HLSL, which pragmatically is clearly the best language to choose for a platform-independent shading language.

They would have to develop the compiler themselves and try to keep up with nvidia.

No they don't, they have no obligation to do so ... no more than NVIDIA would have to follow them if they split off from the DX9 HLSL in a different direction.

For M$, DX9 and the HLSL are all about control, market share, etc...

So? They HAVE market share ... ignoring that for the hell of it is childish and counterproductive.

edited: IMHO we need an open, non-proprietary standard HLSL.

DX9 HLSL is more open and non-proprietary than OpenGL 2.0's shading language!
 