'CineFX' Conference Call

Hi Mfa :)

pascal wrote:
Bjorn wrote:
I agree, I wouldn't use Nvidia's Cg compiler if I were them.
But they're free to implement their own Cg compiler, aren't they?

Why should any IHV use it? The language is still proprietary (evolution controlled by Nvidia).


Let's cross that bridge when we get to it. At the moment, as far as we know, it's a pretty straight knock-off of the DX9 HLSL, which pragmatically is clearly the best language to choose for a platform-independent shading language.

I am not crossing any bridges, only stating the facts. Do you have any official word that the Cg language is the DX9 HLSL language without ANY change? My guess is you are crossing the bridge too soon.

Quote:
Pascal: The compiler they will have to develop themselves, and then try to follow Nvidia.


No they don't; they have no obligation to do so ... no more than NVIDIA would have to follow them if they split off from the DX9 HLSL in a different direction.
Again you suppose that Cg is the DX9 HLSL. Nobody has any obligations, only market pressure.

Pascal: For M$ the DX9 and the HLSL are all about control, market share, etc...


So? They HAVE market share ... ignoring that for the hell of it is childish and counterproductive.
I am not the one ignoring that or its (good and bad) effects, but some people are ;)

Quote:
Pascal edited: IMHO we need an open, non-proprietary standard HLSL.


DX9 HLSL is more open and non-proprietary than OpenGL 2.0's shading language!
Was the OpenGL HLSL already defined? Well, DX9 HLSL is M$ property. Can any IHV write a compiler to the metal for it on any OS?

I only want more competition in the market, with IHVs competing on high-quality hardware/software and price. edited: I don't want/like more small tricks to lock the market with Intellectual Property that has no great value. The real value is in the hardware, algorithms and specifications ;)
 
pascal said:
I am not crossing any bridges, only stating the facts. Do you have any official word that the Cg language is the DX9 HLSL language without ANY change? My guess is you are crossing the bridge too soon.

It still does not matter: if there are some differences you don't want to follow, you don't (of course, if the best reason they can come up with is that it has to be different from NVIDIA, then I would frown on it). It's all going to be close enough to DX9 HLSL and to each other not to cause developers many headaches, and that's the point of basing it on DX9 HLSL in the first place.

Well, DX9 HLSL is M$ property.

About as much as you are the property of Niklaus Wirth.
 
MfA said:
pascal said:
I am not crossing any bridges, only stating the facts. Do you have any official word that the Cg language is the DX9 HLSL language without ANY change? My guess is you are crossing the bridge too soon.

It still does not matter: if there are some differences you don't want to follow, you don't (of course, if the best reason they can come up with is that it has to be different from NVIDIA, then I would frown on it). It's all going to be close enough to DX9 HLSL and to each other not to cause developers many headaches, and that's the point of basing it on DX9 HLSL in the first place.

Then let's go for the DX9 HLSL.
And it is not only the technical side; think about the mindshare ("Cg compliant", "Cg compatible" on the game's box, etc.). No IHV will want that.

MfA said:
pascal said:
Well, DX9 HLSL is M$ property.

About as much as you are the property of Niklaus Wirth.
My guess is it is more than that. Again, can any IHV write a DX9 HLSL compiler to the metal for any OS?
 
Unless they sign the right to do so away in a contract, sure. The only way intellectual property law could stop them is if M$ had an essential patent needed for the implementation of a compiler, which is pretty much impossible.
 
Or if the other IHVs have some essential patent needed for the algorithms/technology.

Or maybe they will never do it, afraid of ...

Hmm, I think I need more information to discuss it, and maybe a lawyer's advice :)
edited: English correction :oops:
 
Pascal,
Developers are going to be using the DX9 HLSL, but when it comes time to compile their code, they will invoke Cg's compiler to get NV30-specific optimized output and MS's compiler for everyone else.

This is no different than writing a C program, but using MSVC or Intel's compiler on the x86 architecture, and using HP's PA-RISC compiler on HP (instead of GNU C).

GNU C is analogous to the "generic" cross-compiler that produces pretty good cross-platform code, but is not optimal on each platform.


Here is a developer scenario:

#1 developer writes a lot of HLSL fragment shaders, in file DX9.hlsl

#2 developer invokes DX9 compiler to generate PS2.0 shaders for all DX9 cards, saved in file main.ps

#3 developer invokes Cg compiler to generate PS2.0 shaders optimized for NV30's specific instructions and instruction-scheduling idiosyncrasies, file is main_nv30.ps

#4 On 3D engine init, the developer checks which card is available.
If NV30, main_nv30.ps is used; if not, main.ps is used.
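
To make step #4 concrete, here is a minimal sketch, assuming a DirectX 8-style adapter query. PickPixelShaderFile is a hypothetical helper name, and a real engine would key off capability bits rather than just the PCI vendor ID:

#include <d3d8.h>

/* Pick the shader file produced by the matching compiler backend (step #4). */
const char* PickPixelShaderFile(IDirect3D8* d3d)
{
    D3DADAPTER_IDENTIFIER8 id;
    if (SUCCEEDED(d3d->GetAdapterIdentifier(D3DADAPTER_DEFAULT, 0, &id))
        && id.VendorId == 0x10DE)   /* NVIDIA's PCI vendor ID */
    {
        return "main_nv30.ps";      /* NV30-optimized output from the Cg compiler */
    }
    return "main.ps";               /* generic PS2.0 output from the DX9 compiler */
}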


There is no way you are going to stop a vendor from producing their own compiler for HLSL. Cg HLSL is close enough to (or identical to) DX9 HLSL that it doesn't matter as far as development is concerned.


Every C/C++ compiler in existence has custom extensions to the C/C++ language or pieces of the standard missing. MS Visual C, for example, has a huge number of "underscore underscore" modifiers and #pragmas. Ditto for GNU C, and others.

That's why cross-platform C/C++ code in general has a huge number of C-preprocessor directives, usually detecting which compiler is available, e.g.

#ifdef __GNUC__
/* ... define some workarounds ... */
#endif

#ifdef _MSC_VER
/* ... define more workarounds ... */
#endif

etc.


For shaders, optimization is incredibly important. They are analogous to the tight inner loop of any application, since they are run millions of times over and over: at 1024x768 and 60 fps, a pixel shader executes roughly 47 million times per second, so a 1-cycle saving adds up to HUGE gains. It will be irresistible to developers to run vendor-specific tools to get this performance tweak.
 
Nothing prevents 3DLabs, ATI, or any other brand from developing their own simple C-derivative language (3C, AC, XC) and a respective compiler. It is probably much less expensive than all the money they put into hardware/driver/algorithm R&D and support.

Then developers will be very attracted to compiling to the metal because of the possible advantages, and then they will have to generate code for each target.

IIRC GNU C has its own inline extension as part of the language/preprocessor. Try to compile that with a standard C compiler. One detail here, another there.

Again, I am not against Cg or nVidia, and I am nobody's fan.
Well, it will be the developers' problem, not mine :LOL:

Maybe some developers will want to do their own "assembly" and acrobatic optimizations (writing with DX9 or OpenGL).
 
If the Cg run-time compiler is used, it doesn't necessarily compile to DX/OGL assembly but compiles directly to the hardware, thus circumventing any potential limitations in the current APIs.

Imagine that... and the crap I put up with. I got called names by DemoCoder... made fun of... ridiculed...

Developers are going to be using the DX9 HLSL, but when it comes time to compile their code, they will invoke Cg's compiler to get NV30-specific optimized output

Talk about hypocrisy.

Whatever... it just proves how naive some people are. In the end I get the last laugh, as do the few others who saw the true intent of Cg.
 
Hellbinder said:
Developers are going to be using the DX9 HLSL, but when it comes time to compile their code, they will invoke Cg's compiler to get NV30-specific optimized output

Talk about hypocrisy.

Whatever... it just proves how naive some people are. In the end I get the last laugh, as do the few others who saw the true intent of Cg.


You don't get to laugh at all. There was never any contradiction. As we have been saying over and over again, Cg is just a compiler for DX9 HLSL, and it will have 2 backends: one that produces optimized NV30 code and one that produces generic code. If ATI, 3DLabs, or Matrox want optimized DX9 HLSL, they either have to write their own compiler from scratch, or reuse the open-source Cg front-end and write their own backend.

The last laugh is on you, because after all this time you still fail to realize that there is nothing NV30-specific in the Cg *LANGUAGE SYNTAX*, and in fact Cg EQUALS DX9 HLSL. It is the COMPILER that makes the difference.
 
You don't get to laugh at all. There was never any contradiction. As we have been saying over and over again, Cg is just a compiler for DX9 HLSL, and it will have 2 backends: one that produces optimized NV30 code and one that produces generic code. If ATI, 3DLabs, or Matrox want optimized DX9 HLSL, they either have to write their own compiler from scratch, or reuse the open-source Cg front-end and write their own backend.

That is just flat-out dishonest. Not that I am surprised... I am quite used to this from the Nvidia crowd.

Cg is just a compiler for DX9 HLSL, and it will have 2 backends

The above statement is false. Cg is NOT a compiler. It is a language AND a compiler, and programs GET COMPILED to DX9/OGL assembly or *Nvidia-specific hardware*.

The ENTIRE argument you guys were making is that a program written in Cg would not be optimized or run *special* on Nvidia hardware. Well, that has proven to be flat-out FALSE. The STANDARD compiler for Cg (not a special one) detects Nvidia hardware and *can* compile straight to the hardware. Thus the same shader will compile differently for Nvidia cards than for other vendors, giving ANY shader routine written in Cg a performance advantage on Nvidia hardware. The SAME exact premise as Glide used to be. I just don't see how anyone could deny that now. Most if not all competitors are not going to use an Nvidia-based language, unless it gets widely adopted and they are forced to do it.

It just kills me how you guys are never held accountable for the things you say. I know what was said; I can pull quotes. No, you cannot change your tune in midstream and claim that you never said anything like that, or that somehow this is exactly what you were saying the entire time...
 
The last laugh is on you, because after all this time you still fail to realize that there is nothing NV30-specific in the Cg *LANGUAGE SYNTAX*, and in fact Cg EQUALS DX9 HLSL. It is the COMPILER that makes the difference.

The idea that you can separate the language from the compiler in this case is not an honest evaluation of the situation, and you damn well know it!!
 
None of us seem to know how HLSL is implemented in DX9, so let's wait for DX9 to come out of beta so we can see exactly how evil Cg is.

Until then, could we quit arguing about things we know nothing about?
 
The ENTIRE argument you guys were making is that a program written in Cg would not be optimized or run *special* on Nvidia hardware. Well, that has proven to be flat-out FALSE. The STANDARD compiler for Cg (not a special one) detects Nvidia hardware and *can* compile straight to the hardware. Thus the same shader will compile differently for Nvidia cards than for other vendors.

The thing is that there's absolutely NOTHING wrong with this.

The main thing that was discussed was whether the actual LANGUAGE was/could be optimized for Nvidia hardware, not whether the compiler was.
Well, there were some discussions about that also, but it became a moot point as soon as the compiler was open sourced.
And it would still be moot if any vendor can make a DX9 HLSL compiler, which they probably will.

And does anyone here think that MS would allow Nvidia to compile directly to the hardware but say no to all other IHVs?

And as has been said here, it'll probably be close to impossible for MS to stop them from doing just that.

Thus giving ANY shader routine written in Cg a performance advantage on Nvidia hardware. The SAME exact premise as Glide used to be.

Nope, not the same advantage, because Glide wasn't open sourced back then. Nobody could implement their own version of Glide.
In this case, we're going to see each IHV having their own compiler, for Cg or DX9 (which might be the same thing for all we know) and OpenGL 2.

Which basically means no advantage at all.
Maybe Nvidia had their compiler ready first, but that's an advantage they deserve then.
 
There have been quite a few issues discussed and most of them remain or have gotten worse.

LANGUAGE
The language itself is still not open source; there has been no discussion about how features can be added to the syntax of the language. This means that if a competitor of NVIDIA creates a really nifty new feature, it will not be possible to expose it through Cg, since there is no mechanism in place to handle this. Which means we might get Cg dialects, which NVIDIA will clamp down on by claiming IP on the language syntax, even though it's essentially HLSL syntax.

Cg = HLSL
If Cg is completely compatible with HLSL, it must be largely equal to it, with only minimal differences. If the languages are that similar, why bother? The only answer is that NVIDIA might want to remove or add functionality; in other words, a sub-set or a super-set. If Cg turns into a super-set, we immediately get a "Glide"-like situation where programs written in Cg might never be compilable for competitors' hardware.

Cg is layered on top of the existing "standard" APIs
Or at least that is what NVIDIA told everybody. It's now become clear that Cg does not sit on top of the API: it can completely bypass the API assembly and go straight to the "NVIDIA" hardware. This is one of the big fears that everybody has had, and it's now proven to be true. By allowing a direct path to hardware, NVIDIA, who owns and controls Cg, has an advantage: they have a direct path to optimised hardware calls, while competitors are forced to use a compiler to standard calls (which might not support all the functionality needed), which are then compiled again at driver level to the internal hardware format. So two compilers versus one compiler, with the intermediate format locked into the MS assembly. Another big issue with this is: HOW does NVIDIA bypass the API? Under OGL this is a no-brainer using a specific LoadCg extension. Under D3D this is something else... Microsoft will not appreciate working around their API layer, and it will result in system instability: DX calls intermixed with direct hardware Cg calls. I doubt that Microsoft has had anything to say about allowing NVIDIA to compile directly to hardware.

The compiler is open source/public/etc
No it is not; "the" compiler is not open source, and not even "a" compiler is currently open source. Only the very first step of the offline compiler is currently open source. There has been NO talk about open sourcing the run-time, API-bypassing compiler source. There has been talk about a generic back-end being made public, but it currently is not, which means that competitors' work, if they ever bother, is being delayed. Also, NVIDIA has only talked about allowing competitors to create compilers that target the standard assembly; nobody has said anything about direct paths or systems to do this.

What should it have been ?
Microsoft and the ARB create a high-level language. All IHVs are given the format at the same time and are asked to provide feedback. The language is defined. A generic compiler system is offered, and all IHVs are given the opportunity to write their own compilers.

What has NVIDIA done ?
NVIDIA creates its own language - which is the standard MS language with minimal changes - but with possible large NVIDIA-only additions in the future. NVIDIA claims it sits on top of the API, but later reveals it can bypass the API by compiling directly to their hardware. NVIDIA confuses people by saying it's all open source, but they do not expose the essential bits: the run-time compiler, the bypass mechanism, the syntax/extension mechanism, and the open-source releases are delayed as much as possible (if NVIDIA releases Cg 2.0, competitors will only find out at the last minute, since NVIDIA owns the Cg syntax). In short, it's a third horse in a race where there could have been only two. Also, NVIDIA proclaims it a standard even though no other hardware company has given it any support, and even before there is a single game that uses it.

Trouble points remain:

- Should it have been there in the first place, given HLSL and OGL2.0?
- The language is closed and NVIDIA-owned: no non-NVIDIA features.
- The compiler is not really open source (bypass, back-end?).
- It bypasses the API in an unknown way, which might cause instability and other problems.
- Future versions of the language might contain NVIDIA-only features which cannot be compiled through the standard MS assembly, only through the unknown bypass or OGL.
- Which compiler does the developer include in their code? One compiler that fits all, or 10 different compilers for all the different IHVs?
- How many different Cg programs have to be written, given the subtle differences between hardware and compiler limitations/bugs?
- What happens to Cg dialects?

G~
 
- Should it have been there in the first place, given HLSL and OGL2.0?

Well, none of the others are available yet, are they?

- The compiler is not really open source (bypass, back-end?).

Why should Nvidia have to give away a fully working compiler?
Imo, they don't have to do this at all, as long as they allow other people to make their own compilers.

- It bypasses the API in an unknown way, which might cause instability and other problems.

And those problems would be?
And as far as instability goes, why would it lead to more instability than what we already have today?
Also, I doubt that developers would like to use the Cg compiler if it created large problems with instability.

- Future versions of the language might contain NVIDIA-only features which cannot be compiled through the standard MS assembly, only through the unknown bypass or OGL.

And (as has been mentioned a lot of times) most developers were quick to move away from Glide as soon as other (good) options were available that weren't specific to one vendor. Why would they be so eager to get into a similar situation with Cg?
And I doubt that MS would sit idle while something like this happened, since it's not exactly in their best interest.

- Which compiler does the developer include in their code? One compiler that fits all, or 10 different compilers for all the different IHVs?

You usually don't include a compiler in the code.
Anyway, this is something that you can create a simple makefile for, and it won't cause the developers that much extra work.
But of course, if one of the vendors' compilers creates large problems with instability (Cg, according to you), then that compiler won't be used that much, of course :)

- How many different Cg programs have to be written, given the subtle differences between hardware and compiler limitations/bugs?

Probably none, but perhaps some

#ifdef __GNUC__
/* ... define some workarounds ... */
#endif

#ifdef _MSC_VER
/* ... define more workarounds ... */
#endif

code.

There has been talk about a generic back-end being made public, but it currently is not, which means that competitors' work, if they ever bother, is being delayed.

And what's wrong with that?
All the other IHVs knew about the upcoming DX9 HLSL and OpenGL 2 HLSL.
So they probably already have (or should have) a compiler for those languages.
And if DX9 HLSL and Cg are exactly the same, well, then they should have a compiler for it already.
 
Hellbinder[CE] said:
You don't get to laugh at all. There was never any contradiction. As we have been saying over and over again, Cg is just a compiler for DX9 HLSL, and it will have 2 backends: one that produces optimized NV30 code and one that produces generic code. If ATI, 3DLabs, or Matrox want optimized DX9 HLSL, they either have to write their own compiler from scratch, or reuse the open-source Cg front-end and write their own backend.

That is just flat-out dishonest. Not that I am surprised... I am quite used to this from the Nvidia crowd.

What is dishonest about it? You're claiming I am lying about something here? Just what am I lying about? Please, feel free to quote my past posts. I have been consistently saying the following things:

#1 Cg "the LANGUAGE" is not NV30 specific. It is nothing more than a general purpose C-like language with your standard complement of datatypes and flow control statements. I challenged you over and over to point out something hardware specific in the NV30 Cg language.

#2 Cg "the compiler tool" will have a single front end parser (parses Cg "the language", and multiple backend code generators, one of which spits out generic DirectX and OpenGL vertex/fragment shaders, and the other which spits out optimized code for NVidia products. I have never wavered from this, and I don't see what's wrong with it.

#3 That Cg IS/WILL BE DX9 HLSL.

HellBinder said:
Cg is just a compiler for DX9 HLSL, and it will have 2 backends

The above statement is false. Cg is NOT a compiler. It is a language AND a compiler, and programs GET COMPILED to DX9/OGL assembly or *Nvidia-specific hardware*.

The ENTIRE argument you guys were making is that a program written in Cg would not be optimized or run *special* on Nvidia hardware. Well, that has proven to be flat-out FALSE. The STANDARD compiler for Cg (not a special one) detects Nvidia hardware and *can* compile straight to the hardware. Thus the same shader will compile differently for Nvidia cards than for other vendors. Thus giving ANY shader routine written in Cg a performance advantage on Nvidia hardware.

No, the argument we were making is that Cg "the language" is not NV30-specific. We have NEVER said that Cg compilers won't spit out optimized code for the NV30. We have always said it will have multiple backend outputs, and we have always claimed that NVidia's NV30 backend will be more optimal than the generic backend. The fact that the same shader will compile differently for different platforms IN NO WAY logically implies an NV30 advantage, any more than the fact that the compiler can output both OGL and DirectX code guarantees an advantage. It all depends on the compiler technology implemented by each vendor. Right now, in DirectX8, every device driver compiles vertex shaders differently. NVidia's Detonators and ATI's Catalyst each compile DirectX8 vertex shader code into an internal format and perform optimizations.

The same will be true for DX9 HLSL. If you accept Microsoft's default compiler, then you accept the optimization choices that MS programmers made for the lowest common denominator. If ATI implements a better DX9 HLSL compiler than NVidia, then the same DX9 HLSL shader could perform better on the R300 than it does on the NV30, even if the NV30 had superior per-clock execution performance.

This situation is no different from the one with device drivers. If ATI's device drivers suck, they will have inferior performance, regardless of the hardware. Likewise, if ATI's compiler optimizations suck for the available high-level languages (OGL2.0/3DLabs, DX9 HLSL/Cg), they will likewise lose.

HellBinder said:
The SAME exact premise as Glide used to be. I just don't see how anyone could deny that now. Most if not all competitors are not going to use an Nvidia-based language, unless it gets widely adopted and they are forced to do it.

No, the situation is NOT the same as Glide. With Glide, if I wrote a game based on it, it would simply NOT RUN on other hardware. Moreover, if anyone tried to run Glide on third-party hardware, 3dfx would have their balls. Porting code from Glide to DirectX or OpenGL wasn't easy and basically resulted in a rewrite of the renderer.


With Cg, your shaders will still run, they'll just run slower. Moreover, the fact that they run slower ISN'T INHERENT IN THE DESIGN OF THE LANGUAGE, but merely a factor of the COMPILER, and anyone is free to WRITE THEIR OWN COMPILER to ALLEVIATE THE SITUATION.

The developer doesn't need to do any extra work rewriting the Cg shader, he just needs to switch compilers.

And no matter how much you attack Cg, the situation will be identical for OpenGL2.0 and DX9. Unless people produce good compilers for their platform, performance will suck. Since NVidia will have an optimizing compiler for Cg that generates low-level NV30 code, and given that NVidia themselves *CLAIM* that Cg is 100% compatible with DX9 HLSL, this means that NVidia can take the same compiler and apply it to DX9 HLSL (and probably OpenGL 2.0 as well).


Therefore, NVidia can produce a tool that takes "standard" OpenGL2.0 or DX9 HLSL and produces optimized NV30 "direct to hardware" code as well, and there is nothing you can do to stop them.

HellBinder said:
It just kills me how you guys are never held accountable for the things you say. I know what was said; I can pull quotes. No, you cannot change your tune in midstream and claim that you never said anything like that, or that somehow this is exactly what you were saying the entire time...

Go ahead and pull quotes then. Let's see it. Here, I'll help you:

http://www.beyond3d.com/forum/viewtopic.php?p=16600&highlight=#16600

http://www.beyond3d.com/forum/viewtopic.php?p=25626&highlight=#25626

Oh yeah, I've changed my tune midstream alright.

And with regard to Cg having any extensions over DX9 HLSL (say, it becomes a superset): this also isn't the same situation as Glide. It is no different from the OpenGL extension situation or the vendor-specific C-language extensions right now, and it is handled quite easily by the preprocessor (see the sketch below). An extension function or extension keyword is a wholly different situation from trying to go from Glide to OpenGL or DirectX; the latter requires a near-complete rewrite of your rendering code.
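
As a minimal sketch of that preprocessor approach: __CG_NV30__ below is a hypothetical macro, not a real predefined symbol, used only to show how a vendor-dialect type could be guarded the same way C code guards compiler extensions:

/* __CG_NV30__ is hypothetical, purely for illustration */
#ifdef __CG_NV30__
    #define LOWP fixed4   /* vendor low-precision type */
#else
    #define LOWP half4    /* portable fallback */
#endif

LOWP main(float4 color : COLOR) : COLOR
{
    return LOWP(color);   /* the same source compiles under either dialect */
}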
 
Guest said:
Trouble points remain:

- Should it have been there in the first place, given HLSL and OGL2.0?

Hardware can't wait for standards, given the schedules. Would you prefer that the NV30 and R300, with fully programmable pipelines, go unused for several product cycles while OGL2.0 HLSL is hammered out in committee? Would you prefer that neither NVidia nor ATI produce any new silicon designs until the next spec is hammered out?

The way standards work is, vendors develop proprietary specs and products first, then iron out the differences in committee and refine later. This is likely what happened with Cg. NVidia created programmable hardware 12-18 months ago. They needed a language. No standard language existed. They created one, Cg. Later, they worked with Microsoft to try and shoehorn it into DX9. On a parallel track, 3DLabs did the same thing: 3DLabs created the hardware first, and their own HLSL language. Later, they contributed it to OpenGL.

It's likely that, in the end, Cg, DX9 HLSL, and OGL2.0 HLSL will merge when the market reaches maturity.

- The language is closed and NVIDIA-owned: no non-NVIDIA features.

The language appears to be heading towards MS/ARB "ownership".

- The compiler is not really open source (bypass, back-end?).

The front-end parser and some optimization stuff on the intermediate representation have been open sourced. The generic DX8/OGL backend is likely to follow soon.

Why would they open-source an NV30-specific optimization bypass?

- It bypasses the API in an unknown way, which might cause instability and other problems.

For OGL, the bypass is a piece of cake with extensions, and NVidia is well known for great drivers. For DX, who knows how it will be done, but I can foresee many ways of doing it "nicely".

- Future versions of the language might contain NVIDIA-only features which cannot be compiled through the standard MS assembly, only through the unknown bypass or OGL.

Once you have a high-level Turing-complete language, how would you make it "even more powerful" such that it can't be represented by DX9's general-purpose shaders? The only possibilities I can see are new datatypes (double-precision floating-point types!??! eek) or a new kind of non-stream-based shader that can access arbitrary memory locations. Even the "double FP datatype" issue could be handled via compiler tricks, the same way BigInt and BigDecimal datatypes are handled today in C/C++ numerical libraries.

Both seem unlikely.
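
For what it's worth, here is a minimal sketch of the kind of compiler trick meant above, assuming float-only hardware: a wider value is represented as an unevaluated pair of floats, with Knuth's exact two-sum as the building block (dfloat and twoSum are illustrative names):

#include <stdio.h>

/* Represent a wider value as hi + lo, two floats (the "double-float" trick). */
typedef struct { float hi, lo; } dfloat;

/* Knuth's TwoSum: hi is the rounded sum, lo the exact rounding error. */
static dfloat twoSum(float a, float b)
{
    dfloat r;
    r.hi = a + b;
    float bb = r.hi - a;
    r.lo = (a - (r.hi - bb)) + (b - bb);
    return r;
}

int main(void)
{
    /* 1e-8 vanishes entirely in a plain float add; here it survives in lo. */
    dfloat s = twoSum(1.0f, 1e-8f);
    printf("hi=%.9g lo=%.9g\n", s.hi, s.lo);
    return 0;
}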

- Which compiler does the developer include in their code? One compiler that fits all, or 10 different compilers for all the different IHVs?

Developers don't "include" compilers. They run the compiler as part of the build process, or they use some standard API call in DX9 or OGL2 that invokes a run-time compile, in which case the Cg compiler will be "installed" as a hook to catch this call.
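
For illustration, here is a minimal sketch of what such a run-time compile looks like with the Cg runtime (the OpenGL flavour, using Cg 1.x-era calls; treat the exact API as an assumption, and error handling is omitted):

#include <Cg/cg.h>
#include <Cg/cgGL.h>

void loadFragmentShader(const char *source)
{
    CGcontext ctx = cgCreateContext();

    /* Ask the driver for the best fragment profile it supports. */
    CGprofile profile = cgGLGetLatestProfile(CG_GL_FRAGMENT);

    /* The shader source is compiled here, at run time, for that profile. */
    CGprogram prog = cgCreateProgram(ctx, CG_SOURCE, source,
                                     profile, "main", NULL);

    cgGLLoadProgram(prog);      /* hand the compiled result to the GL driver */
    cgGLEnableProfile(profile);
    cgGLBindProgram(prog);
}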

- How many different Cg programs have to be written, given the subtle differences between hardware and compiler limitations/bugs?

Should be none. It's compilers that are the issue. If a compiler is buggy, use another one. I would never use a C compiler that generated incorrect code.

If there are syntax issues, they will likely be #ifdef'd and #define'd away.


- What happens to Cg dialects?


What happens to C/C++ dialects right now?

What C/C++ programmer isn't used to code like

#ifndef HAS_BOOL
typedef int bool;
#endif

...

#ifndef HAS_EXCEPTIONS
#define try ...
#define catch ...
#endif
 
OK, Time for some Facts...

1. Cg does not bypass APIs. Period. Any other implication is false. This has never been part of the plan.

2. Cg is designed to help developers. It really has no more advantage for NVIDIA than for any other IHV, despite what NVIDIA themselves have claimed from time to time - i.e. "optimized for NVIDIA GPUs". This is not really possible, as it just spits out assembly code, which is optimized by the driver anyway upon loading through the standard APIs.

3. Cg is not a conspiracy, although it has been spun that way by many people. Developers are using it, and like it. Hundreds of graphics and application developers have been trained on Cg at Siggraph and other events. They like it.

4. If there is a conspiracy, it's the 3Dlabs proposal, which they disingenuously called OpenGL 2.0, implying that it had some sort of endorsement or backing of the OpenGL ARB.

5. Cg is integrated directly into the upcoming versions of Max, Maya, XSI and other tools. That alone is a HUGE boost to the entire realtime 3D community - an artist can view & tweak hardware shaders in real time, in the tool they are already familiar with. The exact same shader can run in the tool and in the game engine. This was demonstrated at Siggraph and is currently in beta.

6. The OpenGL ARB unanimously voted to have Cg submitted as one of the shading language proposals for OpenGL 2.0. If things work out well, HLSL == Cg == OpenGL 2.0 shading language. One language for all.

7. Why use Cg instead of HLSL for DirectX developers? For Dx8 developers, Cg provides OpenGL & Dx8 backends; HLSL does not. For Dx9 developers, there's no compelling reason, although any developer that uses an OpenGL tool (i.e. Maya or XSI) is benefiting from Cg anyway, due to the Cg integration that allows the same file to run in the tool using Cg & OpenGL, and in the game using HLSL & DirectX.
 
I recommend you guys post your questions on the cgshaders.org forums, where you'll surely get a reply from cass (one of the Nvidia guys).
 
#1 Cg "the LANGUAGE" is not NV30 specific. It is nothing more than a general purpose C-like language with your standard complement of datatypes and flow control statements. I challenged you over and over to point out something hardware specific in the NV30 Cg language.

Well, let's see: is there a command in the Cg language that allows you to sample a texture map in the vertex shader? If any instruction that can be supported by non-NV30 hardware is missing, that makes the Cg language NV30-specific. I am sure that other vendors will come up with functionality that the NV30 does not have, and I would be very surprised if NVIDIA-only functionality does not get added to Cg (say they directly support a noise function).

That Cg IS/WILL BE DX9 HLSL.

Have you ever considered that it's actually the other way around? Have you seen any endorsement from Microsoft of all the claims NVIDIA made about their co-operation with MS on the design of Cg? A bit strange, if both these companies co-operated so closely on this, that there are no Microsoft statements about how good this is... maybe because Microsoft does not agree that it's a "good thing"?

(in relation to OGL2.0 and HLSL being used) Well, none of the others are available yet, are they?

Well, no hardware is available either, and if Cg is based on top of DX9 it will be stuck until then as well. Cg and HLSL will be available at the exact same time.

Why should Nvidia have to give away a fully working compiler?
Imo, they don't have to do this at all, as long as they allow other people to make their own compilers.

Well, let's see: 3DLabs does, and Microsoft probably does as well... so why not NVIDIA?

NVIDIA has to convince its competitors, even more than developers, that Cg is something to go for, and without a good starting point competitors will simply ignore Cg. If anyone complains about performance or issues, they will be told not to depend on Cg; this will cause trouble with developers and end-users. Who will be easier to blame? ATI and all the others for not supporting Cg, or the developer who was tricked into using Cg in the first place?

You usually don't include a compiler in the code.
Then how is the bypass ever going to work?

(How many different Cg programs have to be written, given the subtle differences between hardware and compiler limitations/bugs?) Probably none, but perhaps some

Well, think again, because even when you code in Cg you will still need to be very well aware of all the limitations imposed by the various levels of vertex and pixel shaders, because otherwise it will not compile. If you believe that you can write one pixel shader and have it run on DX8/9/... hardware, then you'll be in for a surprise, unless you code for the lowest level: PS1.0/1.1, which has such a low instruction count that you probably can't even use Cg in a sensible way. Different VS/PS versions support different instructions, and while most can be emulated, emulation takes up a lot of instructions, which you do not have many of in the lower revisions. So with or without Cg you still end up coding for all the different VS/PS standards, and on top of that you'll have to scale quality because of the different speeds of different hardware.

Once you have a high-level Turing-complete language, how would you make it "even more powerful" such that it can't be represented by DX9's general-purpose shaders? The only possibilities I can see are new datatypes (double-precision floating-point types!??! eek) or a new kind of non-stream-based shader that can access arbitrary memory locations. Even the "double FP datatype" issue could be handled via compiler tricks, the same way BigInt and BigDecimal datatypes are handled today in C/C++ numerical libraries.

Think instructions, new functionality... or do you expect that the instruction set we have now will be enough for years and years to come? Say we get noise functions in hardware; where are the instructions for that? Or do you expect noise to be completely coded using default instructions, then recognised and optimised by the compiler?

What happens to C/C++ dialects right now?
The difference is that NVIDIA won't allow dialects. Cg is not C/C++.

Should be none. It's compilers that are the issue. If a compiler is buggy, use another one. I would never use a C compiler that generated incorrect code.
What if it generates sub-optimal code? What if the compiler is simply unable to squish the code into the limited VS/PS functionality? Back to coding in assembler then, I guess...

1. Cg does not bypass APIs. Period. Any other implication is false. This has never been part of the plan.

Strange, since that's exactly what was said in the conference call mentioned at the start of this discussion?!

3. Cg is not a conspiracy, although it has been spun that way by many people. Developers are using it, and like it. Hundreds of graphics and application developers have been trained on Cg at Siggraph and other events. They like it.

4. If there is a conspiracy, it's the 3Dlabs proposal, which they disingenuously called OpenGL 2.0, implying that it had some sort of endorsement or backing of the OpenGL ARB.

Hundreds of developers "attended" the presentations; there is a huge difference between attending a presentation/course and actually using it. They like the concept of Cg, which is identical to the concept of HLSL and OGL2.0 - the difference is that Cg was never open for discussion and might only be truly supported by one hardware vendor. 3DLabs made a proposal and has been asking for feedback, input and comments for a long time. When did NVIDIA do that? Did they call Cg a proposal? No, they declared it an "industry standard" all on their own.

For Dx8 developers, Cg provides OpenGL & Dx8 backends; HLSL does not. For Dx9 developers, there's no compelling reason, although any developer that uses an OpenGL tool (i.e. Maya or XSI) is benefiting from Cg anyway, due to the Cg integration that allows the same file to run in the tool using Cg & OpenGL, and in the game using HLSL & DirectX.

Why not call Cg an HLSL compiler for OGL then? Why declare it something separate and new when it's just the Microsoft standard ripped off and made their own? Do you think HLSL and OGL2.0 will not get integrated with those tools over time?

All NVIDIA has going for Cg is:

- It's first out of the starting blocks, in an alpha/beta form that only supports the basics and has a poor compiler (it does not come close to generating optimal code, according to ex-NVIDIA Richard Huddy on the DX mailing list).
- A marketing push by one of the best teams out there.
- It allows sharing between D3D and OGL.

G~
 