Cg released

gking said:
It's worth pointing out that "adding functionality" to DX often takes months after hardware is available because it *is* Microsoft-controlled, and even then not all functionality is added, as Microsoft plays political games with hardware manufacturers. The register combiners in NV1x and NV2x series chips are far more powerful than what Microsoft exposed with texture environments and pixel shaders.

As always, true standards take time to appear, and given that they aim to create an open playing field with input from multiple parties, the end result is never ideal for any of the parties involved. The issue is that with MS all vendors at least have an equal shot at getting something into the end result. With NVIDIA owning and controlling the Cg syntax there is no way that ATI, Matrox, PowerVR, or anybody else can get something added against NVIDIA's will (no matter how sensible or good it might be - if it's not in NVIDIA hardware it will not be in Cg, which is obvious given the many statements in the documents released which point at NV30 functionality). And that is where Cg seems to fail, at least IMHO...

I fear the return of vendor-specific solutions. We finally got rid of all the vendor-specific APIs like Glide; now it seems we'll be hit with a wave of vendor-specific High Level Shading Languages. The same problem is/was more or less present in OpenGL, where companies block standardisation of extensions by claiming IP on them, resulting in vendor-specific extensions.

Developers do not like to support multiple APIs, nor do they like to support multiple extensions in OpenGL that do the same thing, and they will not like multiple High Level Shader Languages either. So let's skip the vendor-specific ones and make a serious effort on the ones that stand a chance of being a good open standard with influence from as many parties as possible - while minimising the delays that arise because the parties involved cannot agree on how to proceed. Most probably it's the conflicts over the true standards, and the delays resulting from those conflicts, that have opened the window for, or caused the creation of, Cg.

K~
 
gking said:
Yep, pretty much. HLSL syntax and Cg syntax will be 100% compatible in the final releases. Depending on profiles used in Cg, the supported features may be a subset _or_ a superset of DX9 HLSL.

So an NVIDIA subset or superset. I assume the superset will appear through the OpenGL extensions.

Now, couldn't all of this be avoided by allowing DX9 HLSL to have an extension mechanism so that vendors could expose their full functionality? Or, since HLSL will also feature compilers, by just having the compiler be aware of the vendor-specific optimisations that are possible (for example, macros that can be supported directly by the hardware rather than through a split into sub-calls)?
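
To illustrate (just a rough sketch of my own in Cg/HLSL-style syntax, not code from either toolkit): the shader author asks for a transform once, and a compiler that knows the target hardware could map the call to a single native macro instruction or expand it into several dot products - no change to the source either way.

```
// Hypothetical vertex program. The point is that mul() is one call in
// the source; whether it becomes one native macro instruction or four
// dp4-style instructions is a per-target decision the compiler could
// make, if the language/compiler allowed vendor-specific knowledge.
float4 main(float4 position : POSITION,
            uniform float4x4 modelViewProj) : POSITION
{
    return mul(modelViewProj, position);
}
```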

All of this means that the way most people understand Cg is incorrect. If NVIDIA had promoted this as an NVIDIA-specific version of MS DX9 HLSL I would have been a lot happier, and developers would have had a much better view of what Cg really is and what it really should be used for.

So I expect other hardware vendors to create what is in essence "HLSL + vendor-specific extensions" (for NVIDIA this will be known as Cg); the result will be passed to the hardware through some backdoor/hack or through an OpenGL extension.

Or is this still not the right way to understand what Cg really is?

K~
 
gking, but that answer gets us right back to the trace function ... what use is it without a scenegraph API?

SA, that's nice in theory... but without automatic multi-pass your shaders still have to be architecture-specific, so yes, Cg saves work, but AFAICS it does not allow you to (always) use a single shader for multiple architectures any more than DX8 does (which is to say that both will let you use a single shader as long as you can live with the lowest common denominator).
 
Kristof,
Developers today are building their own custom languages and compiler generators to achieve what Cg is doing. It is commonly done in every part of the software industry, just look at the Web and the proliferation of proprietary templating systems, scripting languages, and object frameworks. Half the websites out there wrote their own high level language.

The Aquanox guys said they have an internal API/language for generating pixel/vertex shaders. For Quake3, Carmack invented his own sort of mini-shader language. I'm sure many of the game developers out there do this. They've done it with game logic too (every game engine seems to have its own scripting language).

These are tools that programmers have to keep reinventing over and over. The really smart ones can do it themselves (Carmack et al). However, other people -- artists, not-so-great programmers, or resource-constrained programming teams -- need off-the-shelf tools.

Cg is just a command line tool for specifying your shaders in a separate file and generating VS and PS as a result which you can then reuse in your code. Cg doesn't bypass DX8 or OpenGL. It isn't GLIDE. The output is always code that you can hand-tweak yourself afterwards for any graphics card. And, the output is API neutral (DX or OGL)

I see Cg as nothing more than a CASE tool. It is no different than using Visual Studio Wizards, or UML tools, etc. It is a high level tool for modeling your shaders and generating code. This is super common in the software industry.

Even after OpenGL2.0 and DX9 come out, there will still be a need for even higher level languages. There will be a need for bridge wrapper APIs that can work with both OGL and DX9. Cg won't be it, but some company will do scenegraphs. Others will release a Physics Engine language/API, etc.


Right now, Cg fills a need, which is to have a tool which can interface with Maya/3D Studio/etc and generate vertex/pixel shaders for OpenGL and DX8 cards that exist *today*. When OGL2.0 finally arrives, and hardware that supports it, Cg will disappear and something that is higher level will replace it.

Until then, developers need tools. They can either hand code VS/PS, or use C-like syntax and a code-generation tool. Me? I'm sick of assembly language, and I'd rather write X = A + B * (dot(C,D)) and have the compiler output the rest.
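
Something like this (a made-up Cg-style sketch, not an actual NVIDIA sample) is all I want to write, and the tool can worry about the register allocation and dp4/mad scheduling:

```
// Hypothetical shader: the interesting line is X = A + B * dot(C, D);
// the rest is just boilerplate so the example is complete.
struct v2f {
    float4 pos : POSITION;
    float4 X   : COLOR0;
};

v2f main(float4 position : POSITION,
         float4 A : TEXCOORD0,
         float4 B : TEXCOORD1,
         float4 C : TEXCOORD2,
         float4 D : TEXCOORD3,
         uniform float4x4 modelViewProj)
{
    v2f OUT;
    OUT.pos = mul(modelViewProj, position);   // standard transform
    OUT.X   = A + B * dot(C, D);              // the compiler emits the assembly
    return OUT;
}
```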


The way you've got to think of these things is that OpenGL/DX are the Virtual Machines that execute a stream of graphics operations. Cg is just one front-end to the VM. Just as with Java, there are compilers that can compile Fortran, Lisp, etc to Java Byte Code, there will be many many front-end scripting languages to the low level graphics "VM"

What do you think is gonna happen when GPUs become general purpose processors like CPUs? Do you expect all programmers to use the same language? When GPUs have 20 different front-end compiler tools, that's when the industry has matured.

I don't see having a plethora of languages and tools as being a horrendous situation.
 
This initial implementation is the beginning of a much longer road. Before Renderman, things were fairly chaotic for offline 3d graphics. Renderman did an enormous amount to clear up that early chaos.

HLSL and its implementations are more targeted to the needs of real-time 3d. It's still in its infant stages, so don't expect the world all at once. After all, there are many other deadlines to meet. It will certainly not satisfy all developers' needs immediately, but that's to be expected.

It has been well thought through, though, and has received a great deal of input from many different 3d companies across all aspects of the 3d industry. I well understand the intent that only the profiles need to change to evolve the language. However, I do feel the language itself will also evolve to meet the needs of the industry. All languages do.

Nvidia certainly deserves credit for their key contributions. However, it's important to understand that the high level shader language is an industry standard, not a proprietary one, with input, direction, and support from many companies.

At the moment, it is also somewhat of a high level tool and toolkit. However, just as Renderman and C++ are more than just high level tools, I see this as the early stages of a general purpose high level programming language for real-time 3d hardware.

I see GPUs taking on more and more general purpose vector processor characteristics. I see them becoming highly parallel general purpose processors, far more capable than future CPU floating point capabilities.

I see this happening for several reasons.

First, real-time photorealistic 3d graphics and physical simulation are undoubtedly the most computation intensive tasks required of consumer computers (or even many commercial machines).

Second, 3d chips get to specialize themselves for their computation intensive task. The CPU is designed to run word processors, spreadsheets, web browsers, GUI operating systems, etc. 3d chips are targeted at performing massive amounts of parallel computation. This demands enormous transistor counts, memory bandwidth, etc. It simply isn't economical for the CPU to try to cover its own work and also add all the transistors and memory bandwidth needed to satisfy the massive computation demands of real-time 3d graphics and physical simulation.

As 3d chips become more programmable, they will offload more of the computation intensive tasks from the CPU. This includes all aspects of the physical simulation, not just the optical aspects. They will be better suited for this than the CPU because they will be specialized for the task, with the massive transistor counts and the memory bandwidth to accomplish it. In fact, CPUs are likely to shrink in die size in the future to reduce costs, with transistor counts leveling off, but 3d chips will have as many transistors crammed into them as they can realistically fit. At that point, 3d chips will really become general purpose, highly parallel, computation machines.

How does all this affect the CPU? It's hard to tell, obviously. However, with its massive transistor counts and memory bandwidth, a future 3d chip die could easily squeeze a CPU into one of its corners and hardly notice. However, the reverse is certainly not the case. The rest is speculation.
 
DemoCoder said:
Kristof,
Developers today are building their own custom languages and compiler generators to achieve what Cg is doing. It is commonly done in every part of the software industry, just look at the Web and the proliferation of proprietary templating systems, scripting languages, and object frameworks. Half the websites out there wrote their own high level language.

I agree, but this is tailored to the developer's needs and is good coding practice. If there are various different HL languages with different advantages on different hardware, they might still need to write their own scripting and templating tools to maximise efficiency/performance on different hardware. If, however, there are one or two serious HL languages, they might not need to.

Cg is just a command line tool for specifying your shaders in a separate file and generating VS and PS as a result which you can then reuse in your code. Cg doesn't bypass DX8 or OpenGL. It isn't GLIDE. The output is always code that you can hand-tweak yourself afterwards for any graphics card. And, the output is API neutral (DX or OGL)

The moment it outputs to IP-encumbered (not freely implementable) OpenGL extensions it is in essence Glide. And while the output is API compatible, it's tailored to specific hardware in instruction ordering, instructions available, register count etc. as long as there are no other compilers.

I see Cg as nothing more than a CASE tool. It is no different than using Visual Studio Wizards, or UML tools, etc. It is a high level tool for modeling your shaders and generating code. This is super common in the software industry.

That is what we want, but as long as such a language is not accepted as a true standard by the industry you are stuck with a dialect, something that does not work properly with all hardware. I guess in a sense things like this happen with CPUs as well, where Intel introduced SSE and AMD introduced 3DNow!; compilers should support both but might not...

But I agree that Cg has an offline use as long as the output is not used directly without checking/optimising/etc.

Right now, Cg fills a need, which is to have a tool which can interface with Maya/3D Studio/etc and generate vertex/pixel shaders for OpenGL and DX8 cards that exist *today*. When OGL2.0 finally arrives, and hardware that supports it, Cg will disappear and something that is higher level will replace it.

Agreed, and I don't think I said the principle behind Cg is bad, just the way it's being launched as the great thing for years to come that changes the whole industry... it's a first small step.

The way you've got to think of these things is that OpenGL/DX are the Virtual Machines that execute a stream of graphics operations. Cg is just one front-end to the VM. Just as with Java, there are compilers that can compile Fortran, Lisp, etc to Java Byte Code, there will be many many front-end scripting languages to the low level graphics "VM"

The issue I have is that none of the languages you mention above are tied to a hardware company that can grab a direct competitive performance advantage from this. Fortran, Lisp, etc. are standard languages, AFAIK introduced through academic work, not by a hardware company that tries to get developers to output code tailored and optimised for its hardware while shielding competitors from adding functionality. And as soon as Cg has compilers for all target platforms it's fine with me; the question is, will that happen given that the syntax is controlled and limited by a competitor?

I don't see having a plethora of languages and tools as being a horrendous situation.

Talk to driver/tool developers and they will tell you it's horrendous, and for developers as well, since they will not know what to select, and if the output results in different performance on different target hardware it will be a true mess. I as a developer do not want to support NVIDIA HL, ATI HL, Matrox HL, PowerVR HL... I want to support Standard X and have a compiler that generates code for NVIDIA, ATI, Matrox, PowerVR. You can have multiple standards, but please not 20 of them! In some sense most of these 20 will be some kind of dialect of each other... like Cg already seems to be a dialect of MS Shading Language.

Look, I completely agree that we need high level languages and that not bothering with specific asm commands, registers and other low level stuff is great and something we need. I agree that Cg is a first step and something developers can play with. I fully agree that Cg can be used to generate some shaders that can then be hand-tweaked, improved, or scrapped as rubbish. What I don't want to see is this turning into something a lot of developers use and then into a Glide situation, where to make use of all Cg functionality Cg's compiler needs to output to OpenGL extensions that are NVIDIA specific and that no competitor can support. As long as Cg does not have multi-target compilers (I believe there are shaders and shaders) it's going to remain of limited use. It's a first step, but please don't declare the first as the ultimate and only standard. So I agree with a lot of what you say, and probably fully agree with how you can use it today, but I am looking a bit more long term.

I'll be curious to see if we'll see more vertex/pixel shader supporting games because of Cg OR because the hardware that can do them will actually become mainstream...

K~
 
gking, but that answer gets us right back to the trace function ... what use is it without a scenegraph API?

When trace is something that can realistically be run on GPUs, then adding a scenegraph specification library that sits on top of Cg (sort of like how RIB sits on top of Renderman SL) will be necessary. Until then, adding the feature doesn't make a whole lot of sense, since we'd likely end up with an implementation that doesn't map as well to available hardware as it could, and it's completely untestable, anyway.

It's not as if programming languages haven't seen revisions before. C and C++ have both gone through numerous revisions (and significant changes) since "standards" were defined.

like Cg already seems to be a dialect of MS Shading Language

I don't think this is a fair statement. It was obvious that higher-level control structures were necessary if shaders were going to become a common (even expected) feature in games, and it's not really surprising that multiple companies were independently able to design languages that bore more than a passing resemblance to each other (Cg, HLSL, and OGLSL).

The moment it outputs to IP-encumbered (not freely implementable) OpenGL extensions it is in essence Glide

So are you arguing that chip developers should avoid adding new features into hardware until some standardizing body decides that everybody can have that feature? Is the graphics industry supposed to be some Communist collective? Do you realize how slowly the ARB reacts to anything? I don't want to wait 6-12 months before I can program the features of the graphics card I bought -- I want to do it as quickly as possible.

And while the output is API compatible, it's tailored to specific hardware in instruction ordering, instructions available, register count etc. as long as there are no other compilers

You do realize that vertex shaders you write in DX8 VS1.1 aren't necessarily the vertex shaders run by hardware, right? The output is the same (or it should be, assuming the conversion is bug-free), but the resulting hardware vertex shader is better optimized for the specific target platform. This conversion/optimization happens automatically in the driver, without programmer intervention.

That is what we want, but as long as such a language is not accepted as a true standard by the industry you are stuck with a dialect, something that does not work properly with all hardware

The way ANSI C was created wasn't by a bunch of academics/industry folk designing a language over dinner -- 4 or 5 different companies released languages that were similar but incompatible. Over the next 10 or 20 years, the programmers of the world weighed the good and bad features of each implementation, and then debated which should be included in the language. The only way a true standard will emerge will be for all the competing shading languages to duke it out, and the best features of each will be melded into a similar, but different, language.

And if you're going to argue about standards, please choose a different company than Microsoft to rally behind. VS.NET still manages to break a number of requirements of the ANSI C and C++ specifications, and their unique interpretations of network and security protocols leave a lot to be desired.
 
So are you arguing that chip developers should avoid adding new features into hardware until some standardizing body decides that everybody can have that feature? Is the graphics industry supposed to be some Communist collective? Do you realize how slowly the ARB reacts to anything? I don't want to wait 6-12 months before I can program the features of the graphics card I bought -- I want to do it as quickly as possible.

Yes I do. I don't see ATI, Matrox, 3Dlabs in such a hurry to shove their own compiler down developers' throats. You speak as though everyone owns an Nvidia card; unfortunately for you not everyone does, and there is more than one graphics card company in the PC industry.
Why would I want to purchase a game that was developed with Cg if I don't own an Nvidia card? What's it doing for me... the .pdf shows Renderman doing the Phong shader in 3 lines of code vs. Cg's two lines of code; that isn't a huge difference??
To even state 'communist' in that sentence is pathetic; part of a democratic world relies on boards (consider any democratic government) to ensure that not just ONE side is being heard.


I don't want to wait 6-12 months before I can program the features of the graphics card I bought -- I want to do it as quickly as possible.

Who doesn't want HLSL graphics sooner? But when one company is marketing one specifically optimized for their cards only, how does that improve the graphics industry as a whole, and why do we need three (or FOUR, including Renderman) standards?
 
Doomtrooper said:
Who doesn't want HLSL graphics sooner? But when one company is marketing one specifically optimized for their cards only, how does that improve the graphics industry as a whole, and why do we need three (or FOUR, including Renderman) standards?

Gotta agree with you in general. I'm disappointed about the lack of openness in Cg's approach; from the current lack of feedback it seems pretty reasonable to assume other companies like ATi or Matrox won't go through the pain and develop their own optimizations for Cg, they'll probably either just sit it out until MS' HLSL or OGLHL come around or release something of their own. As long as it's only Nvidia I don't see a whole lot of developers really using Cg to its fullest in development either, which would kind of defeat the purpose of introducing such a language now, and calling it a standard is a joke in itself...
I guess it can still be used for practice (following languages will probably be similar enough, especially DX9 HLSL) and as a tool to generate shader code to tweak by hand, which is kinda useful but far from being revolutionary.

As for the different standards, I think there are going to be even more than just four around. Don't forget Stanford's shading system, which I would actually swap for Renderman in your argument, as Renderman is plainly not suited to realtime graphics and thus is not gonna compete against Cg, HLSL, OGLSL or whatever else gets thrown into the mix in the future, whereas Stanford's system probably will.
I still don't see why to make such a negative fuss about it though; despite its lacks and letdowns I don't see Cg as being a bad thing, just less than it could have been. Some companies always have and always will push ahead with what they think is the right way to do something, not waiting for the industry as a whole, because the standardizing bodies are often far too slow and cumbersome. It's a common way to advance the industry and happens all the time. Things appear, innovate a little in their time and disappear; I'm pretty sure Cg will fall into that category. How long has hardware supported programmable vertex and pixel shaders and other things that even today are only exposed through vendor-specific extensions in OGL? When will OGL 1.4 and 1.5 finally be approved?

For comparison, just look at the internet, which could almost be considered a worst case scenario IMHO - HTML didn't evolve fast enough, so both Netscape and Microsoft started introducing new tags and tried improving on existing tags and features, but only their specific browsers supported these. That caused incompatibilities and additional work for the webmasters, but in the end resulted in better looking websites, sooner. Sure, all those features were on the list of things to be added in the next HTML revisions, but those were taking ever longer to get approved, so companies got sick of waiting. Concerning the negative impact (incompatibilities etc.), things aren't nearly going to be as dramatic in the graphics market IMHO; at least Nvidia seems to be trying to stick with already existing standards (DX, OGL), even if not directly supporting their competitors (PS1.4 or ATi OGL extensions), but who seriously expected them to?

Future languages will pick what they like from Cg and the others that follow; after a couple of years we might even end up with just one standard, who knows...
 
Gollum said:
from the current lack of feedback it seems pretty reasonable to assume other companies like ATi or Matrox won't go through the pain and develop their own optimizations for Cg

The problem is, I think there is no method for them to be able to do this. When I asked Kirk if they would allow other vendors to create their own compilers the answer was basically "why confuse things even further".
 
I don't see ATI, Matrox, 3Dlabs in such a hurry to shove their own compiler down developers' throats

And if they were, you might see developers using ATI, Matrox, or 3D Labs hardware more often. Developers have been complaining about the complexity of creating shaders (and managing the artpath for programmable GPUs) since they were seeded with NV20 hardware. It's about time a professional-level tool were available.

To even state 'communist' in that sentence is pathetic; part of a democratic world relies on boards (consider any democratic government) to ensure that not just ONE side is being heard

Right, and arguing that companies shouldn't add features before a standard exists (i.e., other companies have it, too) is a form of communism. Standards are created out of the marketplace -- not a committee. There may be 5 or 6 different shading languages for 5-10 years until the market has gained enough experience in deciding what makes a good real-time shading language that an official standard can be created.

Designed by committee is not the same as a standard.
 
gking said:
I don't see ATI, Matrox, 3Dlabs in such a hurry to shove their own compiler down developers' throats

And if they were, you might see developers using ATI, Matrox, or 3D Labs hardware more often. Developers have been complaining about the complexity of creating shaders (and managing the artpath for programmable GPUs) since they were seeded with NV20 hardware. It's about time a professional-level tool were available.

To even state 'communist' in that sentence is pathetic; part of a democratic world relies on boards (consider any democratic government) to ensure that not just ONE side is being heard

Right, and arguing that companies shouldn't add features before a standard exists (i.e., other companies have it, too) is a form of communism. Standards are created out of the marketplace -- not a committee. There may be 5 or 6 different shading languages for 5-10 years until the market has gained enough experience in deciding what makes a good real-time shading language that an official standard can be created.

Designed by committee is not the same as a standard.

The PC industry is not a console market, and just as AMD has provided for processors, there should be choices for consumers.
Having code optimized for certain games on certain hardware is a backwards step today; developers need a standard language for all hardware.
It's much different when talking consoles, as there are no other players, and I still stand by my statement that Microsoft and the ARB will do a much better job... we will see.

Edit:

Adding features is fine, just make the extension non-proprietary and all things are well. Remember what the Open part of OpenGL originally stood for ;)
 
This sounds like the way Java is controlled. Mainly it's controlled by one company, which is Sun; however, other companies can make their own version as long as it's based on some standard.

What is the big deal? Nvidia is opening it up for compilers for ATI and other companies. Sure, other companies have to create their own compiler, but so what?

Some of you guys are freaks. I mean, you are so scared about Microsoft controlling everything, and now you are scared of Nvidia as well.

Why don't you just live underground in Montana and be scared the rest of your life.

If it was ATI or anyone other than Nvidia or Microsoft that made the same announcement, a lot of you would have welcomed it.

Life is too short to bitch all the time. Get a wife/girlfriend.
 
Docwiz said:
Nvidia is opening it up for compilers for ATI and other companies.

Where does anything say that? That sounds completely at odds with what David Kirk told me when I asked if other companies were free to make their own compilers.

Life is too short to bitch all the time. Get a wife/girlfriend.

I've asked you to contribute something useful before - this isn't it.
 
Well, let me post some questions to you all who are concerned about this:

If Cg outputs generic code that is compatible with all DX8 hardware, is this any worse than what DX's HLSL will do when it's released? Or are you all expecting Microsoft to implement a compiler that produces optimized code for every video card?

What about OpenGL, same situation?

Why is it OK to have two languages (one for DirectX, and one for OpenGL), but not five or six?

What is there to prevent ATI and Matrox from creating their own "Cg" variant that uses the same syntax and compiles optimized code for their cards (i.e. so that you can copy/paste your code from NVIDIA Cg into ATI Cg, so you only have to write it once)?

How "open" do you think OpenGL is anyway? Last time I looked at the ARB list, it consisted entirely of companies that make graphics hardware, Microsoft, Apple, Intel, and a few OEM PC companies (what the hell do Dell and Hewlett-Packard have to do with OpenGL anyway?). No consumer interest parties, and no game developers. All corporations. IMO that is essentially nothing more than a collection of dictatorships.

If you really want there to be one true standard, who is going to make it? Microsoft? The ARB? A grad student at some university? An independent third party company that will charge you for it (someone like Codeplay)?
 
If Cg outputs generic code that is compatible with all DX8 hardware, is this any worse than what DX's HLSL will do when it's released? Or are you all expecting Microsoft to implement a compiler that produces optimized code for every video card?

There are 2 main issues AFAIC:

1) I expect MS (and the GL ARB wrt OpenGL) to give more or less "equal" access and input to the compiler and the shading language itself, as opposed to one hardware vendor controlling the language completely, with "questionable" access to the compiler. This gives the controlling hardware vendor an unfair advantage. (Meaning -- they might not have a superior product / technology, but artificial barriers prevent other companies from competing on the same level.)

A very clear example is Cg's implementation with OpenGL. Cg will generate code that's "compatible" with OpenGL 1.3 that will run on "any" card, but it only outputs pixel shaders for nVidia hardware extensions. Yes, I think that is "worse" than what Microsoft's or the ARB's compiler will ultimately do.

2) I think it's more important to have ONE compiler that may not be "as" optimal for each piece of hardware, vs. several compilers. I think the compatibility issues when several compilers enter the picture outweigh the advantages. I already hear so many complaints about "different vendors having different driver issues." This would be compounded immensely, IMO, if we added a layer of different compilers into the mix.

Docwiz:
Life is too short to bitch all the time. Get a wife/girlfriend.

I've got a wife, a two year old, our second child due in about a week, and we're closing on our new house next month.

Life's too short to bitch about bitching, don't you think?
 
My issue is simple: I don't own an Nvidia card... if Cg takes off and developers optimize for Nvidia only, then we might as well throw the competition out the window, as who is going to buy a video card that runs slower and can't produce the same effects because the engine is optimized for one brand of card... if you value choice then it's a concern; if you wish to see one company dominate the PC game industry then I guess it's ok.
I really tried to keep an open mind on this until I saw gking's comment:
It's worth pointing out that "adding functionality" to DX often takes months after hardware is available because it *is* Microsoft-controlled, and even then not all functionality is added, as Microsoft plays political games with hardware manufacturers. The register combiners in NV1x and NV2x series chips are far more powerful than what Microsoft exposed with texture environments and pixel shaders.

Again, there is an unfair advantage being given to only ONE manufacturer in this scenario. As I read his quote, this compiler will expose more performance on Nvidia boards only, and from what I read, creating your own profile to optimize for another GPU will not be easy.

There needs to be some form of referee here, otherwise the big-money graphics card companies will have developers being pulled one way or the other, much like the old Glide days (my shader is better... no, my shader is better)... as much as I liked Glide. Classic example: Unreal Tournament... the engine had software, Glide, Metal, D3D and OGL renderers :rolleyes:
 
There's nothing stopping game developers from only optimizing for NVIDIA cards as it is, so if your paranoid fears of a single company dominating the market are correct, it probably began long before Cg was announced. I also don't think anyone honestly believes Cg is going to be an industry standard in its current form and without support from other hardware manufacturers, but that doesn't make it useless. Personally, if I were a game developer, I'd be wondering why ATI hadn't provided me with a similar program (or at least an announcement that they would be providing their own optimized compiler/profile/whatever to work with the Cg front end).

And considering it's been stated many times that John Carmack wrote his own shader language, I don't see why he would care about Cg. Take comfort in the knowledge that he won't be amongst this throng of NVIDIA-only game developers you fear will take over the industry :)
 
I'm not worried; there are always the consoles, which are becoming more and more attractive given the politics in PC gaming these days.
I do think that one of the finest programmers in the world would say 'something' about it 'if it's all that' :-?
 