Battle of three HLSLs: OpenGL 2.0 / DX9 / Cg

Doomtrooper said:
You can harp all you want, Chalnoth. Yes, ATI is the only one that supports PS 1.4, yet it is still a part of DX 8.1 and should not be left out of support if Cg is supposed to be DX friendly.

There is no reason Cg cannot output PS 1.4 assembly; it's just that ATI needs to write the profile for it. Why should NVIDIA write a profile for PS 1.4 if none of their hardware supports it?
 
Um, no. nVidia produced HLSL. Microsoft just implemented it.

Is that so. :rolleyes:

I wish you people would get your stories straight. Cg = HLSL! No, wait, Cg != HLSL, but they are compatible....

So now the story is that MS took Cg and "implemented it" as their HLSL? I was under the impression that they were different, although there was some joint work to ensure some level of compatibility. Perhaps this is what gave me that impression:

http://www.cgshaders.org/articles/interview_nvidia-jul2002.php

q: Cg has been said to be compatible with DX9's HLSL, but does that imply that Cg is potentially a subset of DX9's HLSL, or are they effectively the same language?

a: Cg is not a subset of Microsoft's HLSL. NVIDIA and Microsoft have been working to create one language, but since we both started with products created separately, we're both moving toward that compatibility from slightly different directions. After the gold release, we expect that Cg programs and HLSL programs will be fully interchangeable.

Because changing syntax would fragment the language, which would bring us back to when software developers needed to write things entirely differently on different hardware.

You mean, bring us back right to where we are now with GL extensions? You still haven't answered my question though. Why should nVidia control it? They have the best answers?

Well, they're going to be supporting HLSL in DX9 (they don't have a choice), why not make their own compiler for OpenGL as well?

Probably because the only GL games coder of significance has already committed to GL 2.0? So whatever path the ARB decides to take, ATI will follow.
 
Cg was basically a marketing tool designed to encourage developers to write complex shaders--not in itself a very sinister goal. nVidia thought that their nV3x products would be unique in their ability to run complex shaders at acceptable performance levels, so having widespread adoption of Cg by developers would encourage gamers to buy nVidia cards to see the latest bells and whistles. I don't think nVidia had any agenda besides getting a HLSL out to developers and gamers' mindshare sooner than either DX9 or OpenGL 2 would.

However, nVidia didn't execute and ATI did, so ironically the best card to see complex shaders in action for a 6 month span is an ATI card. There is no question but that developers producing DX9-class games will have to take ATI cards' capabilities into account. This has made Cg kind of irrelevant, although I think it might serve ATI to produce a nice Cg backend to support their card with OpenGL.

Cg is a weapon designed to fight a battle that has been lost on a different battlefield.
 
OpenGL is more important than Direct3D because it is more widely used (it is portable and runs on almost all platforms). It has survived this long not only because John Carmack uses it, but because IHVs can add extensions and evolve the API as they see fit.
The ARB's role is to judge those new additions.
The OpenGL ARB is, IMO, doing the right thing; they may be slow, but they are doing it right. When adding extensions to the core GL or promoting them to ARB status, the ARB looks at all available extensions and chooses the best one, or merges similar extensions into a new one.
Extensions are a great way of innovating in OpenGL: if one IHV screws up a piece of new functionality, another IHV can do it better, so when adding new functionality to the core GL the ARB has more options and more working implementations of the same (or almost the same) functionality.
If you look at the last ARB meeting notes at www.opengl.org, you can see that the ARB is trying to take the best elements of Nvidia's Cg and 3Dlabs' glslang for the high-level language (more firms are participating, and ATI is leading this group). This takes more time, but in the end the new implementation is better for everybody and can last longer. Microsoft usually doesn't get things right on the first try. Direct3D got better from version 7 onward, when they made it look more like GL, and Microsoft made a mistake with DirectX 8 when they licensed vertex programs from Nvidia, but Microsoft needed some new buzzwords. With Microsoft it is all about marketing.
What does all this have to do with high-level languages?
At the moment it is better that there are more options, because each one has its good and bad points. I think some sort of final high-level language will happen when we get unified vertex and fragment programs in hardware from more than one IHV.
 
antlers4 said:
Cg was basically a marketing tool designed to encourage developers to write complex shaders--not in itself a very sinister goal. nVidia thought that their nV3x products would be unique in their ability to run complex shaders at acceptable performance levels, so having widespread adoption of Cg by developers would encourage gamers to buy nVidia cards to see the latest bells and whistles. I don't think nVidia had any agenda besides getting a HLSL out to developers and gamers' mindshare sooner than either DX9 or OpenGL 2 would.


If Cg is a marketing tool designed to make people write longer shaders, then what the hell are OpenGL 2.0 and DirectX 9 HLSL? Are these also evil marketing ploys by 3DLabs and others to do the same thing?

I mean, are you seriously suggesting things happened like this:

Marketing Guy: We need a way to force people to write long shaders.
Marketing Guy #2: Let's get the tech guys to invent a language to make writing long shaders easy.

Tech Guy: (talking to marketing guy) Ok, so you want us to invent a programming language and compiler so that we can lock developers into writing longer shaders and, hopefully, into our hardware. Ok, got it.
Tech Guy #2: Ok, I did what the marketing guy told me. I went and hacked up a C parser real quick, removed some stuff, and made a compiler for DX8, ARB, and our own NV30 extensions.

Marketing Guy: Thanks, now can we get developers to write NV30-only stuff, Muahahah!

In this scenario, the Cg concept was invented by marketing strategy, and didn't come from engineers trying to come up with innovative ways to program ever more complex hardware.

You see, I would suggest that it happened more like this. NVidia's NV30 and ATI's R300 are the natural evolution of a trend towards more programmability. As CPUs/GPUs get more complex, the apps that you can write become more complex, and the details of the pipeline timings become too much for most assembler programmers to juggle.


The logical result of this trend is to design high level languages. I would suggest that when NVidia started designing the NV30, no HLSLs for real-time cards (OGL/DX) existed in the public marketplace. A group of software engineers got together and designed a language for programming more complex shaders. This was a prototype research project for a while, finally got elevated in status, was given a marketing trademark (Cg), and Nvidia started evangelizing it.

It just so happens that other companies were also working on their own HLSLs: 3DLabs and Microsoft. There are probably more out there that we don't know about at this moment. None of them was creating its HLSL for a specific anti-competitive purpose. These languages are the natural result of general-purpose programmable hardware coming into existence.

You are going to see MORE languages being invented in the next few years, not fewer. Programmers LOVE inventing new languages; that's why there are so many of them.

I myself think pure functional programming fits GPUs better than C-like languages do, because GPUs can't modify external state during shader execution and can only return a final "result" at the end of the program, which nicely parallels pure functional programming. Thus, I think a pure-functional Lisp- or Scheme-style shader language would actually be better, and there are more optimizations you can do, because functional programming permits equational reasoning, much like symbolic Mathematica-style math packages.
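To make that concrete, here's a rough sketch in plain Haskell (not any real shading language; the Vec3 type and the helpers are made up purely for the example) of a fragment "shader" as nothing but a pure function. Because the whole thing is one side-effect-free expression, a compiler can apply ordinary algebraic identities to it without worrying about hidden state:

[code]
-- Hypothetical vector type and helpers, purely for illustration.
data Vec3 = Vec3 !Float !Float !Float

dot3 :: Vec3 -> Vec3 -> Float
dot3 (Vec3 ax ay az) (Vec3 bx by bz) = ax*bx + ay*by + az*bz

scale :: Float -> Vec3 -> Vec3
scale s (Vec3 x y z) = Vec3 (s*x) (s*y) (s*z)

-- The whole "shader": one set of inputs in, one colour out, no side effects.
diffuse :: Vec3 -> Vec3 -> Vec3 -> Vec3
diffuse lightColour normal lightDir =
    scale (max 0 (dot3 normal lightDir)) lightColour
[/code]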

Some other folks might think that stream-based languages might fit better. And still others might think concurrent languages are a better fit.


High-level languages go beyond the mere small world of ATI vs NVidia, and the idea that engineers at Nvidia are designing these things because they want to play anti-competitive games, and not because they want to build something technically cool, is, well, I think, absurd.

I've been in the industry for almost 2 decades now, and I have worked at several large "evil" corporations. The technical guys who work in the engineering departments were just as geeky and clueless about marketing as the average d00d on say, Slashdot. Most of them had huge egos and cared more about beating other engineers or doing something technically or insanely great to get some R-E-S-P-E-C-T and had little concern for grand conspiracy theories.

I mean, do you think OpenGL Guy is sitting around thinking about ways he can use OpenGL drivers to disadvantage NVidia, or ways they can do political maneuvering at the ARB to hurt NVidia, or do you think he is more concerned with optimizing the OpenGL driver, inventing the next great cool OpenGL extension, or fixing bugs because the IHVs/ISVs are breathing down his ass with nasty emails? And if he does invent a cool new OpenGL extension that Nvidia hardware can't implement, and he lobbies developers to use it, is he deliberately trying to be anti-competitive, OR is he just trying to get people to utilize the cool work he has done?

I think all too often, people impute malice and ill-will where there was none.
 
RussSchultz said:
There is no reason Cg cannot output PS 1.4 assembly; it's just that ATI needs to write the profile for it. Why should NVIDIA write a profile for PS 1.4 if none of their hardware supports it?

NV30 will support PS 2.0... PS 2.0 is backwards compatible from 1.1 to 2.0... so their hardware will support it. As for Nvidia writing a profile, well, quite simply, on their FAQ they state DX8, not DX 8.1, so that is their loophole, although Cg is producing inferior code due to the lack of PS 1.4 support... as seen on the Cg shader forums... proprietary extensions and the lack of PS 1.4 are a hot topic ;)
 
Evildeus said:
Doomtrooper said:
I don't like companies that try to monopolize a market
ROFLMAO, what do you think is the target of ANY firm? :LOL: You don't like any firm then, oops, Ati is a firm :rolleyes:

Yes, a company tries to gain market share, but there is a fine line with competitive business practices (anyone remember the original Athlon launch, where board manufacturers were threatened by Intel if they made an AMD board? In fact I received my first shipment of Asus K7Ms in white boxes with a very small manual containing jumper settings... no identification on the board, manual or box... no BIOS support for months). It was the only board on the market for months. If people endorse that kind of practice, be my guest.

The same applies here. At the time of the Cg launch, Nvidia was making record profits and owned 60% of the desktop market... a strong force trying to take more. IMO Cg is the same thing as above: they are trying to steal developer support by seducing developers with their own HLSL. I don't want to hear this BS about what we would be using now if Cg wasn't here, as the DX8 user base still is not substantial enough for us to have shaders in all titles tomorrow; there was software out there prior to Cg, and tools like RenderMonkey also help... DX9 HLSL is around the corner, and so is OGL 2.0.
All the Cg lovers out there would take a second look if it took off, with developers coding only for Nvidia cards and the other players going under because no one buys their hardware. Yes, it could happen... in Neverwinter Nights 2 you need a GeForce 3 to see water and spell effects, and Unreal 2 needs an Nvidia card to see pixel shader effects. Who is going to buy the card that doesn't show these effects? Proprietary OGL extensions ensure that.
These same people that love Cg can also enjoy paying ridiculous amounts of money (as if cards are not expensive now) should one company come to dominate sales without ever facing any real challenge from a competitor.
If people think CPU prices are low because Intel wanted to be nice, they need their head examined. AMD has made the processor upgrade affordable again, and in the ideal situation two or three companies battling for market share means good things for us... not one.
 
No, Cg could not put other vendors "under", because anyone can implement a Cg compiler. Let's take the hypothetical scenario that 99% of developers are writing Cg shaders, yet Nvidia won't produce a Cg backend that can optimize for the R300.

Well, the Cg language grammar is public information, and so is the runtime. So ATI could simply implement their own compiler. Moreover, ATI could take Nvidia's source and skip most of the work. Finally, ATI could even FORK Nvidia's Cg Compiler, and if ATI produced a better implementation, they could actually steal Cg from Nvidia with most developers using ATI's implementation.

Think this situation can't happen? Both IBM and Microsoft stole Sun Microsystems' thunder when it came to Java. For many years, between JDK 1.1.4 and Java 1.3.1, both IBM and Microsoft had far faster implementations, and most developers preferred to use and deploy against those VMs, to such an extent that Microsoft was able to get away with extending the Java language and had to be taken to court by Sun to stop them. Even though Sun technically controlled the spec and the initial implementation, they had the worst implementation. If you downloaded any apps in those days, they came with README files that urged you to compile/run with IBM's or Microsoft's VM.


The hypothetical control of the syntax of a language doesn't do JACK. In theory, ANSI/ISO dictates what goes in C/C++. But developers tend to use what works best, and as a result, on x86, there are oodles of extensions to C/C++ and many programs today rely on those extensions, especially when it comes to Windows programming.


I'm telling you, you have the chicken and egg reversed. There is no way the requirements for Cg the language came out of marketing or business strategy, just as Glide's didn't. It came from the need for software developers at Nvidia to create something to make it easier to develop software for the next-generation hardware. Plain and simple. Nvidia has shipped several developer tools. Do you think NVASM, which compiles DX8 shaders and OpenGL shaders, is part of an Nvidia master plot?

The fact of the matter is that TODAY there is no tool that can compile DX9 HLSL into OpenGL. You can talk all you want about the OpenGL 2.0 SL; please wake me up when OpenGL 2.0 is ratified as a standard and 2.0 drivers are shipping. Nvidia simply did the natural thing and produced a CROSS COMPILER. If Nvidia hadn't done it, someone else would have.

And like I said (though you ignore it over and over and over): there exist *THOUSANDS* of programming languages today. There is never going to be a single shading language for writing 3D apps on 3D hardware. There won't even be two languages. Never going to happen. Very shortly, there are going to be dozens of them.
 
Democoder said:
I think all too often, people impute malice and ill-will where there was none.

Does saying that something is a marketing strategy necessarily imply that it is evil or even technically clueless?

Anyway, I'll rephrase. Cg, the C-like HLSL originated by nVidia engineers is not a marketing strategy. Cg, the trademarked, nVidia-controlled HLSL promoted as a future industry standard outside the auspices of DirectX or OpenGL is a marketing strategy (in much the same way that Rendermonkey, along with being some neat technology, is a marketing strategy).

When people talked about the battle between the three HLSLs, I thought they were talking about them as marketing strategies.
 
I don't see it as more of a battle than the battle between Java, C#, C/C++, Delphi, SmallTalk, Visual Basic, Eiffel, Ada, or even Cobol. Hell, in just the web programming market we have probably a dozen major language players: perl, python, php, asp, jsp, tcl, c/c++, cold fusion, webmacro/velocity, webobjects, javascript, rebol, etc., and each of them has a significant developer community.

In the past, languages have continued to evolve and split off, and the developer community keeps branching. It's simply not enough to have one language, as different people have different syntax preferences and differing views toward language analysis. Like I said, I think functional syntax is more of a match and more elegant, but other people find functional syntax frustrating.


I mean, if your point is "NVidia is trying to attract developers to their software, and hence buy/develop for their hardware", well, of course! There is no company in existence that isn't trying to get more customers. Even amongst identical languages (two vendors both offering a C compiler), vendors try to "one up" each other, and they often introduce extensions or other features into the platform or libraries to do it.

But this assertion is trivially true, so I don't see any new information contained in it. What I do think is silly is the ATI flame group sweating bullets over Cg. Microsoft is poised to take over ANSI C++ at this point with .NET and even hurt Java, but I'm not sweating. It's just one in a long line of languages I've had to learn over the years to do my job. About every 5 years there is some "new big thing", everything I learned beforehand to have marketable skills goes out the window, and I throw out all my old code libraries.

Ain't no big thing. Tomorrow, there will be yet another language to develop in.
 
Yep, Haskell is "pure functional", but I'll probably rip out a lot of the unneeded stuff to get something simpler. The gotcha is that texture fetches are akin to I/O or external state, so you'd have to model that using monads and continuations.
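Something like this minimal Haskell sketch is what I mean (Sampler, Vec2, Vec4 and texFetch are all hypothetical stand-ins, not a real API): the texture fetch becomes the only operation sequenced through a monad, and everything else stays pure:

[code]
data Vec2 = Vec2 !Float !Float
data Vec4 = Vec4 !Float !Float !Float !Float

-- Stand-in for "the texture unit": just something we can sample.
newtype Sampler = Sampler (Vec2 -> Vec4)

-- A shader computation that may perform texture fetches.
newtype Shade a = Shade { runShade :: Sampler -> a }

instance Functor Shade where
  fmap f (Shade g) = Shade (f . g)

instance Applicative Shade where
  pure x = Shade (const x)
  Shade f <*> Shade g = Shade (\s -> f s (g s))

instance Monad Shade where
  Shade g >>= k = Shade (\s -> runShade (k (g s)) s)

-- The one "effectful" primitive: sample the bound texture.
texFetch :: Vec2 -> Shade Vec4
texFetch uv = Shade (\(Sampler f) -> f uv)

-- Ordinary pure math, sequenced around the fetch.
modulate :: Vec4 -> Float -> Vec4
modulate (Vec4 r g b a) k = Vec4 (r*k) (g*k) (b*k) a

fragment :: Vec2 -> Float -> Shade Vec4
fragment uv nDotL = do
  albedo <- texFetch uv
  pure (modulate albedo (max 0 nDotL))
[/code]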

What I would be aiming at is a symbolic description of the lighting being used, such that the compiler can make the best choice about how to evaluate it given the precision requirements and the underlying hardware's performance. I realize this may border on having a theorem prover in the compiler, but imagine a compiler smart enough to know that using the reflection vector is the same as using the half-angle vector, or one that knew about trigonometric identities and could apply Newton-Raphson automatically for approximations. I want the compiler to be able to make the same kind of sensible high-level trade-off optimizations that shader hackers do today.
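As a toy illustration of the kind of equational rewriting I mean (the Expr type and the two identities below are made up for the example; a real compiler would need a much richer rule set):

[code]
data Expr
  = Var String
  | Lit Float
  | Add Expr Expr
  | Mul Expr Expr
  | Sin Expr
  | Cos Expr
  deriving (Eq, Show)

-- Bottom-up simplification using a couple of identities the compiler "knows".
simplify :: Expr -> Expr
simplify e = rewrite (descend e)
  where
    descend (Add a b) = Add (simplify a) (simplify b)
    descend (Mul a b) = Mul (simplify a) (simplify b)
    descend (Sin a)   = Sin (simplify a)
    descend (Cos a)   = Cos (simplify a)
    descend other     = other

    -- sin(x)*sin(x) + cos(x)*cos(x)  ==>  1
    rewrite (Add (Mul (Sin a) (Sin b)) (Mul (Cos c) (Cos d)))
      | a == b && b == c && c == d = Lit 1
    -- multiplying by 1 is a no-op
    rewrite (Mul x (Lit 1)) = x
    rewrite (Mul (Lit 1) x) = x
    rewrite e'              = e'
[/code]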


So one step is raising the bar on equational reasoning. The other step is to rein in the language a little bit so that programmers are forced to write in a way that gives the compiler more of an idea about the contents of registers, so that it can make intelligent decisions about register allocation, which components have 0 or 1 in them, etc.

Granted, you don't need functional syntax to do this, and procedural languages can have this kind of optimization done too; it's just harder, and the compiler often has to err on the side of conservatism when it comes to guessing the value of a register.
 
I myself think pure functional programming fits GPUs better than C-like languages do, because GPUs can't modify external state during shader execution and can only return a final "result" at the end of the program, which nicely parallels pure functional programming. Thus, I think a pure-functional Lisp- or Scheme-style shader language would actually be better, and there are more optimizations you can do, because functional programming permits equational reasoning, much like symbolic Mathematica-style math packages.

Yay! I'm not the only one who thinks this!

There won't even be two languages. Never going to happen. Very shortly, there are going to be dozens of them.

Heh, no kidding. I can rattle off a few that have been around since before all this Cg hullabaloo (e.g. pfman, ISL, ESMTL, etc.)... With today's hardware and with what's coming in the future, one could probably just lump RenderMan in there as well...
 
Yay! I'm not the only one who thinks this!

Make that 3 of us......

The problem though is going to be selling a functional language to the general development community, and I just don't see that happening.

Personally I'm rather fond of the Miranda syntax.

Although any functional shader language implementation would probably have to be sans lazy evaluation, the opportunity for function level optimisation is certainly intriguing.
 
The problem though is going to be selling a functional language to the general development community, and I just don't see that happening.

Me neither... Especially when it's often referred to as one of the "bad" experiences of CS students' first years... :(
 