Which API is Better?

  • DirectX9 is more elegant, easier to program
  • Both about the same
  • I use DirectX mainly because of market size and MS is behind it

Total voters: 329
DemoCoder said:
What you say is true Simon (you can use inlining to undo CSE), but that wasn't the point. The point is, MS's compiler generates code which places additional workload on the driver developer to "undo", thus falsifying the premise that MS's compiler makes IHV drivers easier to develop.
No. It makes developing an optimal translation more difficult but a "naive" translation very easy.

We've been through this several times already. GLSlang provides the opportunity for greater levels of optimisation but assumes the IHV will do a good job and that may take some time. The HLSL will get to a reasonable result faster (assuming HW that is "close" to what is actually described in DX8/9!) but, in the long run, probably offers far less opportunity to get to optimal code.


We're just going around in circles.
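To make the CSE point above concrete, here is a purely illustrative C++ sketch (the vec3 type, the function names and the lighting expression are all invented for the example; real compilers of course work on shader IR, not C++). The second form is roughly what a front-end that performs CSE hands downstream, and re-inlining that temporary is the kind of "undo" work the driver is being asked to do.

Code:
#include <cmath>

struct vec3 { float x, y, z; };

static float dot(const vec3& a, const vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// As the shader author wrote it: dot(n, l) appears twice in the expression.
float lighting_as_written(const vec3& n, const vec3& l, float kd, float ks, float p) {
    return kd * dot(n, l) + ks * std::pow(dot(n, l), p);
}

// After CSE: the common subexpression is computed once and held in a
// temporary. Hardware that would rather recompute the dot product than
// keep an extra register alive needs the driver to notice the temporary
// and inline it again.
float lighting_after_cse(const vec3& n, const vec3& l, float kd, float ks, float p) {
    const float n_dot_l = dot(n, l);
    return kd * n_dot_l + ks * std::pow(n_dot_l, p);
}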
 
Humus said:
JohnH said:
A conditional should be left as a conditional, no argument there, although many of the combinations you list are _easy_ to convert back into something different (generally all but LRP).

Well, case upon case upon case... ultimately you'll find that all the cases provided so far in this thread where an IM format is inadequate lead us closer and closer to driver-side HLSL compilation. There's not much left that the MS compiler can do that is universally useful. It will end up doing nothing really, except making the driver programmer's life more miserable trying to sort out what it screwed up.
I suggest you actually go away and write a compiler that works with the IM format; when you do, you'll see that none of this is a big deal.
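For readers following the LRP point in the quote above, here is a minimal sketch of the two forms being argued over (the function names are invented, and the C++ is only standing in for shader code):

Code:
// As written in the high-level shader: a plain conditional select.
float select_as_written(bool cond, float a, float b) {
    return cond ? a : b;
}

// As it may arrive in the IM stream once flattened into an LRP-style blend,
// with s expected to be exactly 0.0f or 1.0f. Producing this form from the
// branch is mechanical; proving that a given LRP is "really" a conditional,
// so it can be mapped back to a branch or a conditional-select instruction,
// is the awkward direction for the driver.
float select_flattened(float s, float a, float b) {
    return b + s * (a - b);
}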

JohnH said:
If your HW is only really capable of supporting profile X, and that's all you attempt to expose, then it's a whole lot easier. Problems only start to arise when someone gets it wrong and then attempts to claim that something is something it isn't.

Except that you also need to support Y, Z, W, U, G, E and A, which were released in previous API revisions.
I think you'll find over the next five years it won't go beyond four or five, e.g. pre-2.0 (a simplification that I think you'll find a lot of ISVs agree with), 2.0, 3.0 and 4.0. That's four profiles, with an awful lot of longevity.

Profiles are a solution to a problem that you have repeatedly failed to address.

You have failed to explain in what way profiles solve anything.
I have repeatedly explained what profiles provide; I really can't be bothered to type it all in again.
Yes, probably quite idle; irrespective of their presence or the compiler used, I think you'll find that the majority of shaders in Half-Life don't use sincos.

That's beside the point. What about tomorrow? What hardware will be left unused in the future because the MS compiler didn't support it? How about this: I've read (but don't really know if it's true) that ATI has a MAD2X instruction in the R300, that does 2 * a * b + c. It's not exposed through any profile, thus left unused. It would be useful for instance for accelerating the reflect() function, and cut down the instruction count from 3 to 2. But no, MS will force an unnecessary MUL instruction in there.
It's exactly the point. Ignoring the fact that you seem to think it's impossible to optimise something once it's been converted to IM format (incidentally, any case where you're multiplying by a static constant is easy to spot), you also seem to think it's a good idea to introduce more and more complicated operations into the shader. I suggest you look at the history of CPU development. Spending your area accelerating basic functions will give greater overall returns than picking out high level functions and adding dedicated HW.

Of course you're completely ignoring the HUGE difficulties created by having to support all features in GLSlang if you expose the extensions; this will only go away when HW that fully supports it is available, and even then you're still left with the question of what to do about the huge pile of legacy HW out there.

No. That's easily fixed. The software renderer takes over. Like it has always done in OpenGL, and so far has worked just fine.

I think you'll find it doesn't work wrt "real time" gaming applications; generally, when talking about rasterisation functions, if it's SW it doesn't get used. Otherwise you just end up with titles that run at non-existent frame rates on some unknown proportion of HW. Maybe you can explain how that is an acceptable solution?


John.
 
Simon F said:
We've been through this several times already. GLSlang provides the opportunity for greater levels of optimisation but assumes the IHV will do a good job and that may take some time. The HLSL will get to a reasonable result faster (assuming HW that is "close" to what is actually described in DX8/9!) but, in the long run, probably offers far less opportunity to get to optimal code.

I hereby nominate Simon for Poster of the Year!
 
Simon F said:
We've been through this several times already. GLSlang provides the opportunity for greater levels of optimisation but assumes the IHV will do a good job and that may take some time.
No, it requires that IHV's will do a good job. More specifically, games that make use of GLSlang will require that IHV's do a good job with compilers. For one, I think we can be assured that John Carmack's next big project will make heavy use of GLSlang.

And, what's more, it makes it easier for IHV's to do a good job at optimizing.
 
Chalnoth said:
No, it requires that IHV's will do a good job....

Right. Requires more work / effort from the IHV.

More specifically, games that make use of GLSlang will require that IHV's do a good job with compilers. For one, I think we can be assured that John Carmack's next big project will make heavy use of GLSlang.

Yes, so we can be assured that in, what, 4 years' time (when Carmack's next big project hits the street), at least those IHVs with the resources "required" to have "good" GLSL compilers will have enough compiler support to play Carmack's engines.

And, what's more, it makes it easier for IHV's to do a good job at optimizing.

Which no one ever disputed.
 
JohnH said:
I think you'll find over the next five years it won't go beyond four or five, e.g. pre-2.0 (a simplification that I think you'll find a lot of ISVs agree with), 2.0, 3.0 and 4.0. That's four profiles, with an awful lot of longevity.

Yeah, like the trend ps2.0a set, right? You suggested earlier that adding specific profiles for certain hardware like this was the solution to all the problems profiles otherwise have. So which is it? Only a few profiles, but with performance problems, or many profiles with large maintenance problems?

It's exactly the point. Ignoring the fact that you seem to think it's impossible to optimise something once it's been converted to IM format (incidentally, any case where you're multiplying by a static constant is easy to spot), you also seem to think it's a good idea to introduce more and more complicated operations into the shader. I suggest you look at the history of CPU development. Spending your area accelerating basic functions will give greater overall returns than picking out high level functions and adding dedicated HW.

Yes, so when it detects that MUL and removes it, it should have another instruction slot free for use. But no, the profile will just fail to compile if the instruction count exceeds 64, regardless of whether the hardware can actually run it or not. Same with the added constant it adds.

It's not about adding more complicated operations. It's about letting IHVs construct their hardware the way they want, not having MS enforce a particular model. Further, MAD2X is almost for free in terms of transistors if you can do MAD.
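As an illustration of the instruction-count claim, and taking the rumoured MAD2X (2*a*b + c) at face value as Humus does above, here is a sketch of reflect() with one plausible schedule in the comments (the vec3 type is a stand-in, free source negation is assumed as on typical shader ISAs, and none of this is any particular driver's output):

Code:
struct vec3 { float x, y, z; };

static float dot(const vec3& a, const vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}
static vec3 operator*(float s, const vec3& v) { return {s * v.x, s * v.y, s * v.z}; }
static vec3 operator-(const vec3& a, const vec3& b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }

// reflect(i, n) = i - 2 * dot(n, i) * n
vec3 reflect(const vec3& i, const vec3& n) {
    float d = dot(n, i);            // 1: DP3   d, n, i
    // Without a fused op, two more instructions follow:
    //   2: ADD   d2, d, d           (or a MUL by the constant 2)
    //   3: MAD   out, -d2, n, i
    // With a MAD2X-style op (2*a*b + c) and free source negation:
    //   2: MAD2X out, -d, n, i
    return i - (2.0f * d) * n;
}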

JohnH said:
I think you'll find it doesn't work wrt "real time" gaming applications; generally, when talking about rasterisation functions, if it's SW it doesn't get used. Otherwise you just end up with titles that run at non-existent frame rates on some unknown proportion of HW. Maybe you can explain how that is an acceptable solution?

You need to test run it on all hardware anyway, so these problems will be spotted. Name one game or application where the OpenGL model has led to problems. I don't know of any.
 
Joe DeFuria said:
Chalnoth said:
No, it requires that IHV's will do a good job....

Right. Requires more work / effort from the IHV.

No. You'll be required to do a good job, just as you're required to in DX. The only difference is that it's easier to do in GL, so it will require less work.
 
Humus said:
No. You'll be required to do a good job, just as you're required to in DX. The only difference is that it's easier to do in GL, so it will require less work.

Round and round we go...

You'll be required to do a good job in GL, just to get a "naive implementation", unlike DX, where it's not as difficult.

....where she stops, nobody knows....
 
My doctor has told me to leave this thread alone, but one last one...

Humus said:
JohnH said:
I think you'll find it doesn't work wrt "real time" gaming applications; generally, when talking about rasterisation functions, if it's SW it doesn't get used. Otherwise you just end up with titles that run at non-existent frame rates on some unknown proportion of HW. Maybe you can explain how that is an acceptable solution?

You need to test run it on all hardware anyway, so these problems will be spotted. Name one game or application where the OpenGL model has led to problems. I don't know of any.
That's because titles deliberately avoid using rasterisation functions that aren't going to be HW accelerated. With GLSlang you have no idea what is and isn't going to end up being accelerated, as the matrix is massively more complicated. Testing on every single combination of HW is becoming impractical, it's impossible to test on HW that doesn't exist yet, and, no, new HW doesn't automatically mean more native GLSlang support.

Anyway, enough.
John.
 
Joe DeFuria said:
Humus said:
No. You'll be required to do a good job, just as you're required to in DX. The only difference is that it's easier to do in GL, so it will require less work.

Round and round we go...

You'll be required to do a good job in GL, just to get a "naive implementation", unlike DX, where it's not as difficult.

....where she stops, nobody knows....


Wrong Joe. A naive implementation in OpenGL looks like this:

a) take GLSL grammar
b) generate parser to create AST
c) generate fragment program from AST directly. No scheduling, packing, or optimizations of any kind used.

Steps a) and b) are done for you. You can download the grammar specification today and use a parser generator. Step c) is straightforward. Go buy a copy of a compiler book, turn to the chapter on instruction selection via tree rewriting. If you don't want to do any work, there are "Code Generator Generators" just like there are Parser Generators. These work on a bottom-up rewrite system from a grammar spec.

The only difference between the "naive" OpenGL implementation and the "naive" DX implementation is that the DX driver receives pre-parsed instruction tokens which it must do instruction selection for, and the OGL version receives source code, but the parser is already freely available. If anything, the OGL path is probably easier, since there are publicly available BURG systems to automate code generation from ASTs, but there is no publicly available tool that I am aware of for mapping DX instruction streams.
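To show how small step (c) really is, here is a toy bottom-up emitter in C++ (node kinds, register numbering and the printed instruction format are all invented; a real driver would emit its own binary encoding rather than text):

Code:
#include <cstdio>
#include <memory>
#include <utility>
#include <vector>

enum class Op { Const, Add, Mul, Dot3 };

struct Node {
    Op op = Op::Const;
    float value = 0.0f;                       // used only when op == Op::Const
    std::vector<std::unique_ptr<Node>> kids;  // operands of non-leaf nodes
};

static std::unique_ptr<Node> leaf(float v) {
    auto n = std::make_unique<Node>();
    n->value = v;
    return n;
}

static std::unique_ptr<Node> node(Op op, std::unique_ptr<Node> a, std::unique_ptr<Node> b) {
    auto n = std::make_unique<Node>();
    n->op = op;
    n->kids.push_back(std::move(a));
    n->kids.push_back(std::move(b));
    return n;
}

static int next_reg = 0;

// Emits one instruction per node, bottom-up, and returns the temporary
// register holding the subtree's result. No scheduling, no packing, no
// optimisation of any kind -- exactly the "naive" step (c).
static int Emit(const Node& n) {
    if (n.op == Op::Const) {
        int r = next_reg++;
        std::printf("MOV  r%d, %g\n", r, n.value);
        return r;
    }
    int a = Emit(*n.kids[0]);
    int b = Emit(*n.kids[1]);
    int r = next_reg++;
    const char* mnemonic = (n.op == Op::Add) ? "ADD" : (n.op == Op::Mul) ? "MUL" : "DP3";
    std::printf("%s  r%d, r%d, r%d\n", mnemonic, r, a, b);
    return r;
}

int main() {
    // Emit code for (1.0 * 2.0) + 3.0 straight off the tree.
    Emit(*node(Op::Add, node(Op::Mul, leaf(1.0f), leaf(2.0f)), leaf(3.0f)));
}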
 
JohnH said:
I have repeatedly explained what profiles provide; I really can't be bothered to type it all in again.

No, you have agreed that MS's compiler shouldn't be doing any optimizations, since these just complicate the IHV's driver, which must redo them anyway.

Therefore, profiles should not influence optimizations from MS's compiler. The compiler should try to preserve as much of the original AST as possible in the IM; that means using branches for branches, not expanding macros, not doing CSE, and not reordering blocks. Too bad the IM can't represent many functions the HW would like to accelerate.


So what's left? Profiles simply as a validation tool to ensure your HLSL code "won't break" HW by using too many resources.

Do you really believe you can get away with just using the HLSL compiler to check your shader code statically, without running these shaders on REAL HARDWARE?


So why not simply argue for the use of validation tools that work with the OGL GLSLANG compiler instead?

Here's the developer process for DX:

1) write shader in HLSL
2) try to compile for PS3, PS2, PS1
3) if compiler says you exhausted resources for a particular profile, you add in a new specialized case for that PS version, go back to step 1
4) Now test your shader on hardware 1,2,3 to make sure it works without bugs, and make sure it performs adequately
5) if it's fubar on a particular HW, go back to step one and workaround

Here's the developer process for GLSLANG:
1) write shader in GLSLANG
2) use a validation tool to test the shader against IHV compiler drivers 1, 2, 3 for resource exhaustion, bugs, and performance (a rough sketch of this step follows below)
3) for any HW that fails, go back to step 1
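As a rough sketch of what the validation tool in step 2 boils down to: hand the source to whichever driver is installed and read back the compile status and info log. The snippet below uses the core GL 2.0 entry points for brevity (at the time of this thread the same functionality lived in the ARB_shader_objects extension under *ARB names) and assumes a GL context and an extension loader such as GLEW are already set up; that setup is omitted, and none of this is tied to any particular vendor's tool.

Code:
#include <GL/glew.h>
#include <cstdio>
#include <vector>

// Compile a fragment shader on the installed driver and report what it says.
bool ValidateFragmentShader(const char* source) {
    GLuint shader = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(shader, 1, &source, nullptr);
    glCompileShader(shader);

    GLint ok = GL_FALSE;
    glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);

    GLint log_len = 0;
    glGetShaderiv(shader, GL_INFO_LOG_LENGTH, &log_len);
    if (log_len > 1) {
        std::vector<char> log(log_len);
        glGetShaderInfoLog(shader, log_len, nullptr, log.data());
        std::printf("driver says: %s\n", log.data());  // resource/feature complaints land here
    }

    glDeleteShader(shader);
    return ok == GL_TRUE;
}

Run this (or its ARB equivalent) against each target board and the driver's own compiler reports resource exhaustion or unsupported features directly.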

You're trying to argue that MS's compiler will help you avoid actual testing on real HW. At best it's a "first level check"; the profiles themselves won't guarantee anything. As you well know, saying a driver is "DXn compliant" won't remove the need to test, test, test and workaround, workaround, workaround. It sure didn't help Valve. Complain about NV's busted HW all you want; it's got market share, and Valve had to do extra work around it since PS2.0 performed unacceptably on it.

Within 1 to 2 years, I expect most HW to have essentially unlimited shader length and enough registers that it is unlikely resources will ever be exhausted. Even today, it is unlikely you'll exhaust 16 temporaries in most shaders. The focus is going to be on making stuff run fast, not "who's got higher limits", and in that case DX9's profiles will be in trouble compared to GLSL.

The shader profile concept for syntax and resource checking has a limited lifespan. Profiles have no concept of performance. Your argument would have merit if MS profiles stated fillrate or instruction-dispatch throughput requirements.
 
JohnH said:
That's because titles deliberately avoid using rasterisation functions that aren't going to be HW accelerated. With GLSlang you have no idea what is and isn't going to end up being accelerated, as the matrix is massively more complicated. Testing on every single combination of HW is becoming impractical, it's impossible to test on HW that doesn't exist yet, and, no, new HW doesn't automatically mean more native GLSlang support.

Yes, like titles in DX deliberately avoid functions that aren't supported in hardware. Little difference from the application programmer's POV, but a large difference from the driver implementor's point of view. It lets them expose hardware that doesn't fully comply with the spec and lets it resort to software rendering for the cases that can't be supported in hardware. Look at my latest demo for instance. In OpenGL it would run just fine on the GFFX too, because the hardware is capable of doing everything in that demo, but nVidia can't support the DX spec for floating point render targets.

Testing on all GPUs is something you should always do anyway, regardless of API. While theoretically problems could arise with future hardware, I don't know of a single occasion where this has happened to any GL game or application.

There are never any guarantees regarding performance. Heck, TruForm in DX runs in software on the R300 too, as do vertex shaders on GF2 and Radeon cards.
 
"So, we want to bring this game to market and make huge bucks. The less money spent on research and development, the larger the revenue. The moment it doesn't crash all the time anymore, freeze it and wrap it up. If it has a small time to live, that is good. Just make sure the next one is rushed out when the first one expires. More money for us!"

"But, there is a slight thingy. Ehrm. You know. People have to actually buy it. You know. How do we get them to do that?"

"Fast development! Shorter time to market! We want a common platform, that does not change. Ever. Let Microsoft make DX such, that it cannot be optimized at all. That will force hardware manufacturers to create identical chips. That cuts down the development of game-engines to zero. And all the art can be reused. We might even have to halt a game for a bit before throwing it out! :D "

"But... How about diversification? If they're all the same, why would consumers buy them?"

"Marketing! And we all thank Microsoft for the great job they do! We even believe it ourselves!"

"Ah. Electronic Arts. Sports games. Just fill in the current players. It will sell anyway. Is that what you have in mind?"

"Exactly! They showed us all how to do it!"

"So, we should freeze any and all new developments and innovation?"

"Yes, exactly! What money is there in that anyway? You have to pay to get R&D done, and hope for some result that might get your money back. Away with it!"

"But then we get to a standstill! All hardware will be the same! None will be bought anymore!"

" :) Look at Intel: Increase The Clock Rate And Market It. See? It's simple. That's how it is done. That's how you make money."

"Ehrm... Did you listen to yourself? I mean, it doesn't make sense!"

"Don't worry. I'm a manager. I know. That's why they pay me the Big Bucks. Leave it all to me."



Anyway, why am I writing this? People are still discussing the "triviality" of just replacing specific sequences of a few instructions in a random stream... Think about it, Please!

EDIT: I have my own company and one third of the shares in two others. I spend a lot of my time listening to managers "doing business".
 
DiGuru said:
"So, we want to bring this game to market and make huge bucks. The less money spent on research and development, the larger the revenue. The moment it doesn't crash all the time anymore, freeze it and wrap it up. If it has a small time to live, that is good. Just make sure the next one is rushed out when the first one expires. More money for us!"

[snip]

Anyway, why am I writing this? People are still discussing the "triviality" of just replacing specific sequences of a few instructions in a random stream... Think about it, Please!

EDIT: I have my own company and one third of the shares in two others. I spend a lot of my time listening to managers "doing business".

you made my day, diguru! :D
a good joke is one with a lot of truth in it, they say.

on a sidenote, you know why i love japanese game titles and generally prefer those to their american/european counterparts? -- because japanese titles are made with a lot of effort - and that's plain visible in the final product, artwork- and gameplay -wise. all those minor details that make you stop and wonder 'my, did they actually bother to implement this!?'. go figure, but apparently there's a slight difference in the management philosophies between the two cultures..
 
darkblu said:
on a sidenote, you know why i love japanese game titles and generally prefer those to their american/european counterparts? -- because japanese titles are made with a lot of effort - and that's plain visible in the final product, artwork- and gameplay -wise. all those minor details that make you stop and wonder 'my, did they actually bother to implement this!?'. go figure, but apparently there's a slight difference in the management philosophies between the two cultures..
Any good US game has been similarly thought out. For example, I am of the opinion that the Bioware/Black Isle RPGs (Fallout, Baldur's Gate, Planescape: Torment, KotOR, etc.) were, largely, quite a bit more work and are more complete games than any Japanese RPGs.

From my experience with games, the Japanese style is more about telling a story. The American style is more about freedom, expressing oneself in a game environment. And any good game, from either country, will have had tons of work poured into it.

But yes, US companies (*cough* EA *cough*) do put out tons of crap games.
 
DemoCoder said:
Maybe after the PS3.0 HW starts shipping, we'll revisit this thread, and you'll admit you were wrong.

Isn't this a blast from the past. ;)

I stumbled back upon this thread when searching for something else...

So now I'm wondering...a year later...where IS nVidia's official GLSL compiler in any decent form....for any of their chips?

I would have thought a "naive" implementation would be a snap according to some people....
 
Joe DeFuria said:
DemoCoder said:
Maybe after the PS3.0 HW starts shipping, we'll revisit this thread, and you'll admit you were wrong.

Isn't this a blast from the past. ;)

I stumbled back upon this thread when searching for something else...

So now I'm wondering...a year later...where IS nVidia's official GLSL compiler in any decent form....for any of their chips?

I would have thought a "naive" implementation would be a snap according to some people....

First, what does this have to do with NVidia? This thread had nothing to do with NVidia, yet you drag them into it; it was a discussion of APIs (the OGL compiler approach vs the Microsoft compiler approach). What's your point? A year later? This thread is from November. OpenGL 1.5 wasn't even released until Aug 2003.

1) NVidia: ftp://download.nvidia.com/developer/SDK/56.68_win2kxp_international_glsl.exe
2) ATI: Catalyst 3.1 and beyond
3) 3dLabs: current P10 driver

Naive implementations, right? Egg on your face, Joe.

(is "decent form" supposed to be your weasel out phrase to deny the existence? Yes, they have some bugs (just like MS's compiler), but isn't that what you'd expect from a Naive implementation of something that isn't even ratified into the Core opengl yet?)

Second, everything I have said has come to pass, going back 2-3 years on this BBS. I wish the archives included the old messages from before 2002. I said:

1) compilers will become vitally important for future GPUs as they increase in complexity (I said this when DX8 came out)

2) That problems inherent in Microsoft's compiler could sabotage future GPU performance (the example I gave was hardware support for Normalize(); lo and behold, Nvidia added it, for nrm_pp). FXC has finally been fixed to generate NRM. Too bad for any games that don't have the NRM macro in the DX9 shaders. Let's hope the drivers can detect it.
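To illustrate the NRM point, here is a hedged sketch of what the driver sees when normalize() is expanded by the compiler versus kept as a macro (vec3 and the instruction comments are stand-ins, not any particular ISA's encoding):

Code:
#include <cmath>

struct vec3 { float x, y, z; };

// When normalize() is expanded inline, the driver receives three separate
// operations and has to recognise the DP3/RSQ/MUL pattern before it can use
// a native normalize unit. If the NRM macro is preserved in the token
// stream, the mapping is a single instruction.
vec3 normalize_expanded(const vec3& v) {
    float d  = v.x * v.x + v.y * v.y + v.z * v.z;  // DP3  d,  v, v
    float rs = 1.0f / std::sqrt(d);                // RSQ  rs, d
    return {v.x * rs, v.y * rs, v.z * rs};         // MUL  out, v, rs
}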

Now the issue is what happens to co-issue/dual-issue scheduling, and choosing cmp vs predicate vs dynamic branch. MS will need to create a PS_3_0_NV40 profile, since generic compilation to PS3.0 will be unaware of the varying costs (say of a branch on NV40 vs 3dLabs Realizm)

3) Most of the work is in the backend. Once ATI and NVidia complete the test GLSL extensions for NV30/R300, it will be much easier to translate these to the NV40/R420.

Next time you dig up an old thread to harass me, maybe you should choose one where I was incontrovertibly wrong.
 
DemoCoder said:
First, what does this have to do with NVidia?

Um, nVidia just launched PS 3.0 hardware, perhaps? I'm waiting for nVidia to deliver their GLSL drivers before they deliver an HLSL driver that supports PS 3.0?

Does nVidia even have a GLSL driver that is officially exposed in their public drivers? They've had HLSL support for quite a while now.

(is "decent form" supposed to be your weasel out phrase to deny the existence?

No, "naive" implementation was something heavily disccused throughout this topic.

1) compilers will become vitally important for future GPUs as they increase in complexity (I said this when DX8 came out)

No argument there. (Would anyone argue such an obvious assertion?)

2) That problems inherent in Microsoft's compiler could sabotage future GPU performance (the example I gave was hardware support for Normalize(); lo and behold, Nvidia added it, for nrm_pp). FXC has finally been fixed to generate NRM. Too bad for any games that don't have the NRM macro in the DX9 shaders. Let's hope the drivers can detect it.

Now the issue is what happens to co-issue/dual-issue scheduling, and choosing cmp vs predicate vs dynamic branch. MS will need to create a PS_3_0_NV40 profile, since generic compilation to PS3.0 will be unaware of the varying costs (say of a branch on NV40 vs 3dLabs Realizm)

And here I thought you also said something like:

You said:
Hey, just bought my new Radeon 11000, but my first DX9 driver runs shaders slower than the R300. Oops, 3 months later, it's on par. Oops, another 3 months later and it's 300% better.

Yeah...the 6800 really seems to suffer with PS 2.0 shaders right off the bat...

Next time you dig up an old thread to harass me, maybe you should choose one where I was incontrovertibly wrong.

Next time, understand the entire argument being made. I think we'll be seeing PS 3.0 HLSL support from day one....and YES...it will need further optimization...but it will be there.
 
Joe DeFuria said:
Um, nVidia just launched PS 3.0 hardware, perhaps? I'm waiting for nVidia to deliver their GLSL drivers before they deliver an HLSL driver that supports PS 3.0?

First of all, I said shipping, not launched, implying near-term drivers when it goes into the hands of consumers. (Any NV40s being sold on shelves yet with final drivers?) Secondly, where did I ever say NVidia would deliver a GLSL driver before a PS3.0 driver, or for ANY vendor for that matter? Do you find the need to put words into people's mouths to buttress your argument?

I simply said that most of the hard work of optimizing is already done by the *driver compiler*, which works on PS2.x token streams. I said writing the PARSER for the high level language is trivial (it is), but that problems in the intermediate representation (PS2.x), and in how Microsoft's static compiler chooses instructions, could cause problems that a GLSL compiler inside the driver wouldn't.

Get a friggin clue.

Does nVidia even have a GLSL driver that is officially exposed in their public drivers? They've had HLSL support for quite a while now.

Technically, MS provides HLSL support in D3DX, not NVidia. But of course, NVidia has an HLSL compiler of their own written from scratch, thanks to Cg. But isn't writing compilers supposed to be so hard that IHVs can't do it in a timely manner? How'd Nvidia write their own HLSL compiler 1+ years ago?

(there are few differences between the HLSL language and the GLSL language)

1) compilers will become vitally important for future GPUs as they increase in complexity (I said this when DX8 came out)

No argument there. (Would anyone argue such an obvious assertion?)

Obvious in hindsight for you, not obvious in foresight when I said it. Several people criticized my statement, saying that GPU assembly was "trivial" to schedule and therefore not a problem. Of course, DX8 was the only shading language at the time, and it did look trivial. No branches, no register limits, basically register combiners. What looked trivial in 2001 doesn't look so trivial now.

And here I thought you also said something like:

You said:
Hey, just bought my new Radeon 11000, but my first DX9 driver runs shaders slower than the R300. Oops, 3 months later, it's on par. Oops, another 3 months later and it's 300% better.

Yeah...the 6800 really seems to suffer with PS 2.0 shaders right off the bat...

The point of that quote (taken out of context by you, again, because of your lack of understanding) was that *drivers and driver compilers* can be upgraded by IHVs to give performance boosts, whereas static compilers like Microsoft's require rebuilding your game and shipping a PATCH.

The analogy was to say that if a problem was found in MS's compiler and fixed, developers would have to ship game patches before end users could realize the improved performance, whereas with *compilers built into drivers* IHVs can ship drivers that give across-the-board performance increases. Compilers are already inside today's drivers, so the discussion is moot. The only thing not inside the driver is the high level grammar parser, some generic optimizations, and instruction selection.


Next time, understand the entire argument being made. I think we'll be seeing PS 3.0 HLSL support from day one....and YES...it will need further optimization...but it will be there.

You're the one who doesn't understand compiler issues Joe, not me.
 
DemoCoder said:
First of all, I said shipping, not launched, implying near-term drivers when it goes into the hands of consumers. (Any NV40s being sold on shelves yet with final drivers?)

Awww...we're getting picky now, eh?

No, there are no NV40s being sold, just year old NV30s...

Secondly, where did I ever say NVidia would deliver a GLSL driver before a PS3.0 driver, or for ANY vendor for that matter?

So, if you don't disagree with me...why argue?

Do you find the need to put words into people's mouths to buttress your argument?

Do you need to make side arguments that are only tangential to mine to avoid conceding a point?


I simply said that most of the hard work of optimizing is already done by the *driver compiler*, which works on PS2.x token streams. I said writing the PARSER for the high level language is trivial (it is), but that problems in the intermediate representation (PS2.x), and in how Microsoft's static compiler chooses instructions, could cause problems that a GLSL compiler inside the driver wouldn't.

Get a friggin clue.

I'm still waiting to see how all of this (the GLSL approach) has in fact practically benefited the consumer. Get your own friggin clue.

Technically, MS provides HLSL support in D3DX, not NVidia. But of course, NVidia has an HLSL compiler of their own written from scratch, thanks to Cg. But isn't writing compilers supposed to be so hard that IHVs can't do it in a timely manner?

GOOD and robust ones?

How'd Nvidia write their own HLSL compiler 1+ years ago?

Because it was a piece of crap 1+ years ago? Or isn't the fact that HLSL compiling was (and still is?) BETTER (performance and bugs) than Cg exactly what happened? I could've sworn that the Cg compiler was a big sore spot of the whole effort...

The analogy was to say that if a problem was found in MS's compiler and fixed, developers would have to ship game patches before end users could realize the improved performance, whereas with *compilers built into drivers* IHVs can ship drivers that give across-the-board performance increases.

So in any situation...you still need to download "a patch." And of course...with "compilers built into drivers" that have "across the board increases / fixes", you seem to have overlooked the "it can BREAK apps that happened to 'rely' on the older functionality" effect.

Yay...id tells me to download the latest driver, which does wonders for Doom3...but breaks whatever other game someone decides to actually code in GL.

You're the one who doesn't understand compiler issues Joe, not me.

I'm also the one who has actually seen and touched a few games that have been released / compiled with HLSL shaders. Still waiting for those GLSL titles...for all its superiority....
 