Which API is Better?

Poll options:
  • DirectX9 is more elegant, easier to program
  • Both about the same
  • I use DirectX mainly because of market size and MS is behind it

Total voters: 329
DeanoC said:
DemoCoder said:
What's the NV30 driver to do? Try to "recognize" a sequence of 8 instructions and assume it's a SINCOS?
And that's a bug in the HLSL compiler, in the same way that a GLSLANG compiler could (and sometimes certainly will) produce code that 'forgets' to use a hardware instruction. All code will have bugs...
No. It's a direct result of the forced low-level standard intermediate format. It's not a bug, it comes from a design decision.
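To make that concrete, here is a rough sketch of what the D3D9 path actually hands the driver (illustrative only; it assumes the D3DX9 D3DXCompileShader entry point and a valid device, with error handling trimmed):

```cpp
// Rough sketch of the D3D9 path, assuming the D3DX9 library and a valid
// IDirect3DDevice9*. The driver never sees the HLSL below -- only the
// ps_2_0 token stream that D3DXCompileShader emits, in which an intrinsic
// like sincos() may already have been expanded into a fixed sequence of
// arithmetic instructions.
#include <cstring>
#include <d3d9.h>
#include <d3dx9.h>

IDirect3DPixelShader9* CreateShaderFromHlsl(IDirect3DDevice9* device)
{
    const char* hlsl =
        "float4 main(float angle : TEXCOORD0) : COLOR     \n"
        "{                                                \n"
        "    float s, c;                                  \n"
        "    sincos(angle, s, c); // high-level intrinsic \n"
        "    return float4(s, c, 0, 1);                   \n"
        "}                                                \n";

    ID3DXBuffer* tokens = NULL;
    ID3DXBuffer* errors = NULL;

    // Statically linked compiler: HLSL -> ps_2_0 intermediate assembly.
    HRESULT hr = D3DXCompileShader(hlsl, (UINT)strlen(hlsl), NULL, NULL,
                                   "main", "ps_2_0", 0,
                                   &tokens, &errors, NULL);
    if (FAILED(hr) || !tokens) {
        if (errors) errors->Release();
        return NULL;
    }

    // The driver receives only this DWORD token stream; the fact that the
    // source said "sincos" is gone unless the expansion is pattern-matched.
    IDirect3DPixelShader9* shader = NULL;
    device->CreatePixelShader((const DWORD*)tokens->GetBufferPointer(),
                              &shader);
    tokens->Release();
    if (errors) errors->Release();
    return shader;
}
```

The driver only ever sees the token stream in that last call; whatever high-level intent the HLSL carried is only what survives the intermediate format.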

We need to remove the potential for driver bugs, not increase it, else PC games development costs are going to skyrocket. That's my main issue with GLSLANG versus D3DX (HLSL is a static library; it's not part of the driver/runtime. The version I ship is the version everybody uses).
D3DX has bugs. Drivers that compile from the assembly to the machine probably have some bugs.

By combining these two things into one software package, I'd say you're reducing the chance for bugs to occur.

It's kind of like the striped RAID format. Individually, each hard drive has the same chance of failure. But once you stripe them, a failure in either hard drive causes total data loss, roughly doubling your chance of losing data. If you want data security, you don't want to spread out your data across multiple drives.

Similarly, if you want fewer bugs, you don't want to spread out your programming among multiple companies.
 
DeanoC said:
DemoCoder said:
What's the NV30 driver to do? Try to "recognize" a sequence of 8 instructions and assume it's a SINCOS?

Adding support for high-level function intrinsics isn't a return to fixed function, since there is usually an adequate software emulation of these functions (e.g. SINCOS); it's simply giving the driver the opportunity to detect and replace a SINCOS() call with native HW instead of letting the silicon sit idle and burning up regular vector unit slots.

You're missing the point: DX9 is simply missing lots of "instructions" for functions which could be accelerated by HW. Various presentations from NVidia and ATI even advise developers against using the DX9 assembly macros, and compiler writers have shied away from them.

Is it really a compiler bug, or is it intentional on MS's part? Anyway, in the OGL2.0 case, no one would have to wait for MS to release a patch, and for developers to "relink" their applications with recompiled shaders. NVidia, ATI, et al. would simply release new driver versions that fix the performance bugs as they get embarrassed when competitors fix them.

How long must we wait for the Microsoft monopoly to fix the macro expansion "bugs" in FXC? Will the fix be available by DX10? In the next 3 months? Next 6 months?
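For what it's worth, a toy sketch of what that "recognition" amounts to (purely illustrative; the opcodes and the 8-instruction pattern below are hypothetical, not any vendor's actual expansion):

```cpp
// Toy illustration only: a peephole pass that tries to spot an expanded
// sin/cos approximation in a low-level instruction list and collapse it
// back into a single native SINCOS op. A real compiler emits a slightly
// different sequence per version and flag set, which is exactly why this
// kind of matching is fragile.
#include <cstddef>
#include <vector>

enum Op { MUL, MAD, ADD, SINCOS /* native op */, OTHER };

struct Inst { Op op; /* operands omitted for brevity */ };

// Hypothetical 8-instruction expansion we are trying to recognize.
static const Op kExpansion[8] = { MUL, MAD, MUL, MAD, MUL, MAD, MUL, ADD };

void CollapseSincos(std::vector<Inst>& code)
{
    for (std::size_t i = 0; i + 8 <= code.size(); ++i) {
        bool match = true;
        for (std::size_t j = 0; j < 8; ++j) {
            if (code[i + j].op != kExpansion[j]) { match = false; break; }
        }
        if (match) {
            // Replace the 8-instruction sequence with one native SINCOS.
            code.erase(code.begin() + i, code.begin() + i + 8);
            code.insert(code.begin() + i, Inst{ SINCOS });
        }
    }
}
```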
 
Chalnoth said:
D3DX has bugs. Drivers that compile from the assembly to the machine probably have some bugs.

By combining these two things into one software package, I'd say you're reducing the chance for bugs to occur.

It's kind of like the striped RAID format. Individually, each hard drive has the same chance of failure. But once you stripe them, a failure in either hard drive causes total data loss, roughly doubling your chance of losing data. If you want data security, you don't want to spread out your data across multiple drives.

Similarly, if you want fewer bugs, you don't want to spread out your programming among multiple companies.
But the version of D3DX I ship (linked in) has a contained set of bugs. No more, no less. When you rely on an external package that relies on user assistance to be replaced (i.e. drivers), you have to assume that the user didn't do it. In other words, if you write a good game today, you are working around every bug in every driver for at least the last 3 years.

This is the reality of the PC games business; I still work around bugs from over 5 years ago because they still come back on the test matrix. Because somebody somewhere is using the original GF1 driver that shipped with the first card. Scary, isn't it?

If you want to ship a game on PC that works on most people's cards, you have to work around all driver bugs for your entire range of supported cards. With D3DX at least the bug test matrix is fixed at shipping time; the very real effect is that it saves a lot of money.
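Concretely, the ship-time picture looks something like this build-step sketch (hedged: it assumes D3DX9's D3DXCompileShaderFromFile and a trivial offline tool around it, in a non-Unicode build):

```cpp
// Sketch of an offline build step. Because the HLSL compiler is a
// statically linked library, the ps_2_0 blob written here is frozen at
// ship time: every customer runs bytecode produced by the exact compiler
// version we tested against.
#include <cstdio>
#include <d3dx9.h>

bool CompileShaderToDisk(const char* srcFile, const char* outFile)
{
    ID3DXBuffer* tokens = NULL;
    ID3DXBuffer* errors = NULL;

    HRESULT hr = D3DXCompileShaderFromFile(srcFile, NULL, NULL,
                                           "main", "ps_2_0", 0,
                                           &tokens, &errors, NULL);
    if (FAILED(hr) || !tokens) {
        if (errors) {
            std::fprintf(stderr, "%s\n",
                         (const char*)errors->GetBufferPointer());
            errors->Release();
        }
        return false;
    }

    std::FILE* f = std::fopen(outFile, "wb");
    if (f) {
        std::fwrite(tokens->GetBufferPointer(), 1,
                    tokens->GetBufferSize(), f);
        std::fclose(f);
    }
    tokens->Release();
    if (errors) errors->Release();
    return f != NULL;
}
```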
 
DemoCoder said:
You're missing the point: DX9 is simply missing lots of "instructions" for functions which could be accelerated by HW. Various presentations from NVidia and ATI even advise developers against using the DX9 assembly macros, and compiler writers have shied away from them.

Is it really a compiler bug, or is it intentional on MS's part? Anyway, in the OGL2.0 case, no one would have to wait for MS to release a patch, and for developers to "relink" their applications with recompiled shaders. NVidia, ATI, et al. would simply release new driver versions that fix the performance bugs as they get embarrassed when competitors fix them.

How long must we wait for the Microsoft monopoly to fix the macro expansion "bugs" in FXC? Will the fix be available by DX10? In the next 3 months? Next 6 months?

It makes no difference if a bug is fixed in a driver; once it's there you HAVE to work around it. You simply cannot ask the user to install new drivers.
Having ATI or NVIDIA fix it doesn't matter: if it ships in a WHQL driver, it's an issue for the entire life of the card.

I.e. most people with an ATI R3x0 will never even have an OGL 1.5+ driver, let alone a fix!
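For readers who haven't shipped a PC title, the "work around it for the life of the card" part ends up as a table like the following sketch (the vendor/device IDs and workaround flags are hypothetical examples; only the IDirect3D9::GetAdapterIdentifier call is real):

```cpp
// Sketch of the kind of per-card workaround table PC games end up carrying.
// The IDs and flags below are made up for illustration, not a real bug
// database.
#include <d3d9.h>

struct Workarounds {
    bool avoidSomeBlendMode;    // hypothetical: one old driver renders it wrong
    bool disableSomeShaderPath; // hypothetical: crashes on one WHQL driver
};

Workarounds DetectWorkarounds(IDirect3D9* d3d, UINT adapter)
{
    Workarounds w = { false, false };

    D3DADAPTER_IDENTIFIER9 id;
    if (FAILED(d3d->GetAdapterIdentifier(adapter, 0, &id)))
        return w; // unknown card: keep the conservative defaults

    // Once a bug has shipped in any driver the user might still be running,
    // the workaround stays in this table for the life of the card.
    const DWORD VENDOR_EXAMPLE = 0x1234; // hypothetical vendor ID
    if (id.VendorId == VENDOR_EXAMPLE && id.DeviceId == 0x0042) {
        w.avoidSomeBlendMode = true;
        if (id.DriverVersion.QuadPart < 0x0005000100020003LL)
            w.disableSomeShaderPath = true;
    }
    return w;
}
```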

Publishers want the easiest/cheapest way of getting the product onto the customer's machine. Anything that increases the test matrix is bad in their eyes; that's why MS won with D3D. D3D is cheaper than OpenGL for the publisher who wants to target the mainstream PC user. This is a business, and if D3DX HLSL saves $0.10 per copy sold, then this argument is moot.

I'm sorry, but most people aren't looking at the business case for the D3DX method. Remember this is a several-billion-dollar business, and personal preference and technical issues don't really come into it.

From a purely technical point of view I agree with many of your points, but you also have to calculate the development cost and whether that cost is worth trading gameplay for (which is the ultimate cost: with a fixed time and budget, I can either write bug workarounds or write the game).

That's why consoles have a distinct advantage for us games developers. Personally, from a business point of view (I have ethical issues with it, mind), I'd be happy with an automatic driver system, so the user got whatever driver was prescribed via MS with no way of fiddling with or stopping it. It would certainly save money...
 
DeanoC said:
If you want to ship a game on PC that works on most people's cards, you have to work around all driver bugs for your entire range of supported cards. With D3DX at least the bug test matrix is fixed at shipping time; the very real effect is that it saves a lot of money.
All you've done, as you state, is add a set of "fixed" bugs to the system. There's no need for those bugs to be there.

And game developers have, for quite some time, asked people to update their drivers when problems occur. There's not that much reason to pander to people who don't update their drivers. I've seen many a game developer request that users update their drivers.
 
Oh, one side note:

Business decisions coming before developer decisions are what cause crappy games. The popular games are the ones where the developers care less about money (in the short term) and more about getting the game right.

I think what you're seeing is a result of publishers being very short-term-gain oriented. This is a disease in American business culture, and many businesses suffer because of it.
 
Chalnoth said:
And game developers have, for quite some time, asked people to update their drivers when problems occur. There's not that much reason to pander to people who don't update their drivers. I've seen many a game developer request that users update their drivers.

Request is fine, but many publishers will not let you ship a game if it crashes or has visual errors on a WHQL driver.

I don't make the rules; that's publishers for you. Different publishers, different rules; also the size/name of the developer gives you more control.
 
Chalnoth said:
Oh, one side note:

Business decisions coming before developer decisions are what cause crappy games. The popular games are the ones where the developers care less about money (in the short term) and more about getting the game right.

I think what you're seeing is a result of publishers being very short-term-gain oriented. This is a disease in American business culture, and many businesses suffer because of it.

But ask yourself this: would you be happy to risk $5 million on riskier long-term gains?

Much as I disagree with lots of publishers' decisions, it is a LOT of money...
 
DemoCoder said:
How long must we wait for the Microsoft monopoly to fix the macro expansion "bugs" in FXC? Will the fix be available by DX10? In the next 3 months? Next 6 months?

Yeah, probably about the same time GLSLang support is actually available at all?
 
DeanoC said:
It makes no difference if a bug is fixed in a driver; once it's there you HAVE to work around it. You simply cannot ask the user to install new drivers.
Having ATI or NVIDIA fix it doesn't matter: if it ships in a WHQL driver, it's an issue for the entire life of the card.

No, it DOES make a difference. What we are talking about is not a "bug"; it's a design flaw in the DX9 intermediate representation. There's no need to "work around" the fact that normalize() and sincos() get inlined by FXC: the code still works, generates absolutely correct results, and is bug free. The difference is that in the OpenGL2.0 case, future drivers will automatically boost performance in old games, because they can optimize better. With the DX approach, users will not see such performance improvements even if they elect to upgrade drivers, because the code is statically linked and there isn't as much information for the driver to work with.
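For contrast, a rough sketch of the GLSLang path (written with the GL 2.0-style names and a GLEW-style loader for brevity; at the time you would spell these glCreateShaderObjectARB and friends):

```cpp
// Sketch of the OpenGL shading-language path. Error handling is trimmed,
// and a current GL context with glewInit() already called is assumed.
// The driver gets the high-level source, so normalize()/sin()/cos()
// survive all the way to the IHV's compiler, and a later driver can map
// them to native hardware without any change to the shipped game.
#include <GL/glew.h>

GLuint CompileFragmentShader()
{
    const char* src =
        "varying vec3 n;                                \n"
        "void main()                                    \n"
        "{                                              \n"
        "    vec3 nn = normalize(n); // stays high level\n"
        "    gl_FragColor = vec4(nn, 1.0);              \n"
        "}                                              \n";

    GLuint shader = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(shader, 1, &src, NULL); // driver receives the source text
    glCompileShader(shader);               // IHV compiler runs in the driver

    GLint ok = 0;
    glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);
    return ok ? shader : 0;
}
```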


Don't tell me publishers don't force users to upgrade; I've had to go through this process many times during installs. But even if users aren't forced to upgrade, at least they have the OPTION to upgrade to higher performance in the future. I like the fact that when the R400 and NV40 ship, the first WHQL drivers will be "early to market, at least they work" drivers, but later I can download much faster drivers, after the software catches up with the HW, and my old games benefit automatically.

I really don't understand your argument. You acknowledge that generating the SINCOS or NRM macros from FXC would be better, and hence you acknowledge there is merit in preserving high-level information for the driver; thus, you agree that the driver should be performing the optimizations. This is exactly what OpenGL2.0 delivers, and it is a flaw in the current DX9 implementation and model.

But rather than argue that DX9 needs to be fixed, and that OpenGL2.0 has an advantage, you fall back on the tired old theory that somehow the front-end parser in the driver is going to be the cause of all ills, when in fact DX9 drivers MUST implement optimizing compilers anyway, since the DX9 intermediate representation doesn't map directly to the hardware, and the parser is traditionally the least buggy piece of a compiler, due to its formal nature. All of the anti-compiler-in-driver arguments are moot, since the majority of the optimizer resides in the driver ALREADY, and furthermore it's likely to be far more buggy, due to having to deal with the DX9 I-rep.


So the last resort is to claim that there is a business case for statically linking the compiler. But if that's true, there is a business case for linking in the entire WHQL driver for all cards, so you could ship a CD with an entire closed-world environment, tried and tested by QA: a console-like approach.

Obviously, that's absurd. So there's a limit to how far you can take the "business case" argument. I mean, the business case would argue for not even wasting your time on DX9 and shooting for the DX7 market, where you'll get the most money.

How about this business case: OpenGL is easier to use and develop in, and you don't need to pay MS $$$ for a dev environment and MSDN access to use it.
 
Joe DeFuria said:
DemoCoder said:
How long must we wait for the Microsoft monopoly to fix the macro expansion "bugs" in FXC? Will the fix be available by DX10? In the next 3 months? Next 6 months?

Yeah, probably about the same time GLSLang support is actually available at all?

Or when those HLSL DX9 games like HL2 are shipped?

Come on Joe, you can do better than that. You know that once OGL2.0 compliant cards are shipped, NV and ATI drivers will be updated much more regularly than DX9 releases, and since the D3DX compiler is statically linked, the MS fixes won't help end users at all.
 
DeanoC said:
Request is fine, but many publishers will not let you ship a game if it crashes or has visual errors on a WHQL driver.

I don't make the rules; that's publishers for you. Different publishers, different rules; also the size/name of the developer gives you more control.
Well, given that a number of games have shipped with GLSetup, obviously not all publishers have that rule.

GLSetup is effectively the alternative to WHQL: it just makes sure you have up-to-date OpenGL drivers. As far as I know, WHQL doesn't test OpenGL functionality (well, it may test core functionality, but that hardly matters today...).
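The check itself is trivial; a minimal sketch of the kind of version test GLSetup or the game itself can run at startup (assuming a GL context is already current):

```cpp
// Minimal sketch of an "are your GL drivers new enough?" check, the kind
// of thing GLSetup did as an installer and a game can also do at startup.
// Assumes a current GL context has already been created.
#include <GL/gl.h>
#include <cstdio>

bool HasMinimumGL(int wantMajor, int wantMinor)
{
    const char* version = (const char*)glGetString(GL_VERSION);
    const char* vendor  = (const char*)glGetString(GL_VENDOR);
    if (!version) return false;

    int major = 0, minor = 0;
    std::sscanf(version, "%d.%d", &major, &minor);

    std::printf("GL %d.%d on %s\n", major, minor,
                vendor ? vendor : "unknown");
    return (major > wantMajor) ||
           (major == wantMajor && minor >= wantMinor);
}

// Usage: if (!HasMinimumGL(1, 5)) { /* ask the user to update drivers */ }
```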
 
DemoCoder said:
Joe DeFuria said:
DemoCoder said:
How long must we wait for the Microsoft monopoly to fix the macro expansion "bugs" in FXC? Will the fix be available by DX10? In the next 3 months? Next 6 months?

Yeah, probably about the same time GLSLang support is actually available at all?

Or when those HLSL DX9 games like HL2 are shipped?

They'll certainly be shipped before those GLSLang games are shipped.

Come on Joe, you can do better than that.

No need to.

You know that once OGL2.0 compliant cards are shipped, NV and ATI drivers will be updated much more regularly than DX9 releases,

Right...because the initial versions (as expected) will be seriously in need of increased optimizations and bug fixes.

and since the D3DX compiler is statically linked, the MS fixes won't help end users at all.

Nor will the MS fixes break the end-user's stuff.

Come on DemoCoder, you're capable of more than the one-dimensional line of thought you're displaying in this thread.

No one is arguing that the GL Model doesn't have advantages. Both GL and DX models have some balance between central control / stability and flexibility. For the consumer environment, I just believe that the DX model's balance (toward stability) is better suited. Not that GL is utter crap or something.

I certainly hope that you believe that stability and lack of flexibility have certain advantages for consumers?

So the last resort is to claim that there is a business case for statically linking the compiler. But if that's true, there is a business case for linking in the entire WHQL driver for all cards, so you could ship a CD with an entire closed-world environment, tried and tested by QA: a console-like approach.

Yes, there is a case to be made for that.

Obviously, that's absurd.

Yes, and I think we'd all agree that's absurd for the PC.

So there's a limit to how far you can take the "business case" argument.

Point? There's also a limit to how far you can take "API models can and should have more flexibility." The last resort there is to claim that every vendor makes their own API, language, compiler, etc. Obviously, that's absurd.

Both the GL and DX models strike a balance between these two extremes. For the consumer space, I simply prefer the balance be tipped toward stability.
 
Joe DeFuria said:
and since the D3DX compiler is statically linked, the MS fixes won't help end users at all.

Nor will the MS fixes break the end-user's stuff.

But MS's "fixes" do introduce bugs; it's only D3DX changes that won't. On the other hand, to work around D3DX limitations, IHVs have to write far more intelligent drivers to perform optimizations, so all the MS model does is make more work for the compiler authors and increase the chance that IHVs will ship a buggy DX9 driver optimizer.


Come on DemoCoder, you're capable of more than the one-dimensional line of thought you're displaying in this thread.

I certainly hope that you believe that stability and lack of flexibility have certain advantages for consumers?

Yes, I own every major console ever made. On the other hand, I own a PC for a reason. If I only wanted to play console games, I'd stick to using my PC for apps.

Moreover, the fallacy here is that the MS model will significantly reduce the cost of development and number of bugs. I don't believe it will have any appreciable impact compared to GL2. It's limitations without cost benefits.


Both the GL and DX models strike a balance between these two extremes. For the consumer space, I simply prefer the balance be tipped toward stability.

So you're prepared to prove that having to "work around" the limitations of DX9 intermediate assembly will make IHVs write less buggy drivers? It sure worked in NVidia's case. :)
 
DemoCoder said:
But MS's "fixes" do introduce bugs; it's only D3DX changes that won't. On the other hand, to work around D3DX limitations, IHVs have to write far more intelligent drivers to perform optimizations, so all the MS model does is make more work for the compiler authors...

MS makes more work for compiler authors, when GL provides no type of compiler at all?

and increase the chance that IHVs will ship a buggy DX9 driver optimizer.

Let's see how long it takes someone like XGI to support GLSLang...


Yes, I own every major console ever made. On the other hand, I own a PC for a reason. If I only wanted to play console games, I'd stick to using my PC for apps.

You are not representative of "consumers". Consumers WANT to be able to plug in a game and have it "just work". They don't care if it's a PC or not. The point is, console "stability" is a major benefit of the console model.

Of course, the "benefit" of the PC platform is upgradeability / flexibility.

For PC games, consumers want both, of course. They want both the "out of the box" experience, and better performance with upgrades. I'm just saying that IMO, for consumers in general, I'd rather trade off a bit of flexibility / performance, for stability.

Moreover, the fallacy here is that the MS model will significantly reduce the cost of development and number of bugs.

Why is this fallacy?

The fallacy here is that the GL model will provide a significantly better end result in terms of performance. I don't believe it will have any appreciable impact compared to DX.

So you're prepared to prove that having to "work around" the limitations of DX9 intermediate assembly will make IHVs write less buggy drivers? It sure worked in NVidia's case. :)

You're prepared to prove that having to write your own GLSLang compiler has negligible increased development cost for IHVs? Again, we'll see how long it takes these vendors to come up with GLSLang support vs. DX9 support. XGI will be interesting to watch...
 
DemoCoder said:
<SNIP>
How about this business case: OpenGL is easier to use and develop in, and you don't need to pay MS $$$ for a dev environment and MSDN access to use it.

Don't change the subject to "MS is an evil monopoly". I couldn't care less who makes the tools I use... just as long as they let me make better games.

IF my fears are unfounded (that there aren't many bugs caused by every vendor doing it all themselves, including the small IHVs), fine, you win the argument, as the technical advantages will outweigh the issue. I have no problem using the best tool to do my job.

BUT I was answering you honestly: as a professional I CANNOT recommend GLSLANG until I've seen this issue for real. Too much money rests on it, and making a mistake like this would cost people their jobs.

GLSLANG doesn't exist in any shipping driver, so for now I have no way of judging the number of bugs. With the amount of money we are talking about, I'll take the safer option that still gives me a fair amount of the performance the hardware has.
 
Chalnoth said:
DeanoC said:
Request is fine, but many publishers will not let you ship a game if it crashes or has visual errors on a WHQL driver.

I don't make the rules; that's publishers for you. Different publishers, different rules; also the size/name of the developer gives you more control.
Well, given that a number of games have shipped with GLSetup, obviously not all publishers have that rule.

GLSetup is effectively the alternative to WHQL: it just makes sure you have up-to-date OpenGL drivers. As far as I know, WHQL doesn't test OpenGL functionality (well, it may test core functionality, but that hardly matters today...).

GLSetup hasn't been updated for years... The website doesn't even exist anymore.
 
Joe DeFuria said:
No one is arguing that the GL Model doesn't have advantages. Both GL and DX models have some balance between central control / stability and flexibility. For the consumer environment, I just believe that the DX model's balance (toward stability) is better suited. Not that GL is utter crap or something.
I think it's the exact opposite.

OpenGL's "balance" has always been more towards stability than DirectX's has.

One case in point: specifications. OpenGL specifications are very precise. DirectX's are not. DirectX seems to simply give a vague idea of what something is supposed to do, then puts out a reference renderer, and Microsoft basically says, "do what the reference does."

In this way, the existence of refrast in DirectX has allowed Microsoft to get sloppy. Hence we have complaints by various developers about "bugs" in different IHVs' drivers that are nothing more than different interpretations of a sloppy spec.

Now, in the same way, I expect OpenGL 2.0 to be more reliable and stable in the long run than DirectX 9 HLSL, due to exacting specifications and multiple IHVs having to write compilers.
 
DeanoC said:
DemoCoder said:
<SNIP>
How about this business case: OpenGL is easier to use and develop in, and you don't need to pay MS $$$ for a dev environment and MSDN access to use it.

Don't change the subject to "MS is an evil monopoly". I couldn't care less who makes the tools I use... just as long as they let me make better games.

that's the whole crux of this thread - which is the better tool. aside from the fact that in most people's eyes 'better tools' != 'monopoly of the tool maker', you apparently believe ms tends to produce the better tools. why? simply because they're available here and now? or because they dictate the course of matters on the windows platform?

IF my fears are unfounded (that there aren't many bugs caused by every vendor doing it all themselves, including the small IHVs), fine, you win the argument, as the technical advantages will outweigh the issue. I have no problem using the best tool to do my job.

small IHVs don't have to reinvent the wheel, i.e. they could, but they don't have to. it's an open standard, after all.

BUT I was answering you honestly: as a professional I CANNOT recommend GLSLANG until I've seen this issue for real. Too much money rests on it, and making a mistake like this would cost people their jobs.

nobody would recommend using glslang for a 1-2 year project at this moment. OTOH, a business case for a short timespan does not prove anything in the longer run.

GLSLANG doesn't exist in any shipping driver, so for now I have no way of judging the number of bugs. With the amount of money we are talking about, I'll take the safer option that still gives me a fair amount of the performance the hardware has.

that without a doubt is a valid 'here and now' statement. but we must try looking at it in perspective; after all, GLslang is a perspective, and i believe that's what this whole thread is about: looking in perspective.
 