Which API is better?

Which API is Better? (poll, 329 voters)

  • DirectX9 is more elegant, easier to program
  • Both about the same
  • I use DirectX mainly because of market size and MS is behind it
DemoCoder said:
Well, the two other consoles (PS2 and Cube) don't run DX9. If I were developing a cross-platform title (like EA games), I'd definitely go with OpenGL because of the portability.
They don't run OpenGL either, to any useful extent :rolleyes:
 
You already have this situation, since the drivers still have to "compile" DX9 assembly code into low-level GPU code, and as we have seen with NVidia's drivers, vast variations can occur, both in speed and in bug regressions.

OpenGL's approach just substitutes one form of compilation for another. With MS's approach, they generate (potentially non-optimal or even pathological) code for your HW architecture, and then you have to go through extraordinary effort in the driver to rectify this and optimize the DX9 code that was handed to you.

With OpenGL, the compiler generates code directly for the low-level GPU, and there is no need to "rectify potentially pathological MS code output" as NVidia has to do today.
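
Roughly, the two paths look like this. A minimal C++ sketch only: the "main" entry point, the ps_2_0 profile and the function names are assumptions for illustration, and the loading of GL extension pointers (via wglGetProcAddress) is left out.

    #include <windows.h>
    #include <d3dx9.h>
    #include <GL/gl.h>
    #include <GL/glext.h>
    #include <cstring>

    // Assumed to be fetched elsewhere with wglGetProcAddress.
    extern PFNGLCREATESHADEROBJECTARBPROC glCreateShaderObjectARB;
    extern PFNGLSHADERSOURCEARBPROC       glShaderSourceARB;
    extern PFNGLCOMPILESHADERARBPROC      glCompileShaderARB;

    // Direct3D 9 path: HLSL is compiled to ps_2_0 tokens in the app (by D3DX),
    // then the driver compiles those tokens again into native GPU code.
    void CompileD3D9(IDirect3DDevice9* dev, const char* hlslSrc)
    {
        LPD3DXBUFFER tokens = NULL, errors = NULL;
        if (SUCCEEDED(D3DXCompileShader(hlslSrc, (UINT)strlen(hlslSrc),
                                        NULL, NULL, "main", "ps_2_0", 0,
                                        &tokens, &errors, NULL)))
        {
            IDirect3DPixelShader9* ps = NULL;
            // The driver's own second compile happens inside this call.
            dev->CreatePixelShader((const DWORD*)tokens->GetBufferPointer(), &ps);
        }
    }

    // OpenGL (glslang / ARB_shader_objects) path: the driver's compiler sees
    // the high-level source directly and goes straight to native code.
    void CompileGLSL(const char* glslSrc)
    {
        GLhandleARB sh = glCreateShaderObjectARB(GL_FRAGMENT_SHADER_ARB);
        glShaderSourceARB(sh, 1, &glslSrc, NULL);
        glCompileShaderARB(sh); // one vendor-specific compile, no MS intermediate
    }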

Both approaches add QA costs for game developers. Every time a new driver comes out, that's one more set of things a game developer has to test against. For example, a DX9 game vendor not only has to test that his shaders are correct on the FX5200, 5800, and 5900 (all three have different ways of handling shaders in the driver), and on ATI, but also against various WHQL driver versions - unless they force an upgrade to a specific WHQL driver, which is dangerous, because two different games might expect (and be qualified against) two different Detonator versions. Not a good situation.

(Launch game a: Dialog: you must upgrade to Detonator XX.YY)
(Launch game b: Dialog: This game requires you run Detonator ZZ.WW)
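
Behind such a dialog sits a driver-version check. A hedged sketch of one way to do it: IDirect3D9::GetAdapterIdentifier is the real entry point, but the helper name and the idea of comparing against a required Detonator version are just illustrative.

    #include <windows.h>
    #include <d3d9.h>

    // Returns true if the installed display driver version is at least
    // product.version.subversion.build (the four WORDs packed in DriverVersion).
    bool DriverAtLeast(WORD product, WORD version, WORD sub, WORD build)
    {
        IDirect3D9* d3d = Direct3DCreate9(D3D_SDK_VERSION);
        if (!d3d) return false;

        D3DADAPTER_IDENTIFIER9 id;
        bool got = SUCCEEDED(d3d->GetAdapterIdentifier(D3DADAPTER_DEFAULT, 0, &id));
        d3d->Release();
        if (!got) return false;

        WORD have[4] = { HIWORD(id.DriverVersion.HighPart), LOWORD(id.DriverVersion.HighPart),
                         HIWORD(id.DriverVersion.LowPart),  LOWORD(id.DriverVersion.LowPart) };
        WORD want[4] = { product, version, sub, build };
        for (int i = 0; i < 4; ++i)
            if (have[i] != want[i]) return have[i] > want[i];
        return true;
    }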


See the other HLSL thread for details and comparisons on the compilation approaches.

The issue of compiler variations is a red herring. Microsoft's "compiler" is only half a compiler; the drivers themselves still have to do a lot of heavy lifting, and will show the same variations you are afraid of (again, for example: NVidia's various PS2.0 techniques, which have varied across NV30, 31, and 35 over driver revisions).
 
NeARAZ said:
DemoCoder said:
Well, the two other consoles (PS2 and Cube) don't run DX9. If I were developing a cross-platform title (like EA games), I'd definitely go with OpenGL because of the portability.
They don't run OpenGL either, to any useful extent :rolleyes:

Yeah I know, but clearly it's the logical choice if you're targeting 3 or 4 platforms.
 
I thought OGL was only up to 1.5, and that 1.5 was very new? If so, how can a valid comparison possibly be carried out, especially in relation to OGL 2.0?
 
MDolenc said:
As one of those who voted for DX9, here is why:
Though I personally also like the OpenGL approach to high-level shaders, I think the approach MS took isn't bad either, and I see one potentially very big problem with the OpenGL approach. When you put a HL shader compiler into the driver, you basically say that IHVs have one more (complex) problem for their driver team. How many of them do you think are able to put out a good Glslang compiler now (Cg anyone)? And once they have a compiler out, you'll basically have to deal with lots of different versions of drivers/compilers (since compilers will evolve with drivers). And if you are a pro, I think it's a lot better that you see that compiler X doesn't do a good job on one of your shaders than that one of your users (gamers) comes to you with "why the hell does your game crawl on brand new graphics card Y? You suck!" Or are we all going to adopt John Carmack's philosophy: "...and if a compiler does a really bad job..."? Because you WILL have X different vendors with C(X) different compilers.
This is my current opinion. I haven't seen a working Glslang solution (not even a non-working one), so I think it's still too early to judge which approach is better, because all we have now is one approach that works and one that is potentially better.
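
To make the "see that compiler X doesn't do a good job" part concrete, here is a minimal sketch under ARB_shader_objects: compile, query the compile status, and dump the driver's info log. The function name CompileAndReport is made up, and fetching the extension pointers with wglGetProcAddress is omitted.

    #include <windows.h>
    #include <GL/gl.h>
    #include <GL/glext.h>
    #include <cstdio>

    extern PFNGLCREATESHADEROBJECTARBPROC   glCreateShaderObjectARB;
    extern PFNGLSHADERSOURCEARBPROC         glShaderSourceARB;
    extern PFNGLCOMPILESHADERARBPROC        glCompileShaderARB;
    extern PFNGLGETOBJECTPARAMETERIVARBPROC glGetObjectParameterivARB;
    extern PFNGLGETINFOLOGARBPROC           glGetInfoLogARB;

    bool CompileAndReport(const char* src)
    {
        GLhandleARB sh = glCreateShaderObjectARB(GL_FRAGMENT_SHADER_ARB);
        glShaderSourceARB(sh, 1, &src, NULL);
        glCompileShaderARB(sh);                       // vendor X's compiler runs here

        GLint ok = 0;
        glGetObjectParameterivARB(sh, GL_OBJECT_COMPILE_STATUS_ARB, &ok);

        char log[4096] = "";
        glGetInfoLogARB(sh, sizeof(log), NULL, log);  // warnings/errors from this driver's compiler
        if (!ok || log[0])
            fprintf(stderr, "glslang compiler said:\n%s\n", log);
        return ok != 0;
    }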

Quite a valid consideration, despite the fact that it stems from the assumption that HLSL compilers will still be written by graphics-driver people without much, if any, expertise in compilers (which seems to have been the case with Cg). Otherwise your concern about GL2 'vendor incoherence' is pretty much the same as with pre-2.0 GL versions, where the vendors had to write their own GL cores - and practice shows they actually managed to.

Though relying on a single HLSL compiler vendor may seem like the better idea at first, it also has another side: what happens with your shaders when the next DX HLSL compiler exhibits a bug that screws up your shader x (and probably y and z too) - you do what:
a) you drop the problematic shader altogether?
b) you advise your customers not to upgrade their DX to the 'latest and greatest'?
c) you try to work around the compiler's bug (with the prospect of doing that again for the next DX)? Of course, when talking about shaders, that may not always be possible, as it is not just functionality that is sought but also performance.
The bottom line being: once a problem with MS's compiler emerges, you'll have quite an issue. Whereas with the glslang approach, where the HLSL compiler is vendor-specific, chances are you'll have issues with some vendors but be fine with others. And, importantly, those vendors who have issues with their compilers will have quite an incentive to iron them out (just as they had with pre-2.0 OpenGL implementations), or, if nothing else, you can always advise your customers to stick with a particular version of that vendor's driver. Whereas MS seldom has such an incentive to fix things in their compilers (they consider themselves 'the ones'), and practice shows the bugs in their compilers tend to exhibit considerable longevity.
 
darkblu said:
what happens with your shaders when the next DX HLSL compiler exhibits a bug that screws up your shader x (and probably y and z too) - you do what:
The thing is, it can't screw it up. HLSL compiler is part of D3DX, and that is statically linked to your app. Thus, the compiler is in your exe - it can't be screwed up by updating DX. That was one of the reasons why D3DX is statically linked...
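
A minimal illustration of that split, assuming the usual DX9 SDK library names; this is just one way to express the linkage, not the only one:

    // The HLSL compiler (D3DXCompileShader) lives in the statically linked
    // D3DX library, so its version is frozen into the .exe at build time...
    #pragma comment(lib, "d3dx9.lib")  // static: compiler baked into the exe
    // ...while the core runtime is the separately updatable d3d9.dll.
    #pragma comment(lib, "d3d9.lib")   // import library for d3d9.dll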
 
DemoCoder said:
Yeah I know, but clearly it's the logical choice if you're targeting 3 or 4 platforms.
But for anything "serious" you'll have different (quite radically different) low-level rendering paths in the engine: one for PC, one for PS2, one for GC, and even one more for the X-Box (because on the X-Box you have access to a much lower level than with DX on the PC). You just can't fit one OpenGL onto all these hardware platforms (yeah, you can use it on different OSes on the PC, but that's where it ends).
 
NeARAZ said:
darkblu said:
what happens with your shaders when the next DX HLSL compiler exhibits a bug that screws up your shader x (and probably y and z too) - you do what:
The thing is, it can't screw it up. HLSL compiler is part of D3DX, and that is statically linked to your app. Thus, the compiler is in your exe - it can't be screwed up by updating DX. That was one of the reasons why D3DX is statically linked...

doh, didn't realise the compiler was part of the d3dx, my bad.
 
NeARAZ said:
The thing is, it can't screw it up. HLSL compiler is part of D3DX, and that is statically linked to your app. Thus, the compiler is in your exe - it can't be screwed up by updating DX. That was one of the reasons why D3DX is statically linked...

Wouldn't it be rather easy to get the same thing with OpenGL 2.0 by just having precompiled shaders ship with the game?

And then just a "problems? use default..." option in the game.

Come to think of it, shouldn't this be even more foolproof, because it won't be affected by potential bugs in new drivers (since it compiles directly to the hardware)?

Although this would of course only be valid for hardware that already existed when the game was released.
 
....and there wouldn't be any benefits from updated compilers in the drivers. That sort of defeats much of the reason for hardware-specific compiling anyway.
 
Chalnoth said:
....and there wouldn't be any benefits from updated compilers in the drivers. That sort of defeats much of the reason for hardware-specific compiling anyway.

I was thinking more along the lines of a fallback. That is, the default would be to use the updated compilers in the drivers, and users would have a "failsafe" mode to switch to in case of problems.
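
A sketch of how that failsafe might look. Since GL has no standard binary shader format here, the "precompiled" fallback is shown as a shipped low-level ARB_fragment_program string rather than a true binary; TryCompileGlslang and LoadShippedAsm are hypothetical helpers, and extension-pointer loading is omitted.

    #include <windows.h>
    #include <GL/gl.h>
    #include <GL/glext.h>
    #include <cstring>

    extern PFNGLGENPROGRAMSARBPROC   glGenProgramsARB;
    extern PFNGLBINDPROGRAMARBPROC   glBindProgramARB;
    extern PFNGLPROGRAMSTRINGARBPROC glProgramStringARB;

    bool TryCompileGlslang(const char* glslSrc);   // as in the earlier sketch
    const char* LoadShippedAsm(const char* name);  // reads a prebuilt !!ARBfp1.0 text shipped with the game

    void SetupShader(const char* name, const char* glslSrc, bool userForcedFailsafe)
    {
        if (!userForcedFailsafe && TryCompileGlslang(glslSrc))
            return;                                // default path: driver's compiler

        const char* asmSrc = LoadShippedAsm(name); // failsafe path: fixed low-level program
        GLuint prog;
        glGenProgramsARB(1, &prog);
        glBindProgramARB(GL_FRAGMENT_PROGRAM_ARB, prog);
        glProgramStringARB(GL_FRAGMENT_PROGRAM_ARB, GL_PROGRAM_FORMAT_ASCII_ARB,
                           (GLsizei)strlen(asmSrc), asmSrc);
    }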
 
darkblu said:
NeARAZ said:
darkblu said:
what happens with your shaders when the next DX HLSL compiler exhibits a bug that screws up your shader x (and probably y and z too) - you do what:
The thing is, it can't screw it up. HLSL compiler is part of D3DX, and that is statically linked to your app. Thus, the compiler is in your exe - it can't be screwed up by updating DX. That was one of the reasons why D3DX is statically linked...

doh, didn't realise the compiler was part of the d3dx, my bad.

You could, of course, ship fxc.exe with your app and do a "build" during user installation.

Blech.
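
For what it's worth, the install-time build being joked about would amount to running the SDK's command-line compiler over each shipped shader. The file names and the ps_2_0/vs_2_0/main choices below are invented for the example; /T, /E and /Fo are fxc's target, entry-point and output options.

    #include <cstdlib>

    // Hypothetical installer step: compile the shipped HLSL with fxc.exe.
    void BuildShadersAtInstallTime()
    {
        std::system("fxc.exe /T ps_2_0 /E main /Fo shaders\\water.pso shaders\\water.hlsl");
        std::system("fxc.exe /T vs_2_0 /E main /Fo shaders\\water.vso shaders\\water_vs.hlsl");
    }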
 
NeARAZ said:
The thing is, it can't screw it up. HLSL compiler is part of D3DX, and that is statically linked to your app. Thus, the compiler is in your exe - it can't be screwed up by updating DX. That was one of the reasons why D3DX is statically linked...

Which adds half a meg of bloat to your exe. Not a problem for game developers, but a problem for demo guys like me when my executables go from like 200kb to 800kb simply by using HLSL.
 
Humus said:
NeARAZ said:
The thing is, it can't screw it up. HLSL compiler is part of D3DX, and that is statically linked to your app. Thus, the compiler is in your exe - it can't be screwed up by updating DX. That was one of the reasons why D3DX is statically linked...
Which adds half a meg of bloat to your exe. Not a problem for game developers, but a problem for demo guys like me when my executables go from like 200kb to 800kb simply by using HLSL.
I don't like that either - no chance for a 64kb intro and D3DX at the same time! :(
What I'd really like would be static libs for games and other "production" stuff, plus DLLs included in the DX runtime as well. That would add several megabytes to the runtime, though...
 
NeARAZ said:
One for PC, one for PS2, one for GC, and even one more for X-Box (because with X-Box you have access to much lower level than with DX on PC). You just can't fit one OpenGL on all these hardware plarforms (yeah you can use it on different OSes on PC, but that's where it ends).

You will need a version of the compiler for each target, but that compiler only needs to be written once. Seems to me that Sony should be interested in making an OpenGL 2.0 compiler for the PS3, for example, but who knows.
 
Hmm, your options aren't strictly fair. I assume when you say OpenGL 2.0 you mean the HLSL, as the former is a long way away and the latter only exists as an extension (ratified yet?). Given this, I'd say right now DX9 is by far the more elegant solution, as it's not composed of a mass of extensions and annoying legacy support (from the HW perspective). Of course OGL has the advantage of cross-platform support etc., but in terms of being a clean API we need to wait for full OGL 2.0, and then preferably only in its "pure" form.

I must admit to being a bit confused about some of the criticism being leveled at MS's HLSL. Yes, there are some minor issues with the intermediate-language approach used, but no, these do not result in pathological output on any existing HW. Oh, unless you include a bug in the summer release that generated non-optimal results when compiling a shader down to DX8.0-level HW; I'll refrain from commenting on why I think someone needs HLSL to write a one-line assembler program.

John.
 
Bjorn said:
You will need a version of the compiler for each target, but that compiler only needs to be written once. Seems to me that Sony should be interested in making an OpenGL 2.0 compiler for the PS3, for example, but who knows.
Well, by "codepaths" I didn't mean "shader codepaths" but whole rendering codepaths. That is, you don't use OpenGL on the PS2 or the GameCube. You can't - the architecture is too different. In fact, you don't use DX on the X-Box either - you can use a similar API, but you can also dig into much deeper things (what is essentially the driver's work on a PC). That's the same with all consoles - you have pretty direct access to the hardware.
So, in the end you WILL have different renderer backends for different hardware platforms. You can't have DX on all of them, and you can't have OpenGL on all of them.
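
As a rough sketch of what those per-platform backends tend to look like in an engine (the interface and class names here are invented, not anyone's actual engine):

    // One abstract interface, one concrete backend per target platform.
    class RendererBackend {
    public:
        virtual ~RendererBackend() {}
        virtual void BeginFrame() = 0;
        virtual void DrawMesh(const class Mesh& mesh) = 0;
        virtual void EndFrame() = 0;
    };

    class D3D9Backend : public RendererBackend { /* Direct3D 9 on Windows */ };
    class GLBackend   : public RendererBackend { /* OpenGL on PC (any OS)   */ };
    class PS2Backend  : public RendererBackend { /* GS/VU path on PS2       */ };
    class GCBackend   : public RendererBackend { /* GX path on GameCube     */ };
    class XboxBackend : public RendererBackend { /* D3D-like API plus lower-level access */ };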
 
Can you explain why OpenGL doesn't work on the Cube or the PS2? Doesn't Sony, in fact, have an implementation of OpenGL for the PS2?

Sure, you might need to add extensions to expose native features, and you might not reach the efficiency of some Glide-like to-the-metal API, but I find it hard to believe that OpenGL simply "can't be implemented" on those platforms.
 