Which API is better?

  • DirectX9 is more elegant, easier to program
  • Both about the same
  • I use DirectX mainly because of market size and MS is behind it

  Total voters: 329
DemoCoder said:
Irrelevant. MS's compiler is equivalent to the front end part of a compiler. It generates intermediate code that isn't runnable on HW. Most of the work is in the backend, and therefore NVidia STILL had to implement YET ANOTHER COMPILER to RECOMPILE PS_2_X/A code from MS's compiler.

Right.

And yet, this compiler exists, no? Where's their "single, easy to optimize" GLSL compiler?
 
Joe DeFuria said:
....or that nVidia has demonstrated that their architecture is not particularly suited to the DX9 standard.

... because the DX9 standard didn't make sense.

Joe DeFuria said:
The FX series also had major performance problems with standard GL ARB shader code. Compiling isn't an issue here...it's the specification.

Hint: It's also an assembly language.

The R9700 also has major problems with certain GL_ARB_fragment_program code.

And we'll also see how long it takes for the competing IHVs to release GLSLang compilers....

Other than the fact that the DX9 model (entire development model) has allowed products to actually ship with HLSL compiler support, whereas it's nowhere to be seen in GL, you mean.

Hint: HLSL and DX9 have been available and supported for quite some time now....GLSL...not.


Well, it turns out 3DLabs already provides GLSL support and has done so for a while. There goes the idea that only "the big two" would have the enormous resources it would take to implement it.
 
Humus said:
3DLabs already provides GLSL support and has done so for a while. There goes the idea that only "the big two" would have the enormous resources it would take to implement it.

Again....

THINK CONSUMER SPACE.
 
Joe DeFuria said:
Humus said:
3DLabs already provides GLSL support and has done so for a while. There goes the idea that only "the big two" would have the enormous resources it would take to implement it.

Again....

THINK CONSUMER SPACE.
Hmm - I would have expected that the professional space that 3dlabs has traditionally been catering to would be even less tolerant of errors/unexpected behavior than the consumer space.
 
Joe,
Are you just dense? The GLSL spec wasn't approved until recently; that, more than anything, is why we don't see any GLSL compilers shipping, not because "they are difficult to write". How could they have been written at the same time as DX9 when OGL2.0 hasn't even been formally adopted yet? Oh yes, I see, more evidence that OGL2.0 is inferior, because they took longer to examine the HLSL problem and come up with a superior solution? If NVidia hadn't pushed Cg, it's not even likely DX9 would have HLSL built in (it doesn't really; it was a last-minute hack to D3DX).


But more than that: all along you have been claiming that DX9 drivers are easier to write because MS provides the compiler. The fact is, you are wrong, since IHVs must still write compilers.

Now you're trying to say "well, it's just NVidia's problem." Well, it's not. The more flexible the HW, the more of a problem it will be. The problem is going to get worse, not better.

Can you even logically explain your anti-OpenGL2.0 argument? I don't quite get it. DX9 drivers are still difficult to write, and still require optimizers, which become more complicated as your pipeline becomes more flexible. Well, this is the same with OpenGL2.0; I don't see the inherent advantage of DX9.

Then you claim superior stability, but since OpenGL ICDs undergo way more scrutiny because of their association with the DCC market, that's also a fallacy. ATI and NVidia have to make sure their drivers don't fubar up 3dStudio, Maya, Autodesk, Solidworks, etc., not to mention medical imaging, government defense, and so on.

Then you claim superior user experience, but OGL game engines dominated the market long before DirectX and had superior performance and features (until DX9). Briefly, DX9 has a slight lead until OGL2.0 comes out. And just look at DirectX-Next. You think DirectX-Next drivers are gonna be simple to write?

Then you claim consistency, yet DirectX forces users to constantly upgrade their runtime (why does every DX game run DXSETUP and often force a reboot of my computer?), and a DX driver for a well-known flexible card (NV3x) took extraordinary effort to build and hence delivered a considerable speed boost when it arrived (I dare say that until the compiler was implemented, the card was unusable for DX9 games).

Your answer is "well, this is only NVidia". My answer is, time will tell, but if you simply look at CPUs and the direction that GPUs are moving with flexibility towards general purpose pipelines, it is far more likely that future DX drivers will be much harder to author, not easier.

In any case, DirectX9 doesn't free you from having to implement compilation in the driver, so the entire argument about OGL2.0's requirement that drivers do compilation is essentially moot.
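
To make the point concrete, here is a rough sketch of the split I'm talking about. Every name in it is made up for illustration; it is not anybody's real driver, just the shape of the two compilation paths as described above:

```cpp
#include <cstdint>
#include <string>
#include <vector>

// Invented, simplified types for illustration only.
struct IRInstruction { std::string opcode; std::vector<std::string> args; };
using IR = std::vector<IRInstruction>;
struct NativeProgram { std::vector<uint32_t> microcode; };

// DX9 path: Microsoft's HLSL compiler is only the *front end*; it emits
// ps_2_x-style "assembly" that the hardware does not run directly.
IR microsoft_hlsl_frontend(const std::string& /*hlsl*/) { return {}; } // stub

// GLSL path: the driver receives the source and runs its own front end.
IR ihv_glsl_frontend(const std::string& /*glsl*/) { return {}; }       // stub

// Either way, the hard part -- instruction selection, scheduling, register
// allocation, pairing/co-issue -- lives in the IHV's backend.
NativeProgram ihv_backend(const IR& /*ir*/) { return {}; }             // stub

NativeProgram compile_dx9(const std::string& hlsl) {
    IR ir = microsoft_hlsl_frontend(hlsl); // "compiled" once by MS...
    return ihv_backend(ir);                // ...then recompiled by the driver anyway
}

NativeProgram compile_glsl(const std::string& glsl) {
    return ihv_backend(ihv_glsl_frontend(glsl)); // one compiler, full information
}
```

Either way the IHV writes the backend; the only question is how much information it gets to work with.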
 
arjan de lumens said:
Hmm - I would have expected that the professional space that 3dlabs has traditionally been catering to would be even less tolerant of errors/unexpected behavior than the consumer space.


Yep. I recall that Boeing chose workstations for their employees when designing the 777 based on the precision of the OGL drivers. Errors were not tolerated. This is in general true of the CAD market. No one wants a FireGL or Quadro card in their workstation if that requires them to constantly recheck coordinates to make sure points or lines they placed are not in the wrong place. 3dLabs makes a big deal out of this, since they have even fewer rendering defects than ATI or NVidia.
 
arjan de lumens said:
Hmm - I would have expected that the professional space that 3dlabs has traditionally been catering to would be even less tolerant of errors/unexpected behavior than the consumer space.

Yes..which is OK since it's a relatively SMALL group of users (and applications) vs. the consumer space.
 
DemoCoder said:
Joe,
Are you just dense?

You know, DC, it's just not worth conversing with someone who continues to be an ass. I'll just answer your first statement, and if you can refrain from the name-calling, perhaps we can continue.

The GLSL spec wasn't approved until recently; that, more than anything, is why we don't see any GLSL compilers shipping.

If the entire GL model is so superior in every way, then why has it taken so long for the GLSL spec to be approved relative to the DX model?
 
DemoCoder said:
Yep. I recall that Boeing chose workstations for their employees when designing the 777 based on the precision of the OGL drivers. Errors were not tolerated. This is in general true of the CAD market. No one wants a FireGL or Quadro card in their workstation if that requires them to constantly recheck coordinates to make sure points or lines they placed are not in the wrong place. 3dLabs makes a big deal out of this, since they have even fewer rendering defects than ATI or NVidia.

So now we're confusing accuracy with cross-platform consistency?

Do we understand the different needs of "professional" and "consumer" hardware yet?
 
Joe DeFuria said:
If the entire GL model is so superior in every way, then why has it taken so long for the GLSL spec to be approved relative to the DX model?
That's part of the reason why it is superior: it was more thoroughly thought out.
 
Joe DeFuria said:
So now we're confusing accuracy with cross-platform consistency?

Do we understand the different needs of "professional" and "consumer" hardware yet?
Of course the needs are different. With a professional card, the tolerance for drivers that don't do as they should is much, much smaller. Why do you think OpenGL is so often used in the professional space?

Side note:
The reason 3DLabs has a GLSlang compiler is because they wrote GLSlang.
 
I just saw the review of the new 3DMark03. It seems that nVidia searched for sequences of instructions and replaced them with optimized versions. Well, we knew that from Shadermark and UT as well.

And now that the sequence of those instructions has changed, their optimizations are gone. No surprise there. If you write a driver as something that blindly maps p-code instructions to hardware, you have no other choice. And your optimizations are seen as cheating when they only target sequences of instructions found in specific applications or benchmarks.

But everyone uses their own instruction sequences. So, to optimize something this way, you need to take the 'assembly' generated for that specific application and replace chunks of that code with better ones. Essentially, you ask the hardware vendors to optimize each and every application individually. And that is considered cheating. Help!
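
To see how fragile that kind of "optimization" is, here is a toy sketch (everything in it is invented, it has nothing to do with any real driver): a detector keyed to the exact instruction sequence one benchmark happened to emit, swapping in a hand-tuned replacement. Reorder two independent instructions in the application and the match silently fails:

```cpp
#include <algorithm>
#include <cstddef>
#include <string>
#include <vector>

// Toy model of a driver that "optimizes" by recognizing the exact
// instruction text a known application happens to emit.
using Shader = std::vector<std::string>;

const Shader knownBenchmarkSequence = {
    "dp3 r1, r0, r0",   // the exact sequence the benchmark's shader used to contain
    "rsq r1, r1",
    "mul r0, r0, r1",
};

const Shader handTunedReplacement = {
    "nrm r0, r0",       // hypothetical faster native idiom
};

Shader applyAppSpecificHack(const Shader& in) {
    auto it = std::search(in.begin(), in.end(),
                          knownBenchmarkSequence.begin(), knownBenchmarkSequence.end());
    if (it == in.end())
        return in;      // sequence reordered or a register renamed? hack silently stops working
    Shader out(in.begin(), it);
    out.insert(out.end(), handTunedReplacement.begin(), handTunedReplacement.end());
    out.insert(out.end(),
               it + static_cast<std::ptrdiff_t>(knownBenchmarkSequence.size()), in.end());
    return out;
}
// A new build of the application that merely shuffles independent instructions
// defeats the match -- which appears to be exactly what the 3DMark03 update did.
```
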

When comparing Java bytecode with DX9 assembly, the largest difference is in the amount of information that is in the intermediate format. The Java bytecode contains all information you would want and need to generate optimal code for the target platform.

Even when you use a real compiler in your driver to generate optimal target code, if the fixed intermediate format contains no other information, your best bet for significantly improving performance is to change the hardware so it runs that format directly. A better driver can help a bit, but don't expect large gains.
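
As a rough illustration of that difference, here are two invented intermediate formats; they are only meant to show what kind of information gets kept or thrown away, not how any real format looks:

```cpp
#include <string>
#include <vector>

// A "rich" intermediate format (in the spirit of Java bytecode, or of
// shipping the shader source itself): it still carries the structure of
// what the programmer wrote -- typed values and whole operations.
struct RichValue { std::string type; };      // e.g. "float3"
struct RichOp {
    std::string operation;                   // e.g. "normalize", "reflect", a loop, ...
    std::vector<int> operandValues;          // references to typed values, not registers
};

// A flat, fixed format (in the spirit of ps_2_0 assembly): one generic
// machine's instruction selection and register assignment have already been
// baked in by a front end that knows nothing about the real target.
struct FlatOp {
    std::string opcode;                      // "dp3", "rsq", "mul", ...
    int dstReg, srcReg0, srcReg1;            // decisions the backend must undo, then redo
};

// A backend fed the rich format can map "normalize" straight onto whatever the
// hardware does best; a backend fed the flat format first has to recognize the
// dp3/rsq/mul idiom before it can do the same.
```
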

Looking at the history of Wintel computing, I would assume that Microsoft thinks it is in their best interest to force hardware manufacturers to create hardware that runs their 'assembly code' directly. The same has happened with CPUs, after all. That's how you do it.

It is in the best interest of the consumer that this does not happen until there is no more gain to be had from creating hardware with new and better ways to produce gorgeous graphics. And graphics hardware still has a long way to go before we get there.
 
Chalnoth said:
Side note:
The reason 3DLabs has a GLSlang compiler is because they wrote GLSlang.

Yes, it's true that GLSL was originally a 3DLabs proposal, though the language has changed quite a lot from the original proposal. GLSL being their love child of course makes their implementation effort a little more enthusiastic, plus they had a bit of a head start, so I'm not surprised that they are first out.
The point, however, which completely flew over Joe's head, is that a small vendor like 3DLabs didn't have any problem getting a GLSL compiler out the door, which should remove any concerns that XGI or other small vendors would have major problems implementing GLSL.
 
Humus said:
No, the problem is the intermediate language. You can't optimize for both architectures with a common profile. Yes, the GFFX hardware is slow too, but that's just another problem. Both vendors' hardware could be faster had they had the opportunity to compile the code themselves, though the difference would probably be larger on the GFFX side.
I would be prepared to bet a large sum of money that you are completely wrong here (well, within a few percentage points).

John.
 
Humus said:
Joe DeFuria said:
....or that nVidia has demonstrated that their architecture is not particularly suited to the DX9 standard.

... because the DX9 standard didn't make sense.
That is only your opinion. NV didn't like it for reasons that should now be obvious to everyone here.

Joe DeFuria said:
The FX series also had major performance problems with standard GL ARB shader code. Compiling isn't an issue here...it's the specification.

Hint: It's also an assembly language.

The R9700 also has major problems with certain GL_ARB_fragment_program code.
Such as? Last I heard, DoomIII didn't have any issues with ARB frag on ATi HW. Seriously, out of interest, what sort of problems?

And we'll also see how long it takes for the competing IHVs to release GLSLang compilers....

Other than the fact that the DX9 model (entire development model) has allowed products to actually ship with HLSL compiler support, whereas it's nowhere to be seen in GL, you mean.

Hint: HLSL and DX9 have been available and supported for quite some time now....GLSL...not.


Well, it turns out 3DLabs already provides GLSL support and has done so for a while. There goes the idea that only "the big two" would have the enormous resources it would take to implement it.

I'd have a closer look at what it actually manages to compile to HW before singing its praises! (No offense intended to the 3Dlabs guys).

John.
 
JohnH said:
Humus said:
... because the DX9 standard didn't make sense.
That is only your opinion. NV didn't like it for reasons that should now be obvious to everyone here.
DX9 intermediate representation is, from a technical POV, flawed. This has nothing to do with either ATI or NVidia, or PVR for that matter.

The R9700 also has major problems with certain GL_ARB_fragment_program code.
Such as? Last I heard, DoomIII didn't have any issues with ARB frag on ATi HW. Seriously, out of interest, what sort of problems?
Swizzles and gradient instructions come to my mind. Why Doom3?


I'd have a closer look at what it actually manages to compile to HW before singing its praises! (No offense intended to the 3Dlabs guys).

John.
3Dlabs' current implementation is somewhat flawed anyway because of hardware limitations. But it will certainly accept more complex shaders than it does in DX...
 
Xmas said:
JohnH said:
Humus said:
... because the DX9 standard didn't make sense.
That is only your opinion. NV didn't like it for reasons that should now be obvious to everyone here.
DX9 intermediate representation is, from a technical POV, flawed. This has nothing to do with either ATI or NVidia, or PVR for that matter.
It's only flawed if you try to write a shader that exceeds a specific profile; if that's the case, use a higher-specification profile. HW doesn't support one? Well, that's what the profiles are there for.
What is flawed, in the majority of this thread, is the assumption that it's a good idea to be able to try to compile an arbitrary shader for an arbitrary piece of HW without any way of telling in advance whether it's going to succeed, or, if it does succeed, whether it will run at a reasonable speed. Hey, there was even the beginning of a discussion of how to fix this, but that seemed to fail to go forward for some reason...
ATi and NV are only mentioned in my post because the issues that the latter has are obviously being used as some odd kind of indication of "what's wrong with DX9". In my opinion DX9 isn't perfect, but it also isn't actually that bad either; yes, it could be improved, but then again it seems obvious to me that so could GLSL.
The R9700 also has major problems with certain GL_ARB_fragment_program code.
Such as? Last I heard, DoomIII didn't have any issues with ARB frag on ATi HW. Seriously, out of interest, what sort of problems?
Swizzles and gradient instructions come to my mind. Why Doom3?
OK, you mean it doesn't support them? I only referred to Doom III because one of the views being pushed around in this thread is that the use of an intermediate assembler format inherently means that you won't get good performance; the DIII results seem to indicate otherwise on ATi HW when using the intermediate-assembler-based ARB frag extension.

I'd have a closer look at what it actually manages to compile to HW before singing its praises! (No offense intended to the 3Dlabs guys).

John.
3Dlabs' current implementation is somewhat flawed anyway because of hardware limitations. But it will certainly accept more complex shaders than it does in DX...

Yes, but it's also very easy to write a shader that won't work; this will happen a lot on this HW once PS/VS 3.0 parts materialise.

Anyway, after too many joint-jolting reply sessions, I'm really exiting this thread this time (honest).

John.
 
JohnH said:
Xmas said:
DX9 intermediate representation is, from a technical POV, flawed. This has nothing to do with either ATI or NVidia, or PVR for that matter.
It's only flawed if you try to write a shader that exceeds a specific profile; if that's the case, use a higher-specification profile. HW doesn't support one? Well, that's what the profiles are there for.
No, it is flawed because it drops vital information on what that shader is really supposed to do. And the profiles are flawed because they a) are based on the flawed IR, b) do not accurately represent HW limitations and capabilities and are therefore much too limited, c) there are no 3.0 profiles yet, and d) the best profile is not automatically chosen at runtime.

What is flawed, in the majority of this thread, is the assumption that it's a good idea to be able to try to compile an arbitrary shader for an arbitrary piece of HW without any way of telling in advance whether it's going to succeed, or, if it does succeed, whether it will run at a reasonable speed. Hey, there was even the beginning of a discussion of how to fix this, but that seemed to fail to go forward for some reason...
I think validation tools for runtime compilation are the way to go. It's a small effort for the IHVs, but it doesn't have the flaws of profiles: no flawed IR, accurate representation of the hardware, automatically runs the best way possible.
I realize there is a problem with an IHV presenting hardware that is more limited than the current "least-capable" shader hardware (meaning GLslang-capable hardware). This will be a problem for now, while there is no "legacy shader hardware" to target. But maybe in one or two years developers will have decided on, e.g., the Volari as their lowest target, and any new hardware will be more capable than that.

ATi and NV are only mentioned in my post because the issues that the latter has are obviously being used as some odd kind of indication of "what's wrong with DX9". In my opinion DX9 isn't perfect, but it also isn't actually that bad either; yes, it could be improved, but then again it seems obvious to me that so could GLSL.
Certainly GLSL isn't perfect. Two things I'd like to be included are half floats and an interface for passing "object code" to the driver. But I think the model behind it is superior to that behind DX9.
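
For what it's worth, the basic compile-at-runtime-and-ask mechanism already looks roughly like this with the entry points proposed for GL 2.0 (shown with the GL 2.0 names; the ARB_shader_objects equivalents work the same way). This is only a minimal sketch: a loader such as GLEW is assumed for the function pointers, error handling is trimmed, and a real validation tool would have to go further and also tell you whether the compiled shader stays in hardware and runs at a reasonable speed:

```cpp
#include <GL/glew.h>   // assumed loader; sets up the GL 2.0 entry points
#include <cstdio>

// Returns the shader object, or 0 if this driver/hardware rejects the shader.
GLuint tryCompileFragmentShader(const char* source)
{
    GLuint shader = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(shader, 1, &source, nullptr);
    glCompileShader(shader);                       // the driver's own backend runs here

    GLint ok = GL_FALSE;
    glGetShaderiv(shader, GL_COMPILE_STATUS, &ok); // ask the driver itself, not a fixed profile
    if (!ok) {
        char log[1024];
        glGetShaderInfoLog(shader, sizeof(log), nullptr, log);
        std::fprintf(stderr, "shader rejected: %s\n", log);  // fall back to a simpler shader
        glDeleteShader(shader);
        return 0;
    }
    return shader;
}
```
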
 
Even if we all agreed that one of them is better than the other, would that mean that more applications would be written to support it? Who makes the decision to use DX9 or OGL? The developer (who wants the best tool for the job) or management (which chooses the one with the largest market share)?

And what would be the best way to assure better graphics in the future anyway: the strong hand of Microsoft, the committee that develops OGL, or the innovations of the hardware producers? Anyway, does it matter? I think it does. But how many of us are going to use the one we think best?

Even if we all did, what impact would it have?

I think we would see a strong reaction from all parties involved, as it is a very hot item at the moment, especially if the blame for the bad performance of the NV3x no longer rested squarely on the shoulders of nVidia and it were recognized that DX9 is to blame as well.

I think it is superb that ATi makes such great and fast chips at the moment. And running DX9 almost directly in hardware assured their comeback and put them on top. It was just what they (and all of us) needed: competition. The developments from the sole market leader were getting a bit dull.

Competition is good, as are innovative ways to create better graphics. Make sure the others try to improve upon the market leader.

Yes, that means there will be a lot of different hardware. You cannot be sure your game will run equally well on all of it. That's more work for the developer who creates the game engine. But it does mean that you can make an even better and much more beautiful game next year. The artwork and gameplay are much more important anyway. And we all want hardware that does not limit creativity, even if that means you don't know up front how it will render your graphics.

Agreed?
 