Which API is better?

Poll: Which API is better?

  • DirectX9 is more elegant, easier to program
  • Both about the same
  • I use DirectX mainly because of market size and MS is behind it

  Total voters: 329
I was being sarcastic. I think there is no such thing as a future-proof API at the moment ... for the moment, a new API for each new generation of hardware plainly makes sense, IMO.

OpenGL worked a lot better when hardware was chasing it, now OpenGL is chasing the hardware.
 
MfA said:
I was being sarcastic. I think there is no such thing as a future-proof API at the moment ... for the moment, a new API for each new generation of hardware plainly makes sense, IMO.

OpenGL worked a lot better when hardware was chasing it, now OpenGL is chasing the hardware.
I disagree. First of all, core OpenGL has been behind for a number of years now, and yet it's still relatively easy to use due to the extensions interface. And with OpenGL 2.0, GL is taking a huge step in the right direction for graphics hardware: hardware-specific compilation of an HLSL. This alone takes it a quantum leap beyond DirectX.

And I also feel that graphics hardware interfaces aren't going to be changing much at all in the near future. Since we're right at the threshold of generalized computing, there just isn't much left that needs to be added to the interfaces. So, with the continuation of ARB, EXT, and vendor-specific extensions, OpenGL will remain modern. I just hope that Microsoft takes a step in the right direction with their next iteration of DirectX.
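To make "hardware-specific compilation of an HLSL" concrete, here is a minimal sketch of the GL path, assuming the ARB_shader_objects entry points that shipped ahead of core OpenGL 2.0 (on most platforms these must first be fetched through the extension mechanism; the function name is invented for the example):

```cpp
// Minimal sketch: in the GLSlang model the driver receives the raw
// high-level source and owns the entire compile down to native code.
#include <GL/gl.h>
#include <GL/glext.h>

GLhandleARB compileFragmentShader(const char* source)
{
    GLhandleARB shader = glCreateShaderObjectARB(GL_FRAGMENT_SHADER_ARB);
    glShaderSourceARB(shader, 1, &source, NULL); // driver sees the source itself
    glCompileShaderARB(shader);                  // source -> native code in one step

    GLint ok = 0;
    glGetObjectParameterivARB(shader, GL_OBJECT_COMPILE_STATUS_ARB, &ok);
    return ok ? shader : 0;
}
```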
 
It used to be that individual APIs were chasing HW fixed-function features that were sprouting every few months. Now with most HW moving towards programmability, and with the introduction of a standard HLSL in OGL, I expect that the vast majority of any future additions will simply be in the "library" area of built-in shader functions. e.g. hardware Perlin noise(), etc.

Eventually, it will be more like the situation with CPUs: C and STDIO/POSIX standardized, but new CPUs add capability which is exposed through library functions.
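A hedged everyday illustration of that CPU analogy (the function and names are invented for the example):

```cpp
// The C interface is frozen, but the library is free to exploit new
// hardware underneath it.
#include <cstring>

void blit(char* dst, const char* src, std::size_t n)
{
    // memcpy's signature hasn't changed since C89, yet a modern libc may
    // dispatch to a SIMD-optimized implementation at runtime. Built-in
    // shader functions like noise() could evolve the same way: fixed
    // interface, hardware-specific implementation chosen by the driver.
    std::memcpy(dst, src, n);
}
```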
 
It might be a standard HLSL, but it isn't exactly what I would call ideal.

Still SISO ... no surface subdivision, and even if it were added, it would be in another specialized shader stage. I think unification makes sense hardware-wise, and the potential that gives for far more flexible programming shouldn't be wasted either.

I agree about the library functions; I think things like perspective-correct parameter interpolation for a given barycentric coordinate and texture sampling should be among the library functions, though.
 
MfA said:
It might be a standard HLSL, but it isn't exactly what I would call ideal.
I'd call it an ideal way to approach an HLSL. Obviously it's not complete. And I don't think adding surface subdivision would be a significant change to the HLSL, particularly if the hardware is unified.
 
About the driver/compiler thing: isn't a DX9 driver essentially a simple compiler as well? And while a GLSlang compiler would require more work, it would also allow you to optimize things much better, as you have complete information about the high-level source code. The threshold is higher, but so are the gains.

I don't think a badly-written GLSlang compiler would perform much worse than a reasonably well written DX9 driver. Or am I missing something?
 
A DX9 driver does have to compile shaders, yes.

How easily a DX9 driver can optimize shaders as it compiles them depends on two main factors:
1. How well-optimized the assembly already is.
2. How well the assembly language maps to the internal hardware.

A GLSlang compiler doesn't have these limitations, since there is no intermediate assembly, and has the further bonus of having developer code remain at a higher level, which gives more information to the compiler, which in turn makes optimization easier.

Basically, I'd say that writing a basic DX9 shader compiler is easier than writing a basic GLSlang compiler.

But optimizing a GLSlang compiler should be easier than optimizing a DX9 shader compiler.

Additionally, since there are bound to be example implementations of GLSlang, the challenge in getting that basic GLSlang compiler out is greatly reduced.
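For contrast, a minimal sketch of the DX9 two-stage path described above, assuming the stock D3DX compiler, a `main` entry point, and the ps_2_0 target (error handling trimmed):

```cpp
#include <d3dx9.h>

IDirect3DPixelShader9* compilePS(IDirect3DDevice9* dev,
                                 const char* hlsl, UINT len)
{
    ID3DXBuffer* code = NULL;
    ID3DXBuffer* errs = NULL;
    // Stage 1 (vendor-neutral): HLSL -> ps_2_0 token stream.
    if (FAILED(D3DXCompileShader(hlsl, len, NULL, NULL, "main", "ps_2_0",
                                 0, &code, &errs, NULL)))
    {
        if (errs) errs->Release();
        return NULL;
    }
    // Stage 2 (IHV driver): token stream -> native microcode. The source's
    // types and named variables are already gone by this point.
    IDirect3DPixelShader9* ps = NULL;
    dev->CreatePixelShader((const DWORD*)code->GetBufferPointer(), &ps);
    code->Release();
    return ps;
}
```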
 
Chalnoth said:
A DX9 driver does have to compile shaders, yes.

How easily a DX9 driver can optimize shaders as it compiles them depends on two main factors:
1. How well-optimized the assembly already is.
2. How well the assembly language maps to the internal hardware.

A GLSlang compiler doesn't have these limitations, since there is no intermediate assembly, and has the further bonus of having developer code remain at a higher level, which gives more information to the compiler, which in turn makes optimization easier.

Basically, I'd say that writing a basic DX9 shader compiler is easier than writing a basic GLSlang compiler.

But optimizing a GLSlang compiler should be easier than optimizing a DX9 shader compiler.

Additionally, since there are bound to be example implementations of GLSlang, the challenge in getting that basic GLSlang compiler out is greatly reduced.

Hmm, I could write reams on this; needless to say, I don't entirely agree with various arguments being put forward for intermediate vs direct compilation... but damn, I have to go and do stuff. Maybe later....

John.
 
How well does the disassembler from the ShaderX book work?

In some ways Java bytecode is low level too, but in other ways it is no more low level than Java code itself ...
 
Rambling mode on.

Breaking this into two bits...

Why is the OGL2.0 approach to HLSL parsing not correct at this time?
1) IHVs can individually introduce their own unique bugs
2) IHVs can individually tweak the syntax ("illegally") for their own devious reasons
3) For the above reasons, any shader that's successfully compiled on your own favorite dev system is not guaranteed to compile on all HW in the field.
4) Parsing of the HLSL will add time to runtime compilation

A separate question: is there any concept of a profile that allows you to compile for a known target? As far as I know it's up to the app to query capabilities. This is important given that no current target HW supports much of what's required for true OGL2.0 support.

Why is M$'s approach of compiling to an intermediate format better?
1) Parser is owned by one "person"; bugs and workarounds are common to all HW/drivers
2) Syntax is fixed and unchangeable by IHVs
3) Shaders are compiled to defined targets; there is a high likelihood that they will run on all HW in the field (not sure that WHQL has managed to achieve this ideal yet!)
4) The minimum amount of work is done at runtime (outside of offline HW targets).

What's wrong with the M$ approach at the moment?
1) Target levels mean that "optimisations" may be applied that are only there to get around restrictions in the target (e.g. dependent texture reads can be reordered to work around the sequence limit imposed by the model).
2) There are some issues currently with unnecessary expansion of macros

All the discussion about intermediate formats removing optimisation opportunities is invalidated if the correct intermediate representation is used. This generally means that, as long as the basic resources assumed by the model exceed or are neutral to the target system, and the intermediate representation accurately reflects what was originally supplied, you can always produce an optimal result at the backend.
This means things like preserving conditional statements in their full form, not reordering based on assumed latencies, and not using temp registers but instead named and annotated variables/arrays, etc.

DX9 doesn't yet do all this; however, there are no targets available that can truly support, say, arbitrary temp counts. If you look at the 3.0 asm format you'll notice that it does a bit more of the above, while retaining the defined-target approach to life.

Basically, I think the ARB got this aspect of the OGL2.0 design wrong; other than that it's fine(ish).

Rambling mode off.

John
(phew, I think that's the longest post I've ever made!)
 
Sorry if the following seems a bit negative, that is not the intention.

JohnH said:
Why is the OGL2.0 approach to HLSL parsing not correct at this time?
1) IHVs can individually introduce their own unique bugs

As with a DX9 implementation, this depends as much on the compiler as on the hardware. For example, a shader that uses multiple render targets will break on NVx hardware.

2) IHVs can individually tweak the syntax ("illegally") for their own devious reasons

You mean, like they do right now with DX9?

3) For the above reasons, any shader that's successfully compiled on your own favorite dev system is not guaranteed to compile on all HW in the field.

Like, when your shader that uses multiple render targets works just fine on your R3x0?

4) Parsing of the HLSL will add time to runtime compilation

Isn't that why all DX9 shaders have to be initialized before execution, so they can be compiled by the driver up front?

A separate question: is there any concept of a profile that allows you to compile for a known target? As far as I know it's up to the app to query capabilities. This is important given that no current target HW supports much of what's required for true OGL2.0 support.

Again, what is different with DX9?

Why is M$'s approach of compiling to an intermediate format better?
1) Parser is owned by one "person"; bugs and workarounds are common to all HW/drivers

... depending on the extensions you use and the hardware you use to run it.

2) Syntax is fixed and unchangeable by IHVs

You mean, Microsoft has to make the extension instead of the IHV? (And they do so anyway, so what's the point? It being endorsed by M$?)

3) Shaders are compiled to defined targets; there is a high likelihood that they will run on all HW in the field (not sure that WHQL has managed to achieve this ideal yet!)

... as long as you don't use any function that is not supported by all IHVs in the exact same way.

4) The minimum amount of work is done at runtime (outside of offline HW targets).

JIT compilers? Yes, both.

What's wrong with the M$ approach at the moment?
1) Target levels mean that "optimisations" may be applied that are only there to get around restrictions in the target (e.g. dependent texture reads can be reordered to work around the sequence limit imposed by the model).
2) There are some issues currently with unnecessary expansion of macros

Yes, and OGL2.0 is not as neat and straightforward as they would want either.

All the discussion about intermediate formats removing optimisation opportunities is invalidated if the correct intermediate representation is used. This generally means that, as long as the basic resources assumed by the model exceed or are neutral to the target system, and the intermediate representation accurately reflects what was originally supplied, you can always produce an optimal result at the backend.
This means things like preserving conditional statements in their full form, not reordering based on assumed latencies, and not using temp registers but instead named and annotated variables/arrays, etc.

When it looks and works as intended by the developer, all is well. Any IHV is allowed any means to improve the performance as long as that is not compromised.

DX9 doesn't yet do all this; however, there are no targets available that can truly support, say, arbitrary temp counts. If you look at the 3.0 asm format you'll notice that it does a bit more of the above, while retaining the defined-target approach to life.

Basically, I think the ARB got this aspect of the OGL2.0 design wrong; other than that it's fine(ish).

Sorry for the criticism, it's not personal; your post was just a good way to give some feedback.

;-)
 
JohnH said:
Why is the OGL2.0 approach to HLSL parsing not correct at this time?
1) IHVs can individually introduce their own unique bugs
2) IHVs can individually tweak the syntax ("illegally") for their own devious reasons
3) For the above reasons, any shader that's successfully compiled on your own favorite dev system is not guaranteed to compile on all HW in the field.
4) Parsing of the HLSL will add time to runtime compilation

1, 3 and 4 have been discussed to death in several lengthy threads already. I and many others don't believe 1 is much of a problem, 4 is easily solved, and 3 isn't anything new (buglessness has never been a guarantee). 2 is wrong, unless you know of some inconsistency in the spec?

JohnH said:
Why is M$'s approach of compiling to an intermediate format better?
1) Parser is owned by one "person"; bugs and workarounds are common to all HW/drivers
2) Syntax is fixed and unchangeable by IHVs
3) Shaders are compiled to defined targets; there is a high likelihood that they will run on all HW in the field (not sure that WHQL has managed to achieve this ideal yet!)
4) The minimum amount of work is done at runtime (outside of offline HW targets).

1 also means that bugs can completely halt your development or force you to ditch certain visual attributes for many months until MS updates their runtime. 2 is a bad thing; extensibility is desirable. 3 sucks: targets other than the hardware itself will be suboptimal, and it neither guarantees nor makes it significantly more likely (if at all) that shaders will run correctly on all hardware.

JohnH said:
What's wrong with the M$ approach at the moment?
1) Target levels mean that "optimisations" may be applied that are only there to get around restrictions in the target (e.g. dependent texture reads can be reordered to work around the sequence limit imposed by the model).
2) There are some issues currently with unnecessary expansion of macros

All the discussion about intermediate formats removing optimisation opportunities is invalidated if the correct intermediate representation is used. This generally means that, as long as the basic resources assumed by the model exceed or are neutral to the target system, and the intermediate representation accurately reflects what was originally supplied, you can always produce an optimal result at the backend.
This means things like preserving conditional statements in their full form, not reordering based on assumed latencies, and not using temp registers but instead named and annotated variables/arrays, etc.

DX9 doesn't yet do all this; however, there are no targets available that can truly support, say, arbitrary temp counts. If you look at the 3.0 asm format you'll notice that it does a bit more of the above, while retaining the defined-target approach to life.

Basically, I think the ARB got this aspect of the OGL2.0 design wrong; other than that it's fine(ish).

Rambling mode off.

John
(phew, I think that's the longest post I've ever made!)

The only way to get a good enough intermediate format would be if it's completely free from restrictions. Basically, to get around all the limitations you will end up with an MS compiler that does nothing but parsing.
 
Xmas said:
MfA said:
In some ways Java bytecode is low level too, but in other ways it is no more low level than Java code itself ...
Could you elaborate?

Java bytecode is trivial to decompile; it basically retains all the information from the high-level code that you would want for optimization. This is mostly because it uses a stack machine without the complexities of register allocation and the like; only the most basic optimizations are performed at this level.
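A hedged illustration of the point, with JVM-style stack code shown in comments (paraphrased, not actual javap output):

```cpp
// For x = a * b + c, stack bytecode is a direct post-order walk of the
// expression tree:
//     load a; load b; mul; load c; add; store x
// No registers have been allocated and nothing has been reordered, so the
// tree (and hence the source) is easy to recover.
int eval(int a, int b, int c)
{
    return a * b + c;  // the source form the bytecode maps straight back to
}
```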
 
JohnH said:
1) IHVs can individually introduce their own unique bugs
2) IHVs can individually tweak the syntax ("illegally") for their own devious reasons
3) For the above reasons, any shader that's successfully compiled on your own favorite dev system is not guaranteed to compile on all HW in the field.
4) Parsing of the HLSL will add time to runtime compilation

Parsing is trivial. It's a done deal. It took me approximately 3 hours to take a C++ grammar and modify it to parse Cg and GLSlang. ARB already provides a YACC grammar for GLSlang that is likely to be used by implementors. This particular issue is not worrisome.

As for 3rd-party syntax extensions, these are far more likely with a public intermediate representation, since anyone can write a compiler front-end for any language they want and generate the intermediate representation. NVidia could provide an HLSL that is "ML-like" (functional) and ATI could provide one that is XML-ish.


Why is M$'s approach of compiling to an intermediate format better?
1) Parser is owned by one "person"; bugs and workarounds are common to all HW/drivers
2) Syntax is fixed and unchangeable by IHVs
3) Shaders are compiled to defined targets; there is a high likelihood that they will run on all HW in the field (not sure that WHQL has managed to achieve this ideal yet!)
4) The minimum amount of work is done at runtime (outside of offline HW targets).

This assumes parser bugs are the biggest issue with compilers; they aren't. Parsers are generated by automated tools that take LL or LR grammars, and as long as your grammar is correct, your parser will be too, unless your YACC/PCCTS/ANTLR/CUP/etc. generator is bugged.

Secondly, the syntax isn't fixed, because multiple front-end parsers can now generate the intermediate representation.

Third, parsing is the smallest part of the work in a compiler, so the fact that you've removed parsing from the runtime pipeline is irrelevant.


What's wrong with the M$ approach at the moment?
1) Target levels mean that "optimisations" may be applied that are only there to get around restrictions in the target (e.g. dependent texture reads can be reordered to work around the sequence limit imposed by the model).
2) There are some issues currently with unnecessary expansion of macros

The fact is, Microsoft's so-called "intermediate representation" (DX9 assembly) cannot model many of the semantics that you may want to pass to the hardware for acceleration. How do you represent a Perlin noise() function, or a matrix transpose? Even normalization isn't passed to the HW, so if the HW had a hardware normalizer built in, it becomes much more work (remember, minimal work necessary?) for the driver to detect and optimize it.
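A hedged sketch of the normalize() case: in ps_2_0 output the call is expanded into the usual dp3/rsq/mul idiom, roughly the scalar code below, so a driver for hardware with a native normalizer has to pattern-match that three-instruction sequence back into a single operation:

```cpp
#include <cmath>

void normalize3(float v[3])
{
    float len2 = v[0]*v[0] + v[1]*v[1] + v[2]*v[2];  // dp3
    float inv  = 1.0f / std::sqrt(len2);             // rsq
    v[0] *= inv; v[1] *= inv; v[2] *= inv;           // mul
}
```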

The limitations exposed in their intermediate representation deny many optimizations that could be possible on more powerful hardware (loop and function-call limits, register limits, no type information at all!).




All the discussion about intermediate formats removing optimisation opportunities is invalidated if the correct intermediate representation is used.

Well, there's the rub. MS's "intermediate language" is nothing like the intermediate representations used in most compilers, be it Java, .NET CLR, C compilers, etc. Most compiler IR-reps use either Tree-IR or 3/4-address code, and do not use actual register names, but autogenerated labels which do not get assigned registers until near the end.
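A hedged illustration of the difference; `t1` and `t2` are virtual names that would only be assigned real registers at the very end of compilation:

```cpp
float madd(float a, float b, float c)
{
    // Typical compiler IR for a*b + c (virtual temporaries, no registers):
    //     t1 <- mul a, b
    //     t2 <- add t1, c
    // DX9 assembly has already committed those temporaries to r0, r1, ...,
    // discarding the freedom a backend register allocator needs.
    float t1 = a * b;
    float t2 = t1 + c;
    return t2;
}
```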


You need to get away from what "could have been" and deal with what the reality is: DX9 VS/PS 2.0/3.0 do not have the representational power that other intermediate representations have, and won't allow the kinds of optimizations that GLSlang's model will.

If Microsoft had an intermediate representation for 3D that was more like the .NET CLR, and less like some low-level assembly syntax, I might be tempted to agree with you. About a year ago, I was arguing on this board that MS should provide an interface whereby you can plug in source code and get back a parse tree or IR rep, thereby allowing the driver to do its work based on higher-level information, instead of having to translate DX9 assembly.


But since all of your claimed benefits don't actually show up unless MS rethinks the way they're currently doing things, talking about how a hypothetically rich IR with separate parsing would be ideal is moot. The reality is DX9 FXC + assembly vs. OGL2.0 HLSL.
 
DiGuru said:
Sorry if the following seems a bit negative, that is not the intention.

JohnH said:
Why is the OGL2.0 approach to HLSL parsing not correct at this time?
1) IHVs can individually introduce their own unique bugs

As with a DX9 implementation, this depends as much on the compiler as on the hardware. For example, a shader that uses multiple render targets will break on NVx hardware.
Not with respect to the HLSL parser or the intermediate format, as the former is supplied by MS, not the IHV, and the latter is validated by MS so can't be changed.
2) IHVs can individually tweak the syntax ("illegally") for their own devious reasons

You mean, like they do right now with DX9?
IHVs _cannot_ tweak the syntax of the HLSL, as they do not have access to it. They also can't tweak the intermediate format, as it won't get past validation.

3) For the above reasons, any shader that's successfully compiled on your own favorite dev system is not guaranteed to compile on all HW in the field.

Like, when your shader that uses multiple render targets works just fine on your R3x0?
That's a separate capabilities issue due to the "externalisation" of a capability that affects how you write your shader code. My personal view is that MRTs should have been rolled into the 2.0 profile as well.

4) Parsing of the HLSL will add time to runtime compilation

Isn't that why all DX9 shaders have to be initialized before execution, so they can be compiled by the driver up front?
I have no idea what you mean here. Parsing to the intermediate format can be done offline, so it need not impact runtime.

A separate question: is there any concept of a profile that allows you to compile for a known target? As far as I know it's up to the app to query capabilities. This is important given that no current target HW supports much of what's required for true OGL2.0 support.

Again, what is different with DX9?
When compiling in DX9 you specify a target profile; if the card supports that profile, the shader should run. It's kind of a heavyweight caps check without the effort.
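A hedged sketch of that profile mechanism from the app side (the helper name is invented; D3DX can also report the best profile a device claims to support):

```cpp
#include <d3dx9.h>

// The app picks (or asks D3DX for) a fixed target profile and compiles
// against exactly that, rather than checking a long list of caps bits.
const char* pickTarget(IDirect3DDevice9* dev)
{
    // Returns something like "ps_1_4" or "ps_2_0" for the device.
    return D3DXGetPixelShaderProfile(dev);
}
```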

Why is M$'s approach of compiling to an intermediate format better?
1) Parser is owned by one "person"; bugs and workarounds are common to all HW/drivers

... depending on the extensions you use and the hardware you use to run it.
There are no extensions in DX9.

2) Syntax is fixed and unchangeable by IHVs

You mean, Microsoft has to make the extension instead of the IHV? (And they do so anyway, so what's the point? It being endorsed by M$?)
Again, there are no extensions in DX9.

3) Shaders are compiled to defined targets; there is a high likelihood that they will run on all HW in the field (not sure that WHQL has managed to achieve this ideal yet!)

... as long as you don't use any function that is not supported by all IHVs in the exact same way.
The profiles are supposed to prevent that from happening. The only thing missing in DX9 is that things like MRTs aren't wrapped up in the 2.0 profile; they are, by the way, for 3.0.

4) The minimum amount of work is done at runtime (outside of offline HW targets).

JIT compilers? Yes, both.
JIT, yes, both; but OGL2.0 is also JIP (just-in-time parsing), and there is a considerable difference.

Sorry for the criticism, it's not personal; your post was just a good way to give some feedback.
;-)

These boards are here for discussion, so no problem.

John.
 
JohnH said:
IHVs _cannot_ tweak the syntax of the HLSL, as they do not have access to it. They also can't tweak the intermediate format, as it won't get past validation.
Let me get this straight, JohnH.

You like DX9 HLSL because IHVs can't cheat on it, because IHVs have in the past "modified" DX9 shaders?

Think about that for a second.
 
Chalnoth said:
JohnH said:
IHVs _cannot_ tweak the syntax of the HLSL, as they do not have access to it. They also can't tweak the intermediate format, as it won't get past validation.
Let me get this straight, JohnH.

You like DX9 HLSL because IHVs can't cheat on it, because IHVs have in the past "modified" DX9 shaders?
Where did he write that?
 
JohnH said:
IHVs _cannot_ tweak the syntax of the HLSL, as they do not have access to it. They also can't tweak the intermediate format, as it won't get past validation.

Incorrect. IHVs can ship command-line compilers with whatever syntax they want, and developers can use these compilers to generate the "intermediate format", the same as FXC.exe today. NVidia's Cg is proof, since it can generate DX8 and DX9 code, yet it has its own HLSL syntax extensions.

A publicly open intermediate format spec encourages more programming-language bifurcation, not less. It encourages language competition. The fact that Sun controls Java couldn't stop 100+ languages from being developed to run on top of the Java VM, because the Java intermediate representation is open.

Again, the parser isn't the most expensive part of the compilation process. And due to the declarative nature of grammars and a decade of mature parser generators, it is unlikely to be the focus of most of the bugs.
 