HLSL 'Compiler Hints' - Fragmenting DX9?

Dave Baumann

There have been some questions raised over the following in our short Half Life 2 Q&A:

Were the Shaders in the benchmark compiled with the latest version of HLSL (i.e. the version that orders the assembly for a more efficient use of the FX's register limitations)?

[Brian Jacobson] The ones used by the mixed mode path were.

Now, the question that arose was why this was done only for the FX path, and not the general DX path. It transpires that, as far as I can tell, the HLSL compiler hasn't had global updates for the GeForce FX's register use, but now contains hints as to which version it should compile to. I was speaking to someone to seek further clarification on this, and this was their response:

The ps_2_x profile for the HLSL compiler can take advantage of the additional features available in ps_2_x. ps_2_x serves as a convenient hint for the HLSL compiler to generate code that's optimal for NV3X hardware, since NV3X is the only current hardware that has this feature.

If, in future, other hardware becomes available that's not based on the NV3X architecture and has its own special optimization quirks, the likely model is that you'll see a new profile. Basically, profiles also serve as target platform hints.
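The profile-as-hint model can be seen directly with the SDK's command-line compiler. A sketch, assuming the DX9 SDK's fxc.exe, a shader.hlsl source file, and an entry point named main (the file and entry-point names here are placeholders):

```shell
# Compile the same HLSL source against two target profiles and emit
# assembly listings (/Fc) so the generated code can be compared.
fxc /T ps_2_0 /E main /Fc shader_ps20.asm shader.hlsl
fxc /T ps_2_a /E main /Fc shader_ps2a.asm shader.hlsl
```

The ps_2_a listing may use NV3X-oriented features even where the source doesn't ask for them, so it isn't guaranteed to load on other ps_2_0 hardware.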

Doesn't this seem a bit annoying? It seems as though this is fragmenting the purpose of HLSL somewhat.
 
DaveBaumann said:
Doesn't this seem a bit annoying? It seems as though this is fragmenting the purpose of HLSL somewhat.

Does it fragment the language?

Because I don't see what the problem is if it's only a compiler directive.
 
It's not fragmentation of HLSL; it's just that MS provides different compiler targets to suit different hardware. This also means that the compiler will use features that are given in a new profile even if it doesn't need them and you don't specifically ask for them in your code (arbitrary swizzles, for example). You have the ps_2_0 target, which is suited for ATI; now you have the ps_2_a (not 2_x) target, and there might be a ps_2_b sometime.
And if you tell the compiler to compile to ps_2_a, you are not guaranteed that the compiled shader will even work on ATI...
 
Quite the opposite. It's justification for JIT compiling. Just don't do anything as silly as changing your HLSL syntax based on the backend.
 
RussSchultz said:
Quite the opposite. It's justification for JIT compiling. Just don't do anything as silly as changing your HLSL syntax based on the backend.

Agreed.

And I'd like to add that Microsoft should be the compiler owner, as they are now. I don't want to see nVidia (or ATI, or anyone else), take control of the compiler profiles.

Every IHV can and should work with MS to optimize compilers for their architecture, but MS needs to be able to control the quality of the output.
 
DaveBaumann said:
Doesn't this seem a bit annoying? It seems as though this is fragmenting the purpose of HLSL somewhat.

This is the best thing to do given the HLSL model of D3D.
In the Summer update SDK there's D3DXGetPixelShaderProfile, which can be used to get the best profile suited to your card.
So you pass that to D3DXCompileShader and don't care about it anymore.
If a new card comes out, and a new SDK comes out with a new profile, you just relink the app and away you go.
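The flow described above can be sketched roughly as follows. A non-authoritative sketch assuming the d3dx9.h headers from the DX9 Summer 2003 SDK update; error handling is abbreviated, and the function name is made up for illustration:

```cpp
#include <d3dx9.h>
#include <string.h>

// Compile an HLSL pixel shader for whatever profile best fits the
// installed card. 'device' is an already-created IDirect3DDevice9;
// 'src' holds the HLSL source text.
LPD3DXBUFFER CompileForThisCard(IDirect3DDevice9* device,
                                const char* src, const char* entryPoint)
{
    // Ask D3DX which pixel shader profile best matches this device
    // (e.g. "ps_2_0" on an R3xx, "ps_2_a" on an NV3x).
    LPCSTR profile = D3DXGetPixelShaderProfile(device);

    LPD3DXBUFFER code = NULL;
    LPD3DXBUFFER errors = NULL;
    HRESULT hr = D3DXCompileShader(src, (UINT)strlen(src),
                                   NULL, NULL,        // no macros, no includes
                                   entryPoint, profile,
                                   0,                 // default compile flags
                                   &code, &errors, NULL);
    if (FAILED(hr))
    {
        // errors->GetBufferPointer() holds the compiler's messages.
        if (errors) errors->Release();
        return NULL;
    }
    // Hand code->GetBufferPointer() to CreatePixelShader().
    return code;
}
```

Because the profile string comes from the runtime rather than being hard-coded, relinking against a newer SDK is all it takes to pick up a new profile.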
 
Just out of curiosity: why didn't Microsoft put the compiler itself in the DirectX COM package? By just updating DirectX 9, some games would automatically receive performance increases without the need to relink all the games.
 
Joe DeFuria said:
Every IHV can and should work with MS to optimize compilers for their architecture, but MS needs to be able to control the quality of the output.
I don't see why it should be any different than the rest of the driver architecture.

This is just one more part of the driver landscape. A bad compiler backend for Brand X shouldn't hurt brand Y--because brand Y has their own compiler backend.
 
For those of you who don't know, this is how the OpenGL shading language works: the high-level shading language is the same for all IHVs, but the compiler is part of the driver, so the compiled code should be optimal for the architecture you're running on.

The downside of this is of course that having an entire high-level compiler in the driver is a major burden on the IHVs. This is one of the reasons why glslang support is much less mature than the support for HLSL.

It is interesting that nVidia argued against having the compiler in the driver (we should all use a specific compiler we could test with to avoid bugs, the Cg compiler presumably), but now almost exactly that is being touted as an advantage with the 50-series Dets and the new ps_2_x hint for Microsoft's HLSL. I hardly think having Microsoft implement a special path for each and every IHV will work in the long run.
 
sonix666 said:
Just out of curiosity: why didn't Microsoft put the compiler itself in the DirectX COM package? By just updating DirectX 9, some games would automatically receive performance increases without the need to relink all the games.

Good question!
I've been wondering about this ever since DX9 was released...

One advantage of the current way is that you can't break applications by updating the runtime when the new compiler has bugs.
By relinking, the app developer has the chance to check the result and work around the compiler bugs before releasing a new version of his app.
(There are new bugs in the Summer SDK...)
 
What I'd like to see in DirectX, in addition to the existing interface, is a way to pass HLSL to a driver-integrated compiler which would turn the code directly into machine code. This way you can write shaders that are guaranteed to run on all PS2.0 hardware. Everything the MS HLSL compiler can compile to ps_2_0 will run; this is something GLslang lacks, where you can only verify that the shader runs on your own hardware.
And on the other hand, the compiled shader can be optimized for every piece of hardware, and you don't have to worry about API limits.
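Nothing like this exists in the real DX9 API. Purely as an illustration of the proposal, such a driver-side entry point might look something like the following; every name here is hypothetical:

```cpp
typedef long HRESULT;            // as defined in the Windows headers
struct IDirect3DPixelShader9;    // from d3d9.h

// Hypothetical extension: hand HLSL source straight to the driver and
// get back a hardware-native shader, skipping the ps_2_x assembly step.
// None of these names exist in DirectX 9.
struct IDirect3DDriverCompiler9
{
    // The driver would first validate the source against the common
    // PS2.0 feature set (so portability stays guaranteed), then emit
    // machine code tuned for its own architecture.
    virtual HRESULT CompileFromHLSL(const char* hlslSource,
                                    const char* entryPoint,
                                    IDirect3DPixelShader9** outShader) = 0;
};
```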
 
sonix666 said:
Just out of curiosity: why didn't Microsoft put the compiler itself in the DirectX COM package? By just updating DirectX 9, some games would automatically receive performance increases without the need to relink all the games.

The update actually broke some shaders. Suddenly, shaders that compiled flawlessly before had too many arithmetic instructions, etc.

Now imagine all of that happening after end-user installs an update to DirectX (which AFAIK still can't be uninstalled)...
 
Well, if you're feeling paranoid, you can argue that the driver's PS2.0/ARB_fragment_program compiler might have bugs.

I agree in general though. It would be great if there was a high-level compiler that could compile HLSL and/or glslang into their respective assembly. Preferably an open-source one. glslang's concept of linking makes this more complex to implement, but it's still doable.
 
RussSchultz said:
A bad compiler backend for Brand X shouldn't hurt brand Y--because brand Y has their own compiler backend.

Nor am I saying that.

A bad compiler (defined as compiling to code that isn't correct or what is intended, but what is deemed "acceptable" by the IHV) is bad for everyone.
 
RussSchultz said:
Quite the opposite. Its justification for JIT compiling. Just don't do anything as silly as changing your HLSL syntax based on the backend.
I see your point, but are developers going to be terribly enthusiastic about shipping high-level shader source with their applications? I suppose they could use something akin to the public-domain obfuscation program for C (which strips comments and replaces all identifiers with _NUMBER), but given the short length of the programs and the need to use intrinsics that can't be renamed, the scope seems limited.
 
Joe DeFuria said:
A bad compiler (defined as compiling to code that isn't correct or what is intended, but what is deemed "acceptable" by the IHV) is bad for everyone.

The only thing needed is to ensure that the shader is mathematically and functionally equivalent and generates the expected output. Having the HLSL compiler in the driver gives the driver many more opportunities for optimisation. That's much more valuable than some kind of tool to hunt cheaters. As long as reviewers check image quality, we should be fine. The OpenGL way is the best way IMO.
 
Simon F said:
I see your point, but are developers going to be terribly enthusiastic about shipping high-level shader source with their applications? I suppose they could use something akin to the public-domain obfuscation program for C (which strips comments and replaces all identifiers with _NUMBER), but given the short length of the programs and the need to use intrinsics that can't be renamed, the scope seems limited.

Easy problem to solve. Encrypt the shaders, or just store them in a password protected .rar file.
 
Humus said:
Easy problem to solve. Encrypt the shaders, or just store them in a password protected .rar file.
I guess that would help; it would take some time to reverse-engineer the key.
 