HLSL 'Compiler Hints' - Fragmenting DX9?

Humus said:
The only thing needed is to ensure that the shader is mathematically and functionally equivalent and generates expected output.

And the only way to come close to ensuring that, IMO, is to have one "independent" body controlling the compilers. (Independent meaning not an IHV).

Having the HLSL compiler in the driver gives the driver many more possibilities for optimisation.

Both good and bad. In the D3D market (lots and lots of games, consumer rather than professional audience) I'd rather the focus be put on consistency, stability, and less burdensome development for IHVs.

As long as reviewers check image quality we should be fine. The OpenGL way is the best way IMO.

I disagree. I think the GL model is fine for its market... (more professionally oriented). It has its pros and cons vs. the DirectX model, and in the consumer market, I think the cons outweigh the pros.
 
OpenGL actually does have conformance tests, programs that check the output of an OpenGL implementation to ensure it maintains the quality required by the OpenGL trademark. You won't get an implementation license unless you pass the conformance tests.

In practice the conformance test suite hasn't been updated in a while and isn't as useful as you might think, but the principle is sound, and work is being done to update the conformance tests to take modern features into account. The OpenGL 2.0 conformance tests obviously need to test glslang as well as simpler things like filtering.

As a side note, encrypting shaders obviously won't work since they need to be sent in plaintext to the driver anyway. Catching that plaintext in OpenGL is a 15 minute job if you utilize the existing source to gltrace. It gets more complex with d3d since AFAIK there's no available source for a passthrough d3d9.dll but it should be fairly simple to implement if you have experience with d3d.
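For what it's worth, here is roughly what that proxy trick looks like. This is only a sketch, assuming Windows and the ARB_shader_objects entry point; the my* names are made up, and a real proxy opengl32.dll would also have to export and forward the hundreds of other GL entry points (which is exactly the boilerplate gltrace already provides).

Code:
// Sketch of the shader-sniffing idea: a proxy opengl32.dll dropped next to
// the game's exe.  Only the shader-related hook is shown, and it assumes
// null-terminated shader strings for brevity.
#include <windows.h>
#include <cstdio>

typedef unsigned int GLhandleARB;            // ARB_shader_objects types
typedef char         GLcharARB;
typedef int          GLsizei;
typedef int          GLint;

typedef void (APIENTRY *PFNGLSHADERSOURCEARB)(GLhandleARB, GLsizei,
                                              const GLcharARB **, const GLint *);
typedef PROC (WINAPI   *PFNWGLGETPROCADDRESS)(LPCSTR);

static HMODULE              realGL           = NULL;
static PFNWGLGETPROCADDRESS realGetProc      = NULL;
static PFNGLSHADERSOURCEARB realShaderSource = NULL;

// Replacement for glShaderSourceARB: dump the plaintext source, then forward.
static void APIENTRY myShaderSourceARB(GLhandleARB obj, GLsizei count,
                                       const GLcharARB **string, const GLint *length)
{
    if (FILE *f = fopen("sniffed_shaders.txt", "a")) {
        for (GLsizei i = 0; i < count; i++)
            fputs(string[i], f);
        fputs("\n/* ---- */\n", f);
        fclose(f);
    }
    realShaderSource(obj, count, string, length);
}

// Exported from the proxy under the name wglGetProcAddress (via the .def
// file), so the app gets our logger when it asks for glShaderSourceARB.
extern "C" PROC WINAPI myGetProcAddress(LPCSTR name)
{
    if (!realGL) {
        char path[MAX_PATH];
        GetSystemDirectoryA(path, MAX_PATH);     // load the *real* system DLL
        lstrcatA(path, "\\opengl32.dll");
        realGL      = LoadLibraryA(path);
        realGetProc = (PFNWGLGETPROCADDRESS)GetProcAddress(realGL, "wglGetProcAddress");
    }
    PROC real = realGetProc(name);
    if (lstrcmpA(name, "glShaderSourceARB") == 0) {
        realShaderSource = (PFNGLSHADERSOURCEARB)real;
        return (PROC)myShaderSourceARB;
    }
    return real;
}

The d3d9 equivalent is the same idea: a passthrough d3d9.dll wrapping IDirect3DDevice9 and logging whatever CreateVertexShader/CreatePixelShader are given.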
 
GameCat said:
In practice the conformance test suite hasn't been updated in a while and isn't as useful as you might think, but the principle is sound...

Well, that's the whole problem. Of course the principle is sound. ;) That is, everyone plays fair, has infinite resources to build compilers and takes no short-cuts, and there are actual consequences for not conforming. But the reality is something else.

In principle the D3D model is sound too: because MS gets input from the IHVs, MS just creates as optimal a compiler for every architecture as the IHVs would themselves. In practice, it'll be a little different.
 
Humus said:
Simon F said:
I see your point but are developers going to be terribly enthusiastic about shipping high-level shader source with their applications? I suppose they could use something akin to the public domain obfuscation program for C (which strips comments and replaces all identifiers with _NUMBER) but given the short length of programs and the need to use intrinsics that can't be renamed, the scope seems limited.

Easy problem to solve. Encrypt the shaders, or just store them in a password protected .rar file.

It is not so simple. Shader decryption should be handled by the driver, not the application. Otherwise it will be possible to "sniff" the decrypted shaders using a proxy OpenGL driver (like gltrace).

I think ARB should definitely start thinking about ways to protect IP (for instance by standardizing a strong encryption scheme).
 
Joe DeFuria said:
In principle the D3D model is sound too: because MS gets input from the IHVs, MS just creates as optimal a compiler for every architecture as the IHVs would themselves. In practice, it'll be a little different.
But that's obviously not going to happen and so you get back the argument that the IHV should write their own compiler because they are the ones who know their HW.

At the moment we have the situation that Microsoft's HLSL compiler can target two slightly different "P-CODE" models which then, presumably, are "assembled" by the IHV's driver into the native instructions. I doubt either of these models allows the driver to optimise fully. In fact, given the way constants are handled by D3D, I suspect it is impossible for some optimisations, which would be obvious at the HLSL level, to subsequently be done in the driver.
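To make that concrete, here is a made-up example (mine, not from any real compiler output) of the kind of optimisation that is visible at the HLSL level but effectively lost by the time the driver sees the shader:

Code:
// Hypothetical HLSL, thought of as compiled for a vs_2_0-style target.
const char *hlsl =
    "float4x4 World : register(c0);   // c0-c3\n"
    "float4x4 View  : register(c4);   // c4-c7\n"
    "\n"
    "float4 main(float4 pos : POSITION) : POSITION\n"
    "{\n"
    "    // Uniform * uniform: really per-draw work, written per-vertex.\n"
    "    float4x4 worldView = mul(World, View);\n"
    "    return mul(pos, worldView);\n"
    "}\n";

// What the driver receives is, roughly, a block of mul/mad instructions that
// rebuild worldView in temporaries for every single vertex, followed by four
// dp4s of the position against those temporaries.  Nothing in the token
// stream marks the first block as "uniform-only".

At the source level it is obvious that mul(World, View) depends only on uniforms and could be evaluated once per draw call. After compilation the driver only sees ordinary per-vertex ALU instructions reading constant registers, and since those registers can be rewritten at any time through SetVertexShaderConstantF, with no link back to the source expression, the driver has no safe or cheap way to hoist that work back out.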
 
Simon F said:
But that's obviously not going to happen and so you get back the argument that the IHV should write their own compiler because they are the ones who know their HW.

Exactly my point.

Every IHV writing their own compiler for their own best interests though is obviously not going to produce the same results. So we get back to the argument of one body writing the compiler profiles to maintain result consistency.

And round and round it goes.

Again, my point is, both models have pros and cons. For the consumer market, I prefer the DirectX approach, because I think stability and consistency are more important for consumers as a whole.
 
Well, the conformance test suite has worked pretty well for a long time on diverse sets of hw. It doesn't catch all driver bugs obviously, but at least having some form of formal testing prevents blatant disregard of the specs like nvidia's "brilinear" filtering in D3D.

I agree with your main point though: OpenGL in general caters more to the user/programmer than the IHV (historically at least), while Direct3D is sort of the opposite. An example of this is that OpenGL has required software emulation of any core feature not supported by the hardware, which obviously leads to more work for the people writing drivers but ensures you will always get a correct image, albeit slowly. The downside is that small IHVs like Matrox generally have crap OpenGL drivers...

It will be interesting to see how quickly the IHVs will bring out drivers with glslang support; if it takes a long time to mature, it will discourage a lot of people from using OpenGL.
 
GameCat said:
As a side note, encrypting shaders obviously won't work since they need to be sent in plaintext to the driver anyway. Catching that plaintext in OpenGL is a 15 minute job if you utilize the existing source to gltrace. It gets more complex with d3d since AFAIK there's no available source for a passthrough d3d9.dll but it should be fairly simple to implement if you have experience with d3d.
AFAIK 3DAnalyze can catch assembly shaders.

Joe DeFuria said:
Every IHV writing their own compiler for their own best interests though is obviously not going to produce the same results. So we get back to the argument of one body writing the compiler profiles to maintain result consistency.
I disagree. You're worried about IHV's being able to "cheat". But they already have that opportunity, since the driver has to process the "p-code", and there's no way around that. Processing HLSL finally gives them the opportunity to optimize, IMO. And this is, apart from readability, the second big reason to use an HLSL at all.

This whole "PS assembly" thing is IMO just a stop-gap solution until GPUs are no more hindered by low resource limits. With a HLSL->machine code interface, future hardware could take advantage of new instructions.
 
Xmas said:
I disagree. You're worried about IHV's being able to "cheat".

Partly. I'm more worried about IHVs being inconsistent. This creates headaches for consumers and game developers alike. It's bad enough as it is now; it'll only be made worse with even more decentralized control.

Again, I have no problems with multiple compiler back-ends. It's just that, ultimately, Microsoft should "own" them. IHVs should work with MS to co-develop compilers, but MS needs to have the authority to force some level of consistency.
 
Joe DeFuria said:
Xmas said:
I disagree. You're worried about IHV's being able to "cheat".

Partly. I'm more worried about IHVs being inconsistent. This creates headaches for consumers and game developers alike. It's bad enough as it is now; it'll only be made worse with even more decentralized control.

Again, I have no problems with multiple compiler back-ends. It's just that, ultimately, Microsoft should "own" them. IHVs should work with MS to co-develop compilers, but MS needs to have the authority to force some level of consistency.
What exactly do you mean by "being inconsistent"? That a compiler might output wrong code? That it might not be able to compile a shader? That it compiles a shader that shouldn't compile according to the specs?
 
Code:
HLSL Code 
    |
Runtime
    |    
IHV Compiler Plugin? -YES-> Can compile? -YES-> RET(OK)
    |                           |
    NO                          NO
    |                           |
    |---------<------------------
    |
use default compiler   
    |
Can compile? -YES-> ASM Code -> driver check -YES-> RET(OK)
    |                                |
    NO                               NO
    |                                |
    |-----<------------------<-------|
    |
RET(FAIL)

This is my "best of both worlds" model that I posted a few days ago in another forum.
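The same flow expressed as code, for clarity. This is just a sketch of the chart above; the interfaces (IhvCompilerPlugin, DefaultCompiler, Driver) are invented names, not anything in DX9.

Code:
#include <string>

enum Result { OK, FAIL };

struct IhvCompilerPlugin {                    // optionally shipped by the IHV
    virtual bool compile(const std::string &hlsl) = 0;   // HLSL -> native code
    virtual ~IhvCompilerPlugin() {}
};

struct DefaultCompiler {                      // the runtime's own HLSL compiler
    bool compile(const std::string &hlsl, std::string &asmOut) {
        asmOut = "ps_2_0 ...";                // stub: pretend it always works
        return !hlsl.empty();
    }
};

struct Driver {                               // today's asm-level interface
    bool accept(const std::string &asmCode) { return !asmCode.empty(); }
};

Result createShader(const std::string &hlsl, IhvCompilerPlugin *plugin,
                    DefaultCompiler &dflt, Driver &drv)
{
    // 1. IHV compiler plugin present and able to compile the source directly?
    if (plugin && plugin->compile(hlsl))
        return OK;

    // 2. Otherwise fall back to the default compiler.
    std::string asmCode;
    if (!dflt.compile(hlsl, asmCode))
        return FAIL;

    // 3. Hand the generated assembly to the driver, exactly as today.
    return drv.accept(asmCode) ? OK : FAIL;
}

int main()
{
    DefaultCompiler dflt;
    Driver drv;
    // No IHV plugin installed: takes the default compiler -> asm -> driver path.
    return createShader("float4 main() : COLOR { return 1; }",
                        NULL, dflt, drv) == OK ? 0 : 1;
}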
 
Joe DeFuria said:
Again, I have no problems with multiple compiler back-ends. It's just that, ultimately, Microsoft should "own" them. IHVs should work with MS to co-develop compilers, but MS needs to have the authority to force some level of consistency.

I don't see the reason for Microsoft to "own" or co-develop the compiler. Maybe for a reference compiler that each IHV could then build on when doing their own. That reference compiler could also be used to get the consistency you're talking about.

This is my "best of both worlds" model that I posted a few days ago in another forum.

Looks good to me :)

Both good and bad. In the D3D market (lots and lots of games, consumer rather than professional audience) I'd rather the focus be put on consistency, stability, and less burdensome development for IHVs.

I would have thought that consistency and stability would be much more important in the professional world than for gamers. Especially consistency.
 
Joe DeFuria said:
Simon F said:
But that's obviously not going to happen and so you get back the argument that the IHV should write their own compiler because they are the ones who know their HW.

Exactly my point.

Every IHV writing their own compiler for their own best interests though is obviously not going to produce the same results. So we get back to the argument of one body writing the compiler profiles to maintain result consistency.
But Joe, every CPU manufacturer is going to have a "different" C compiler. Just as long as it compiles C and produces bug-free results, does it matter?

If MS were to release the front-end of their HLSL compiler and allow the IHVs to fill in the final code generation and peephole optimisation, would that be an acceptable compromise?
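A peephole optimiser is exactly the sort of small, hardware-specific pass an IHV back-end would contribute in that split. A toy example (mine, not Simon F's): a single rule that fuses a mul followed by a dependent add into one mad, the kind of rewrite only the vendor knows is free on its ALUs.

Code:
#include <iostream>
#include <string>
#include <vector>

struct Instr { std::string op, dst, a, b; };   // third source folded into .b

// Fuse:  mul t, a, b   followed by   add d, t, c   ->   mad d, a, b, c
static std::vector<Instr> peephole(const std::vector<Instr> &code)
{
    std::vector<Instr> out;
    for (size_t i = 0; i < code.size(); ++i) {
        // Only handles the temp in the add's first source and ignores whether
        // the temp is still live afterwards -- a real pass would check both.
        if (i + 1 < code.size() &&
            code[i].op == "mul" && code[i + 1].op == "add" &&
            code[i + 1].a == code[i].dst)
        {
            out.push_back({"mad", code[i + 1].dst,
                           code[i].a, code[i].b + ", " + code[i + 1].b});
            ++i;                               // skip the add we just fused
        } else {
            out.push_back(code[i]);
        }
    }
    return out;
}

int main()
{
    std::vector<Instr> code = {
        {"mul", "r0", "r1", "c0"},             // r0 = r1 * c0
        {"add", "r2", "r0", "c1"},             // r2 = r0 + c1
    };
    for (const Instr &ins : peephole(code))    // prints: mad r2, r1, c0, c1
        std::cout << ins.op << ' ' << ins.dst << ", " << ins.a << ", " << ins.b << '\n';
    return 0;
}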
 
Joe DeFuria said:
Humus said:
The only thing needed is to ensure that the shader is mathematically and functionally equivalent and generates expected output.

And the only way to come close to ensuring that, IMO, is to have one "independent" body controlling the compilers. (Independent meaning not an IHV).

All it takes is to check screenshots and see if they look as you expected. Compare between different IHVs: is there any visible difference? If not, then everything is fine. It doesn't matter how it produced the image, as long as it produced the right image. Visually inspecting the image is the only way to ensure we got it right. If someone wants to cheat, it doesn't matter if MS owns the HLSL; they can still cheat at the asm->hardware level. Nothing you can do about it.
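A back-of-the-envelope version of that check (a sketch, not from Humus' post) would be a per-pixel comparison of the same frame captured on two different cards, with a small tolerance since precision legitimately differs between architectures:

Code:
#include <cstdint>
#include <cstdlib>
#include <vector>

// Returns the fraction of pixels where any RGB channel differs by more than
// 'tolerance'.  Both buffers are assumed to be RGB8 at the same resolution.
double visibleDifference(const std::vector<std::uint8_t> &a,
                         const std::vector<std::uint8_t> &b,
                         int tolerance = 8)
{
    size_t pixels = a.size() / 3, bad = 0;
    for (size_t i = 0; i < pixels; ++i) {
        for (int c = 0; c < 3; ++c) {
            if (std::abs(int(a[3 * i + c]) - int(b[3 * i + c])) > tolerance) {
                ++bad;
                break;
            }
        }
    }
    return pixels ? double(bad) / double(pixels) : 0.0;
}

Anything the comparison flags still has to be eyeballed, of course, which is the point about visual inspection.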
 
Following Joe's line of reasoning, we should also have one independent body controlling all drivers as well. And let's go a step further and have one independent body controlling all 3d chip designs as well.

The compiler's input language should be standardized, but the optimizers need to be decentralized and handled by the IHVs or whoever does the best job. Trying to get Microsoft to maintain a cross-compiler for N different architectures is a bad idea.
 
DemoCoder said:
Following Joe's line of reasoning, we should also have one independent body controlling all drivers as well. And let's go a step further and have one independent body controlling all 3d chip designs as well.

Sigh...

Following your line of reasoning, we should just have an infinite number of APIs. And let's go another step further and demand that every chip requires its own OS as well.

:rolleyes:

There is obviously a balance to be struck between "consistency" and "independence." Why is it so hard to fathom that I believe the balance for the consumer market is a bit more toward consistency than toward independence?
 
Sorry, not been around much for this thread.

Someone mentioned that this was the "OpenGL2.0 way of doing it", which is in fact not the case AFAIK - the OGL2.0 way of doing it puts the compiler in the hands of the IHVs, which means that from a single source of "The OpenGL Shading Language" (ugh) code, assembly that is optimised specifically for the underlying hardware can be generated. IMO this is a good approach, but it does require each IHV to have a tight compiler that compiles to the right thing.

MS's initial approach appeared to be "one size fits all": they would keep a single HLSL specification and a single HLSL compiler for everyone - the downside of this was that it was potentially not optimal for everyone, but it would have kept the standards high and control within MS.

Now, with the approach MS have taken, it seems to be an issue of support - how can it be reconciled that if the hint is used for the FX series, it might not run on R300-class hardware? Especially since there are no runtime compilers.

I was initially under the impression that the HLSL that was previously present was just being "rejigged" such that the code would be a little more optimal for the FX series, without impacting the performance of R300 parts too much.
 
Joe DeFuria said:
Following your line of reasoning, we should just have an infinite number of APIs. And let's go another step further and demand that every chip requires its own OS as well.

:rolleyes:

There is obviously a balance to be struck between "consistency" and "independence." Why is it so hard to fathom that I believe the balance for the consumer market is a bit more toward consistency than toward independence?

Perhaps I've misunderstood what he intended, in which case correct me, but the duality you're talking about already exists:

Microsoft already defined DirectX as a "front-end" abstraction from the architecture. Yet they acknowledged that there will be IHV-dependent architectural differences, which is why you basically have a "translation" from DX into each IHV's "API" or "OS" (if you'd like) at a low level - which can be infinite in absolute number if the marketplace allows for it.

What Democoder stated, and I find highly intelligent, is that just as Microsoft has no "ownership" over each IHV's "back-end" drivers, they shouldn't have it over a compiler. Instead, you should let competition create the best architecture they can (or perceive they can) and let each IHV deal with the low-level semantics. Microsoft should control the high-level language and exert control over the compiler via standards, as they do already.

But to give Microsoft - a 3rd party - as much control as you advocate over what is ultimately the property of sovereign/independent shareholders is wrongful. Especially since they now have a vested interest in the fate of specific parties (e.g. XBox and nVidia; XBox2 and ATI).
 
Pardon the OT (and this is a highly interesting debate) but:

What Democoder stated, and I find highly intelligent...

It's rather a rare case where I personally can see the opposite, and no, I'm not in a good mood today ;)
 
Joe DeFuria said:
Following your line of reasoning, we should just have infinite number of APIs. And let's go another step forward and demand that every chip require it's own OS as well.

Nope. The API is still the same (and the OS too). The back-end compiler is different. With the DX way the shader is compiled into an assembly shader, and this shader just can't be optimal for all IHVs, so MS will have to provide many targets to get optimal shaders for everyone involved. And then we could just as well have let the IHVs do it, because we didn't really save any work. Not to mention that the assembly shaders need not be the best mapping to the hardware either. There can be hardware features that just aren't exposed through the assembly interface because MS couldn't find wide enough support for them among vendors, which leaves useful features unused.
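As a sketch of the "many targets" point, assuming the D3DX interface from the Summer 2003 SDK update (ps_2_a being the FX-oriented target this thread is about): the same source simply gets compiled once per profile, and the driver then still has to map the result onto its own hardware.

Code:
#include <d3dx9shader.h>
#include <stdio.h>
#include <string.h>

static void compileForProfile(const char *hlsl, const char *profile)
{
    LPD3DXBUFFER code = NULL, errors = NULL;

    HRESULT hr = D3DXCompileShader(hlsl, (UINT)strlen(hlsl),
                                   NULL, NULL,        // no #defines, no #include handler
                                   "main", profile,   // entry point, target profile
                                   0, &code, &errors, NULL);

    if (SUCCEEDED(hr))
        printf("%s: %lu bytes of compiled shader\n", profile,
               (unsigned long)code->GetBufferSize());
    else if (errors)
        printf("%s failed: %s\n", profile, (const char *)errors->GetBufferPointer());

    if (code)   code->Release();
    if (errors) errors->Release();
}

int main()
{
    // Trivial made-up pixel shader, the same source for both targets.
    const char *hlsl =
        "float4 main(float2 uv : TEXCOORD0) : COLOR\n"
        "{ return float4(uv, 0, 1); }\n";

    compileForProfile(hlsl, "ps_2_0");   // the generic SM2.0 target
    compileForProfile(hlsl, "ps_2_a");   // the FX-oriented "hint" target
    return 0;
}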
 