Should IHVs write their own MS HLSL compilers?

bloodbob

This idea came to me when I found out that a certain company was patching PS byte code directly for their product to expose hardware features not in PS 2.0. So why shouldn't each IHV just write their own HLSL compiler and support a target for each chip, since there would still be a generic HLSL fallback, which would be PS 2.0?

Now I know there are a lot of arguments about it basically killing the standard, so here is my next question.

Is there any way we could stop this, other than trying to get developers not to use the IHV tools? Because we already have at least one ISV doing this for their own purposes.
 
You can't patch PS byte code with unsupported opcodes and still pass through the DirectX verification layer. If by "certain company" you mean Valve, then they are doing a different thing in HL2.
 
Re: Should IHVs write their own MS HLSL compilers?

bloodbob said:
This idea came to me when I found out that a certain company was patching PS byte code directly for their product to expose hardware features not in PS 2.0. So why shouldn't each IHV just write their own HLSL compiler and support a target for each chip, since there would still be a generic HLSL fallback, which would be PS 2.0?

Well, first they would have to write a compiler at least as good as Microsoft's; only then could they start to add new features.
nVidia tried to create such a compiler and failed.
ATI deems such attempt as futile so they won't even try.

Of course if HLSL was open source... ;)

Is there any way we could stop this, other than trying to get developers not to use the IHV tools? Because we already have at least one ISV doing this for their own purposes.

Well, you could always make a d3d9.dll wrapper with strict checking, but somehow I don't see that as a solution to anything.

Actually there are quite interesting possibilities in this patching business.

Did you know for example that the PS 2.0 bytecode has support for FP16 precision registers that was never exposed even in the assembler?
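(For reference: in the D3D9 token format from d3d9types.h, partial precision is a destination-modifier bit on a parameter token, so a patcher can set it on any instruction regardless of what the assembler chooses to emit. A minimal sketch; the constants are the ones from d3d9types.h, the helper name is made up:)

```c
#include <stdint.h>

/* Destination-modifier field of a D3D9 parameter token (d3d9types.h). */
#define D3DSP_DSTMOD_SHIFT       20
#define D3DSPDM_PARTIALPRECISION (2u << D3DSP_DSTMOD_SHIFT)  /* the _pp bit */

/* Force partial (FP16) precision on a destination parameter token,
 * whether or not the assembler would have emitted it there. */
static uint32_t force_partial_precision(uint32_t dst_token)
{
    return dst_token | D3DSPDM_PARTIALPRECISION;
}
```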
 
Hyp-X said:
Clootie said:
they are doing a different thing in HL2.
What makes you say that?
Source code?
Well, first they would have to write a compiler at least as good as Microsoft's; only then could they start to add new features.
nVidia tried to create such a compiler and failed.
ATI deems such attempt as futile so they won't even try.
How about Cat 3.10 and GLSL?

Well, you could always make a d3d9.dll wrapper with strict checking, but somehow I don't see that as a solution to anything.
DirectX is already doing strict checking!

Did you know for example that the PS 2.0 bytecode has support for FP16 precision registers that was never exposed even in the assembler?
Why do you suppose that? Do you have any evidence of such a backdoor? I'm not aware of any.
 
Clootie said:
Hyp-X said:
Clootie said:
they are doing a different thing in HL2.
What makes you say that?
Source code?

That is the same thing that makes me say they are patching the precompiled pixel shader code.

Next, who says the verification layer is worth squat? Also, I never said that they should use invalid PS 2.0 code, but you could optimise register usage per chip, as well as instruction order. Companies could also use PS 3.0 code and limit the functionality to an arbitrary subset that they choose.
 
bloodbob said:
That is the same thing that makes me say they are patching the precompiled pixel shader code.
Answered privately.

...companies could also use PS 3.0 code and limit the functionality to an arbitrary subset that they choose.
They can't. Either you declare to the DirectX runtime that you are compliant with PS_3_0 or you don't. And if the driver declares its maximum PS version as PS_2_0, the DirectX runtime will not allow setting a PS_3_0 shader.
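A sketch of what that gate looks like: the runtime compares the version token at the start of the shader against the version the driver reports in its caps. The D3DPS_VERSION macro below is as defined in d3d9.h; the checking function is a simplification of what the runtime does, not its actual code:

```c
#include <stdint.h>

/* Pixel shader version token, as defined in d3d9.h:
 * 0xFFFF in the high word, then major/minor. ps_2_0 -> 0xFFFF0200. */
#define D3DPS_VERSION(major, minor) \
    (0xFFFF0000u | ((uint32_t)(major) << 8) | (uint32_t)(minor))

/* Simplified version of the runtime's check: setting a shader fails
 * if its version exceeds what the driver declared in its caps. */
static int runtime_allows_shader(uint32_t caps_version, uint32_t shader_version)
{
    return shader_version <= caps_version;
}
```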
 
Well, if what you say is truly correct, anyone using the _pp hint on textures on ATI cards isn't going to get what they are expecting, and ATI is violating the DX9 spec.

Regardless of what they are patching, they are still patching for specific hardware.

They can't. Either you declare to the DirectX runtime that you are compliant with PS_3_0 or you don't. And if the driver declares its maximum PS version as PS_2_0, the DirectX runtime will not allow setting a PS_3_0 shader.

Declare it PS_3_0 compatible; it will probably fail to render correctly, but so do the Volari chips on PS_2_0.

If you don't want to go that way, you could still write your own compiler that compiles code with optimal instruction order, register use, etc.
 
bloodbob said:
Well, if what you say is truly correct, anyone using the _pp hint on textures on ATI cards isn't going to get what they are expecting, and ATI is violating the DX9 spec.
=>
from DirectX SDK said:
The optional partial precision modifier [_pp] applies to dependent reads. This is because partial precision affects arithmetic operations involving the texture coordinate register. It will not affect the precision of texture address instructions because it does not affect the texture coordinate iterators.
Not to mention that this hack is applied only for ATI HW.

Regardless of what they are patching, they are still patching for specific hardware.
Yep. But just look at this like the NVIDIA depth-shadows hack for NV2x (setting the depth buffer as a texture): it's correct usage of the API, exploiting its "grey" areas.

They can't. Either you declare to the DirectX runtime that you are compliant with PS_3_0 or you don't. And if the driver declares its maximum PS version as PS_2_0, the DirectX runtime will not allow setting a PS_3_0 shader.
Declare it PS_3_0 compatible; it will probably fail to render correctly, but so do the Volari chips on PS_2_0.
So XGI should fix their drivers or fail WHQL testing.

If you don't want to go that way, you could still write your own compiler that compiles code with optimal instruction order, register use, etc.
??? What do you mean?
 
Clootie said:
Declare it PS_3_0 compatible; it will probably fail to render correctly, but so do the Volari chips on PS_2_0.
So XGI should fix their drivers or fail WHQL testing.

Who said it needs fixing?
What you claim goes against what Hyp-X (who tested the card) said, I quote:

Hyp-X said:
Well I started up our internal build of our upcoming game on Volari and everything worked.
Including the PS2.0 shadow buffering high-precision buffers.
Until now this only worked on R3xx cards.
So I'm impressed.
 
xGL said:
Who said it needs fixing?
What you claim goes against what Hyp-X (who tested the card) said, I quote:
I have not said that; probably I've misunderstood bloodbob's post: "...will fail to render correctly but so does the volaris chips on PS_2_0" (that sentence looks strange even to me, a non-native English speaker).

xGL said:
Hmm... I'll try that later on my Vol ...
Seems you have a Volari and I do not, so test it more thoroughly. From what I know, at least some shaders I've sent for testing work incorrectly (they run, but the results are far from expected). This has been shown with ShaderMark2 too.
 
xGL said:
What you claim goes against what Hyp-X (who tested the card) said, I quote:

I didn't say I tested everything.
I only tested a few things on the card before taking it out, because the IQ of the card (or better to say the lack of it) and the fan noise were very annoying...

One of the reasons I said what I said is that I was positively surprised the card has high-precision buffers, something nVidia failed to deliver.

But I think this is quite off-topic in this thread...
 
jpaana said:
One actual use could be patching PS 2.0 shader to use centroid sampling which is only in PS 3.0 spec.
Yep, for ATI R3xx chips in PS_2_0: 'dcl_pp t0' == 'dcl_centroid t0'
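At the bytecode level such a patch is tiny: _pp and _centroid are both destination-modifier bits in the dcl's parameter token, so a driver (or patcher) only has to swap one bit for the other. A sketch assuming the modifier constants from d3d9types.h; the function name is made up and this hasn't been run against a real driver:

```c
#include <stdint.h>

/* Destination-modifier bits from d3d9types.h. */
#define D3DSP_DSTMOD_MASK        0x00F00000u
#define D3DSPDM_PARTIALPRECISION (2u << 20)  /* _pp       */
#define D3DSPDM_MSAMPCENTROID    (4u << 20)  /* _centroid */

/* Reinterpret a dcl's destination token: treat _pp as a request
 * for centroid sampling; tokens without _pp pass through unchanged. */
static uint32_t dcl_pp_to_centroid(uint32_t dcl_dst_token)
{
    if ((dcl_dst_token & D3DSP_DSTMOD_MASK) == D3DSPDM_PARTIALPRECISION)
        return (dcl_dst_token & ~D3DSP_DSTMOD_MASK) | D3DSPDM_MSAMPCENTROID;
    return dcl_dst_token;
}
```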
 
Clootie said:
jpaana said:
One actual use could be patching PS 2.0 shader to use centroid sampling which is only in PS 3.0 spec.
Yep, for ATI R3xx chips in PS_2_0: 'dcl_pp t0' == 'dcl_centroid t0'

But that cannot work for all shaders. What happens to shaders that do not want centroid sampling but are using _pp (assuming R3xx would otherwise ignore _pp while nv makes use of it)? There would also have to be the dreaded app detection built into the R3xx drivers, no?
 
In this case app detection to enable centroid sampling would be completely valid, since it's the only way to get AA to work in HL2 without causing artifacts.

Not all app detection is bad.
 
Eolirin said:
In this case app detection to enable centroid sampling would be completely valid, since it's the only way to get AA to work in HL2 without causing artifacts.

Not all app detection is bad.

Yes, but with all the illegal detection going on, how do you think the community is going to react?
Most would just read that ATI was using app detection and thus put them in the same boat as Nvidia (and XGI?)...

ATI would have to be very careful to explain exactly what was detected, and for what reason...
Otherwise it'd be just like Unwinder's anti-detect script, which disables all app detection, whether legal or not, without any way of knowing what has been disabled...

It'll be interesting to see how they go about this...
 
AFAIK ATI/Valve have more or less implemented a suggestion that Colorless made - I think they have created a special texture type and ATI recognises these textures for centroid sampling - so it doesn't detect the application, it just looks out for these textures.
 