Some GDC 2004 D3D papers are up at the ATI developer page.

DemoCoder said:
They should just label them PS_2_nv and PS_2_at and be done with it. PS_2_at is a subset of PS_2_nv. We're back at the situation with PS_1.3 vs PS_1.4

I don't get why you persist in this parallel, nor the associated sensationalist comments some people are making about profiles. These profiles are compiler targets...isn't it the compiler that deals with them? D3DxGet*ShaderProfile(), as listed in the presentation...what role does that play?

Perhaps your point relates to the functionality differences developers will be dealing with? If so, isn't that problem simply that of IHVs releasing hardware with different functionality?
PS 2.0 didn't disappear, nor does this profile seem associated with hardware that has problems running it...developers can still target it as a baseline. If their HLSL shader uses branching, there is a new profile that allows more hardware to express it efficiently...why is that aspect ignored? If the profile doesn't support something key to a shader, won't the application fall back to the PS 2.0 baseline?
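To make that fallback idea concrete, here's a rough sketch assuming the D3DX9 interfaces of the time (the "main" entry point, the CompileWithFallback name, and the minimal error handling are just illustrative, not anyone's actual code):

Code:
#include <d3d9.h>
#include <d3dx9.h>

// Compile one HLSL source for the best profile the device reports,
// falling back to the ps_2_0 baseline if that compile fails.
LPD3DXBUFFER CompileWithFallback(LPDIRECT3DDEVICE9 device,
                                 const char* hlslSource, UINT sourceLen)
{
    LPCSTR bestProfile = D3DXGetPixelShaderProfile(device); // e.g. "ps_2_a", "ps_2_b", "ps_2_0"
    LPD3DXBUFFER byteCode = NULL;
    LPD3DXBUFFER errors = NULL;

    HRESULT hr = D3DXCompileShader(hlslSource, sourceLen, NULL, NULL,
                                   "main", bestProfile, 0,
                                   &byteCode, &errors, NULL);
    if (FAILED(hr))
    {
        // The shader didn't fit the richer profile; try the PS 2.0 baseline.
        if (errors) { errors->Release(); errors = NULL; }
        hr = D3DXCompileShader(hlslSource, sourceLen, NULL, NULL,
                               "main", "ps_2_0", 0,
                               &byteCode, &errors, NULL);
    }
    if (errors) errors->Release();
    return FAILED(hr) ? NULL : byteCode;  // caller creates the shader from the byte code
}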

What is the issue here? What I see is the API allowing finer gradation in functionality exposure and facilitating dealing with an issue that exists for all real-time shader API usage: differing levels of functionality on multiple hardware.

The actual consistent issue for real-time shader usage seems to be differing performance levels of hardware and the hindrance it represents to general shader usage. Wasn't the problem with the ps_2_a profile the performance of the hardware running it and the steps required to deal with that, and not the additional functionality it allows or that the HLSL compiler supported it?
 
demalion said:
I don't get why you persist in this parallel, nor the associated sensationalist comments some people are making about profiles. These profiles are compiler targets...isn't it the compiler that deals with them?

No. The developer has to deal with them, and that's the problem. What if you write HLSL code that uses dynamic branching or gradient functions that simply *won't compile* under PS2.0 or PS_2_b? Not to mention having to compile multiple versions of a shader. The only way to be safe is to write PS2.0 code, but compile it with PS_2_a or PS_2_b. That way, you'll be sure it'll run everywhere.

But if the profiles PS_2_a and PS_2_b are merely being used as hints to the optimizer, why add features that make them *incompatible* with each other? If PS_2_a and PS_2_b were spec-for-spec identical to PS2.0, with the exception of how the compiler generates code, it would be a lot better.

As it is now, they've bifurcated PS2.0. If PS2.0 was such a good intermediate language, why does the compiler have to do any special *hardware architecture dependent* tweaking at all? Can't the driver do the extra work from the "LLSL"?

Perhaps your point relates to the functionality differences developers will be dealing with? If so, isn't that problem simply that of IHVs releasing hardware with different functionality?

Yes, but why should MS release profiles every time some IHV adds a feature? This is exactly the problem we had with PS1.1-1.4. Creeping incremental featuritis. Either let the IHVs expose custom features through extensions, or design a compiler that is future-proof like OpenGL2.0 has done. OpenGL2.0 needs no profiles. You're either OGL2.0 compliant, or you aren't. # registers, # slots, all irrelevant.


If their HLSL shader uses branching, there is a new profile that allows more hardware to express it efficiently...why is that aspect ignored? If the profile doesn't support something key to a shader, won't the application fall back to the PS 2.0 baseline?

Only if you write two different versions of your shader, which sucks. Now you'll have developers writing shaders for PS 2.0, PS_2_a, PS_2_b, and PS3.0?


Wasn't the problem with the ps_2_a profile the performance of the hardware running it and the steps required to deal with that, and not the additional functionality it allows or that the HLSL compiler supported it?

PS_2_a added both features and told the compiler to do special optimizations. A crappy design decision, taken only because Microsoft's compiler isn't part of the IHV's driver. Instead, Microsoft has to go back with every DX9 update (9.0a, 9.0b, 9.0c) and tweak their monolithic compiler to teach each new revision about new HW. This is, frankly, stupid.

What's gonna happen when PVR Series 5 and Intel DX9 come out? PS_2_c and PS_2_d? Are we gonna need PS3_a, PS_3_b, PS_3_c when R420/NV40/R500/NV50 come out?

You'll never be convinced of this, I know. My opinion: Microsoft's HLSL design and approach to extending the core is flawed and OpenGL2.0 has the better model. Take it or leave it.
 
Why is it that the AA paper shows two quality levels for 2x, one for 4x and one for 6x?
 
DemoCoder, the HLSL compiler is not part of the runtime; it is part of the SDK. The runtime knows only 2.0, 2.x, and 3.0. 2_a and 2_b are only compiler profiles.
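As a rough illustration of that split (D3D9 caps on the runtime side, profile strings only on the SDK compiler side; the decision logic here is just a sketch):

Code:
#include <d3d9.h>

// What the *runtime* can tell you: a pixel shader version plus extended caps.
void InspectShaderSupport(LPDIRECT3DDEVICE9 device)
{
    D3DCAPS9 caps;
    device->GetDeviceCaps(&caps);

    if (caps.PixelShaderVersion >= D3DPS_VERSION(3, 0))
    {
        // Runtime-visible PS 3.0 support.
    }
    else if (caps.PixelShaderVersion >= D3DPS_VERSION(2, 0))
    {
        // Runtime-visible PS 2.0, possibly with extended caps such as
        // caps.PS20Caps.NumInstructionSlots or caps.PS20Caps.NumTemps.
        // Which target (ps_2_0, ps_2_a, ps_2_b) to hand the SDK's HLSL
        // compiler is a tool/application decision; the runtime never sees
        // those profile names, only the 2.x byte code they produce.
    }
}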

Sure, glslang has its pros, but there are cons too. I want the best of both worlds, but IMHO this will not happen soon.
 
The Baron said:
ps_2_we_suck_too_much_to_just_make_it_3 (ps_2_a) and ps_2_wtf_we_dont_need_this (ps_2_b). I think that would be a lot classier.

HLSL targets supported by only one IHV suck. I say we ask developers not to use them.

LOL, I was getting bored here at work. :) Thanks, I needed that. Now I can go back to debugging VB.NET XPath anomalies. :)
 
DemoCoder said:
demalion said:
I don't get why you persist in this parallel, nor the associated sensationalist comments some people are making about profiles. These profiles are compiler targets...isn't it the compiler that deals with them?

No. The developer has to deal with them, and that's the problem. What if you write HLSL code that uses dynamic branching or gradient functions that simply *won't compile* under PS2.0 or PS_2_b?

Then use your "PS 2.0" or "base" HLSL shader, which you'll be writing anyway for the PS 2.0 hardware out there...I covered this.

Not to mention having to compile multiple versions of a shader. The only way to be safe is to write PS2.0 code, but compile it with PS_2_a or PS_2_b. That way, you'll be sure it'll run everywhere.

Yes, you can compile PS 2.0 code for PS_2_b, but that is the same PS 2.0 HLSL code you'd be using for the PS 2_0 profile and the hardware it represents. Which you'd be doing for PS 2.0 hardware, as I mentioned.

I'm talking about allowing more complex shaders to be expressed on more hardware...what if the HLSL shader only uses branching OR gradient functions, and it is the one that WILL compile under PS_2_B? That just allows another case of shader functionality to be used on more hardware.

But if the profiles PS_2_a and PS_2_b are merely being used as hints to the optimizer, why add features that make them *incompatible* with each other? If PS_2_a and PS_2_b were spec-for-spec identical to PS2.0, with the exception of how the compiler generates code, it would be a lot better.

I'm not seeing the point of your concern for this.
For shaders that can be expressed in PS 2.0 (for real-time usage), the profiles are being used merely as hints that might allow a speed-up. Those PS 2.0-expressible shaders will be written anyway, because of PS 2.0 hardware with good performance. There doesn't need to be a ps_2_a and ps_2_b HLSL shader written...the compiler writes the ps_2_a and ps_2_b shaders, using suitable elements of PS 2.0.
Then there are shaders that might be beyond "PS 2.0", at least in real time and shader only expression. They might require a minimum of PS 3.0, in which case the ps_2_b profile won't impact them. But they might also be expressible with the set of capabilities associated with the ps_2_b profile, in which case the ps_2_b profile will allow them to be expressed. What's the problem?

As it is now, they've bifurcated PS2.0.
No they haven't, they've added another compiler profile. You could argue that they have allowed more capabilities to be expressed using PS 2.0, though.

If PS2.0 was such a good intermediate language, why does the compiler have to do any special *hardware architecture dependant* tweaking at all?
That's a non sequitur. Expressing conditionals and branching literally, which was already in the PS 2.0 extended functionality specification, is a compiler tweak. The same intermediate language specification is being used, which doesn't exactly point towards its lack of suitability in the way you indicate.

Can't the driver do the extra work from the "LLSL"?

Sure, you could argue that it could. IMO, it would be a shame if the compiler never changed so that this was the only way to do the extra work. Which is why I have a tendency to point out that the DX HLSL compiler can change, using the mechanism of profiles. Which, strangely enough it did.

Perhaps your point relates to the functionality differences developers will be dealing with? If so, isn't that problem simply that of IHVs releasing hardware with different functionality?

Yes, but why should MS release profiles every time some IHV adds a feature?

This isn't just because "some IHV added a feature", it is because an IHV released a new architecture. It seems to be suitably described as "improving the compiler", which it seems desirable for MS to do. Are you going to complain about a PS 3.0 profile?

This is exactly the problem we had with PS1.1-1.4.
Yes, in that some hardware had more functionality than others. That's a reality of progress, as I just went over.
No, in that those were pixel shader versions, and that these are compiler profiles.
Creeping incremental featuritis.
Managed by a compiler, but yes incremental. This is just as silly as blaming NV40 for having PS 3.0 functionality, though, which would amount to exactly the same complaint...all your complaints seem to come back to hardware being different.
Either let the IHVs expose custom features through extensions, or design a compiler that is future-proof like OpenGL2.0 has done. OpenGL2.0 needs no profiles. You're either OGL2.0 compliant, or you aren't.
Really?
So, how will developers using OpenGL 2.0 handle texture reads in the vertex shader and hardware that can't do it? How about shaders that use branching and are too slow on hardware that doesn't support it? How about hardware without gradients? You seem to be proposing they write once and just let software implementation kick in. If it is fast enough, fine. Will it be? How about when it isn't? Do you perhaps mean they just skip shaders for hardware that doesn't support everything at full speed to keep things simple? That's not exactly an improvement over DX HLSL for quality gaming.
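(For what it's worth, here's a hedged sketch of the kind of check a GL 2.0 application is left making for the first of those questions, assuming GL 2.0 headers/entry points; the spec allows an implementation to report zero vertex texture image units, and the function name here is just illustrative:)

Code:
#include <GL/gl.h>
#include <GL/glext.h>   // GL_MAX_VERTEX_TEXTURE_IMAGE_UNITS on older headers

bool HasHardwareVertexTextureFetch()
{
    // GL 2.0 lets an implementation report 0 here, meaning vertex-shader
    // texture reads won't run in hardware on this part.
    GLint maxVertexTextureUnits = 0;
    glGetIntegerv(GL_MAX_VERTEX_TEXTURE_IMAGE_UNITS, &maxVertexTextureUnits);
    return maxVertexTextureUnits > 0;  // if false, pick a different shader/technique --
                                       // the same per-hardware decision the D3D profiles make explicit
}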

This is why I keep mentioning "real-time" usage...it seems clear to me, for example, that GLslang is heading in a better direction for GPU/CPU convergence processing, if it delivers on its advantages before (and if) DX adapts to be more suited to the task. For real-time usage (such as games), however, what developers will have to do to keep customers happy is write different shaders, just as for the "bifurcated" DX shader system.

except that # registers, # slots, all irrelevant.
Not irrelevant, hidden. But it doesn't hide the central issue of having to write different shaders for different levels of functionality/performance in hardware and delivering the shaders in real-time usage.

If their HLSL shader uses branching, there is a new profile that allows more hardware to express it efficiently...why is that aspect ignored? If the profile doesn't support something key to a shader, won't the application fall back to the PS 2.0 baseline?

Only if you write two different versions of your shader, which sucks. Now you'll have developers writing shaders for PS 2.0, PS_2_a, PS_2_b, and PS3.0?

No, PS 2.0, and "beyond PS 2.0" HLSL shaders...as I keep pointing out, ps_2_b is a compiler profile, not a completely new shader model. Some developers, with sufficient performance from soon to be released PS 3.0 parts (which I tend to expect), will make that essentially "PS 2.0" and "PS 3.0". By the nature of capabilities of PS 3.0 and PS 2.0 extensions, and the limitations of real-time shader implementation (for now), it looks to me like "PS 3.0" and "beyond PS 2.0" are quite similar over a wide range of applicability, and that the additional "beyond PS 2.0" profiles will serve to allow some such shaders to run better on more hardware.
Some developers might have an "in between" PS 2.0 and PS 3.0 gradation additional shader, but the ps_2_b profile's existence does not mandate this, it allows it more easily.
What I'm missing here is how you think GLslang development will escape this dilemma of functionality and fitting shaders to real-time usage, unless you think "PS 2.0" hardware will simply be excluded from GLslang implementation for gaming and real-time usage?

Wasn't the problem with the ps_2_a profile the performance of the hardware running it and the steps required to deal with that, and not the additional functionality it allows or that the HLSL compiler supported it?

PS_2_a added both features and told the compiler to do special optimizations. A crappy design decision, taken only because Microsoft's compiler isn't part of the IHV's driver. Instead, Microsoft has to go back with every DX9 update (9.0a, 9.0b, 9.0c) and tweak their monolithic compiler to teach each new revision about new HW. This is, frankly, stupid.

This establishes little more than a repetition of your opinion, and doesn't address the question posed. Let's try again.

What's gonna happen when PVR Series 5 and Intel DX9 come out? PS_2_c and PS_2_d? Are we gonna need PS3_a, PS_3_b, PS_3_c when R420/NV40/R500/NV50 come out?

Well, ignoring the exaggeration for the moment, so what if it does? They are compiler profiles, not mandates for developers to write new shaders for each. Developers can have PS 2.0 and PS 3.0 level targets and write according HLSL shaders. Their lives are still complicated by the reality of performance gradations of actual hardware...the profiles for the compiler serve as a tool towards resolving that reality, not a mandate to specifically target every new profile added. As I just got through saying, ps_2_a being treated in such a way was due to the hardware that ran it, was it not?

You'll never be convinced of this, I know.

To share your opinion? Not with your method of supporting it, no.

My opinion: Microsoft's HLSL design and approach to extending the core is flawed and OpenGL2.0 has the better model. Take it or leave it.
It always seems strange to me how you phrase things as if any viewpoint that doesn't agree with yours is defined by the absolutes you set, in these discussions seemingly that "Microsoft's HLSL design is flawless and OpenGL 2.0's model is worse". Stop pigeon-holing; you end up arguing with something you made up.
 
demalion said:
I'm talking about allowing more complex shaders to be expressed on more hardware...what if the HLSL shader only uses branching OR gradient functions, and it is the one that WILL compile under PS_2_B? That just allows another case of shader functionality to be used on more hardware.

PS_2_B doesn't have branching or gradients, only PS_2_A. Even you're getting confused over the naming. I just don't think there should be another shader model between 2.0 and 3.0. Microsoft should not allow vendors to expose non-full 3.0 hardware like this.

But if the profiles PS_2_a and PS_2_b are merely being used as hints to the optimizer, why add features that make them *incompatible* with each other? If PS_2_a and PS_2_b were spec-for-spec identical to PS2.0, with the exception of how the compiler generates code, it would be a lot better.

Those PS 2.0-expressible shaders will be written anyway, because of PS 2.0 hardware with good performance. There doesn't need to be a ps_2_a and ps_2_b HLSL shader written...the compiler writes the ps_2_a and ps_2_b shaders, using suitable elements of PS 2.0.

Good performance would not require this profile system. It only requires the compiler to be part of the driver and invoked at runtime. As it stands now, there have been 3 patches to the FXC compiler. Had there been a lot of DX9 games on the market, everyone who used HLSL would have had to issue 3 patches to take advantage of this.
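(As a rough illustration of that runtime-compile model, with the GL 2.0 entry points; loader setup and the actual GLSL source are omitted, and the function name and log buffer size are just placeholders:)

Code:
#include <GL/gl.h>
#include <GL/glext.h>   // GL 2.0 shader enums/entry points on older headers

// Hand the driver high-level source at runtime; the driver's own compiler
// targets whatever hardware it is running on, so compiler fixes ship with
// driver updates rather than with application patches.
GLuint CompileFragmentShader(const char* glslSource)
{
    GLuint shader = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(shader, 1, &glslSource, NULL);
    glCompileShader(shader);                           // compiled by the driver, now

    GLint compiled = GL_FALSE;
    glGetShaderiv(shader, GL_COMPILE_STATUS, &compiled);
    if (!compiled)
    {
        char log[4096];
        glGetShaderInfoLog(shader, sizeof(log), NULL, log);  // driver-specific diagnostics
        glDeleteShader(shader);
        return 0;   // caller falls back to a simpler shader/technique
    }
    return shader;
}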


Then there are shaders that might be beyond "PS 2.0", at least in real time and shader only expression. They might require a minimum of PS 3.0, in which case the ps_2_b profile won't impact them. But they might also be expressible with the set of capabilities associated with the ps_2_b profile, in which case the ps_2_b profile will allow them to be expressed. What's the problem?

Well, first of all, PS_2_b doesn't add any PS3.0 features. But that's beside the point. The problem is, rather than creating just a new compiler profile, it's creating a new ShaderModel. I don't think ShaderModel 2.a/b should exist. Now we have SM1.x, SM2.0, SM2.a, SM2.b, and SM3.0. It's a waste, because I'll bet in the vast majority of cases, no one will use them. PS_2_a is especially going to die a quick death due to NV40, and 2_b is virtually useless.



As it is now, they've bifurcated PS2.0.
No they haven't, they've added another compiler profile. You could argue that they have allowed more capabilities to be expressed using PS 2.0, though.

Creating a profile that can create code that is not backwards compatible with PS2.0 hardware is creating a new shader model. Just like PS1.3 code won't run on PS1.1 HW and is not PS1.4 capable, SM2.a code won't run on SM2.0 HW and is not 3.0 either.

Sure, you could argue that it could. IMO, it would be a shame if the compiler never changed so that this was the only way to do the extra work. Which is why I have a tendency to point out that the DX HLSL compiler can change, using the mechanism of profiles. Which, strangely enough it did.

Yes, it's also a shame that every 6 months Microsoft has to release another patch to DX9 with 6-month-old, Microsoft-engineered compiler optimizations (STILL BUGGED) and more profiles, which in turn requires ISVs to recompile their games, relink them, develop patches, and distribute patches to end users. Wow, what a wonderfully elegant system. Boy Demalion, you're right. By golly, it works!


This isn't just because "some IHV added a feature", it is because an IHV released a new architecture. It seems to be suitably described as "improving the compiler", which it seems desirable for MS to do. Are you going to complain about a PS 3.0 profile?

So you think Microsoft should release new DX9 updates for every new GPU architecture that hits the market? Of course, MS's compiler still doesn't generate optimal code, but they'll get there eventually?

The reason why I don't complain about PS3.0 is because PS2.0 and PS3.0 were defined 2 years ago and they are sufficiently different to justify their existence, as well as developers clearly only having to write 2 different shaders, not 4 different shaders. It's good to have 1 or 2 standards which you must learn. Having to learn N different changing target languages is bad.

The point of a high level language is to be able to write portable code and increase productivity. Microsoft's compiler doesn't work like that, because different profiles switch off HLSL features and certain language constructs causing your code to break.

This is exactly the problem we had with PS1.1-1.4.
Yes, in that some hardware had more functionality than others. That's a reality of progress, as I just went over.

Which Microsoft decided to simplify by creating 2 shader models. Microsoft themselves claimed that what they did with PS1.0-1.4 was a mistake. They then went on to break their own design decisions.

Creeping incremental featuritis.
Managed by a compiler, but yes incremental.

No, managed by the developer. If you think you can simply recompile HLSL code that uses advanced features on any profile and leave it to the compiler, you are blissfully naive.

So, how will developers using OpenGL 2.0 handle texture reads in the vertex shader and hardware that can't do it? How about shaders that use branching and are too slow on hardware that doesn't support it? How about hardware without gradients? You seem to be proposing they write once and just let software implementation kick in.

That is the OpenGL philosophy and it has worked for a long time. Ironically enough, this doesn't seem to cause a lot of developer headaches compared to DirectX. In reality, all OpenGL2.0 hardware will support these features in HW. ARB simply won't let NVidia or ATI get away with shipping a half-featured card like MS did with NV3x. Both 3dLabs nextgen and NV40 support all OpenGL2.0 features.

OpenGL2.0 declares that it is the responsibility of the IHV/driver to make sure *any* OpenGL2.0 legal program *runs*. It doesn't guarantee real-time performance, but neither does DirectX9. Targeting the PS_2_a (NVidia) profile simply will not guarantee realtime performance of PS2.0 shaders.

In fact, targeting PS2.0 won't guarantee real time performance, since some shaders can be too slow on some hardware. Developers have to do performance validation and testing anyway. The idea that if you stick to a DX9 shader model/profile, and if it compiles, you won't have any performance problems on all supported hardware is bogus.


That's not exactly an improvement over DX HLSL for quality gaming.
Lol. Fanboys for MS too I guess. So by implication, OpenGL isn't "quality gaming"?

except that # registers, # slots, all irrelevant.
Not irrelevant, hidden. But it doesn't hide the central issue of having to write different shaders for different levels of functionality/performance in hardware and delivering the shaders in real-time usage.

In practice it will, since OGL2.0 support will require OGL2.0 HW for the most part. Nvidia and 3dLabs are already there. Anyone who wants to try and run OGL2.0 on DX8 or DX9 PS2.0 HW will have their hands full.


What I'm missing here is how you think GLslang development will escape this dilemma of functionality and fitting shaders to real-time usage, unless you think "PS 2.0" hardware will simply be excluded from GLslang implementation for gaming and real-time usage?

First of all, DirectX doesn't solve this dilemma. Secondly, you've got it exactly right. When OGL2.0 is finally certified, there will be OGL2.0 capable HW. Developers will write shaders. They will then test. If they use a feature which is not efficient on a certain architecture, they'll find it when they test, and go back and rework it. (yes, you can query HW)
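(A sketch of that querying, assuming GL 2.0 headers: the vendor/renderer strings and implementation limits are all available at runtime, so the "rework it for this architecture" decision can at least be driven by code. The function name and the particular limit queried are just examples:)

Code:
#include <GL/gl.h>
#include <GL/glext.h>
#include <cstdio>

void ReportHardware()
{
    // Identify the architecture...
    const GLubyte* vendor   = glGetString(GL_VENDOR);
    const GLubyte* renderer = glGetString(GL_RENDERER);
    printf("%s / %s\n", (const char*)vendor, (const char*)renderer);

    // ...and its shader-visible limits, to pick which shader variant to load.
    GLint maxFragmentUniforms = 0;
    glGetIntegerv(GL_MAX_FRAGMENT_UNIFORM_COMPONENTS, &maxFragmentUniforms);
    printf("max fragment uniform components: %d\n", maxFragmentUniforms);
}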

To share your opinion? Not with your method of supporting it, no.

I support my arguments with experience, not ignorance. I've actually written compilers and have a lot of experience with language design; you apparently don't (at least as evidenced by your ignorance in past discussions as to how compilers actually work).


You think that having the ability to write 4 different HLSL fragments which can compile to four different profiles (2.0, 2.a, 2.b, 3.0) in order to take advantage of hidden features of 2.a/2.b HW is a good idea. I think it's a bad idea. I think having a universal standard is better. If vendors want features exposed, DirectX should support a method to allow IHVs to ship extensions, rather than Microsoft shoving everything into the core.

OpenGL supports such a method through the OPTION mechanism of the shading languages.
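(Strictly, the OPTION lines live in the low-level ARB program languages rather than in glslang itself; a hedged illustration of the idea, using a trivial fragment program and an ARB-defined option:)

Code:
// A program opts into an ARB- or vendor-defined extension with an OPTION
// line; a driver that doesn't support that option rejects the program and
// the application falls back -- no new core profile required.
const char* fragmentProgram =
    "!!ARBfp1.0\n"
    "OPTION ARB_precision_hint_fastest;\n"   // defined by ARB_fragment_program itself
    "MOV result.color, fragment.color;\n"
    "END\n";
// Vendor-specific options (exposed through the vendors' own extensions) are
// declared the same way.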

This comes down to aesthetics. I think it's bad design. You think it's good. Some developers will even agree with you. I favor simplicity over complexity in the core. I favor abstraction over rigidity. I do not favor MS exposing each and every quirk of an architecture. There is no way I can possibly prove to you MS's decision is bad. After all, programmers are still arguing over OO vs functional vs structured programming. We are still arguing static vs dynamic typing, or top-down programming vs bottom-up "extreme programming". At best I could wave my hands and point to the fact that every time a uniform standard has been created, it led to a huge amount of progress in the market, be it HTML/XML, or MPEG, or J2EE. There is only so much fragmentation that can be tolerated by developers.

OpenGL leaves exposing non-standard architectural features to extensions that IHVs can ship and evangelize to developers themselves (plus provide technical support for). DirectX shoves thousands of queryable capability bits into the *core* of the API, and is now a jumble of shader models and profiles, which all has to be managed by Microsoft. It's a double negative. If new HW comes out (like NV40 or R420) with new features, IHVs can't expose it immediately on DX. They have to wait 6 months to a year for a DX SDK update. Whereas, with OpenGL, they can immediately expose it for experimentation.

Likewise, the core API doesn't get cluttered up by all of these nonstandard or experimental features. Whereas, 6 months later when MS's new SDK patch arrives, MS will have shoved all the new extensions into the core SDK. And you wonder why so many coders have such a low opinion of Microsoft software?
 
Yep, another long one. I color coded the non-repetition.

DemoCoder said:
demalion said:
I'm talking about allowing more complex shaders to be expressed on more hardware...what if the HLSL shader only uses branching OR gradient functions, and it is the one that WILL compile under PS_2_B? That just allows another case of shader functionality to be used on more hardware.
PS_2_B doesn't have branching or gradients, only PS_2_A. Even you're getting confused over the naming.

Well, first, I maintain my point about ps_2_b allowing branching to compile where it might not in ps_2_0, just looking at the higher limits, as I repeat that it is a compiler profile...the ability to express more instructions using the PS 2.0 model was already there with the capability bit extensions. Second, I was aware of the lack of gradients, which is why I was focusing on OR, but I was indeed confused by what ATI's GDC presentation discussed (the discussion led me to think ps_2_b would emit if/else/endif and that the hardware used predication to express it) and by not paying enough attention to the simple chart listing that clearly specified what ps_2_b was associated with :rolleyes:.

My thought on gradients, which is why I was focusing on them too much, is that I was surprised at the indication that ATI wasn't exposing them for future hardware, as it seems on the surface a simple thing to implement. Looking at that chart more closely, I'm struck by what someone indicated earlier, in that ps_2_b looks like it could even suit the 9800 (given some talk about issues with F-Buffer exposure under DirectX that might hint that it is a solution to that problem...the question still being how many temporary registers the 9800 could support). Aside from the issue of HLSL compilers, if (given the rest of the discussion, this is what I'm now pondering as an "if") ps_2_b represents the R420, then ATI doesn't seem to have improved anything as far as shader instruction functionality goes.


I just don't think there should be another shader model between 2.0 and 3.0. Microsoft should not allow vendors to expose non-full 3.0 hardware like this.

You seem too willing to artificially exclude hardware from implementing a shader. You still haven't clarified justification beyond your preference to punish IHVs for arbitrarily not satisfying you. I still don't see how it makes sense for consumers.

DemoCoder said:
But if the profiles PS_2_a and PS_2_b are merely being used as hints to the optimizer, why add features that make them *incompatible* with each other? If PS_2_a and PS_2_b were spec-for-spec identical to PS2.0, with the exception of how the compiler generates code, it would be a lot better.

demalion said:
Those PS 2.0-expressible shaders will be written anyway, because of PS 2.0 hardware with good performance. There doesn't need to be a ps_2_a and ps_2_b HLSL shader written...the compiler writes the ps_2_a and ps_2_b shaders, using suitable elements of PS 2.0.

Good performance would not require this profile system. It only requires the compiler to be part of the driver and invoked at runtime.

No, it doesn't "require" one exclusively over the other, they are both different solutions to the same problem.

As it stands now, there have been 3 patches to the FXC compiler. Had there been a lot of DX9 games on the market, everyone who used HLSL would have had to issue 3 patches to take advantage of this.

First, 3 patches in (what, a year?) is not horrendous, but at least this exaggeration is much better than, for example, a hypothetical patch-a-month discussion. Second, your assertion happens to assume each and every HLSL compiler update impacted each and every game. Third, that is if the IHVs didn't have an LLSL compiler and the shaders in those games would benefit from the profile characteristics...looking at reality, it seems to me both that IHVs have provided LLSL compilers and that the HLSL compiler has improved...so it is both true that patching to a new compiler might provide benefit and that so might driver updates.

Then there are shaders that might be beyond "PS 2.0", at least in real time and shader only expression. They might require a minimum of PS 3.0, in which case the ps_2_b profile won't impact them. But they might also be expressible with the set of capabilities associated with the ps_2_b profile, in which case the ps_2_b profile will allow them to be expressed. What's the problem?

Well, first of all, PS_2_b doesn't add any PS3.0 features.

Well, we've had a discussion about algebraic expression of conditionals, and the slides I'm thinking of mentioned them as well. A larger instruction limit would allow more such to be expressed, and thus a more complex shader.

But that's beside the point. The problem is, rather than creating just a new compiler profile, it's creating a new ShaderModel. I don't think ShaderModel 2.a/b should exist. Now we have SM1.x, SM2.0, SM2.a, SM2.b, and SM3.0. It's a waste, because I'll bet in the vast majority of cases, no one will use them. PS_2_a is especially going to die a quick death due to NV40, and 2_b is virtually useless.

Did you illustrate why compiler profiles would need to be ignored when you just need to use a compiler aware of the new one, or are you just repeating your opinion and selectively dropping bits of my commentary "on accident"?

As it is now, they've bifurcated PS2.0.
No they haven't, they've added another compiler profile. You could argue that they have allowed more capabilities to be expressed using PS 2.0, though.

Creating a profile that can create code that is not backwards compatible with PS2.0 hardware is creating a new shader model. Just like PS1.3 code won't run on PS1.1 HW and is not PS1.4 capable, SM2.a code won't run on SM2.0 HW and is not 3.0 either.

And the code is backwards compatible where it does not exceed the capabilities of the prior "shader model". Never mind that the PS 2.0 shader model was not as limited as the PS 1.3 shader model and doesn't fit your parallel...obviously, discussing models that were introduced to expose hardware functionality before a compiler dealt with it, and recalling those hassles of "backwards compatibility", is what matters.
How about if I make up the positive-sounding phrase "upwards compatibility for hardware" to describe things, since my point is just going to be ignored? Shall we have a "negative-sounding" versus "positive-sounding" phrase contest, or could we deal with the distinction between a compiler profile and the more limited "shader models" you mentioned? I've already brought them up.

Sure, you could argue that it could. IMO, it would be a shame if the compiler never changed so that this was the only way to do the extra work. Which is why I have a tendency to point out that the DX HLSL compiler can change, using the mechanism of profiles. Which, strangely enough it did.

Yes, it's also a shame that every 6 months Microsoft has to release another patch to DX9 with 6-month-old, Microsoft-engineered compiler optimizations (STILL BUGGED) and more profiles, which in turn requires ISVs to recompile their games, relink them, develop patches, and distribute patches to end users. Wow, what a wonderfully elegant system. Boy Demalion, you're right. By golly, it works!

:LOL:
Gee, if you cram all the things I've refuted or addressed into one run-on sentence, you could probably complain about length if I picked it apart and illustrated how useless your sentence was, or just keep repeating it if I don't. How wonderful for you, some people might not even realize you continued to ignore my point!

This isn't just because "some IHV added a feature", it is because an IHV released a new architecture. It seems to be suitably described as "improving the compiler", which it seems desirable for MS to do. Are you going to complain about a PS 3.0 profile?

So you think Microsoft should release new DX9 updates for every new GPU architecture that hits the market?

DX 9 HLSL compiler updates. For developers. Sort of nicely coincides with game developers using more complex shaders because more capable hardware was released. But let's ignore that correlation and why it goes counter to your characterization and focus on this as an issue of HLSL being flawless or not.

Of course, MS's compiler still doesn't generate optimal code, but they'll get there eventually?

Hey, watch me strut my feathers, I'm a pigeon in a hole!

The reason why I don't complain about PS3.0 is because PS2.0 and PS3.0 were defined 2 years ago...

PS 2.0 was defined with capability bits for further instruction count, wasn't it?

...and they are sufficiently different to justify their existence,
in your opinion, that doesn't naturally follow from the first part.

as well as developers clearly only having to write 2 different shaders, not 4 different shaders.

Didn't I answer this question?

It's good to have 1 or 2 standards which you must learn. Having to learn N different changing target languages is bad.
They're not "compiler profiles"; mentioning the compiler shows how something is dealing with them for you. They mandate whole new shaders from developers because they are really new "shader models" (never mind that they use the same shader model) that developers have to target. No, no, they are new "standards" and "target languages" developers have to learn. :oops:
The point of a high level language is to be able to write portable code and increase productivity. Microsoft's compiler doesn't work like that, because different profiles switch off HLSL features and certain language constructs causing your code to break.

Eh? Different profiles enable different HLSL features to be expressed differently, causing your code to run on more hardware. If you had fewer profiles, more complex shader code would "break" (fail to compile) for more hardware, not less.

This is exactly the problem we had with PS1.1-1.4.
Yes, in that some hardware had more functionality than others. That's a reality of progress, as I just went over.

Which Microsoft decided to simplify by creating 2 shader models. Microsoft themselves claimed that what they did with PS1.0-1.4 was a mistake. They then went on to break their own design decisions.

Oh my, I'm sure this was an accident, but you seem to have inadvertently hidden my point by omitting some text.

Yes, in that some hardware had more functionality than others. That's a reality of progress, as I just went over.
No, in that those were pixel shader versions, and that these are compiler profiles.

Silly me, I think the latter part was my point, and answered your response before you made it.

Creeping incremental featuritis.
Managed by a compiler, but yes incremental.

No, managed by the developer. If you think you can simply recompile HLSL code that uses advanced features on any profile and leave it to the compiler, you are blissfully naive.

I think you can write your HLSL shader targeted at base PS 2.0 hardware and another shader targeted beyond those limits, and that some of the latter can indeed compile for the profiles "between PS 2.0 and PS 3.0". And perform in real time on the hardware running it, if the hardware is fast enough.
I also think that other shaders targeted beyond PS 2.0 will likely fail to compile.
Obviously, since I'm not with you in only focusing on those that will fail and therefore evaluating any intermediate profiles as useless, I'm naive and think every "PS 3.0" shader will compile for the magical ps_2_b and ps_2_a profiles. That's almost as silly as pretending all shaders magically divide into the groups "PS 2.0", with base PS 2.0 instruction limitations, and "PS 3.0", with a minimum of 512 instructions and using every feature exposed by PS 3.0, and that any PS 3.0 card released will run them in real time!

So, how will developers using OpenGL 2.0 handle texture reads in the vertex shader and hardware that can't do it? How about shaders that use branching and are too slow on hardware that doesn't support it? How about hardware without gradients? You seem to be proposing they write once and just let software implementation kick in.

That is the OpenGL philosophy and it has worked for a long time. Ironically enough, this doesn't seem to cause a lot of developer headaches compared to DirectX.

Oh my, developers have been developing GLslang games forever, haven't they? Shaders are the same thing as everything else, and it isn't like OpenGL has ever had any headaches for implementing shaders before!
You know, I think it would be asinine to argue that past headaches with implementing shaders through OpenGL dictated how GLslang would manage, probably because the "logic" is identical to what you're proposing as an answer to the points I'm raising.


In reality, all OpenGL2.0 hardware will support these features in HW. ARB simply won't let NVidia or ATI get away with shipping a half-featured card like MS did with NV3x. Both 3dLabs nextgen and NV40 support all OpenGL2.0 features.

Why did you drop my question "Do you perhaps mean they just skip shaders for hardware that doesn't support everything at full speed to keep things simple?" when you seem to be answering "yes"? Did it put your viewpoint in an inconvenient context for this point in the conversation?

OpenGL2.0 declares that it is the responsibility of the IHV/driver to make sure *any* OpenGL2.0 legal program *runs*. It doesn't guarantee real-time performance, but neither does DirectX9. Targeting the PS_2_a (NVidia) profile simply will not guarantee realtime performance of PS2.0 shaders.

Perhaps that's because the hardware lacks the performance capability, not because the profile exists? Did you keep ignoring my questions and comments about the ps_2_a profile and the hardware running it so you could make this statement, or was it an accident?

In fact, targeting PS2.0 won't guarantee real time performance, since some shaders can be too slow on some hardware. Developers have to do performance validation and testing anyway.

Didn't I mention this already? Perhaps it would be useful to discuss the issue without ignoring the points I raised.

The idea that if you stick to a DX9 shader model/profile, and if it compiles, you won't have any performance problems on all supported hardware is bogus.

OTOH, if you have a fast "PS 2.0" part that can execute code using expanded limits in a "beyond PS 2.0" compiler profile, you can execute real-time shaders that would have simply failed without that profile. One of these statements was my point, the other was not. It is interesting which you chose to address.

That's not exactly an improvement over DX HLSL for quality gaming.
Lol. Fanboys for MS too I guess. So by implication, OpenGL isn't "quality gaming"?

It must have been fun dropping the actual question that goes with that statement and getting your jollies by calling me a f@nboy. "Do you perhaps mean they just skip shaders for hardware that doesn't support everything at full speed to keep things simple? That's not exactly an improvement over DX HLSL for quality gaming." Gee, is that an "MS f@nboy" ( :oops: how soon we forget!) "implication" about OpenGL, or an implication about your idea of skipping shader implementation for more hardware?

except that # registers, # slots, all irrelevant.
Not irrelevant, hidden. But it doesn't hide the central issue of having to write different shaders for different levels of functionality/performance in hardware and delivering the shaders in real-time usage.

In practice it will, since OGL2.0 support will require OGL2.0 HW for the most part. Nvidia and 3dLabs are already there. Anyone who wants to try and run OGL2.0 on DX8 or DX9 PS2.0 HW will have their hands full.

(Your vision of) GLslang not addressing a bunch of hardware, and requiring developers to either ignore any such hardware (great for consumers!) or fall back to extensions (which actually fit every attack you've manufactured about profiles!), doesn't seem so simple. Since it should be logically obvious that the same choices can be made in DX HLSL regarding PS 3.0, your problem seems to come down to developers having it made easier to do otherwise in HLSL and not immediately proving its inferiority.

What I'm missing here is how you think GLslang development will escape this dilemma of functionality and fitting shaders to real-time usage, unless you think "PS 2.0" hardware will simply be excluded from GLslang implementation for gaming and real-time usage?

First of all, DirectX doesn't solve this dilemma.
If you ignore any point that indicates otherwise, sure. Or you treat "solve" as "solve completely so developers never have to consider it" and ignore consistency when applying the question to GLslang.
Secondly, you've got it exactly right. When OGL2.0 is finally certified, there will be OGL2.0 capable HW. Developers will write shaders. They will then test. If they use a feature which is not efficient on a certain architecture, they'll find it when they test, and go back and rework it. (yes, you can query HW)
This still leaves the problem of real-time shaders for the hardware you propose HLSL should ignore, except with the details hidden. Your argument depends on ignoring hardware that can't implement the full "PS 3.0" featureset, and now at the last minute you're saying the developer should indeed create a new shader to deal with the problem with one specific hardware architecture? AFAICS, you're trying to say developers don't have to worry about shaders running on different hardware using GLslang, while avoiding dealing with the undesirable side-effects of ignoring such issues by saying they'll "just go back and rework it" and ignoring how that is similar to what you'd be doing in HLSL.

To share your opinion? Not with your method of supporting it, no.
I support my arguments with experience, not ignorance.

If the support you tend to provide is evidence of your experience, I'd argue there isn't much distinction in your case. (We could hold a really productive conversation exchanging these jibes, yes?)
In actuality, I think they are instead evidence of an attitude that only your viewpoint is worthwhile, independent of details, that you seem to have some need to consistently maintain.
I've actually written compilers and have a lot of experience with language design; you apparently don't (at least as evidenced by your ignorance in past discussions as to how compilers actually work).
You mean where I asked questions, listened to answers, provided my thoughts, and said things like that "now that was a useful discussion"? Why, naturally, you should bring that up, after all how could I have a valid point about anything that even touches on the matter, and how could any point you introduce be anything but valid?

You think that having the ability to write 4 different HLSL fragments which can compile to four different profiles (2.0, 2.a, 2.b, 3.0) in order to take advantage of hidden features of 2.a/2.b HW is a good idea. I think it's a bad idea.
What in your extensive compiler experience indicates to you that compilers always have to have developers write new high level code to compile to a different target? If it doesn't, I invite you to reconsider your characterization of my viewpoint if such a behavior is possible for you.

I think having a universal standard is better. If vendors want features exposed, DirectX should support a method to allow IHVs to ship extensions, rather than Microsoft shoving everything into the core.

I'm aware of this opinion, and I've expressed the flaws I've seen in your support of it, as I've expressed the flaws I've seen in your statements here. This is because of the flaws in how you maintain that first statement is already proven and how things you say seem contrary to observation and logic, not because I hold the diametrically opposed viewpoint you persist in railing against.

OpenGL supports such a method through the OPTION mechanism of the shading languages.

And is it really too much to ask that you hold a conversation that doesn't bypass the question of whether the system is better for developers and consumers than profiles by making distorted statements about profiles?

This comes down to aesthetics. I think it's bad design. You think it's good.
Sure (to the last two sentences). You treat bad and good as absolutes, though, where I do not.
Some developers will even agree with you.
Oh? Do they know things about game design that you have not dealt with? Is the answer no, or could it be that your assertion about "supporting your argument with experience instead of ignorance" might ring a bit hollow as an alternative to...actually discussing the issue?
I favor simplicity over complexity in the core. I favor abstraction over rigidity. I do not favor MS exposing each and every quirk of an architecture.
By arguing that hand coding to PS 1.4 and PS 1.1/1.3 separately is the same thing as compiling to different ps_2_* profiles, and dismissing any commentary about the distinction? That's not logical at all.
Perhaps that's where the problem lies.


There is no way I can possibly prove to you MS's decision is bad.
Not as an absolute, no. That it has issues, yes, you can, if it is even necessary to prove that to me...I recognize that it has issues already, I just don't see GLslang as without issues in comparison. That's why I don't subscribe to "HLSL bad, GLslang good".

After all, programmers are still arguing over OO vs functional vs structured programming. We are still arguing static vs dynamic typing, or top-down programming vs bottom-up "extreme programming". At best I could wave my hands and point to the fact that every time a uniform standard has been created, it led to a huge amount of progress in the market, be it HTML/XML, or MPEG, or J2EE. There is only so much fragmentation that can be tolerated by developers.
...

Yes, but the profiles remove artificial fragmentations, where you insist adding a profile adds a fragmentation by recognizing it. The fragmentation is from the hardware (same for GLslang); adding the profile just lets the compiler try to deal with it for HLSL.
AFAICS, it is your trying to spoon-feed me your opinion that has you end up "hand-waving" about things like PS 3.0 only being exposed in HLSL 6 months after PS 3.0 hardware, and how adding a profile and allowing shaders to be expressed with less than the full PS 3.0 featureset is a negative. If MS takes 6 months to go from PS 2.0 compilers to PS 3.0 compilers, how are IHVs implementing it bug-free and immediately? When did GLslang establish that track record?
 
digitalwanderer said:
The Baron said:
Re: past three posts.

Jesus Fucking Christ.

Seconded! :oops:
Yeah, all discussions with Demalion degenerate into massive posts that end simply when one person refuses to spend the next half hour or more replying, repeating the same arguments because one or both sides are too self-biased to agree with anything else.
 
Ostsol said:
Yeah, all discussions with Demalion degenerate into massive posts that end simply when one person refuses to spend the next half hour or more replying, repeating the same arguments because one or both sides are too self-biased to agree with anything else.

I don't know about that. All I know is I just ain't got the attention span to make it thru posts that long! :(
 
Ostsol said:
digitalwanderer said:
The Baron said:
Re: past three posts.

Jesus Fucking Christ.

Seconded! :oops:
Yeah, all discussions with Demalion degenerate into massive posts that end simply when one person refuses to spend the next half hour or more replying, repeating the same arguments because one or both sides are too self-biased to agree with anything else.

What a wonderful and meaningful contribution. After all, it is short, it must be true. If it is a personal attack on me based on a bunch of put-downs that made you feel good, that's just fine...after all, you decided I deserved it, right?
 
digitalwanderer said:
Ostsol said:
Yeah, all discussions with Demalion degenerate into massive posts that end simply when one person refuses to spend the next half hour or more replying, repeating the same arguments because one or both sides are too self-biased to agree with anything else.

I don't know about that. All I know is I just ain't got the attention span to make it thru posts that long! :(

I color coded that last message for the short of attention span. Please take advantage of my effort to give you a hint as to where to look for new discussion points. You might also return to my initial comment and consider whether those points have actually been addressed or simply talked over.
 
ninelven said:
Any chance of getting the ignore button installed anytime soon?
Yeah. The last thing we need here is for detailed technical debate to be encouraged. :rolleyes:

There is an ignore option. It's called the mouse wheel.

Folks, if the length and/or technicality of posts bother you that much, there's always the console forum.
 
demalion said:
I color coded that last message for the short of attention span. Please take advantage of my effort to give you a hint as to where to look for new discussion points. You might also return to my initial comment and consider whether those points have actually been addressed or simply talked over.

I hadn't noticed you color coded it; I must have been scrolling past it too fast. Sorry, and thanks for taking the time to include the prompts for those of us with the attention span of an over-ripe grapefruit. :)

But on your second point of going and considering a previous long post, NO WAY! :oops:

WAY too much work and effort, I want sound bites and bulleted points! ;)
 
demalion said:
Ostsol said:
digitalwanderer said:
The Baron said:
Re: past three posts.

Jesus Fucking Christ.

Seconded! :oops:
Yeah, all discussions with Demalion degenerate into massive posts that end simply when one person refuses to spend the next half hour or more replying, repeating the same arguments because one or both sides are too self-biased to agree with anything else.

What a wonderful and meaningful contribution. After all, it is short, it must be true. If it is a personal attack on me based on a bunch of put-downs that made you feel good, that's just fine...after all, you decided I deserved it, right?
*yawns*
 
If you don't understand what is being posted, don't read it. There seems to be an influx of 'other' forum regulars who like to drag down conversations here; save your drivel for home base.

Demalion's posts are usually long, but he is a long-time veteran member and his posts have good content...grow up.
 