Some GDC 2004 D3D papers are up at the ATI developer page.

Technical debates and posts aren't required at B3D. However, let's take a moment to read my signature and recognize the importance of philosophers, such as demalion, in the technical arena, including the Beyond3D 3D Technology Forums ;) .

P.S. Took a wild guess at demalion's educational/intellectual background
 
My problem with demalion is that he's overly verbose, often unclear, he often doesn't know what he's talking about, so his opponents often have to resort to delivering a tutorial on the concepts being debated, and he frequently resorts, in almost all threads, to accusing people of not answering his questions or answering his points because they have to chop up his huge posts. This is my last post on this subject with him due to the ever-increasing length.


demalion said:
Well, first, I maintain my point about ps_2_b allowing branching to compile where it might not in ps_2_0, just looking at the higher limits, as I repeat that it is a compiler profile.

Only some branches can be inlined this way: branches with an early loop exit, or those that discard the current pixel based on a texture lookup. Some can be made to work with egregious hacks that kill performance, but that's what GLSlang requires, and MS won't do this.

Code:
   for (i = 0; i < 5; i++)
   {
      a = func(i, a);
      if (a < 0) {
         a = 0;
         break;
      }
   }
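For context on why this case is hard: a compiler targeting a profile with no dynamic flow control can only "inline" a loop like the one above by fully unrolling it and replacing the `break` with predication. Here's a minimal Python sketch of that transformation (`func` is just a stand-in for the shader's `func()`; the point is the rewrite, not the arithmetic):

```python
# Sketch of what a compiler could emit for the loop above when the
# target profile has no dynamic branching: fully unroll the loop and
# replace `break` with a predicate that masks later writes.

def func(i, a):
    # Stand-in for the shader's func(); any pure function works here.
    return a - i

def with_break(a):
    # The original loop, as written in the shader snippet above.
    for i in range(5):
        a = func(i, a)
        if a < 0:
            a = 0
            break
    return a

def unrolled_predicated(a):
    # The transformed version: every iteration is present (a real
    # compiler would write the five copies out long-hand), and a
    # `done` predicate masks all writes once the exit has fired.
    done = False
    for i in range(5):
        t = func(i, a)
        if not done:
            a = t
            if a < 0:
                a = 0
                done = True
    return a
```

Both versions produce the same result for every input; the cost is that all five iterations' arithmetic always executes, which is exactly the performance hit under discussion.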

Developers must be conscious when they write shaders if they are using constructs which are 3.0 only. Thus, a developer calling "break" or "discard" must be aware ahead of time he is targeting SM3.0. Similarly, when writing SM2.0 shaders, developers must be sensitive to the length of their shaders. For a developer to write a shader that won't compile to SM2.0 (e.g. requires SM2.b), but isn't SM3.0, means he is consciously writing a shader for that model. Only a subset of SM3.0 shaders will compile on SM2.b, so the only scenario in which SM2.b gets most of its real usage is if the developer "targets" it when writing his HLSL code, which is in contrast to the idea that the compiler will take care of most of the work.

You seem too willing to artificially exclude hardware from implementing a shader. You still haven't clarified justification beyond your preference to punish IHVs for arbitrarily not satisfying you. I still don't see how it makes sense for consumers.

Having a common, standard, well designed API is good for consumers since it increases productivity of game developers and decreases the chances for bugs in games. Having such a common API means setting a least common denominator, which means some people's specialized hardware goes unused. This happens in every standard. That's why OpenGL extensions exist, and it's why many standards have mechanisms to expose proprietary features. But just because HW has an unexposed feature does not mean that that feature should be promoted into the core of the standard.

Which is why I have a tendency to point out that the DX HLSL compiler can change, using the mechanism of profiles. Which, strangely enough, it did
...
How wonderful for you, some people might not even realize you continued to ignore my point!

Typical. If someone doesn't respond line by line and point by point to your massive posts, they are accused of ignoring you. This is getting to be a whining habit for you. Maybe if your posts were more succinct, people wouldn't have to chop them up. Anyway, I was the one who pointed out to YOU that the DX9 compiler could change, through the mechanism of PATCHES, in my long discussion of OGL vs D3D, dynamic vs. static compilation, which you joined after I started it. Don't take credit for something I invented, son. :) Sadly, the DX compiler "changing" is exactly the problem I was criticizing.


DX 9 HLSL compiler updates. For developers. Sort of nicely coincides with game developers using more complex shaders because more capable hardware was released.

There are no HLSL-compiler-only updates. MS releases SDK updates, not just compiler updates. It only "nicely coincides" if a game developer's schedule coincides with new hardware, which is often not the case. But that doesn't address the issue of compiler bugs.


Eh? Different profiles enable different HLSL features to be expressed differently, causing your code to run on more hardware. If you had fewer profiles, more complex shader code would "break" (fail to compile) for more hardware, not less.

No, the different HLSL profiles require you to write DIFFERENT SOURCE CODE. The goal is to write the fewest number of versions of a shader and run on the most hardware, not to write the most shaders. If you had fewer profiles, you'd simply write fewer versions of the shader. One, say, for 1.1, one for 2.0, and perhaps one for 3.0. There would be no shader code that would "break", because you'd always fall back to 2.0 and, if necessary, 1.1. The only way a PS2.b shader (e.g. one that takes advantage of increased instruction slots) would work with no extra work from the developer is if he happened to have written a bunch of SM3.0 shaders that will compile cleanly on PS2.b.
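The fallback scheme being argued about can be sketched as a profile-selection loop. Everything below is illustrative, not a real API: `toy_try_compile` and its instruction budgets stand in for a real toolchain (e.g. fxc succeeding or failing per target).

```python
# Hypothetical sketch of profile fallback: try each compile target in
# order of preference and keep the first that succeeds. Names and
# numbers are illustrative, not a real D3DX API.

def pick_profile(shader, profiles, try_compile):
    for profile in profiles:
        if try_compile(shader, profile):
            return profile
    return None  # the shader can't be expressed on any offered target

# Toy model: a "shader" is just its instruction count, and each profile
# has an instruction budget (rough, illustrative figures).
BUDGET = {"ps_3_0": 512, "ps_2_b": 512, "ps_2_0": 96, "ps_1_1": 8}

def toy_try_compile(instruction_count, profile):
    return instruction_count <= BUDGET[profile]
```

With only 1.1/2.0/3.0 in the list, a 120-instruction shader fails on ps_2_0 and falls all the way back; with ps_2_b in the list, the same source can land on the 2.x hardware that could actually run it, which is the crux of the disagreement above.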


That is the OpenGL philosophy and it has worked for a long time. Ironically enough, this doesn't seem to cause a lot of developer headaches compared to DirectX.
Oh my, developers have been developing GLslang games forever, haven't they? Shaders are the same thing as everything else, and it isn't like OpenGL has ever had any headaches for implementing shaders before!

Sigh. If you weren't clueless you'd know I was talking about the OpenGL philosophy of having no optional features in the core API. GLslang simply continues this tradition. Carmack and Brian Hook both commented often on it.
Mark Kilgard (SGI) said:
An overriding goal of OpenGL is to allow the construction of portable and interoperable 3D graphics programs. For this reason, OpenGL's rendering functionality must be implemented in its entirety. This means all the complex 3D rendering functionality described later in the article can be used with any OpenGL implementation. Previous graphics standards often allowed subsetting; too often the result was programs that could not be expected to work on distinct implementations.

Since it should be logically obvious that the same choices can be made in DX HLSL regarding PS 3.0, your problem seems to come down to developers having it made easier to do otherwise in HLSL and not immediately proving its inferiority.
Yes, my objection comes down to having a language which is consistent, clean, and portable at its core. I'd rather have extensions be supported by the IHVs themselves instead of having the core become a spaghetti-coded mess.



I don't see GLslang as without issues in comparison. That's why I don't subscribe to "HLSL bad, GLslang good".

If you want to play semantic games, fine. Label it "HLSL ok, GLSlang good" or "HLSL bad, GLSlang less bad". Label it opinion, label it proof. In all of your discussions, you are overly obsessed with the precision of language. If someone says "The sun is red", you'll take issue and say "The sun seems to be red. It's not proven to be exactly red." You seem to want everything to be equivocal, a perfect convert for General Semantics.

I've enumerated the issues that I and many others have with DirectX's design. You can't scientifically prove they are bad, just like you can't prove that Citizen Kane is a better movie than Plan 9 From Outer Space. But a survey of opinion will tell you all the things that Plan 9 did wrong and Citizen Kane did right.

Yes, but the profiles remove artificial fragmentations, where you insist adding a profile adds a fragmentation by recognizing it. The fragmentation is from the hardware, same for GLslang, adding the profile just lets the compiler try to deal with it for HLSL.


The purpose of standards is to hide fragmentation, not expose it. Abstraction is meant to hide the underlying hardware from the developer. ISVs and IHVs get together in standards groups in order to agree on a common set of concepts which is interoperable between hardware. SM2.0 and 3.0 remove fragmentation by not exposing a bunch of features that real hardware may have, smoothing over differences. Your PS2.0 hardware may be able to handle more than 96 instructions, but the PS_2_0 profile doesn't expose it. Therefore software written for PS_2_0 will run on a wider class of devices.

Adding ps_2_a and ps_2_b increases fragmentation and the amount of specialized code that developers must handle. Yes, it allows you to take advantage of some IHV specific features, but that's fragmentation by definition, and defeats some of the purpose of having a uniform API and shader model.
 
SPCMW said:
DemoCoder said:
My problem with demalion is that he's overly verbose,
Because you require me to repeat myself to maybe get you to recognize a simple point, as you ignore questions I ask that get to the heart of the matter when they are inconvenient for you. Maybe that's a reason behind longer posts.
often unclear,
I can't help but note that my comments only seem unclear after some of the editing you conduct, and that your interpretations often bear little correspondence to what is stated before your editing; I have concluded the problem is primarily at your end. Maybe dealing with this problem results in longer posts.
he often doesn't know what he's talking about, so his opponents often have to resort to delivering a tutorial on the concepts being debated,
If by this you mean I apply logical thinking, don't assume knowing something means my opinions are therefore facts, and actually listen to your side of the conversation. :oops: Maybe this results in longer posts.
So, when you said it would be 6 months after the NV40 and R420 were released before the compiler update would be available to developers, when you said dealing with hand-coded PS 1.4 and PS 1.1 was the same hassle as compiling to different profiles, and when you said a while ago that the HLSL system would require patches to games every month, you knew what you were talking about? I'm sure the length you added by maintaining these statements was a much better way to lengthen posts?

and he frequently resorts, in almost all threads, to accusing people of not answering his questions or answering his points
Actually, I don't know about "almost all threads", I just accuse people of that when I see it as being the case. And show why I think it. In pretty basic terms, too, what with simply adding the omitted freaking sentences and whatnot! :oops: Surely that isn't too "unclear" for you?
If that is in "almost all threads" (or probably more accurately "almost all threads with you"), that is unfortunate; I invite you to do your part to cut down on its occurrence, and to help avoid the lengthened posts that result when I have to point out why something you said was just plain useless.

because they have to chop up his huge posts.
All forms of "chopping up" are equal of course, and equally justified because the post is "huge". Nevermind how it got "huge".

Say, that sure was a useful preface and posturing that dealt with the issue at hand. It certainly didn't seek to dismiss my entire commentary out of hand by a collection of assertions based on your feelings, and very effectively facilitated actual conversation on the topic at hand.
Or not.
One of those.

This is my last post on this subject with him due to the ever increasing length.

demalion said:
Well, first, I maintain my point about ps_2_b allowing branching to compile where it might not in ps_2_0, just looking at the higher limits, as I repeat that it is a compiler profile.

Only some branches can be inlined this way: branches with an early loop exit, or those that discard the current pixel based on a texture lookup. Some can be made to work with egregious hacks that kill performance, but that's what GLSlang requires, and MS won't do this.

OK, and "I maintain my point about ps_2_b allowing branching to compile where it might not in ps_2_0, just looking at the higher limits". Do you recognize it?

Code:
   for (i = 0; i < 5; i++)
   {
      a = func(i, a);
      if (a < 0) {
         a = 0;
         break;
      }
   }

Developers must be conscious when they write shaders if they are using constructs which are 3.0 only. Thus, a developer calling "break" or "discard" must be aware ahead of time he is targeting SM3.0.
You're concentrating here exclusively on the cases where the more advanced shaders fail to compile again, a topic I've addressed already.
Similarly, when writing SM2.0 shaders, developers must be sensitive to the length of their shaders.
Yes, I covered "shaders targeted at PS 2.0 base hardware", and "shaders targeted at beyond PS 2.0 hardware that would fail to compile on profiles between PS 2.0 and PS 3.0".
I also tried to make a point about "shaders targeted at beyond PS 2.0 hardware that would not fail to compile on profiles between PS 2.0 and PS 3.0". Where is your discussion of this point?
For a developer to write a shader that won't compile to SM2.0 (e.g. requires SM2.b), but isn't SM3.0, means he is consciously writing a shader for that model.
This is unadulterated sophistry... you simply talked around that point. Taking everything you said as a given, what about shaders that do not use constructs that can only be expressed in "SM3.0" and/or shaders that require more instructions to be expressed than can be done in base PS 2.0? You ignored that obvious logical case, and constructed a series of phrases based on that.
(See below for why this phrase is color coded)
Only a subset of SM3.0 shaders will compile on SM2.b,
Yes, a greater subset than will compile for PROFILE PS_2_0.
so the only scenario in which SM2.b gets most of its real usage is if the developer "targets" it when writing his HLSL code,
Huh? Is this double-speak "knowing what you are talking about"? Can't you know what you talk about and use something besides double-speak?
"the only scenario in which SM2.b gets most of its real usage"? You mean besides the other "only" scenario where it gets "some" of its "real usage" you seem to be trying to avoid!? Where you decided on the evaluation "most" and that "some" should just be talked around because it was my point? :oops:
You seem too willing to artificially exclude hardware from implementing a shader. You still haven't clarified justification beyond your preference to punish IHVs for arbitrarily not satisfying you. I still don't see how it makes sense for consumers.
Having a common, standard, well designed API is good for consumers since it increases productivity of game developers and decreases the chances for bugs in games.
I think both HLSL and GLslang can successfully deliver this, even though you dislike HLSL's methods.
Having such a common API means setting a least common denominator, which means some people's specialized hardware goes unused.
Reducing that "unuse" while facilitating developer ease in maintaining productivity benefits consumers more.
The compiler profile reduces that "unuse"; when I mention this, you ignore any facet of its ease of use and insist it is the same thing as hand-coding to earlier shader models or learning a new language.

This happens in every standard. That's why OpenGL extensions exist, and it's why many standards have mechanisms to expose proprietary features.
And a profile is a way to help make that easy for the HLSL compiler, yes.
But just because HW has an unexposed feature, does not mean that the that feature should be promoted into the core of the standard.
Again, your argument comes down to your objection about the location of exposure. Unfortunately, the "support" you've provided hasn't dealt with justifying this, just been based on assuming your opinion on the matter is a given law. And my questions remain unanswered.

Which is why I have a tendency to point out that the DX HLSL compiler can change, using the mechanism of profiles. Which, strangely enough, it did
...
How wonderful for you, some people might not even realize you continued to ignore my point!

Typical. If someone doesn't respond line by line and point by point to your massive posts, they are accused of ignoring you.

What word do you use to describe so deliberately and consciously misrepresenting someone else's text to facilitate personal attacks? Asinine just doesn't cut it.

Here I go recovering the actual conversation (and, oh so coincidentally adding length which you purely incidentally will complain about and use as an excuse to avoid conversation :rolleyes:):

SPCW said:
From two posts ago...
As it is now, they've bifurcated PS2.0.
No they haven't, they've added another compiler profile. You could argue that they have allowed more capabilities to be expressed using PS 2.0, though.
If PS2.0 was such a good intermediate language, why does the compiler have to do any special *hardware architecture dependant* tweaking at all?
That's a non sequitur. Expressing conditionals and branching literally, which was already in the PS 2.0 extended functionality specification, is a compiler tweak. The same intermediate language specification is being used, which doesn't exactly point towards its lack of suitability in the way you indicate.
Can't the driver do the extra work from the "LLSL"?
Sure, you could argue that it could. IMO, it would be a shame if the compiler never changed so that this was the only way to do the extra work. Which is why I have a tendency to point out that the DX HLSL compiler can change, using the mechanism of profiles. Which, strangely enough, it did.
From last post...
Yes, it's also a shame that every 6 months, Microsoft has to release another patch to DX9 with 6-month-old Microsoft-engineered compiler optimizations (STILL BUGGED), more profiles, which in turn requires ISVs to recompile their games, relink them, develop patches, and distribute patches to end users. Wow, what a wonderfully elegant system. Boy Demalion, you're right. By golly, it works!
:LOL:
Gee, if you cram all the things I've refuted or addressed into one run-on sentence, you could probably complain about length if I picked it apart and illustrated how useless your sentence was, or just keep repeating it if I don't. How wonderful for you, some people might not even realize you continued to ignore my point!
The list of things I claim I've already refuted and/or addressed is, just perhaps, more than slightly pertinent to my complaint. The conversation in which I made the initial comment seems to rather clearly indicate my point(s). Let's see what your contribution to the conversation is.
This is getting to be a whining habit for you. Maybe if your posts were more succinct, people wouldn't have to chop them up.
It seems to me that you're actually trying to represent those excerpts as my actual point and then my complaining about you missing it. Without sarcasm, I do not at all think you are grossly mentally deficient, so I must assume you either think everyone else is or are trying to fool people into thinking the rest of the commentary didn't exist.

Anyway, I was the one who pointed out to YOU that the DX9 compiler could change, through the mechanism of PATCHES,

To what folly is your ego leading you now? The patch discussion I remember from you was the example you brought up in your "patch a month arguments". Asking you about your proposal didn't mean I didn't know the DX 9 compiler could change! :oops:
I had to be "taught" how the DX 9 compiler could not change back in 3dmark 03 discussions. MDolenc, was it? Perhaps it could even have been you? *shrug* What I find remarkable is that your view of the conversation is such that you are proposing you had to teach me that the DX9 compiler could change almost a year ago, and you yet again use that platform of accused ignorance to distract from the actual topic which is occurring now.
in my long discussion of OGL vs D3D, dynamic vs. static compilation, which you joined after I started it.
OK, you started the conversation I guess. Congrats?
Don't take credit for something I invented, son. :) Sadly, the DX compiler "changing" is exactly the problem I was criticizing.
No kidding?

DX 9 HLSL compiler updates. For developers. Sort of nicely coincides with game developers using more complex shaders because more capable hardware was released.
There are no HLSL-compiler-only updates.
But the HLSL compiler update is what we were talking about, which is why I mentioned it.
MS releases SDK updates, not just compiler updates.
They also release betas to developers. I know you know this, you just seem to have suddenly overlooked the pertinence to the issue.
It only "nicely coincides" if a game developer's schedule coincides with new hardware, which is often not the case.
Then they wouldn't have had the new hardware to target with advanced shaders?
Oh, wait, they might have hardware before official release? Same with the SDK, right?
Oh, wait, they might release after the hardware was released? Well, couldn't they have the updated SDK already then?
Oh, wait, they might release before the hardware was released with untested advanced shaders? Oh, my, having to release a patch because something was untested...however will they handle the logistics of releasing a patch after a game was sold? It's not like they'd ever have any other reason to do such a thing.

But that doesn't address the issue of compiler bugs.

Nope, it doesn't. Just don't conveniently ignore the question when talking about something else.

Eh? Different profiles enable different HLSL features to be expressed differently, causing your code to run on more hardware. If you had fewer profiles, more complex shader code would "break" (fail to compile) for more hardware, not less.
No, the different HLSL profiles require you to write DIFFERENT SOURCE CODE.
By that sophistry above where you "established" that the only shaders that exist are shaders that require all PS 3.0 features to be expressed and shaders that require only base PS 2.0? By the practice of specifying the profile target and testing it? Ignoring where you finally admit my point below, after ignoring it in places like these where it would be inconvenient?
The goal is to write the fewest number of versions of a shader and run on the most hardware, not to write the most shaders. If you had fewer profiles, you'd simply write fewer versions of the shader. One, say, for 1.1, one for 2.0, and perhaps one for 3.0.
You're apparently still tangled up in your "all shaders fit only into the niches I like" fallacy, for the moment.
Interesting pattern. Personal attack and dismissal of viewpoint, talk around a point, distract from the issue with another personal attack and assertion of self-superiority, continue conversation based on the ignored point not existing. Did I make that description up out of thin air, or did I actually just spend the post providing my basis for it? An answer that makes sense will depend on a willingness to comprehend the written word.
There would be no shader code that would "break", because you'd always fall back to 2.0 and, if necessary, 1.1. The only way a PS2.b shader (e.g. one that takes advantage of increased instruction slots) would work with no extra work from the developer is if he happened to have written a bunch of SM3.0 shaders that will compile cleanly on PS2.b.

Oh my gawd!? Really? Thanks teach! Now start the paragraph over again. Maybe go back to my prior posts. Here, let me help:

demalion said:
I'm not seeing the point of your concern for this.
For shaders that can be expressed in PS 2.0 (for real-time usage), they are being used merely as hints that might allow speed up. Those PS 2.0 expressible shaders will be written anyways, because of PS 2.0 hardware with good performance. There doesn't need to be a ps_2_a and ps_2_b HLSL shader written...the compiler writes the ps_2_a and ps_2_b shaders, using suitable elements of PS 2.0.
Then there are shaders that might be beyond "PS 2.0", at least in real time and shader only expression. They might require a minimum of PS 3.0, in which case the ps_2_b profile won't impact them. But they might also be expressible with the set of capabilities associated with the ps_2_b profile, in which case the ps_2_b profile will allow them to be expressed. What's the problem?

Wow, the way you refused to discuss this then and finally deign to mention it now was really useful, teach. It is even more useful how you ignore this point before and after this in the same post when I bring it up.

More help:
demalion said:
No, PS 2.0, and "beyond PS 2.0" HLSL shaders...as I keep pointing out, ps_2_b is a compiler profile, not a completely new shader model. Some developers, with sufficient performance from soon to be released PS 3.0 parts (which I tend to expect), will make that essentially "PS 2.0" and "PS 3.0". By the nature of capabilities of PS 3.0 and PS 2.0 extensions, and the limitations of real-time shader implementation (for now), it looks to me like "PS 3.0" and "beyond PS 2.0" are quite similar over a wide range of applicability, and that the additional "beyond PS 2.0" profiles will serve to allow some such shaders to run better on more hardware.
Some developers might have an "in between" PS 2.0 and PS 3.0 gradation additional shader, but the ps_2_b profile's existence does not mandate this, it allows it more easily.
...
What is remarkable to me is how you propose this as something new, and what this indicates to me is that the only one you are effectively "listening" to is yourself.
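The point in the quotes above, that the developer writes one HLSL source and the compiler (not the developer) produces the per-profile variants, can be sketched as a loop over targets. The helper below is a toy stand-in, not a real compiler API; a "shader" is modeled as just its instruction count, and the budgets are illustrative:

```python
# Illustrative sketch: from a single source, emit a variant for every
# profile on which it compiles. The developer writes one shader; the
# per-profile outputs are the compiler's job.

BUDGET = {"ps_3_0": 512, "ps_2_b": 512, "ps_2_0": 96}

def can_compile(instruction_count, profile):
    # Toy stand-in for a real compiler: succeed if the shader fits
    # the profile's instruction budget.
    return instruction_count <= BUDGET[profile]

def build_variants(instruction_count, profiles):
    # One source in, a list of successfully targeted profiles out.
    return [p for p in profiles if can_compile(instruction_count, p)]
```

A 120-instruction "beyond PS 2.0" shader yields ps_3_0 and ps_2_b variants from the same source with no extra authoring work, while a 90-instruction shader additionally yields a ps_2_0 variant.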

That is the OpenGL philosophy and it has worked for a long time. Ironically enough, this doesn't seem to cause a lot of developer headaches compared to DirectX.
Oh my, developers have been developing GLslang games forever, haven't they? Shaders are the same thing as everything else, and it isn't like OpenGL has ever had any headaches for implementing shaders before!
Sigh. If you weren't clueless you'd know I was talking about the OpenGL philosophy of having no optional features in the core API.
If you weren't ego-myopic, you wouldn't pretend not to know I was pointing out that quoting the OpenGL philosophy doesn't answer the question of how OpenGL would actually deal with the problem; it just proposes that it ignores it. Shaders weren't implemented using the core specification of OpenGL, yet the problem of shaders still existed and manifested for OpenGL. A compiler targeting a multitude of performance levels and architectures hasn't been in the core spec before. Yet you seriously propose quoting the OpenGL philosophy answered the question of how it would avoid dealing with performance and functionality issues? By your "developers will ignore real-time shaders for not fully functional hardware to be simpler than DX HLSL" slash "developers will just 'rework' the shader to deal with not fully functional hardware" dichotomy?

GLslang simply continues this tradition. Carmack and Brian Hook both commented often on it.
Mark Kilgard (SGI) said:
An overriding goal of OpenGL is to allow the construction of portable and interoperable 3D graphics programs. For this reason, OpenGL's rendering functionality must be implemented in its entirety. This means all the complex 3D rendering functionality described later in the article can be used with any OpenGL implementation. Previous graphics standards often allowed subsetting; too often the result was programs that could not be expected to work on distinct implementations.

A quote from 1993 stating the philosophy, eh? Obviously, I was arguing with you about what the philosophy was, and I've now been shown I'm wrong. Congrats.

Since it should be logically obvious that the same choices can be made in DX HLSL regarding PS 3.0, your problem seems to come down to developers having it made easier to do otherwise in HLSL and not immediately proving its inferiority.
Yes, my objection comes down to having a language which is consistent, clean, and portable at its core. I'd rather have extensions be supported by the IHVs themselves instead of having the core become a spaghetti-coded mess.
I get that, and got it months ago. It still doesn't answer the points I've raised, and I wish you would recognize that where the points are actually raised instead of trying to pre-package that opinion.
I don't see GLslang as without issues in comparison. That's why I don't subscribe to "HLSL bad, GLslang good".
If you want to play semantic games, fine. Label it "HLSL ok, GLSlang good" or "HLSL bad, GLSlang less bad".
You actually said HLSL was bad and GLslang was good. Do I have to quote you?
You actually maintain your argument based on that being a given in an absolute sense (as I explained before this quote)...this wasn't the first time I said this, this is just a place absent of my justification for it (well, sort of, you demonstrate it just above).
Label it opinion, label it proof. In all of your discussions, you are overly obsessed with the precision of language.
Pointing out the difference between "opinion" and "proof" is not "overly obsessed with the precision of language", and believing so is a really significant obstacle to useful conversation. I believe the world is flat, why do I have to prove it? I think I'm right, don't concern me with proof that I'm not...my opinion is just as good as proof. :oops:
If someone says "The sun is red", you'll take issue and say "The sun seems to be red. It's not proven to be exactly red."
More like "The sun is gone" and "Open your eyes and go outside".
You seem to want everything to be equivocal, a perfect convert for General Semantics
Because I don't think opinion and proof are the same thing? Okey doke.
I've enumerated the issues that I and many others have with DirectX's design.
I could quote the entire last part of my last response, but you'd just forget it again I think.
You can't scientifically prove they are bad, just like you can't prove that Citizen Kane is a better movie than Plan 9 From Outer Space. But a survey of opinion will tell you all the things that Plan 9 did wrong and Citizen Kane did right.
But I can certainly prove that "I had cherry soda when watching Plan 9 and had root beer when watching Citizen Kane, and I like root beer better" is an irrelevant criterion for evaluating them, which seems to me similar to how I get involved in many "conversations you started". Just because you can't prove something doesn't mean you can go ahead, treat it as proven, and ignore proof and/or logic to the contrary. It means that you should recognize that the opinion you haven't proven just might not actually turn out to be correct.
Perhaps I'm asking too much?
Yes, but the profiles remove artificial fragmentations, where you insist adding a profile adds a fragmentation by recognizing it. The fragmentation is from the hardware, same for GLslang, adding the profile just lets the compiler try to deal with it for HLSL.
The purpose of standards is to hide fragmentations, not expose them.
Even when the fragmentation would still be there, the developer might have to deal with it anyway, and they could still ignore the fragmentation if they so chose?
Abstraction is meant to hide the underlying hardware from the developer.
Like HLSL? Still doesn't hide the issue of real-time performance though.
ISVs and IHVs get together in standards groups in order to agree with a common set of concepts which is interoperable between hardware. SM2.0 and 3.0 remove fragmentation by not exposing a bunch of features that real hardware may have, smoothing over differences.
No, HLSL removes fragmentation and smooths over differences. 2.0 and 3.0 expose the fragmentation in hardware capabilities for executing shaders. If you wanted to hide the fragmentation to your level of standards, you'd only consider 3.0 capable hardware as you proposed for GLslang. If you wanted to deal with that existing fragmentation, you'd test for the hardware, like you'd also do for GLslang.
Your PS2.0 hardware may be able to handle more than 96 instructions, but PS_2_0 profile doesn't expose it. Therefore software written for PS_2_0 will run on a wider class of devices.
Yep. And your non-PS3.0 hardware may be able to handle more than 96 instructions, and the PS_2_b profile exposing it would allow software written requiring more than PS_2_0 to run on a wider class of devices.
Adding ps_2_a and ps_2_b increases fragmentation and the amount of specialized code that developers must handle.
WARNING! Semantics Alert!
"Must" and "Can" mean different things. Used in this sentence, one recognizes the obvious logic I pointed out above, you temporarily agreed with, and promptly forgot because you felt like it. The other does not.
One allows recognition that developers are being enabled to deal with existing fragmentation and not being forced to deal with one manufactured by the existence of the profile. The other pre-packages an opinion that ignores that detail.
All by one word. That is a semantics game, and you are the one playing it.
Yes, it allows you to take advantage of some IHV specific features, but that's fragmentation by definition, and defeats some of the purpose of having a uniform API and shader model.
PS 2.0 and PS 3.0 are, by your logic, "fragmentation by definition", as is "reworking a shader" to work for different hardware in GLslang.
 
These vendor-specific compiler targets (ps_2_a and ps_2_b) are Microsoft's and the IHVs' way of dealing with the ps_2_x shader model and its capability-bits "hell"...

Not a nice and clean solution (nearly the same as OpenGL with its vendor-specific extensions...), but at least you can take advantage of new hardware if you want.
If you don't have a good reason or very much time - simply stay with the 2_0 and 3_0 shader models.

Thomas
 
DemoCoder said:
Yes, and although the "b" designation seems to suggest > ps_2_a, that's not the case. PS_2_b still doesn't support arbitrary swizzle, unlimited texture dependency, gradient instructions, or predication. PS_2_b is basically a step back from 2_a and adds *yet another* profile. PS_2_b = PS_2.0 with a higher instruction limit and slightly more registers. Whoop de do.

You are mixing up shader models and compiler targets. If ps_2_b were a shader model, you would be right...

Thomas
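The capability gap described in the two posts above, where ps_2_b is not a superset of ps_2_a despite the later letter, can be written down as feature sets. The names below are informal shorthand for the caps mentioned in the thread, not real D3D capability flags; check the DirectX documentation for the exact caps:

```python
# Informal feature sets for the two compiler targets, per the posts
# above. Labels are shorthand, not actual D3D capability flags.
PS_2_A = {"arbitrary_swizzle", "unlimited_tex_dependency",
          "gradient_instructions", "predication",
          "long_programs", "extra_registers"}
PS_2_B = {"long_programs", "extra_registers"}

# What "b" lacks relative to "a", despite the later letter:
only_in_a = PS_2_A - PS_2_B
```

Set difference makes the point compactly: everything ps_2_b offers over base 2.0 is the longer program and register room, while the swizzle/gradient/predication features remain 2_a-only.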
 