Shaders 1.4 in 3dmark03? Fair or unfair?

Himself

Regular
I figured this would be an inevitable discussion, so I thought I'd start the thread. :)

My thoughts:

As a synthetic benchmark, they can measure whatever they like, but I think real-world use of 1.4 is going to be pretty much nil, so it's not exactly representative.

On the flip side, cards with 2.0 and beyond will make the point moot and I'm sure the scores of older cards will be so pathetic that nobody will really care. :)
 
I don't see how there is anything unfair about it. If anything it would be "unfair" not to use them where appropriate... any intelligent game/benchmark/app that uses DX8 shaders would be smart to use PS 1.4 where it can, for performance, since cards that can only run 1.1 can still run them. Given that all DX9 cards will support PS 1.4, that gives us, as we move forward, an ever-increasing pool of cards that support PS 1.4 for shaders that only need DX8 capabilities.

The amount of work saved with PS 1.4 over PS 1.1, thanks to the reduced pass count, is striking.
 
PS 1.4 allows appropriate cards (the 8500 and 9000 only currently, AFAIK, but perhaps other chips might target it this year) to offer better performance for some of the techniques that 2.0-supporting cards handle very well. Since those techniques can be, and, as we know from the example of Doom 3, will be used in games, this seems reasonable to me.
What I've wondered elsewhere is whether this should have been done by allowing HLSL compilation to the specific target...this could be a better representation depending on how much, if at all, DX9.x HLSL compilation improvements pan out for future cards, even if it isn't significant right now. As an option, at the least, I think this would facilitate addressing any issues of unfairness that might crop up (I certainly don't think it is unfair at the moment).
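To make that concrete, here's a rough sketch I made up (not anything from 3DMark, and all the names are mine): with DX9 HLSL the same lighting function could simply be compiled against whichever profile a card supports, e.g. with fxc from the DX9 SDK, and only the target string changes:

Code:
// One hypothetical HLSL function, two compile targets:
//   fxc /T ps_2_0 /E DiffusePS lighting.hlsl
//   fxc /T ps_1_4 /E DiffusePS lighting.hlsl
sampler NormalMap  : register(s0);   // tangent-space normal map
sampler DiffuseMap : register(s1);

float4 DiffusePS(float2 uv      : TEXCOORD0,
                 float3 lightTS : TEXCOORD1) : COLOR   // light vector, normalized per vertex
{
    float3 n = tex2D(NormalMap, uv).rgb * 2 - 1;       // unpack biased normal
    float ndotl = saturate(dot(n, lightTS));           // simple N.L diffuse term
    return tex2D(DiffuseMap, uv) * ndotl;
}

Whether the compiler actually produces a good ps_1_4 version out of that is exactly the part I'm not sure pans out yet.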
 
Why would supporting the superior, upgraded pixel shader version be wrong?? Technology progresses, so must applications, and PS 1.4 is much closer to PS 2.0 than PS 1.1 is to 2.0.
 
Well, it doesn't look like the GF4 is slower than the R8500 in game tests #2 and #3, sooooo... it kinda reminds me of Carmack's remarks about those cards and Doom III.
 
Well, the Geforce 4 is just plain faster than the Radeon 8500 anyway, so it should come as no surprise that it performs better than the latter despite the different shader versions.
 
demalion said:
What I've wondered elsewhere is whether this should have been done by allowing HLSL compilation to the specific target...this could be a better representation depending on how much, if at all, DX9.x HLSL compilation improvements pan out for future cards, even if it isn't significant right now. As an option, at the least, I think this would facilitate addressing any issues of unfairness that might crop up (I certainly don't think it is unfair at the moment).

I tend to follow the same line of thinking: if you could choose whether or not to use 1.4, it would give reviewers something more practical to work with. As it is, it's one card's best vs. another card's best, in an ideal world where each card's best is always used.

It could be that all newer games will toss in a 1.4 version of their shaders, or they might decide that 1.1 is good enough for all the older cards that aren't 2.0; one less code path to debug and support.

Again though, it's kinda moot: with scores of 1000 or something and fps in the 2s and 3s, who really cares? :)
 
I think the vast majority of developers will still stick to the lowest common denominator, at least for a while. As soon as high level shader languages become more commonly used, though, this will certainly change as they allow the developer to write a single set of shader code, leaving the compiler to convert the HLSL to the appropriate assembler versions.
 
Pixel shaders 1.4 are very good as a fallback from ps_2_0 shaders, since most ps_2_0 shaders won't be that complicated.
It seems that one of the main uses of ps_2_0 shaders will be "Doom 3" style lighting, with math doing the normalization. Pixel shaders 1.4 are a nice fallback for this, since you still have one pass, but you'll have to use normalization cubemaps. Things get a bit complicated with ps_1_1, since you can't do all your lighting in one pass, which basically means you have to send geometry two or three times through the pipeline...
But games will do this (Doom 3), so why shouldn't 3DMark do it?
You can't say "one card's best vs. another card's best", since in this case they won't be doing the same thing anymore...
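Roughly what I mean, as a made-up sketch (definitely not the benchmark's actual shader code): the ps_2_0 version normalizes the light vector with arithmetic, while the ps_1_4 fallback spends a texture stage on a normalization cubemap but still fits the whole thing in one pass. Once you add specular, gloss and attenuation maps on top of this, ps_1_1's four texture stages run out, and that's when you're back to sending the geometry through two or three times.

Code:
sampler NormalMap  : register(s0);   // tangent-space normal map
sampler NormCube   : register(s1);   // cubemap whose texels store a range-packed unit vector
sampler DiffuseMap : register(s2);

// ps_2_0: normalization done with math, no lookup texture needed.
float4 LightPS20(float2 uv : TEXCOORD0, float3 lightTS : TEXCOORD1) : COLOR
{
    float3 n = tex2D(NormalMap, uv).rgb * 2 - 1;
    float3 l = normalize(lightTS);                    // rsq/mul in the shader
    return tex2D(DiffuseMap, uv) * saturate(dot(n, l));
}

// ps_1_4 fallback: same lighting, same single pass, but the unit-length light
// vector comes from a cubemap lookup instead of per-pixel math.
float4 LightPS14(float2 uv : TEXCOORD0, float3 lightTS : TEXCOORD1) : COLOR
{
    float3 n = tex2D(NormalMap, uv).rgb * 2 - 1;
    float3 l = texCUBE(NormCube, lightTS).rgb * 2 - 1;
    return tex2D(DiffuseMap, uv) * saturate(dot(n, l));
}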
 
About this HLSL thing:
You can't expect to write one shader targeted at, say, the GeForce FX and have it somehow work on a GeForce 3 (glslang maybe, but I'm not an OpenGL guy; DX9 HLSL, no)... You have to modify some stuff (see my previous post).
 
Shouldn't cubemaps only be used as a fallback technique in DX9 tests, given the availability of fully procedural lighting in PS 2.0 (Phong, etc., without the half-vector approximation, as explained by Carmack in his R300/NV30 commentary)?
 
If you want per-pixel lighting (the second and third tests in 3DMark), you'll pretty much have to use normalization cubemaps. You can get rid of those cubemaps and do the math in ps_2_0, but it's still the same thing.
Carmack was talking about computing the half vector in the pixel shader instead of interpolating it across triangles.
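Something like this, roughly (my own sketch, not Carmack's code): instead of interpolating H from the vertices, you rebuild it at every pixel from the interpolated L and V, so the highlight doesn't wobble on low-poly meshes.

Code:
sampler NormalMap : register(s0);    // tangent-space normal map

// ps_2_0 style: H = normalize(L + V) recomputed per pixel.
float4 SpecularPS(float2 uv      : TEXCOORD0,
                  float3 lightTS : TEXCOORD1,
                  float3 viewTS  : TEXCOORD2) : COLOR
{
    float3 n = tex2D(NormalMap, uv).rgb * 2 - 1;
    float3 h = normalize(normalize(lightTS) + normalize(viewTS));  // per-pixel half vector
    float  s = pow(saturate(dot(n, h)), 32);                       // fixed specular exponent
    return float4(s, s, s, 1);
}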
 
See, unless I am just not understanding this correctly...

The big issue for Nvidia, and why they think PS 1.4 is unfair, has nothing to do with the 8500/9000. It has everything to do with the entire Radeon 9500/9700 line getting a big boost in performance from it; they can all execute PS 1.4 too. This is partly why Radeon 9500 Pro cards score about equal to the GF4 Ti 4600 in all the game benchmarks, but the GF4s get completely destroyed by the 9500s in 3DMark03, which is why Nvidia pointed out that games are a better benchmark for performance. Of course this is completely misleading, because the truth is the 9500s are DX9-class cards, and are more functional, and will be more functional in future games, than the GF4 series.

This is the crux of Nvidia's motivation, at least as I summarized it late this afternoon.
 
Doomtrooper said:
Why would supporting the superior, upgraded pixel shader version be wrong?? Technology progresses, so must applications, and PS 1.4 is much closer to PS 2.0 than PS 1.1 is to 2.0.
I was under the impression that 1.4 isn't a "superior, upgraded version", it was just how ATi did pixel shading compared to Nvidia, and it has its strengths and weaknesses vs PS 1.3?
 
Hellbinder[CE]: I don't think NVidia really has a problem with ps_1_4... Read my reply in the "Nvidia against 3D Mark 2003" thread.
 
I was under the impression that 1.4 isn't a "superior, upgraded version", it was just how ATi did pixel shading compared to Nvidia, and it has its strengths and weaknesses vs PS 1.3?

I haven't heard a single person bring up one thing that PS 1.4 doesn't do as well as or better than PS 1.3.

As for 3Dmark having a choice of which shader standard, that'd be the worst thing ever. Then you'd have people posting scores but not necessarily knowing what's what. Besides, the idea of benchmarking is to run a controlled experiment, preferably with one variable that you're measuring; everything else should be isolated or held constant. Introducing the choice wouldn't be good experiment design.
 
Glonk said:
Doomtrooper said:
Why would supporting the superior, upgraded pixel shader version be wrong?? Technology progresses, so must applications, and PS 1.4 is much closer to PS 2.0 than PS 1.1 is to 2.0.
I was under the impression that 1.4 isn't a "superior, upgraded version", it was just how ATi did pixel shading compared to Nvidia, and it has its strengths and weaknesses vs PS 1.3?

Well, that's what nVidia would like you to think, but it's false in every sense. I don't see any weakness of 1.4 compared to any of the older versions.
 
Glonk said:
Doomtrooper said:
Why would supporting the superior, upgraded pixel shader version be wrong?? Technology progresses, so must applications, and PS 1.4 is much closer to PS 2.0 than PS 1.1 is to 2.0.
I was under the impression that 1.4 isn't a "superior, upgraded version", it was just how ATi did pixel shading compared to Nvidia, and it has its strengths and weaknesses vs PS 1.3?

You should definitely take time to review the specs! :?
 
Saem said:
As for 3Dmark having a choice of which shader standard, that'd be the worst thing ever. Then you'd have people posting scores but not necessarily knowing what's what.

I don't really care about what people post. I don't care about overall scores. I care about measuring performance. For me, finding out how a Radeon 8500 or 9000 (or 9500 or 9700 or GeForce FX, for that matter) performs when using pixel shaders 1.1 is useful.

Besides, the idea of benchmarking is to run a controlled experiment, preferably with one variable that you're measuring; everything else should be isolated or held constant. Introducing the choice wouldn't be good experiment design.

That seems like exactly the reasoning for having the choice. That way you could isolate the effect of 1.1 vs. 1.4 on the same card, or 1.1 vs. 1.1 on different cards. Much more useful than having the benchmark decide what to use.
 
ET said:
Much more useful than having the benchmark decide what to use.

Why is that? It is simply reflective of reality. Based on my understanding, Doom 3, for example, will automatically choose which rendering path to use (although I do expect that we will be able to tweak some config files to force it into a different code path).
 