Sharkfood, you just don't get it. You are so fixated on trying to pin me down as pro-Nvidia that you simply don't read.
I didn't say pro-NVIDIA. And no, it's the concept and not the IHV in particular. I would simply suggest you read my posts in their entirety, as you have now dreamt up your own content from them.
The concept is: no, you can't state a standard deviation for "Image Quality," but you can clearly illustrate that A > B. Accordingly, in those cases, benchmarks cannot then be presented as A = B.
We have seen it with supersampling graphed against multisampling; we have seen it with trilinear versus bilinear. We have seen it with a broken z-buffer and geometry popping in and out of a scene versus clean, correct rendering. We have seen it with entire details missing (rocket trails, fences, trees, skylines, etc.). In all these cases, we have seen the A = B stipulation, the focus being the ability to benchmark these kinds of stark variances against each other in order to provide higher numbers for a particular IHV.
In every case, the fallback excuse for such radically perverse behavior has been the age-old "IQ is subjective" nonsense, which doesn't come close to applying to these kinds of comparisons.
For crying out loud, Anand posted benchmarks of the 9700 Pro at 16x anisotropic filtering versus a Ti4600 at 4x.
Looking back, I still can't believe that sites reviewing the 8500 and GF3 benchmarked one card's trilinear anisotropic filtering against the other's bilinear-only implementation. Running 2x, 4x, or 8x on both products, the difference was stark and dramatic: one had extremely aggressive LOD with starkly visible mip seams, the other had no such issue. You can count on one hand the websites that made this a point, versus the hundreds of sources that simply benchmarked them all together as A = B.
On the reverse, we get hundreds of people writing nasty stuff about the NV30, and it isn't even out yet.
This is where someone would get the theory of your "unfairness" to a particular IHV.
It's not hundreds of people writing nasty stuff or dreaming up fictional or factless stories. When nearly every reviewing source clearly states the thing is loud, sounds like a buzzsaw, and that NVIDIA reps joked, "Well, our gamers won't hear the loud fan since they'll be wearing headphones" (paraphrased), these are concerns based on evidence. People concerned with size and noise aren't making baseless chatter.
People saying that all IQ is subjective and that no bounds should be drawn are living in a utopian world where no IHV cuts corners or delivers a marketed feature to a substantially lesser degree than others. That's especially true when a singular approach is a direct attempt to dramatically reduce quality for the sake of improving performance. This kind of approach, sheltered under the "IQ is subjective" umbrella, is only encouraged by it.
We had the same problem 3-4 years ago with surreal levels of default LOD bias, back when it had a larger impact on benchmarks. It was known that benchmarks would be run blindly and that the fancy graph peaks were all that mattered, regardless of final rendered image quality. Anyone trying to bring to the forefront that a texture in benchmark A had only 18-20 isolated, distinct colored pixels versus 270 in benchmark B was always thrown under that "IQ is subjective" umbrella, when there was no doubt the texture's details were dramatically truncated in benchmark A, and that the artists making the game didn't spend the time putting those details into the texture just so a LOD bias could throw 90% of them away.
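And that measure isn't subjective at all. A toy sketch of the point: detail loss from aggressive LOD bias can be counted, not merely debated. Everything here is invented for illustration (the synthetic texture, the box filter standing in for the driver forcing a lower mip level); it's not any reviewer's actual methodology.

```python
# Hedged sketch: quantify texture detail objectively by counting
# distinct texel colors before and after an LOD-style downsample.

def make_texture(size=16):
    """Synthetic 'detailed' texture: every texel gets a unique RGB value."""
    return [[(x * 16 % 256, y * 16 % 256, (x ^ y) * 16 % 256)
             for x in range(size)] for y in range(size)]

def box_downsample(tex, factor):
    """Average factor x factor blocks -- a crude stand-in for sampling a
    lower mip level, as an overly aggressive LOD bias would force."""
    size = len(tex) // factor
    out = []
    for y in range(size):
        row = []
        for x in range(size):
            block = [tex[y * factor + dy][x * factor + dx]
                     for dy in range(factor) for dx in range(factor)]
            n = len(block)
            row.append(tuple(sum(t[i] for t in block) // n for i in range(3)))
        out.append(row)
    return out

def distinct_colors(tex):
    """The objective metric: how many distinct colors survive."""
    return len({texel for row in tex for texel in row})

base = make_texture()
lod_biased = box_downsample(base, 4)  # pretend the driver forced a lower mip
print(distinct_colors(base), distinct_colors(lod_biased))  # → 256 16
```

Two numbers, no opinions: one image keeps 256 distinct colors, the other keeps 16. That's the kind of A > B that no "subjectivity" defense can paper over.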
That's the point.