I already provided an example of the converse: EMBM in 3DMark. It wasn't supported in earlier GeForce chipsets.
But it was supported by a multitude of other chipsets, not just that one.
Also, when they originally included anti-aliasing support in the benchmark, Nvidia's AA performance sucked rocks, and it only made Nvidia's AA look very bad compared to 3dfx's.
I must have missed that 3dfx product. From day one, a GTS with its add-on 6.XX drivers running AA was still clobbering even a V5 in final scores due to the HW T&L advantage, even more so on the processors of the time.
Once you come to a conclusion, you can interpret any slight evidence as reinforcing that worldview and ignore any contrary data. The conspiracy nuts have been doing it for years, whether it's UFOs, Freemasons, Bilderbergers, the grand oil conspiracy, etc.
Or, alternatively, you can turn a blind eye toward any visible and extensive accumulation of evidence and simply discount it *because* it conflicts with your worldview.
If someone is set on one set of principles, they will simply try to rationalize away any evidence to the contrary. Same thing, just in reverse.
If MadOnion had a coding bug in 3DMark that made it run poorly on brand X, it would be interpreted as bias against brand X, even if they never intended it that way and it was merely an unintentional coding error.
There isn't an example to the contrary, so this theory stands to reason. If there were a code bug introduced that made the product run poorly on Brand Y... well, that would never happen if Brand Y was the designated platform it was designed on from square one.
If MadOnion is lazy about fixing a bug, or refuses to waste time writing specialized code for a card with a small market share (e.g. the Kyro), it will be interpreted as bias.
Understandable hypothesis, but again not consistent with reality. V1.1 of 3DMark2000 came out almost immediately once it was determined that a race condition occurred on one specific chipset (guess which) when running the tests in series. In other words, on that one chipset, when running all the tests in sequence, one score was adversely affected by some carry-over from the previous test. This was fixed and patched almost instantly. I have never seen such an issue reproduced on a 3dfx card or an ATI Radeon of the time, only on the GeForce/GTS cards.
If they write such code as a fallback for the vast majority of the market, it is interpreted as bias.
Here is another case where the theory is sound, but reality doesn't match it.
If "fallback" comparisons are to become the status quo, then this is something totally new for MadOnion. Again, simply reference the Nature test, which not only offered no fallback execution mode for lesser feature sets but also contributed positively to the final "score" they use to make hardware upgrade "recommendations." Much like their XLR8R or System Analyzers, which would certainly make bogus recommendations based on data that is erroneous at best.
Look, writing something that takes specific advantage of PS1.4 and SHOWS AN ADVANTAGE is hard.
What's so hard about showing what multi-pass means to performance? I'm sure the same could be said about the original Banshee versus the Voodoo2. Hell, as long as you didn't perform any multitexturing, both were single-pass in operation. Too bad there weren't any companies making benchmarks back then specifically designed to isolate optimal, emulated, single-pass tests, or else the Banshees would have likely outsold the V2s.
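Just to put some rough numbers on that, here's a back-of-envelope sketch. The single-TMU Banshee vs. dual-TMU Voodoo2 layout is the real point; the clock rates are only ballpark figures for illustration.

```python
import math

# Rough model of single-TMU vs dual-TMU fill rate when dual-texturing.
# Clock figures are approximate and purely illustrative.

def effective_fill_rate(clock_mhz, tmus, textures_per_pixel):
    """Pixels per second for one pixel pipeline, assuming each group of
    texture layers beyond the TMU count costs an additional pass."""
    passes = math.ceil(textures_per_pixel / tmus)
    return clock_mhz * 1e6 / passes

banshee = effective_fill_rate(100, tmus=1, textures_per_pixel=2)  # ~50 Mpixels/s
voodoo2 = effective_fill_rate(90, tmus=2, textures_per_pixel=2)   # ~90 Mpixels/s

print(f"Dual-textured: Banshee ~{banshee / 1e6:.0f} Mpixels/s, "
      f"Voodoo2 ~{voodoo2 / 1e6:.0f} Mpixels/s")
# Drop multitexturing from the test and both run single-pass, and the
# "benchmark" conveniently hides the Voodoo2's real-world advantage.
```

Craft the test around the lowest common denominator and the gap disappears on paper, which is exactly the complaint here.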
It's hard to pull off an effect that is immediately and dramatically better looking *AND* also guarantee that it will win in performance. In most cases, a PS1.1 version can be made to look just as good.
Your stipulation, not mine. The only lack of a "guarantee" would be due to driver or hardware issues on ATI's part, not the basic principle of PS1.4 vs. PS1.1.
Obviously, through selective example coding you can make a PS1.1 "tech" demo look every bit as good as a PS1.4 demo. You can do the same with and without HW T&L, too, and "guarantee" no improvement if you approach it with that goal in mind. That is still no testament that HW T&L is of no major benefit.
Carmack has already shown that sometimes paradoxical results happen. Sometimes 2 passes aren't naturally slower than 1. What if Carmack's "fallback" for the GF4 showed that the GF4 multipass beat the ATI PS1.4?
That would be a good outcome, but it will never be given the objective light of day. Instead, the correct angle will be the one being taken now, which is to simply discount/discredit any *possible* gains in the complete absence of any real data. That seems to be the rallying cry here anyway, and at MadOnion as well.
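And just to make the arithmetic behind the "paradox" concrete, here's a toy model. Every number in it is completely invented; the only point is that fewer passes does not automatically mean less total per-pixel work.

```python
# Toy model of two simple passes vs. one collapsed pass. All numbers are
# invented for illustration only.

def frame_time_ms(pixels, passes, clock_hz=250e6):
    """passes: list of (alu_cycles, mem_cycles) per pixel. Whichever is
    larger dominates each pass (compute-bound vs. bandwidth-bound)."""
    cycles_per_pixel = sum(max(alu, mem) for alu, mem in passes)
    return pixels * cycles_per_pixel / clock_hz * 1000

pixels = 1024 * 768

# Two short, cheap passes (think a PS1.1-style fallback):
two_pass = frame_time_ms(pixels, [(4, 4), (4, 4)])

# One collapsed pass running a longer per-pixel shader (think PS1.4):
one_pass = frame_time_ms(pixels, [(10, 5)])

print(f"two passes: {two_pass:.1f} ms, one collapsed pass: {one_pass:.1f} ms")
# With these made-up numbers the two-pass path actually wins, which is
# exactly why the question deserves real measurements, not assumptions.
```

Either result could come out of real hardware; the problem is that nobody is willing to measure it objectively.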
Isn't it the look and performance of the final result that matters, not how it is done? Gamers hardly care about the elegance of collapsing passes in, say, Doom3. They care only about how good it looks and how fast it runs. If ATI's PS1.4 can't beat a GF4 producing identical output with multipass, that's not bias. It's factual data showing that PS1.4 has no advantage, and perhaps we should move on to something better, like PS2.0 or OpenGL 2.0.
I agree entirely here. Unless the resultant IQ is remarkably different, a gamer truly doesn't care how it was arrived at, as long as it simply looks as good as Leading Brand X and does so without any cost in dropped effects, detail, IQ, or the like.
But this isn't the purpose of 3DMark2001. It has never been a tool to evaluate "look and feel", at least not until now with its (yet again) complete change in policy. If this is a new evolutionary step in the benchmark, I'd be all for it. But somehow, I doubt there will be any "fallbacks" or custom-optimized code paths for alternate hardware in any DX9.0-or-above version that may come in the future. Just a gut feeling.
Cheers,
-Shark