Patrik Ojala's edited/revised thoughts on the fiasco

Hmmm... very informative, and it does change my view on this situation. If Futuremark does as they say and continues to test drivers for application-specific optimizations, this may return 3DMark to its position as a valid and unbiased benchmarking program. Let's hope they can do it. :)
 
Very informative, interesting and well-written, I agree. That doesn't change my view on the situation, though. What FM seems to say (feel free to disagree) is that Nvidia did not "cheat" per se, but applied optimizations that would be legitimate and welcome in other software (games, for example), but are not in benchmarks.

As of now, the only thing that would restore my faith in FM would be if they issued a technical (as opposed to PR) paper explaining, for each and every "former cheat" of both ATI and Nvidia, why this "former cheat" is indeed a "slight optimization", as formulated by their PR person on ExtremeTech. I must say I would be pretty interested in a technical answer on the clipping planes, for example, and how said driver-level clipping planes taking advantage of known camera paths would be useful for gaming purposes.

As for FM considering including vendor-specific paths in their code, why not, provided that:
- the vendor, which would have to be a beta member, proposes a specific code path to FM. Driver-level stuff is a no-no, and any detection of driver-level stuff gets every score from that driver set removed from the ORB, with an email fired off to anyone who posted such a score ("We are sorry, but the manufacturer of your video card chipset cheated, which is why we removed your score. Contact your video card chipset's manufacturer for further details").
- FM independently analyzes the output of every proposed code path, which has to be a "generic inside the app" optimization (i.e., for a shader, the vendor-specific shader has to be mathematically equivalent to the generic shader proposed by FM). This would allow both instruction reordering and partial precision to play a role, provided they produce the exact same output (a toy sketch of such a check follows this list).
- all path-specific proposals are presented for all beta members to see and comment upon.
- FM has the final say about what makes it to the code and what doesn't
- the benchmark would propose both "use vendor path" and "use generic path" options in all its configurations, including the free downloadable version.
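
For what it's worth, here is a minimal sketch of the sort of equivalence check the second point imagines. Nothing here reflects how FM actually works; the shader formulas, function names and input ranges are invented. The only idea is that the generic formulation and the vendor-proposed one get evaluated over sampled inputs and their outputs compared.

```python
# Hypothetical equivalence check: compare a "generic" shader expression against
# a vendor-reordered / reduced-precision variant over random sample inputs and
# report the worst-case difference. Formulas are made up for illustration.
import numpy as np

def generic_shader(n_dot_l, albedo, ambient):
    # reference formulation, evaluated in FP32
    return np.float32(albedo * n_dot_l + albedo * ambient)

def vendor_shader(n_dot_l, albedo, ambient, precision=np.float32):
    # algebraically identical, but factored and optionally run at lower precision
    x = precision(albedo) * (precision(n_dot_l) + precision(ambient))
    return np.float32(x)

samples = np.random.default_rng(0).random((100_000, 3)).astype(np.float32)
ref  = generic_shader(samples[:, 0], samples[:, 1], samples[:, 2])
fp32 = vendor_shader(samples[:, 0], samples[:, 1], samples[:, 2], np.float32)
fp16 = vendor_shader(samples[:, 0], samples[:, 1], samples[:, 2], np.float16)

print("max diff, reordered FP32:", np.abs(ref - fp32).max())
print("max diff, reordered FP16:", np.abs(ref - fp16).max())
```

One caveat: even the pure reordering above can flip the last bit in floating point, so a literal "exact same output" requirement would in practice probably have to be relaxed to a stated tolerance.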

There is no way we can spare resources to check each released driver for optimizations.

Which is in direct contradiction with the PDF explaining why synthetic benchmarks are better, and can be read as yet another way to turn a blind eye to future cheats by any company.

3DMark scores are only comparable if drivers perform exactly the work 3DMark instructs them to do.

We know that. But since you say that you won't investigate all future driver releases, how are we supposed to know whether those scores are comparable in the first place? Do you realize that saying such a thing, without dedicating yourselves to extensive research into future cheats, is effectively invalidating 3DMark? Both statements can't go together.

Many people have speculated that Nvidia would have paid us to publish the statement.

Actually, many people speculated that Nvidia would have either paid you or threatened you with legal action. I'm curious as to why you don't comment on the second part...

Even though it might feel unfair and wrong for us to approach Nvidia after all this arguing, this is vital for the continuity of our benchmark products.

Let me respectfully disagree here, Patric. Getting on better terms with NV is vital for the commercial future of FM, either because of all the smearing they did (directly through company statements or through their "black ops PR specialists", the "guys with webpages") or because of legal threats, no doubt about it. But "continuity" relies on much more than commercial survival; it also relies on public acceptance of the product as a useful tool.

I am not against synthetic benchmarks, I think they are useful tools to use in addition to in-game benchmarks, but I consider that 3DMark03 specifically will be of zero value once ATI or NV release new drivers, especially if those bring a large performance increase. This is not because of some (actual or pretended) opposition to synthetic benchmarks, but because FM destroyed their own credibility by calling blatant cheats (those clipping planes are still stuck in my throat) "slight optimizations". This indicates a desire for a better relationship with a major IHV that goes way beyond any pretense of objectivity.

Call a cat a cat, a cheat a cheat, and you will restore this credibility.

I've even read some pretty absurd benchmark recommendations by real professional reviewers

We want names!!! Actually, if you refer to the various "guys with webpages", they hardly qualify as "professional reviewers" in the first place... :)
 
I must say I would be pretty interested in a technical answer on the clipping planes, for example, and how said driver-level clipping planes taking advantage of known camera paths would be useful for gaming purposes.

For a racing game and/or a tank simulation, for example, it might be useful to have clipping planes when drawing the sky, or the view out the periscope.
 
They sold themselves down the river, and there is no way that I will use a benchmark that cannot properly test DXn functionality.

So what Futuremark will allow nVidia to do is insert their own code path that may (I think most definitely will) lower image quality for speed. nVidia has a very long HISTORY of killing IQ when they needed a speed boost.

ATi made their hardware to DX9.0 specs and so did nVidia; unfortunately for nVidia, when they DO run at spec they get their butts handed to them by ATi...
 
RussSchultz said:
For a racing game and/or a tank simulation, for example, it might be useful to have clipping planes when drawing the sky, or the view out the periscope.

I think the only games that could truly benefit would be "rail shooters", like REZ or Panzer Dragoon.
But anyway, the clipping planes should be included at the game level, not at the driver level... Imagine, for example, that the designers release a patch allowing for a greater field of view: the clipping planes would become invalid and mess with the rendering...
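
To make that objection concrete, here is a toy numeric sketch (plane and coordinates entirely invented) of why a clip plane tied to a known camera path stops being safe the moment the camera can deviate from that path:

```python
# Toy illustration of a hard-wired clip plane. The plane is stored as
# (a, b, c, d) and a point is culled when a*x + b*y + c*z + d < 0.
import numpy as np

# Suppose the driver picked this plane because, along the benchmark's scripted
# camera path, nothing with x > 20 is ever on screen.
plane = np.array([-1.0, 0.0, 0.0, 20.0])

def is_culled(point, plane):
    # signed distance from the plane; negative means the point is thrown away
    return float(np.dot(plane[:3], point) + plane[3]) < 0.0

tree_ahead     = np.array([0.0, 5.0, 100.0])   # straight down the rail
building_right = np.array([40.0, 5.0, 30.0])   # off to the right of the rail

print(is_culled(tree_ahead, plane))      # False: still drawn
print(is_culled(building_right, plane))  # True: culled, harmless for the scripted camera

# If a patch widens the field of view, or a free-look camera turns right,
# building_right should now be visible, yet the hard-wired plane still removes
# it. The trick only holds because the camera path is known in advance, which
# is exactly why such a decision belongs to the application, not the driver.
```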
 
CorwinB said:
But anyway, the clipping planes should be included at the game level, not at the driver level...

So you say that those clip-planes should be in 3dMark03 not in nVidia's drivers?
I agree.
 
So you say that those clip-planes should be in 3dMark03 not in nVidia's drivers?
I agree.

I say that the decision to include clipping planes and other culling techniques should never ever be taken at the driver level, but should be a designer's decision, be it for a game or for a benchmark.

If in the next release FM chooses to include clipping planes in their benchmarks, at least the workload for all chips will be the same. The benchmark would lose some of its "game-likeness", though.
 
nelg said:
In this new jargon just what would be considered a cheat then :?:
From my understanding of the situation, absolutely NOTHING is considered a "cheat" anymore... too many lawyer implications. Now they're all "optimizations", no matter what they are. :(

Question, how can we have any faith in a benchmark that squirrels around with semantics like that? What else are they going to bend for nVidia? :(
 
It doesn't matter what they call them (cheats or application-specific optimizations): either they're not allowed, or the benchmark numbers aren't accepted by Futuremark.
 
What I am trying to understand is whether Futuremark is lowering the bar or simply changing semantics. Is there any violation that would now be considered a cheat and not simply an optimization?
 
On the face of it, it's just semantics, but there is some talk of making a useless benchmark to shut NVIDIA up as well.
 
Himself said:
but there is some talk of making a useless benchmark to shut NVIDIA up as well.

What talk? (Beyond the whole 'they caved!' accusation). What makes you suggest they're going to make a useless benchmark to shut NVIDIA up?
 
Read the press release again, it says they will consider evaluating some form of benchmark that allows for apples to oranges comparisons.. bleh
 
Himself said:
Read the press release again, it says they will consider evaluating some form of benchmark that allows for apples to oranges comparisons.. bleh
"Bleh" seems to be the right word to describe how you read the statement...

If you'd really read it, you'd see that NVIDIA wants optimised rendering paths in 3DMark (like most games have). Futuremark's reply is that they'll consider that option.

That doesn't mean that NVIDIA will get an optimised path while all other cards have to use a general path. It would rather be an optimised path for every graphics IHV (perhaps as a non-default option?), if that becomes a reality, that is.

Now where did you get that "apples to oranges comparisons"? :?
 
Bjorn said:
Well, 3DMark03 is already not an exact apples-to-apples comparison, so I don't see the problem with what they're proposing there. And this will be more in line with what SA proposed in this thread: http://www.beyond3d.com/forum/viewtopic.php?t=6240&start=0.

Well, there is NVIDIA not supporting 1.4 shaders on its older cards, so it's not apples to apples that way; ATI does 24-bit precision, NVIDIA does 32-bit, so there is no common ground there; both have different ways of doing FSAA and anisotropic filtering. If you look at all the ways the cards are different, you can't compare them at all.

You can't help the hardware differences, but creating a benchmark where you run a Voodoo3 in 16 (22) bit vs. a TNT2 in 32 bit isn't the solution to compensate either. You either test one or the other; you don't test separate things and try to equate them as being the same. For a game, it's simply the reality of the game and you are benchmarking the game's performance; for a synthetic app, it's totally useless.

Borg, if all cards have their own code to run, then it's even more widely divergent than before. If it wasn't apples to apples before, then you have just ensured it isn't, and you don't even have the excuse of saying that this is how the thing will behave out in users' hands, like a game.
 
Force all the cards to run at DX9.0x levels. That means:

nVidia runs in 32-bit FP (after all, they pushed it as being superior to FP24).
ATi runs in 24-bit FP (after all, they knew, just like nVidia, that DX calls for a minimum precision of 24 bits).

Now if this means that nVidia's hardware sucks speed-wise, then so be it. Running the cards in DX9.0x-compliant mode should tell you who has the better overall design and who will deliver the best DX9.0x gaming experience.
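
For what it's worth, the precision side of that argument can be put in numbers with a rough sketch that keeps only a given number of mantissa bits and ignores exponent range, rounding modes and every other hardware detail. The 10/16/23-bit widths below are the significand sizes commonly quoted for FP16, ATI's FP24 and full FP32; everything else is just illustration.

```python
# Toy sketch: quantize a value to a given number of mantissa bits and look at
# the error, to get a feel for what FP16 / FP24 / FP32 precision means.
# This ignores exponent range, rounding modes and denormals entirely.
import math

def quantize_mantissa(x, mantissa_bits):
    # keep only `mantissa_bits` fractional bits of the significand
    if x == 0.0:
        return 0.0
    m, e = math.frexp(x)                 # x = m * 2**e, with 0.5 <= |m| < 1
    scale = 2 ** (mantissa_bits + 1)     # +1 accounts for the implicit leading bit
    return math.ldexp(round(m * scale) / scale, e)

x = 1.0 / 3.0
for name, bits in [("FP16", 10), ("FP24", 16), ("FP32", 23)]:
    q = quantize_mantissa(x, bits)
    print(f"{name}: {q:.12f}   error {abs(q - x):.2e}")
```

Roughly speaking, FP16 keeps about three decimal digits, FP24 about five and FP32 about seven, which is why long shader chains are where the precision differences actually become visible.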
 