ExtremeTech: "nVidia's Risky 3D Optimization Gamble"

Interesting article, however I don't see the point in running the Photoshop difference filter on the AA/AF NVIDIA image against the non-AA/non-AF ref image... of course they'll be different.

Overall it came off a little 'preaching to the Spanish Inquisition'-like.
 
Like Zurich says, interesting, but it doesn't reveal anything new to me (been OD-ing on info surrounding this lately :oops: ).

The Photoshop trick made me cringe.. he's comparing a reference image with no AA/AF against an image with AA and AF, there are bound to be differences :? I must be missing something.. *looks at clock* ah-ha! I'm missing sleep :D

I'm glad they're taking a stand and highlighting the issue though, far too few sites seem to want to bring this to the fore and suggest even the possibility of wrongdoing. I know websites have to be careful what they say, but the softly-softly approach of some sites is maddening..

Atm there are no websites doing ATI vs nV comparative card reviews that I trust anymore, and to be blunt, I'm getting really pissed off :devilish:

I'll check out the ExtremeTech recent reviews, I've overlooked them too frequently in the past ;)
 
Doing a diff between, say, a previous driver version that the anti-cheat program works on and the new driver version, with AA and AF enabled, would be most interesting I think...
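For anyone who wants to try that, here's a minimal sketch of such a diff in Python with Pillow (my choice of tool, not anything the article uses); the filenames are placeholders for two captures of the same frame:

```python
# Minimal sketch of a Photoshop-style "Difference" between two captures of the
# same benchmark frame -- e.g. old driver with the anti-detect script vs. the
# new driver. Assumes Pillow is installed; filenames are placeholders.
from PIL import Image, ImageChops

old = Image.open("old_driver_frame.png").convert("RGB")
new = Image.open("new_driver_frame.png").convert("RGB")

diff = ImageChops.difference(old, new)   # per-pixel absolute difference
diff.save("difference.png")

# Quick summary number: the largest per-channel delta anywhere in the frame.
print("max per-channel delta:", max(hi for _lo, hi in diff.getextrema()))
```

As long as both captures come from the same frame with the same settings, anything nonzero in the output is a real rendering difference rather than the AA/AF-vs-no-AA/AF noise the article's comparison produced.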
 
Why are nVidia's drivers being cleared of the clipping plane optimization without more clearly evaluating the performance characteristics of the off-rail behavior?

First, the simple possibility: if there is a drastic and unique dip in fps for any off-rail behavior from any position, it seems pretty clear that the clipping plane hack was simply hidden by detecting off-rail behavior and shutting itself off.
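To illustrate what I mean, a toy example with made-up numbers (not measured data): if the hack shuts itself off whenever it detects off-rail behavior, the off-rail fps should show a sharp drop at exactly the positions where the on-rail run doesn't.

```python
# Toy illustration with made-up numbers (not measured data): compare fps at the
# same scene positions taken on-rail vs. with the camera nudged off-rail, and
# flag any position where the off-rail run drops drastically.
on_rail_fps  = {"pos_a": 52.1, "pos_b": 51.8, "pos_c": 52.4}
off_rail_fps = {"pos_a": 38.0, "pos_b": 51.5, "pos_c": 37.9}

for pos, on in on_rail_fps.items():
    off = off_rail_fps[pos]
    drop = 1.0 - off / on
    flag = "  <- suspicious dip" if drop > 0.15 else ""
    print(f"{pos}: {on:.1f} fps on-rail, {off:.1f} fps off-rail ({drop:.0%} drop){flag}")
```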

Some less simple possibilities:

  • There might be a smarter way of applying pre-calculated clipping planes that is still fairly easy to detect (a toy sketch of what such a plane amounts to follows this list). It seems to be done with a vertex shader, and a more extensive version might only be exposed at particular positions, or by modifying the Antidetector script to target only the vertex shader, if it can be made to work with the drivers in question. Also, a very complex pre-computed clipping plane solution could be used that is either invalid because it can't be done on scenery more complex than 3dmark 03 (and is therefore precluded from being useful for games in general, like the "on-rail" clipping plane cheat, except more sophisticated in its deception), or a version of the "new shader" possibility below (i.e., like ATI's actions except for refusing to come forward about it, and conceivably even without precision abuse) that just requires a whole lot more work to implement for a special case. The latter case would mean it is used in game benchmarking as well, but maybe only for restricted scenes and high-priority titles, depending on the time required to pre-compute and the storage requirements in the driver.
  • A new shader, more like ATI's past 3dmark 03 cheat, is being used. This is a drastic improvement, except for the complete lack of disclosure and the misrepresentation of what they're doing, and possible special-case abuse of the DX precision specifications while technically staying within the API (i.e., fp16 use without admitting it would be what differentiates them from ATI's behavior with Cat 3.4 and 3dmark 320). This seems to be ExtremeTech's assumption, and except for failing to discuss the "simple possibility" above, it doesn't seem unreasonable for them to propose.
  • A new way of cheating that needs new discovery. This ranges from something completely unexpected that we might not uncover immediately (i.e., something useless to assume until it is uncovered)... to a smarter, more general-case detail dropping (different from the "new shader" case in that the image quality equivalence is based on outright omitting workload instead of replacing it with a special case version).
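Here's the toy sketch I mentioned for the clipping-plane bullet; it's just my own illustration of what a pre-computed plane amounts to, nothing from the drivers themselves:

```python
# Toy model of a pre-computed clipping plane (my own example, not the actual
# driver code): geometry on the far side of a plane chosen for the known camera
# rail is simply skipped, which only looks correct while the camera stays on
# that rail.
def culled(point, plane):
    # plane = (a, b, c, d); a*x + b*y + c*z + d < 0 means "throw this point away"
    x, y, z = point
    a, b, c, d = plane
    return a * x + b * y + c * z + d < 0

precomputed_plane = (0.0, 0.0, 1.0, -5.0)   # hypothetical plane: drop everything with z < 5

scene_points = [(0, 0, 2), (1, 2, 6), (3, 1, 4)]
drawn = [p for p in scene_points if not culled(p, precomputed_plane)]
print(drawn)   # the skipped geometry shows up as missing sky/holes once the camera leaves the rail
```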

The possibility of "not cheating" (for a benchmark) seems precluded by the types of behavior the anti-detector script defeated for nVidia drivers (application specific detection) and the score differences when it was used. There are other possibilities, like dynamically calculated clipping planes that don't depend on pre-computation (i.e., more like HSR), but why would that be applied by application detection to 3dmark 03? The idea of "more complex pre-computation" seems to fit the observations and past behavior pretty strongly without indication otherwise. Right now (AFAIK) we haven't even eliminated the first simple possibility.
 
grrrpoop said:
The Photoshop trick made me cringe.. he's comparing a reference image with no AA/AF against an image with AA and AF, there are bound to be differences :?

I got the impression that that was the point they were trying to make: that comparing refrast images against images output by any card isn't a fair way to compare. They probably could have worded it better though.
 
That was the problem with their first cheating article. Needed better wording. Also needed more proof. :rolleyes:

The only good way to compare IMO is the animations... flipping back and forth. PNG can do this, in full quality too I think. It's much better than the Photoshop thing anyway. The Photoshop diff can work, but most people don't get it and it's not as good as what the naked eye can see.
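Something like this is all it takes to build the flip (just my own sketch, using Pillow's animated-PNG support; the filenames are placeholders for the two screenshots being compared):

```python
# Sketch of the flip comparison as an animated PNG -- two lossless frames
# alternating. Assumes Pillow 7.0+ (which can write APNG); filenames are
# placeholders for the two captures being compared.
from PIL import Image

frame_a = Image.open("card_a_capture.png")
frame_b = Image.open("card_b_capture.png")

frame_a.save(
    "flip_compare.png",
    save_all=True,
    append_images=[frame_b],
    duration=750,   # ms per frame
    loop=0,         # loop forever
)
```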
 