Different Ways to Cheat?

nelg

Veteran
Ratchet said:
DaveBaumann said:
When a timedemo is running at 200-300 frames a second, I'll be damned if I could see things such as frames being dropped or other issues.
That's the exact thing I thought as well.

I'm no programmer, but after seeing what nVidia did to 3DMark03, I wouldn't think it would be too hard for them to enable some cheats (discard some textures, turn off trilinear/anisotropic, etc.) when running a common demo or game in benchmark mode.

After reading the above quote in another thread, I was wondering in what ways you could cheat in the various benchmarks. Not just game timedemos, but tools like 3DMark, ShaderMark, the FSAA viewer from Colourless, etc.
 
nelg said:
Ratchet said:
After reading the above quote in another thread, I was wondering in what ways you could cheat in the various benchmarks. Not just game timedemos, but tools like 3DMark, ShaderMark, the FSAA viewer from Colourless, etc.


One way of cheating, at least in 3DMark and ShaderMark, would be to change the original code into something more "suited" to your card...

I.e. ATI's re-ordering of the shader code in GT4, but on a (possibly) larger scale...

I'm no expert on this (all I know about it I've learnt from lurking on this forum), but I'll take a shot...

Depending on how close the scrutiny it's perceived to come under, it could be anything from full precision for the first couple of frames and then dropping precision quite severely (suggested by someone in another thread, sorry, can't remember who), to replacing the entire code with something that looks somewhat similar...

Expanding on the precision dropping: instead of dropping precision for every frame except the first couple, maybe alternate between regular and low precision, with something like 60 out of every 100 frames at low precision (but intermixed in a way that makes noticing it very hard)...
The trick would be to find the ratio that gives you the biggest performance boost while still bearing up under semi-close scrutiny...
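
A rough back-of-the-envelope way to look at that ratio (just a sketch; both the baseline and the per-frame speedup number are made up for illustration):

```python
# Back-of-the-envelope sketch: average frame rate if a fraction of frames is
# rendered at reduced precision. All numbers below are invented for illustration.

def effective_fps(base_fps, low_fraction, low_speedup):
    """Average fps when low_fraction of frames render low_speedup times faster."""
    full_time = 1.0 / base_fps            # seconds per full-precision frame
    low_time = full_time / low_speedup    # seconds per low-precision frame
    avg_time = (1.0 - low_fraction) * full_time + low_fraction * low_time
    return 1.0 / avg_time

# e.g. 100 fps baseline, 60 of every 100 frames cheated, 1.5x faster per cheat frame
print(round(effective_fps(100, 0.60, 1.5), 1))   # ~125.0
```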

If you replace the entire code with something hand-optimized for your card, while still looking more or less exactly like the original, it should also be able to bear quite close inspection...
(By using proprietary extensions you'd be able to streamline the code for your card much more than would be possible with the standard extensions (if you haven't built your card exactly to spec, that is)...)

Sorry if this is very n00bish, but I'm doing my best to learn more! :)
 
There is also hardware cheating, no? Like not being consistent in how you set up the machine to be benchmarked: defragging the hard drive, BIOS settings à la FSB speed, memory timings, etc.

Many of the PC benchmarks can be altered with just a few tricks...

Sort of like taking a stock Camaro and a stock Mustang to the drag strip. Then, when the other guy hits the bathroom, the Mustang driver adjusts his timing...
 
As a sort of proof of concept, I considered the idea of a device that would plug into a DVI output but, instead of displaying the frame data, record it (e.g. on a hard drive) for future scrutiny. Unfortunately, a single frame at 1600x1200 is 7.32MB of raw data; run that at 100fps, and you've got 732MB/s, which is a) pretty damn amazing when you think about it, and b) well beyond the abilities of today's hard drives. Maybe ~10 hard drives in a RAID0 config, but that's really pushing it. Point is, nothing like that is ever going to get made.

(Actually, a RAM disk would work nicely, if you don't mind requiring a capacity of ~50GB per minute, which would get rather expensive rather quickly.)
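
Just to put numbers on it (a trivial check, assuming 32 bits per pixel):

```python
# Quick sanity check of the capture bandwidth figures above (assumes 32-bit color).
width, height, bytes_per_pixel, fps = 1600, 1200, 4, 100

frame_bytes = width * height * bytes_per_pixel
per_second = frame_bytes * fps
per_minute = per_second * 60

print(round(frame_bytes / 2**20, 2), "MB per frame")    # ~7.32
print(round(per_second / 2**20, 1), "MB/s at 100 fps")  # ~732.4
print(round(per_minute / 2**30, 1), "GB per minute")    # ~42.9, i.e. the ~50GB ballpark
```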

But I don't think we need one in order to catch the sort of cheating MrGaribaldi is talking about. In fact, I don't think alternating or randomly switching between "good quality" and "cheat quality" rendering is a really smart way to cheat; for one thing, the visual problems might be more noticeable to the naked eye than merely rendering the entire thing in "cheat quality" (particularly if the camera were moving slowly or not at all), and for another, taking a few screenshots would eventually land on one of the "cheat quality" frames and thus the jig would be up. (Unless, of course, the card drivers essentially only allowed screenshots to be taken of "good quality" frames. Along those lines, the card could merely render in "cheat quality" all the time, and simply rerender a frame in "good quality" whenever a screenshot was requested.)
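
To make that last point concrete, the driver-side logic being imagined would only need to look something like this. This is a purely hypothetical sketch in Python; every name in it is invented and it has no relation to any real driver:

```python
# Purely hypothetical sketch of a screenshot-aware cheat; all names are invented.

class HypotheticalDriver:
    def render_frame(self, scene):
        # Normal path during a benchmark: always use the fast, reduced-quality shaders.
        return self._render(scene, quality="cheat")

    def read_back_frame(self, scene):
        # A screenshot/readback request is visible to the driver, so it can quietly
        # rerender the same frame at full quality before handing the pixels back.
        return self._render(scene, quality="full")

    def _render(self, scene, quality):
        # Stand-in for actual rendering: just tag the frame with its quality level.
        return {"scene": scene, "quality": quality}

drv = HypotheticalDriver()
print(drv.render_frame("frame 42")["quality"])      # cheat
print(drv.read_back_frame("frame 42")["quality"])   # full
```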

As for the original question... basically any cheat imaginable can be done if the driver is capable of recognizing the benchmark in question and triggering particular special-case code. That gives us two courses of action: either make benchmarks unrecognizable as such, or make the benchmark workload sufficiently unpredictable that the only way for a card to do well in all cases is to actually be capable of rendering legally at full speed. Unfortunately, I'm not sure there is a good way to make something like Colourless' FSAA app "sufficiently unpredictable" (although perhaps with enough tricks it could be made difficult for the drivers to recognize it).
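
On the "sufficiently unpredictable" side, one simple version would be for the benchmark itself to randomize the things a driver might key on, publishing the seed so runs stay repeatable. A rough sketch only; nothing like this exists in any of the tools mentioned as far as I know, and all the parameter names are invented:

```python
# Sketch of a benchmark randomizing its own workload so a driver can't
# special-case a fixed, known frame sequence. All parameters are invented.
import random

def build_run(seed):
    rng = random.Random(seed)
    return {
        "camera_path_offset": rng.uniform(0.0, 5.0),     # nudge the timedemo path
        "shader_constant_jitter": rng.uniform(0.98, 1.02),
        "texture_upload_order_seed": rng.randrange(2**32),
    }

# Publish the seed alongside the score so anyone can replay the exact same run.
print(build_run(seed=1234))
```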

In the end, the surest way would be if Nvidia, ATI, etc. were to open source their drivers. (The assumption being that building cheats directly into the hardware would not be very effective.) Of course, the chances of that happening are probably worse than the chances that someone will develop a cheap new external hard drive that records at a GB/s and plugs into your video card...
 
As a sort of proof of concept, I considered the idea of a device that would plug into a DVI output but, instead of displaying the frame data, record it (e.g. on a hard drive) for future scrutiny. Unfortunately, a single frame at 1600x1200 is 7.32MB of raw data; run that at 100fps, and you've got 732MB/s, which is a) pretty damn amazing when you think about it, and b) well beyond the abilities of today's hard drives. Maybe ~10 hard drives in a RAID0 config, but that's really pushing it. Point is, nothing like that is ever going to get made.

I was thinking about that very thing... I think we do need a device like that, though, to get truly accurate screencaps that allow us to do apples-to-apples IQ comparisons between cards (the AA supposedly not showing up on NV3x cards when they were first reviewed comes to mind, along with the blurry screenshot AA that was not actually blurry).

Yeah, it'd be bad for cheat comparison purposes, but it'd be excellent for IQ, diagnostic, and Other Random Purposes.
 
If the image quality is identical and unchanged, why is replacing shader or other code with more streamlined or more specific versions cheating? Why is recognizing a benchmark or game as such and adjusting the driver accordingly, so long as there is no image quality degradation, cheating?

:?:
 
It's not cheating when done right: if, no matter the input, it always has the same output, then the new algorithm is OK, especially for games. The problem we're having is that they aren't just making better algorithms; they're replacing the code with something that produces generally similar-looking output, but only under certain circumstances, and even then it's only similar, like the water in the GT4 demo or the water in Splinter Cell (SC to a lesser extent, until we find the truth behind that problem). I personally don't have a problem with what ATI did by reordering and replacing their code with an algorithm that has the same output, but I think they should have just given the algorithm to Futuremark in the first place, or not put it in at all. If they had only done this in a game I'd have no problem; it just shouldn't be done in a synthetic benchmark.
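
A toy illustration of the distinction (nothing to do with the actual GT4 shader, just the idea): a reordering that gives the same output for every input, versus a cheaper stand-in that is only "similar looking".

```python
def original(x):
    return x * x + 2 * x + 1

def reordered(x):
    # Mathematically identical factoring: same output for every input. Fine.
    return (x + 1) * (x + 1)

def lookalike(x):
    # Cheaper stand-in that drops the "+ 1": output is merely close, not identical.
    return x * x + 2 * x

for x in (0, 1, 10):
    print(original(x), reordered(x), lookalike(x))
# 1 1 0 / 4 4 3 / 121 121 120
```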
 
Does anyone think we might see cheats like automatically reducing FSAA levels when higher screen resolutions are used? The reason I started this thread was to see what creative ways could be employed to cheat, so as to know what to look out for.
 
Bigus Dickus said:
If the image quality is identical and unchanged, why is replacing shader or other code with more streamlined or more specific versions cheating? Why is recognizing a benchmark or game as such and adjusting the driver accordingly, so long as there is no image quality degradation, cheating?

:?:

Because it's a synthetic benchmark. One of the goals of something like 3DMark is to run the same code on different hardware so you can compare them, right? When drivers replace or alter a shader, not all cards are running the same shaders, and a comparison is no longer valid. One of the things FM talked about was cards doing the same amount of work. If all cards aren't running the same shaders, how could they be doing the same work?
 
Does anyone think we might see cheats like automatically reducing FSAA levels when higher screen resolutions are used? The reason I started this thread was to see what creative ways could be employed to cheat, so as to know what to look out for.

No. Because with a screenshot at 1600 or 2048, you can still see jaggies unless you use AA.

Admittedly, the jaggies are REALLY FARKING SMALL, but they are there if you zoom in on the edges.
 
David, what about a utility that only captures 5% of frames (3% user-selected and 2% random) and does a before/after image quality check automatically (subtracting each image from a reference image set provided by the benchmark team and verified for accuracy)?

Statistically, that would give you an excellent and directable analysis tool without the mind-boggling capture overheads, unless you wanted that 5% to be contiguous frames at some point in the benchmark.
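
Something like this, maybe; a minimal sketch that assumes the captured and reference frames are already available as raw byte strings, with every name here hypothetical:

```python
# Hypothetical sketch: sample ~5% of frames (part chosen, part random) and
# compare each captured frame against a verified reference image.
import random

def pick_frames(total_frames, chosen, random_fraction=0.02, seed=None):
    rng = random.Random(seed)
    pool = [i for i in range(total_frames) if i not in chosen]
    extra = rng.sample(pool, int(total_frames * random_fraction))
    return sorted(set(chosen) | set(extra))

def mean_abs_diff(captured, reference):
    """Average per-byte difference between two raw frames of equal size."""
    return sum(abs(a - b) for a, b in zip(captured, reference)) / len(reference)

# Usage idea: flag any sampled frame whose difference exceeds a small threshold.
frames_to_check = pick_frames(total_frames=1000, chosen=set(range(0, 1000, 33)), seed=7)
print(len(frames_to_check), "frames selected")
```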
 
Bigus Dickus said:
If the image quality is identical and unchanged, why is replacing shader or other code with more streamlined or more specific versions cheating? Why is recognizing a benchmark or game as such and adjusting the driver accordingly, so long as there is no image quality degradation, cheating?

:?:

IMHO, when a known benchmark is run, the benchmarker should be able to properly interpret the results. Even if the image quality is identical and unchanged, replacing shader or other code raises the question of exactly what is being measured.

If the purpose of the benchmark is to exercise certain specific capabilities of the hardware, then obviously changing the code distorts that interpretation of the results. Of course, more efficient optimizations of the code may be possible, but I'd argue that in some cases that is missing the point.

I don't particularly mind it if an IHV demonstrates that some program could have been coded more efficiently for their hardware. But they need to be open about it so that the results can be intelligently discussed.
 
Bigus Dickus said:
If the image quality is identical and unchanged, why is replacing shader or other code with more streamlined or more specific versions cheating? Why is recognizing a benchmark or game as such and adjusting the driver accordingly, so long as there is no image quality degradation, cheating?

:?:

Here is my take that I've offered a few times.

Expanded upon at Rage3D.
 
g__day said:
Statistically, that would give you an excellent and directable analysis tool without the mind-boggling capture overheads, unless you wanted that 5% to be contiguous frames at some point in the benchmark.
The problem is that this still isn't good enough, for one simple reason: how do you capture the frames? Do you make a call to lock the backbuffer in DX, or glReadPixels in GL? If so, the driver knows you are about to read the frame and can potentially undo any cheating!

And because of the significant time overheads of ReadPixels, Lock, etc., the driver fundamentally has just about as much time as it wants.
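
For reference, the GL-side capture such a utility would have to make looks roughly like this (PyOpenGL used purely for illustration; it assumes an active GL context, and the point above stands: the driver services this very call):

```python
# Illustration only: reading the framebuffer back through the driver.
# Requires an active OpenGL context; the driver sees exactly when this happens.
from OpenGL.GL import glReadPixels, GL_RGB, GL_UNSIGNED_BYTE

def grab_frame(width, height):
    # Returns the raw RGB contents of the current read buffer.
    return glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE)
```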

Capturing the DVI is better. But I can think of issues with that too.

And I seem to be starting most of my sentences with conjunctions today...
 
Bigus Dickus said:
If the image quality is identical and unchanged, why is replacing shader or other code with more streamlined or more specific versions cheating? Why is recognizing a benchmark or game as such and adjusting the driver accordingly, so long as there is no image quality degradation, cheating?

:?:

<post mode="demalion">
Here are some thoughts I had on the subject.
</post>
 
g__day said:
David, what about a utility that only captures 5% of frames (3% user-selected and 2% random) and does a before/after image quality check automatically (subtracting each image from a reference image set provided by the benchmark team and verified for accuracy)?

Statistically, that would give you an excellent and directable analysis tool without the mind-boggling capture overheads, unless you wanted that 5% to be contiguous frames at some point in the benchmark.

Store hash values of the framebuffers during the benchmark run, and then go back and examine the quality of any of the frames individually, while verifying that the hash values match the ones from the performance benchmark run.
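
Roughly like so, presuming you can get at the raw frame bytes at all (which, per the readback discussion above, is the hard part):

```python
# Sketch: hash every frame during the timed run, then verify that a later
# quality-inspection run produced byte-identical frames.
import hashlib

def frame_hash(frame_bytes):
    return hashlib.sha1(frame_bytes).hexdigest()

def runs_match(perf_run_hashes, inspection_frames):
    """True only if every inspected frame matches the hash from the timed run."""
    return all(frame_hash(f) == h for f, h in zip(inspection_frames, perf_run_hashes))
```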
 
Unfortunately, vsync is off during performance runs, so a hash of a 'whole frame' is really a hash of a composite of some number of partial frames....
 