3DMark type cheating in DX9/DX8/OGL games?

Brent

Regarding the type of cheating going on in 3DMark03, where certain pixel and vertex shaders are detected and replaced with the driver's own code that reduces IQ to gain FPS: am I correct in thinking they could do this in DX9 games as well?

Let's say the game was programmed in DX9's HLSL; the exact same type of cheat could be done in actual games then, correct?

What about DX8 shaders like PS 1.1, could it be done with those types too?

And then what about OGL?
 
Brent said:
Regarding the type of cheating going on in 3DMark03, where certain pixel and vertex shaders are detected and replaced with the driver's own code that reduces IQ to gain FPS: am I correct in thinking they could do this in DX9 games as well?

Let's say the game was programmed in DX9's HLSL; the exact same type of cheat could be done in actual games then, correct?

What about DX8 shaders like PS 1.1, could it be done with those types too?

And then what about OGL?

Maybe that type of "cheating" is already happening :?:
I read this at Rage3D...

Quake III, FX 5900 30-40 % faster than 9800pro:

http://www.hardwarezone.com/article...aid=749&page=14

RTCW, 9800pro 30-40 % faster than FX 5900:

http://www.hardwarezone.com/article...aid=749&page=16


Both these games use the same engine, right? So how can these results be so different? Optimizing? Cheating? I have read some FX5900 reviews and noticed that it seems to beat the 9800pro mostly in widely used benchmarks (Quake III, Serious Sam, UT 2003 (some levels)...). But when reviewers benchmark less-used games, the 9800pro and FX5900 are even, or the 9800pro wins.



It is certainly worth investigating. ;)
 
nelg said:
Quake III, FX 5900 30-40 % faster than 9800pro:
RTCW, 9800pro 30-40 % faster than FX 5900:
Both these games use the same engine, right? So how can these results be so different? Optimizing? Cheating?

... or just different graphics content. With some apps the content bottleneck can be fillrate, with others it's bandwidth, and still others are limited by triangle rate. I don't know specifically where the bottlenecks are with RTCW and Quake, but the engine alone is definitely not the only factor that affects performance on the app side.

I actually would very much like to see some real synthetic benchmarking in some reviews. Particularly, how different aspects of rendered content affect each other. For instance, when measuring fillrate, start to increase memory bandwidth demands by increasing texture size, or start to increase the amount of geometry. Then you can draw a couple of graphs of bandwidth vs. triangle rate, etc. Pixel shader length vs. fillrate would be interesting, for instance.
This would give a very good picture of what the card is _capable_ of doing when maxed out, and where the bottlenecks are.
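
A rough sketch of what one such sweep could look like, assuming a GL context and a 1024x768 window are already set up (the quad count and texture sizes are just placeholder numbers, and clock() is only a crude timer):

#include <GL/gl.h>
#include <cstdio>
#include <ctime>
#include <vector>

const int SCREEN_W = 1024, SCREEN_H = 768, QUADS = 500;

void fillrateSweep()
{
    for (int size = 64; size <= 2048; size *= 2)
    {
        // Upload an RGBA texture of the given size; bigger textures mean
        // more texel bandwidth for the same number of pixels filled.
        std::vector<unsigned char> texels(size * size * 4, 128);
        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, size, size, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, &texels[0]);
        glEnable(GL_TEXTURE_2D);

        double t0 = (double)clock() / CLOCKS_PER_SEC;
        for (int i = 0; i < QUADS; i++)
        {
            // One full-screen textured quad per iteration
            // (default matrices, so clip coords cover the viewport).
            glBegin(GL_QUADS);
            glTexCoord2f(0, 0); glVertex2f(-1, -1);
            glTexCoord2f(1, 0); glVertex2f( 1, -1);
            glTexCoord2f(1, 1); glVertex2f( 1,  1);
            glTexCoord2f(0, 1); glVertex2f(-1,  1);
            glEnd();
        }
        glFinish();   // make sure the GPU has actually finished the work
        double dt = (double)clock() / CLOCKS_PER_SEC - t0;

        printf("%4dx%-4d texture: %.0f Mpixels/s\n", size, size,
               (double)QUADS * SCREEN_W * SCREEN_H / dt / 1e6);
        glDeleteTextures(1, &tex);
    }
}

Repeat the same loop while increasing the vertex count instead of the texture size and you get the triangle-rate axis of the same graph.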
 
nelg said:
Maybe that type of "cheating" is already happening :?:
I read this at rage3D...

Quake III, FX 5900 30-40 % faster than 9800pro:

http://www.hardwarezone.com/article...aid=749&page=14

RTCW, 9800pro 30-40 % faster than FX 5900:

http://www.hardwarezone.com/article...aid=749&page=16


Both these games use the same engine, right? So how can these results be so different? Optimizing? Cheating? I have read some FX5900 reviews and noticed that it seems to beat the 9800pro mostly in widely used benchmarks (Quake III, Serious Sam, UT 2003 (some levels)...). But when reviewers benchmark less-used games, the 9800pro and FX5900 are even, or the 9800pro wins.

It is certainly worth investigating. ;)

As far as Brent's original concern about illegitimate shader substitution goes, Quake-engine-based titles are immune: there are no pixel shaders in them. Alas, their timedemos are susceptible to illegitimate custom clipping planes and savings on clears just as much as 3DMark is.
 
Brent said:
Regarding the type of cheating going on in 3DMark03, where certain pixel and vertex shaders are detected and replaced with the driver's own code that reduces IQ to gain FPS: am I correct in thinking they could do this in DX9 games as well?

Let's say the game was programmed in DX9's HLSL; the exact same type of cheat could be done in actual games then, correct?

What about DX8 shaders like PS 1.1, could it be done with those types too?

And then what about OGL?
Read what Tim Sweeney has to say about optimizations vs cheats on our front page. The fact that he said what he said means "It can be done", regardless of API. I'm pretty sure Tim wouldn't have said what he said if he didn't already know.
 
Reverend said:
Read what Tim Sweeney has to say about optimizations vs cheats on our front page. The fact that he said what he said means "It can be done", regardless of API. I'm pretty sure Tim wouldn't have said what he said if he didn't already know.

You just went a generic (and pretty important) step further than Sweeney did. The above, intentionally or not, invites the reader to suppose that Sweeney has already caught one or more IHVs up to such shenanigans in his games, and for reasons of his own just does not talk about it publicly. From this are conspiracy theories born.
 
nelg said:
Quake III, FX 5900 30-40 % faster than 9800pro:

http://www.hardwarezone.com/article...aid=749&page=14

RTCW, 9800pro 30-40 % faster than FX 5900:

http://www.hardwarezone.com/article...aid=749&page=16


Both these games use the same engine, right?

Yes and no. The Q3A benchmarks are often done with the original version (including the Q3A scores at HZ), but the engine has been extended since then; for instance, when Team Arena was released they added support for terrain rendering. The fact that performance has changed (decreased) with new point releases underlines that the engine changes.

Also, the workload is completely different: the Q3A demos are run in the original and, by today's standards, very simple maps, while RTCW features a much higher polycount. These facts make it impossible to predict RTCW performance by running Q3A benchmarks.
 
no_way said:
I actually would very much like to see some real synthetic benchmarking in some reviews. Particularly, how different aspects of rendered content affect each other.
I agree. It would be a whole heck of a lot better than the pseudo-synthetic benchmarking promoted by Futuremark.
 
darkblu said:
As far as Brent's original concern about illegitimate shader substitution goes, Quake-engine-based titles are immune: there are no pixel shaders in them. Alas, their timedemos are susceptible to illegitimate custom clipping planes and savings on clears just as much as 3DMark is.

... plus things like downsizing textures, automatically applying texture compression, changing texture bpp, etc.
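
None of that is hard to do, since every texture upload passes through the driver anyway. A conceptual sketch of a 'downsize and compress' hack in a hypothetical GL driver shim; myDriverTexImage2D and passToHardware are invented names, not real entry points:

#include <GL/gl.h>
#include <GL/glu.h>
#include <vector>

#ifndef GL_COMPRESSED_RGB
#define GL_COMPRESSED_RGB 0x84ED   // ARB_texture_compression generic format
#endif

// Stand-in for whatever actually programs the hardware (invented name).
extern void passToHardware(GLenum target, GLint level, GLint internalFmt,
                           GLsizei w, GLsizei h, GLint border,
                           GLenum fmt, GLenum type, const void* pixels);

void myDriverTexImage2D(GLenum target, GLint level, GLint internalFmt,
                        GLsizei w, GLsizei h, GLint border,
                        GLenum fmt, GLenum type, const void* pixels)
{
    // 1) Force a compressed internal format regardless of what the app asked for.
    if (internalFmt == GL_RGB8 || internalFmt == GL_RGB)
        internalFmt = GL_COMPRESSED_RGB;

    // 2) Silently halve the resolution of big RGBA textures.
    if (pixels && w >= 512 && h >= 512 &&
        fmt == GL_RGBA && type == GL_UNSIGNED_BYTE)
    {
        std::vector<unsigned char> smaller((w / 2) * (h / 2) * 4);
        gluScaleImage(GL_RGBA, w, h, GL_UNSIGNED_BYTE, pixels,
                      w / 2, h / 2, GL_UNSIGNED_BYTE, &smaller[0]);
        passToHardware(target, level, internalFmt, w / 2, h / 2, border,
                       fmt, type, &smaller[0]);
        return;
    }

    passToHardware(target, level, internalFmt, w, h, border, fmt, type, pixels);
}

The app never finds out unless someone compares screenshots against a known-good reference.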
 
Chalnoth said:
no_way said:
I actually would very much like to see some real synthetic benchmarking in some reviews. Particularly, how different aspects of rendered content affect each other.
I agree. It would be a whole heck of a lot better than the pseudo-synthetic benchmarking promoted by Futuremark.

It's tough to make purely synthetic tests look cool, though.

Though, truthfully, I can't imagine how Futuremark gets anybody to buy their product, outside of reviewers. If anybody were to ask me, I'd suggest making graphically intense playable "little games" that target one bottleneck at a time. Cheesy sidescrollers, whacked-out versions of minesweeper, mahjong, or whatever else could be squeezed into a synthetic test without diluting the bottleneck they're targeting.

I'd need a reason to buy the benchmark and if it had some addictive games in it, that might do it.
 
Humus said:
Yes. It could be done in real games, DX and OGL.

Then to me, as a reviewer, this whole thing gives me something to definitely watch out for in games. I didn't know that a video card developer could, in their drivers, replace shader code like that with their own.

That is pretty sneaky, but now I'll have to really watch out for that and make sure the images look like they were intended to by the developer.

I'm glad Futuremark was able to investigate it and find out exactly what they were doing. It helps us all look for these things now in games; we know just how sneaky drivers can be.

This right here shows you at least one need for synthetic benchmarks.
 
Russ,
To me, personally, using 3DMark is more a case of having bragging rights.
I think it's fun to participate in forums where scores are listed in people's siggies, etc. I was stoked when people started doing this. I think a lot of people take it too far though, and purchase new hardware to get a higher score... ever go to the Futuremark forums? It's insane. They are totally 3DMark crazy...
 
Hmmm, even though I have been installing folding@home on any computer I can get my hands on, bragging rights just don't do it for me.
 
Look at the benchmark suites at sites, and that is where the problem lies... Code Creatures?? What makes that a good 'benchmark'? There is no game using that engine, and it isn't even using PS 2.0.

The lack of pixel-shader-powered games doesn't help... :!:
 
RussSchultz said:
Hmmm, even though I have been installing folding@home on any computer I can get my hands on, bragging rights just don't do it for me.
I'm folding even on my laptop, which heats it to the point that it can no longer be used on my lap.
Just FYI.
 
Brent said:
Then to me, as a reviewer, this whole thing gives me something to definitely watch out for in games. I didn't know that a video card developer could, in their drivers, replace shader code like that with their own.

Nothing gets out to the card without first passing through the driver. Well, except if you're locking an onboard vertex buffer and have retrieved a pointer to it, but that's just about it. The shader is written as standard ASCII text. In OpenGL the ASCII is fed directly to the driver, and it's up to the driver to take it from there; it can do just about anything it chooses to. In D3D the driver receives an array of tokens (AFAIK anyway), but that doesn't make it any harder to replace it.
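
On the OpenGL side, the detection can be as crude as hashing the incoming program text. Just a conceptual sketch; driverLoadProgram, compileForHardware and the replacement table are invented names for the driver internals:

#include <map>
#include <string>

// Stand-in for whatever compiles the program for the hardware (invented name).
extern void compileForHardware(const char* shaderText);

// Fingerprints of known benchmark shaders mapped to hand-written replacements.
static std::map<unsigned long, std::string> replacements;

static unsigned long fingerprint(const char* text)
{
    // Any cheap hash of the ASCII program text will do for detection.
    unsigned long h = 5381;
    for (const char* p = text; *p; ++p)
        h = h * 33 + (unsigned char)*p;
    return h;
}

// Stand-in for the driver entry point that receives the ASCII program
// (roughly where something like glProgramStringARB ends up internally).
void driverLoadProgram(const char* appShaderText)
{
    std::map<unsigned long, std::string>::const_iterator it =
        replacements.find(fingerprint(appShaderText));

    if (it != replacements.end())
        compileForHardware(it->second.c_str());   // silently swap in our version
    else
        compileForHardware(appShaderText);        // pass through untouched
}

D3D's token stream is no different in principle; the driver can match the token array just as easily as the text.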
 
I also didn't expect the 'bragging rights' innuendo to really fly here with you.
I don't expect you to understand; I was only trying to get the point across that I think it's fun to share 3DMarks. There's nothing wrong with it.
 
Brent said:
That is pretty sneaky, but now I'll have to really watch out for that and make sure the images look like they were intended to by the developer.
How do you intend to do that for every single game you benchmark? Send screenshots to each developer and ask "Is this EXACTLY what you expect in terms of IQ from your game?" You'll probably never get a reply -- developers, unless they're your buddy, won't even bother to spend the time.

You could use the refrast, but that will take a massive amount of time and you probably won't be able to control which exact frame you want to study for IQ. And then there's OGL.
 
Reverend said:
You could use the refrast, but that will take a massive amount of time and you probably won't be able to control which exact frame you want to study for IQ. And then there's OGL.
Also, many applications don't allow you to select refrast as a rendering device.
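
Right; with D3D9 the device type is picked by the application itself at creation time, so if the developer didn't expose a switch there's nothing a reviewer can do about it. A minimal sketch, assuming a window handle and present parameters are already set up (and, as far as I know, the reference rasterizer DLL only comes with the SDK, not the end-user runtime):

#include <d3d9.h>

// The only difference between rendering on the real card and on the
// reference rasterizer is the device type passed at creation time.
IDirect3DDevice9* createDevice(HWND hwnd, D3DPRESENT_PARAMETERS& pp, bool useRefRast)
{
    IDirect3D9* d3d = Direct3DCreate9(D3D_SDK_VERSION);
    if (!d3d) return 0;

    IDirect3DDevice9* device = 0;
    HRESULT hr = d3d->CreateDevice(
        D3DADAPTER_DEFAULT,
        useRefRast ? D3DDEVTYPE_REF : D3DDEVTYPE_HAL,   // refrast vs. real hardware
        hwnd,
        D3DCREATE_SOFTWARE_VERTEXPROCESSING,
        &pp,
        &device);

    if (FAILED(hr)) { d3d->Release(); return 0; }

    // In a real app you would keep both interfaces around and Release()
    // them at shutdown; this sketch only shows the creation step.
    return device;
}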
 