My thread @ Futuremark (Re: Wait for Unwinder)

Quote from Unwinder
I really did RE both Detonator and Catalyst, and I found application detection mechanisms in each of the drivers. I created scripts to prevent the drivers from detecting D3D applications (the scripts block pixel/vertex shader and command line/window text checksum calculation in the Detonator, and texture pattern/pixel shader code detection in the Catalyst).
Blocking the application detection code caused a dramatic performance drop in 3DMark2001/Nature on both NV (74->42 FPS) and ATI (66->42 FPS) boards. IQ in this test changed on both systems. Do I have to continue?
NVAntiDetector also caused a significant performance drop in other D3D benchmarks (e.g. UT2003); the 3DMark2003 score on NV35 dropped even more than with the 330 patch (this info comes from my tester and I cannot confirm it because I don't have an NV35 sample).
A review containing details and benchmarks is being prepared for publication on Digit-Life now.

Conclusion:
My trust in NVIDIA and ATI PR is almost zero now. Both of them seem to use the same benchmark 'optimization' techniques, but NVIDIA promotes them as 'application-specific optimizations' while ATI simply tried to appear innocent; both have been fooling us for a long time. 3DMark2001/Nature was the de facto standard for estimating PS performance, yet both IHVs show distorted benchmark results by altering the rendering code. And it's very sad.
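
To make concrete what Unwinder means by blocking the checksum calculation the drivers use to recognise an application, here is a minimal, purely hypothetical C++ sketch. The hash function, the stored fingerprint value, and the OnCreatePixelShader/LoadReplacementShader helpers are all invented for illustration; the actual detection code in either driver has not been published.

Code:
#include <cstdint>
#include <vector>

// Toy FNV-1a hash over the shader byte code -- the kind of checksum the
// anti-detect scripts are said to block. Not taken from any real driver.
uint64_t Fingerprint(const std::vector<uint8_t>& bytes) {
    uint64_t h = 1469598103934665603ull;
    for (uint8_t b : bytes) { h ^= b; h *= 1099511628211ull; }
    return h;
}

// Placeholder: stands in for a hand-tuned shader stored inside the driver.
std::vector<uint8_t> LoadReplacementShader() { return {}; }

// Hypothetical hook on pixel shader creation: if the incoming shader matches
// a known benchmark shader, hand back the driver's own version instead.
std::vector<uint8_t> OnCreatePixelShader(const std::vector<uint8_t>& tokens) {
    const uint64_t kKnownBenchmarkShader = 0x0123456789ABCDEFull; // placeholder value
    if (Fingerprint(tokens) == kKnownBenchmarkShader)
        return LoadReplacementShader();   // detected: substitute
    return tokens;                        // normal path: use the shader as submitted
}

An anti-detect script defeats this kind of check either by patching the comparison out or by perturbing the bytes fed to the hash so the fingerprint never matches, which is consistent with the performance and IQ changes Unwinder reports above.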
 
Where are the guys who led us to believe ATi was the good one and NV the bad one?
I don't see many comments, yet on the NV cheats there were zillions (comments and threads) :)

I wonder what hellbinder will say, oh, lemme guess:

"ATi had to do those tricks because nVidia made them"

there ;)

Fanboys are fanboys, after all.
 
I thought this was forthcoming....or not:
Blocking the application detection code caused a dramatic performance drop in 3DMark2001/Nature on both NV (74->42 FPS) and ATI (66->42 FPS) boards. IQ in this test changed on both systems. Do I have to continue?

I'd like to see those IQ differences.
 
StealthHawk said:
I thought this was forthcoming....or not:
Blocking the application detection code caused a dramatic performance drop in 3DMark2001/Nature on both NV (74->42 FPS) and ATI (66->42 FPS) boards. IQ in this test changed on both systems. Do I have to continue?

I'd like to see those IQ differences.
Wait for the Digit-Life article....
 
K.I.L.E.R said:
all of u owe me another apology. :p

Dave H, remember our little discussion a while ago? I told ya so. :p

No, actually I don't remember. Could you refresh my memory?

I promise to be mortified! ;)
 
Interesting. There are some significant things still missing, though, at least in the English commentary on the topic..

What kind of detection mechanism was defeated? Shader file name, or an actual code analysis effort? "Command line/window text checksum calculation" is pretty clearly a completely invalid way of modifying shader behavior for a benchmark, but "texture pattern/pixel shader code detection" is not so clear. Is it a tiered "turn on this optimizer functionality when you detect this application" approach, or a tiered "replace this shader completely with this shader sequence stored in the driver" one? Maybe it's just a gap in communication... does "pixel shader per-byte comparisons in D3DDP2OP_CREATEPIXELSHADER token handler in the latest Catalyst" mean the tokens are being analyzed byte by byte and changes are made based on that, or that the entire shader is being processed byte by byte to determine a checksum, and then the shader is replaced completely?
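
To put those two readings side by side, here is a hypothetical sketch assuming only that an SM1.x/2.0 shader reaches the driver as a stream of 32-bit tokens; the toy checksum, the pattern matching, and both function names are made up for illustration and are not taken from either driver.

Code:
#include <cstddef>
#include <cstdint>
#include <vector>

using Token = uint32_t;  // shader byte code treated as a stream of 32-bit tokens

// Interpretation A: checksum the whole token stream; on a match the driver can
// swap in a completely different shader it carries internally (all-or-nothing).
bool MatchesWholeShaderChecksum(const std::vector<Token>& tokens, uint32_t knownSum) {
    uint32_t sum = 0;
    for (Token t : tokens) sum = sum * 31u + t;   // toy checksum, placeholder only
    return sum == knownSum;
}

// Interpretation B: walk the stream token by token and, when a recognised
// instruction pattern appears, flip optimizer switches; the submitted shader
// still runs, just through a different back-end path.
bool ContainsTokenPattern(const std::vector<Token>& tokens, const std::vector<Token>& pattern) {
    if (pattern.empty() || tokens.size() < pattern.size()) return false;
    for (size_t i = 0; i + pattern.size() <= tokens.size(); ++i) {
        size_t j = 0;
        while (j < pattern.size() && tokens[i + j] == pattern[j]) ++j;
        if (j == pattern.size()) return true;
    }
    return false;
}

If detection works like the first function, the image quality question hinges entirely on the replacement shader stored in the driver; if it works like the second, the submitted shader still runs and the change could, in principle, be a legitimate optimization with an illegitimate trigger.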

Are there clipping plane shortcuts involved in both companies' approaches? Neither of them? This was a pretty major part of the prior issue discussion... do we assume nVidia did nothing of the sort or he would have mentioned it, or does he just consider that detail irrelevant because, in his opinion, any application detection is equally bad? Or maybe ATI did clipping plane cheating as well, and he has that specific reason to consider them both equally bad without mentioning the detail?

What is the difference in image quality? Higher, lower? By what criteria? Since this is the most direct of the important issues, and I can hardly believe it won't be a focus of the article, I'm pretty confident that at least one of my questions will be answered in it. But I don't know why the others haven't been answered yet... the precedent of the major issues in IHV application detection has already been established.

Not a prohibitive amount of things to specify, and they are all important.

The investigation of nVidia and ATI application detection is good, but the incomplete detail is disturbing. Application-based clipping plane adjustment (which both may be guilty or innocent of in this case) is not the same as shader adjustment... I detest that whole "both are cheating, so they are just as bad, even though one can possibly help in games and the other can't" simplification. Except in this case we have statements that image quality is different for both IHVs, yet there is no readily evident explanation of what was done (neither set of cards can lower precision below the PS 1.0 precision/range specification, AFAIK, so whatever they are doing requires more analysis to determine whether the optimization itself is invalid, even if the triggering method is clearly wrong).

I did notice references to shader programming optimization discussion in the Russian thread, which leads me to believe that these questions might already be answered for people who can read Russian. Babelfish doesn't handle the Cyrillic characters at all, unfortunately, or I'd be browsing from page 30 onwards right now looking for answers... maybe I'll try some other translation engines or go hunting for screenshots in those pages.
 
At least the equipment and drivers are current, and there are well-informed people willing to delve into the problem, unlike with the whole Quack/Quake issue way back when. The surviving analyses of that weren't terribly in-depth or satisfying in determining what the real deal was.

Hopefully level-headed people, free of any overwhelming bias to prove or disprove their favorite IHV's guilt, will be persistent enough to get to the bottom of the issue.
 
Mummy said:
Where are the guys who led us to believe ATi was the good one and NV the bad one?
I don't see many comments, yet on the NV cheats there were zillions (comments and threads) :)

I guess it's all about order of magnitude - nVidia's cheats seem to have been bigger, bolder and more widespread.

All the same, I think this leaves a bad taste in everybody's mouth about the industry in general.
 
Hanners said:
I guess it's all about order of magnitude - nVidia's cheats seem to have been bigger, bolder and more widespread.

Sure, that's true, but if Unwinder's findings confirm the ATi cheats, which (from what we can read in the English post) seem to lower IQ as well, do you understand that the whole "OK, we cheated, but we are good boys after all" bullshit ATi PR stated will have been all lies?

All the same, I think this leaves a bad taste in everybody's mouth about the industry in general.

What did you think?
Welcome to the real world, Neo :)

Like I said in a post months ago, the world is evil and everybody is like that; I can't understand how some of you got so convinced about the morality of a certain company. There are millions of dollars in the game, and there are people trained to spread bullshit, lies and false advertisement (read: PR guys). ATi has them too, like NV (NV's are "better", I agree on this), like Trident (the 3DMark Trident cheats were great fun; we saw what their PR meant in the interview about the relative speed against the Gef4/R300 :LOL:),
like every other company...

Only fanboys can trust PR guys; it just ends with the classic Team X vs Team Y fight (you can replace X with ATI and Y with nVidia or the reverse). It's like soccer, so which shirt do you wear? :)

After the demise of 3dfx I always owned nVidia cards. Why? Because I honestly thought they were better. I started to like ATi when they made the R200, and now with the R300 they have surpassed nVidia (again, IMO), so I bought an R300. I don't give a fuck about who is who, I just buy the product I believe is the best, and if I were to judge a product by looking at a company's morals I'd probably go live in a cave with brown bears; at least animals don't have to do PR :)
 
Mummy said:
Sure, that's true, but if Unwinder's findings confirm the ATi cheats, which (from what we can read in the English post) seem to lower IQ as well, do you understand that the whole "OK, we cheated, but we are good boys after all" bullshit ATi PR stated will have been all lies?

ATi only really made a statement about the 3DMark 2003 optimisations, which they said they would remove in the next driver release. I'm hoping we'll see them take the same line on these other cheats that have come to light in their drivers.


Mummy said:
Only fanboys can trust PR guys; it just ends with the classic Team X vs Team Y fight (you can replace X with ATI and Y with nVidia or the reverse). It's like soccer, so which shirt do you wear? :)

After the demise of 3dfx I always owned nVidia cards. Why? Because I honestly thought they were better. I started to like ATi when they made the R200, and now with the R300 they have surpassed nVidia (again, IMO), so I bought an R300. I don't give a fuck about who is who, I just buy the product I believe is the best, and if I were to judge a product by looking at a company's morals I'd probably go live in a cave with brown bears; at least animals don't have to do PR :)

I agree with you on not trusting PR guys and not purchasing products based on a company's morals, but that's not really the problem here - The issue now is that we can't trust the benchmarks and reviews that used to play such a big role in choosing which card to purchase.

How can you choose which product is going to be the best for you when both the major IHVs are pulling the wool over your eyes?
 
ATi only really made a statement about the 3DMark 2003 optimisations, which they said they would remove in the next driver release

That's exactly the problem: they got busted, they admitted it, they played the good boys, while they were still cheating in 3DMark01... err, what?

How can you choose which product is going to be the best for you when both the major IHVs are pulling the wool over your eyes?

Well, even if I like 3DMark's synthetic tests (not so much the game tests), the best way to judge is and always has been with games: I like games 1, 2, 3, 4, 5 and the R300 is faster in those rather than games 6, 7, 8, 9, 10 where NV wins, so I get the R300, for example.

3DMark doesn't reflect game scores much, but that's simple: it's a program that throws a certain set of data at your card, with certain shaders, certain render states and so on. It will differ from games 1, 2, 3, 4, which will throw different data at your 3D card, with different shaders and so on, so you can't really compare much in the end.

You can make predictions like: in the Pixel Shader 2.0 tests the R300 wins against the GeForce FX, so you might assume that a PS 2.0-intensive game will be faster on the R300. But of course that might not be the whole picture, so in the end you just test the card with that game and the game is over :)
 
I would be very interested in two things with this process:
1) The IQ differences explored. This would be the highest priority - as in researching the differences and confirming that they are 100% due to the application detection, and not a bug introduced as a side effect of the process (e.g. a possible mistake in props/caps reported by the wrapper).

2) Creating a "control" wrapper to isolate how much of the performance difference might be caused by the wedge itself. I have NO idea how Unwinder's tool works, but if it's anything like past attempts from others, it creates a wrapper wedge between the application and the API, which in turn adds an extra level of indirection to every API call. This by itself causes a very measurable drop in performance. A meatless version of the same (same wedge and indirection, but doing absolutely nothing) should be benchmarked to remove that overhead from the performance results (IF this applies to the implementation of the tool).
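
To illustrate that "meatless wedge" idea, here is a minimal sketch using a made-up IDevice interface rather than the real Direct3D one, so it stays self-contained; the interface, class, and method names are all hypothetical.

Code:
#include <cstdio>

// Stand-in for a 3D API device interface (NOT the real D3D interface).
struct IDevice {
    virtual void DrawPrimitive(int primitiveCount) = 0;
    virtual ~IDevice() = default;
};

// Stand-in for the real driver-backed device.
struct RealDevice : IDevice {
    void DrawPrimitive(int primitiveCount) override { (void)primitiveCount; /* real work here */ }
};

// Pass-through wrapper: intercepts every call but changes nothing.
// Benchmarking through this measures the cost of the extra indirection alone,
// which could then be subtracted from runs made with the full anti-detect wrapper.
struct PassThroughDevice : IDevice {
    explicit PassThroughDevice(IDevice* inner) : inner_(inner) {}
    void DrawPrimitive(int primitiveCount) override { inner_->DrawPrimitive(primitiveCount); }
private:
    IDevice* inner_;
};

int main() {
    RealDevice real;
    PassThroughDevice wedge(&real);
    for (int i = 0; i < 1000000; ++i)
        wedge.DrawPrimitive(3);   // one extra virtual call and pointer hop per API call
    std::puts("done");
    return 0;
}

Timing the same benchmark through a do-nothing wedge like this would show how much of the measured drop, if any, comes from the interception overhead itself rather than from the blocked detections.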
 
Mummy said:
Well, even if I like 3DMark's synthetic tests (not so much the game tests), the best way to judge is and always has been with games: I like games 1, 2, 3, 4, 5 and the R300 is faster in those rather than games 6, 7, 8, 9, 10 where NV wins, so I get the R300, for example.

But it looks like nVidia are cheating in timedemos as well - So now what?
 
Mummy said:
Where are the guys who led us to believe ATi was the good one and NV the bad one?
I don't see many comments, yet on the NV cheats there were zillions (comments and threads) :)

I wonder what hellbinder will say, oh, lemme guess:

"ATi had to do those tricks because nVidia made them"

I believe that to an extent. Cheat or get left behind... or try to expose the cheating and hope it doesn't come back to bite you in the ass. Still... it's worthless commenting before we know the nature of the optimisations (which was the case with the 3DM hacks).

I think ATi's app-specific optimisations are largely restricted to the D3D driver, BTW.

MuFu.
 
I think optimizing for benchmarks is OK, even a positive thing (the more driver testing the better), so long as they are generic optimizations that will show up elsewhere. If it's just hacking benchmarks, then it's useless.
 
Mummy said:
Hanners said:
I agree with you on not trusting PR guys and not purchasing products based on a company's morals, but that's not really the problem here - The issue now is that we can't trust the benchmarks and reviews that used to play such a big role in choosing which card to purchase.

How can you choose which product is going to be the best for you when both the major IHVs are pulling the wool over your eyes?

Well, even if I like 3DMark's synthetic tests (not so much the game tests), the best way to judge is and always has been with games: I like games 1, 2, 3, 4, 5 and the R300 is faster in those rather than games 6, 7, 8, 9, 10 where NV wins, so I get the R300, for example.

3DMark doesn't reflect game scores much, but that's simple: it's a program that throws a certain set of data at your card, with certain shaders, certain render states and so on. It will differ from games 1, 2, 3, 4, which will throw different data at your 3D card, with different shaders and so on, so you can't really compare much in the end.

You can make predictions like: in the Pixel Shader 2.0 tests the R300 wins against the GeForce FX, so you might assume that a PS 2.0-intensive game will be faster on the R300. But of course that might not be the whole picture, so in the end you just test the card with that game and the game is over :)

Mummy,

I don't think you understand what Hanners was trying to convey. I think I summed up his idea in a message I posted in the Shadermark thread in the 3D Technology forum.

AzBat said:
Now you're starting to see the reasoning for getting upset with the 3DMark cheats. It doesn't matter if it's a synthetic benchmark, or you think it's worthless, or you think it doesn't represent games, or that Futuremark is no longer calling it a cheat. Doing the kind of cheating/optimizing they did in that one application is enough to distrust any performance metrics in all applications, even games. NVIDIA should be ashamed. And anybody who based their buying decision on those inflated results should really consider a class-action lawsuit for fraud.

As you can see, the problem is not just with 3DMark, but with all applications used for benchmarking.

Tommy McClain
 
Time for a clean slate.

Hell, I'll forgive either/both of them any past cheats...as long as there are NO MORE!

Can't change the past, but the company or companies that step forward, say they won't do it anymore, and ACTUALLY FOLLOW THROUGH ON THAT are going to be the ones that get my dollars.
 
Re: Time for a clean slate.

digitalwanderer said:
Hell, I'll forgive either/both of them any past cheats...as long as there are NO MORE!

I would too. But...

digitalwanderer said:
Can't change the past, but the company or companies that step forward, say they won't do it anymore, and ACTUALLY FOLLOW THROUGH ON THAT are going to be the ones that get my dollars.

Unfortunately, I can't see any company actually doing that unless forced to by a court of law. It's going to take them losing a court case on the matter before they actually do it and stick with it.

Tommy McClain
 