New 3DMark03 Patch 330

In light of this new info, would [H] say that the 9800 Pro and the 5900 Ultra are both fine cards for today's games, but that the 9800 Pro is far better suited for future games? After all, GT1-3 better reflect current game performance while GT4 reflects future game performance. Will we see [H] rewrite their review of the NV35? :?:

Also, I do not follow this line of reasoning that if you cannot see it, it is not a cheat. If you put two different cars on a dynamometer, both go nowhere, but one will make more power than the other. DO YOU THINK THAT INFO WOULD BE IMPORTANT!
 
BTW, just saw this at HotHardware:

"Since NVIDIA is not part in the FutureMark beta program (a program which costs of hundreds of thousands of dollars to participate in) we do not get a chance to work with Futuremark on writing the shaders like we would with a real applications developer. We don't know what they did, but it looks like they have intentionally tried to create a scenario that makes our products look bad. This is obvious since our relative performance on games like Unreal Tournament 2003 and Doom 3 shows that the GeForce FX 5900 Ultra is by far the fastest graphics on the market today."
 
RussSchultz said:
I'm sure I'll get roasted for this, but...

I haven't looked at the PDF, but I wonder what they did to avoid 'detection'?

Well, try reading the PDF. ;)

FutureMark said:
We have used various techniques to prevent NVIDIA drivers from performing the above detections. We have been extremely careful to ensure that none of the changes we have introduced causes differences in either rendering output or performance. In most cases, simple alterations in the shader code – such as swapping two registers – have been sufficient to prevent the detection.
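
Just to picture what "swapping two registers" buys them, here's a toy illustration (invented shader text and a deliberately naive check -- nothing from the actual PDF or anyone's real driver) of why the swap slips past a pattern-matching detection while the math stays identical:

    # Naive "is this the shader I know?" check, compared byte-for-byte.
    DETECT_PATTERN = "mul r0, t0, c0\nadd r1, r0, c1\nmov oC0, r1"

    def driver_detects(shader_source: str) -> bool:
        return shader_source == DETECT_PATTERN

    original = "mul r0, t0, c0\nadd r1, r0, c1\nmov oC0, r1"
    swapped  = "mul r1, t0, c0\nadd r0, r1, c1\nmov oC0, r0"  # r0 and r1 exchanged

    print(driver_detects(original))  # True  -> a special-case path could kick in
    print(driver_detects(swapped))   # False -> same output on screen, detection misses

A real driver would more likely key on something like a checksum of the compiled bytecode, but the principle is the same: the fingerprint changes, the rendered result doesn't.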

Going beyond that, I wonder what, if anything, might have upset the apple cart--changing the scores in ways beyond simply turning off the detection stuff.

If it's something like re-ordering calls or load order -- something that, even though it generates the same results, changes the method of going about it -- that could be the sort of upset to the apple cart that contributes to the depressed scores.

Except the thing is, the scores now reflect the "original" performance of the hardware, before the "cheats" were introduced (such as pixel shading performance being cut in half).

Perhaps I'm jaded, but I'm suspicious of any high-profile app being used as a benchmark.

Which is really the lesson for us all. However, I am LESS jaded by a high-profile app being used as a benchmark if they are taking active measures to expose and prevent cheating than I am by a benchmark where it's basically ignored.

My take on optimization: if the card can do it in a way that doesn't target a particular application (i.e. the benchmark), I'm OK with the card optimizing shaders, reordering the rendering order, etc., as long as the output is what is expected. I'm OK with this even if the optimization is happening within the benchmark, AS LONG AS IT'S NOT SPECIFICALLY TARGETED AT THE BENCHMARK.

Not sure I follow you. Define "optimizing shaders". If that means "identify a shader and replace the shader code", then I disagree. If it means "I get this stream of data, oblivious to its source, and I reorder it", then I agree.

But I don't see how disabling detection routines would harm an "optimization" of the latter.

If one dufus stuffs their shaders with nops, then somebody else will too. If the driver can fill open execution slots by combining instructions or re-ordering them--I'm all for it.

I'm for it...if it can be done in a generic way. There should be NO need to detect a shader program or an application.

If your hardware is SO sensitive to things like "swapping registers", and in order to "optimize" for such things you must know ahead of time what the code is ultimately doing, then your hardware deserves to be penalized.
 
nelg said:
BTW, just saw this at HotHardware:

"Since NVIDIA is not part in the FutureMark beta program (a program which costs of hundreds of thousands of dollars to participate in) we do not get a chance to work with Futuremark on writing the shaders like we would with a real applications developer. We don't know what they did, but it looks like they have intentionally tried to create a scenario that makes our products look bad. This is obvious since our relative performance on games like Unreal Tournament 2003 and Doom 3 shows that the GeForce FX 5900 Ultra is by far the fastest graphics on the market today."

So lemme get this straight: Nvidia's claiming that since FutureMark is intentionally trying to make them look bad, they have the right to not only supersede FM's code, but replace it with code that DOESN'T PRODUCE THE SAME RESULT? And that STILL doesn't address the static clip planes. My God, someone's been smoking something hallucinogenic again.
 
All modern PCs have out-of-order execution and register renaming. These aren't cheats, but bona fide optimizations to keep their execution units busy. The assembler we use for our DSP breaks apart VLIW instructions and reorders them for us. (Generally we're happy about it, though occasionally we do need to turn it off for specific reasons.) It does a good job and is vigilant, whereas the coder many times is not.

And yes, like I said, as long as you don't depend on detecting "Oh, it's 3dmk3 running, I need to change the third shader loaded to X", I'm all for the driver tearing apart shaders, re-ordering them, and reconstituting their function, as long as the output is the same as before.
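
For what it's worth, here's a rough sketch of the generic kind of re-ordering I mean -- it works on whatever instruction stream it's handed and has no idea which application produced it. The (op, dest, sources) tuple format is invented for the example:

    def reorder(prog):
        """Hoist an independent instruction into the stall slot behind a
        long-latency result; the program's output is unchanged."""
        out = list(prog)
        for i in range(len(out) - 2):
            dest = out[i][1]                # register written by the slow instruction
            nxt, later = out[i + 1], out[i + 2]
            safe_to_swap = (dest in nxt[2]              # next instr waits on dest
                            and dest not in later[2]    # later doesn't read dest...
                            and later[1] != dest        # ...or overwrite it
                            and nxt[1] not in later[2]  # no read-after-write hazard
                            and later[1] not in nxt[2]  # no write-after-read hazard
                            and later[1] != nxt[1])     # no write-after-write hazard
            if safe_to_swap:
                out[i + 1], out[i + 2] = later, nxt
        return out

    prog = [
        ("texld", "r0", ["t0", "s0"]),  # long-latency texture fetch
        ("mul",   "r1", ["r0", "c0"]),  # stalls waiting on r0
        ("add",   "r2", ["t1", "c1"]),  # independent; can fill the stall slot
    ]
    print(reorder(prog))  # texld, add, mul -- same results, fewer idle cycles

No shader or application detection anywhere in there, and it keeps working no matter who wrote the shader.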
 
Nazgul said:
nelg said:
BTW, just saw this at HotHardware:

"Since NVIDIA is not part in the FutureMark beta program (a program which costs of hundreds of thousands of dollars to participate in) we do not get a chance to work with Futuremark on writing the shaders like we would with a real applications developer. We don't know what they did, but it looks like they have intentionally tried to create a scenario that makes our products look bad. This is obvious since our relative performance on games like Unreal Tournament 2003 and Doom 3 shows that the GeForce FX 5900 Ultra is by far the fastest graphics on the market today."

So lemme get this straight: Nvidia's claiming that since FutureMark is intentionally trying to make them look bad, they have the right to not only supersede FM's code, but replace it with code that DOESN'T PRODUCE THE SAME RESULT? And that STILL doesn't address the static clip planes. My God, someone's been smoking something hallucinogenic again.

is it 4:20 yet? ;)
 
Perhaps, Ostol; and I don't know, Nazgul. I'm just asking how the information and new versions are distributed within the beta group.

If it's just a relationship between each individual member and FM, then ET/B3D won't ever be able to talk about issues discovered by other members.
 
This is obvious since our relative performance on games like Unreal Tournament 2003 and Doom 3 shows that the GeForce FX 5900 Ultra is by far the fastest graphics on the market today

UT2003 doesn't use pixel shaders to any extent, and Doom 3 is allowed to run lower-precision modes with proprietary code paths... Neither one of them is a valid argument.

If 3DMark03 is written to DX9, how can it show any bias when there is only one standard? :?:
 
RussSchultz said:
And yes, like I said, as long as you don't depend on detecting "Oh, it's 3dmk3 running, I need to change the third shader loaded to X", I'm all for the driver tearing apart shaders, re-ordering them, and reconstituting their function, as long as the output is the same as before.

I think we're in agreement.

The thing is, I don't really see how any "application detection defeat" mechanism would "defeat" a legitimate optimization of the type we're talking about.

In your previous example:
If one dufus stuffs their shaders with nops, then somebody else will too. If the driver can fill open execution slots by combining instructions or re-ordering them--I'm all for it.

What if FM is one of these dufuses...and they stuffed a shader with nops. nVidia's driver "optimizes" that and increases performance.

If FM can "defeat" the optimization by, say, adding ONE MORE nop...and the optimization fails...then the original optimization is not a very robust solution, and may as well be application-specific.
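
To put that contrast in toy form (made-up shader text and a made-up replacement table -- this isn't anybody's actual driver): a lookup keyed on an exact fingerprint dies the moment FM adds one more nop, while a genuinely generic pass doesn't care.

    import hashlib

    FAST_REPLACEMENTS = {}  # keyed on an exact fingerprint of a "known" shader

    def register_replacement(shader, replacement):
        FAST_REPLACEMENTS[hashlib.md5(shader.encode()).hexdigest()] = replacement

    def lookup_optimize(shader):
        # Application-specific: only fires on a byte-for-byte match.
        return FAST_REPLACEMENTS.get(hashlib.md5(shader.encode()).hexdigest(), shader)

    def generic_optimize(shader):
        # Generic: strips padding nops from whatever it's handed.
        return "\n".join(line for line in shader.splitlines() if line.strip() != "nop")

    original = "mul r0, t0, c0\nnop\nmov oC0, r0"
    register_replacement(original, "mul r0, t0, c0\nmov oC0, r0")

    tweaked = original + "\nnop"                 # FM adds ONE more nop
    print(lookup_optimize(tweaked) == tweaked)   # True: the lookup misses, no speedup
    print("nop" in generic_optimize(tweaked))    # False: the generic pass still strips them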
 
Since NVIDIA is not part in the FutureMark beta program (a program which costs of hundreds of thousands of dollars to participate in) we do not get a chance to work with Futuremark on writing the shaders like we would with a real applications developer. We don't know what they did, but it looks like they have intentionally tried to create a scenario that makes our products look bad. This is obvious since our relative performance on games like Unreal Tournament 2003 and Doom 3 shows that the GeForce FX 5900 Ultra is by far the fastest graphics on the market today."
These guys never cease to sink to new lows every chance they get. Is there anyone here who can seriously defend this completely dishonest and unethical company??? It really disturbs me that a group of people can LIE LIE LIE LIE constantly, all the time.

How do these people live with themselves... Sickening.
 
Brent said:
is it 4:20 yet? ;)

heh, not my bag. but apparently it's Nvidia's :)

I still can't believe Nvidia responded the way they did; it's probably the worst way they could go. They're not only COMPLETELY ignoring what they did (after stating that they DON'T do it), but they're accusing FutureMark of intentionally trying to make them look bad. That's an awfully big gauntlet to throw down. I hope Futuremark responds to the accusation quickly and decisively.
 
Russ:

That's reasonable. If the drivers can rip a shader apart and reassemble it on the fly to be faster than it was before, that's great. It's a valid optimization imho. If, on the other hand, there is a static, hardcoded code replacement, it's not. It will never happen for anyone else's code, and certainly not for *my* code.

If nvidia or ati wants to talk with futuremark and claim they have DX9 code that is faster than Futuremark's own and isn't the result of re-ordering, loop unrolling, or any other kind of dynamic optimization, then I personally think it's up to the application developer, not the hardware vendor, to implement the algorithm. Replacing bits of code for specific applications in the driver is at best a hack, at worst cheating like in 3dmark.

Nite_Hawk
 
nelg said:
BTW, just saw this at HotHardware:

"Since NVIDIA is not part in the FutureMark beta program (a program which costs of hundreds of thousands of dollars to participate in) we do not get a chance to work with Futuremark on writing the shaders like we would with a real applications developer. We don't know what they did, but it looks like they have intentionally tried to create a scenario that makes our products look bad. This is obvious since our relative performance on games like Unreal Tournament 2003 and Doom 3 shows that the GeForce FX 5900 Ultra is by far the fastest graphics on the market today."

Oooops!

Now, there's no name on the nVidia source here, but if they stand by this we could well be heading into a nasty legal battle pretty soon.

There's a subtle but very important difference between saying that a widely accepted industry benchmark doesn't reflect how real games are made and flat-out saying that it is biased against the market leader and thus useless. In other words: Futuremark has just come under an attack that could be lethal for them. :oops:
 
I don't suppose anyone could post some comparison screenshots for the GeForce FX under the unpatched and patched 3DMark03? GT4 and the PS 2.0 tests would be nice. :) It'd be interesting to see if there are any quality differences during the normal course of the benchmark (in addition to geometry not being clipped out, of course).
 
Moose said:
Nite_Hawk said:
HardOCP's reply is on their front page. They blame futuremark.

Nite_Hawk


...removes Hardocp from list of favorites.....

Shame, sometimes they have a good scoop. [H]ardOCP is off my list as well. I never agreed with their "benchmarking right" argument, though.
 
Joe DeFuria said:
If FM can "defeat" the optimization by, say, adding ONE MORE nop...and the optimization fails...then the original optimization is not a very robust solution, and may as well be application-specific.

I agree. But consider: suppose that whatever change you make breaks the optimizer (I'd presume due to poor testing). If you were changing the shader to try to expose 'detection' routines, did you just expose a detection routine? Or did you break the optimizer?

From outside the box, you can't tell.

Now, let's assume it was a bug in the optimizer and next week a driver comes out that fixes it. From the outside, you can't tell--it just looks like they've gone and developed a new detection routine that hasn't been exposed yet. I'm not saying this is what's going on, but it is a possibility.

It could be that ATI's shader-optimizing technology is just plain better than NVIDIA's, so they don't show up as being 'detected' (because they do a better job of re-ordering, or don't need to).

Different chipsets have particular 'happy modes' of usage. One may not mind texture changes but hate blend state changes; another might be the other way around; a third may be agnostic to those two items but have an issue with something else entirely. All of these modes are completely valid under D3D--if Futuremark changes from grouping by one method to grouping by another, you could change the performance numbers. Who's to say which way was right? The one where all three suffer the least? The one where product A is shown in a good light, but the other two suffer? The one where products B and C perform well, but A performs abysmally?
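
To put a toy number on that (the draw calls, state names, and grouping keys below are all made up):

    draws = [
        {"texture": "rock",  "blend": "opaque"},
        {"texture": "water", "blend": "alpha"},
        {"texture": "rock",  "blend": "alpha"},
        {"texture": "water", "blend": "opaque"},
    ]

    def state_changes(ordered, key):
        changes, prev = 0, None
        for d in ordered:
            if d[key] != prev:
                changes, prev = changes + 1, d[key]
        return changes

    by_texture = sorted(draws, key=lambda d: d["texture"])
    by_blend   = sorted(draws, key=lambda d: d["blend"])

    # Grouped by texture: 2 texture changes but 3 blend changes.
    print(state_changes(by_texture, "texture"), state_changes(by_texture, "blend"))
    # Grouped by blend: 3 texture changes but only 2 blend changes.
    print(state_changes(by_blend, "texture"), state_changes(by_blend, "blend"))

A chip that hates blend changes prefers the second ordering; a chip that hates texture changes prefers the first. Both submission orders are equally valid D3D, and neither tells you which chip is "better".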

And that's sort of what I'm getting at with the re-ordering of instructions, etc. Simply because the chosen change affects one board more negatively than the other doesn't necessarily mean that it's a lesser product. It may just mean that you've chosen a very suboptimal path and aren't showing it in the best light. A different change might have had completely opposite results.

And that, partially, is what I perceive to be NVIDIA's complaint: they own a large portion of the market, and if 3dmk3 is rendering things in a mode that shows their product in a bad light, one in which re-ordering would make a large positive difference, are they (3dmk3) doing a disservice or not? Would an engine writer necessarily take the path that 3dmk3 did, knowing a large portion of their target market would be ill served? Does that make the benchmark a valid means of comparing real-world performance?

But, of course, that's all philosophical speculation on my part.
 
It is indeed disturbing how HotHardware, without any hesitation, buys the argument of Futuremark's anti-nVidia agenda and goes on to say that they hope nVidia and Futuremark can sort out their differences, AS IF that were the issue here. Sad.

Also, the repeated references to the purported high cost of beta membership are pretty lame. As if it had any meaning to a company like nVidia.
 
Nazgul said:
nelg said:
BTW, just saw this at HotHardware:

"Since NVIDIA is not part in the FutureMark beta program (a program which costs of hundreds of thousands of dollars to participate in) we do not get a chance to work with Futuremark on writing the shaders like we would with a real applications developer..."

Essentially, NVidia is saying that the only way you are going to get shaders that work quickly and correctly on their hardware is if NVidia helps you write them. I guess small developers or anyone who wants to use shaders from an off-the-shelf library are out of luck targeting NVidia products.
 
Hellbinder[CE] said:
After reading over the whole situation, if ATI is indeed replacing the shader with a version more suited to its architecture, even if it produces the exact same output, I gotta cry foul. If this were a game, the benchmark run would have used content that existed in the game, and thus optimizations like this would enhance the overall game performance as well. Therefore, one could argue that the "optimization" did not unfairly skew the benchmark result. In an ideal world, ATI would be able to work with the game developer to get the optimized shader into the game as an ATI-friendly rendering path.
I have to agree. Replacing a shader routine imo is exactly the same thing as inserting clip planes.

Further, imo, this provides more evidence that 3dmark03 is not a well-written, sound benchmark. If even ATi is secretly rewriting code that makes better sense, then imo ATi + Nvidia = Futuremark makes NO SENSE.

This is a complete turnaround for me, but I think the evidence is pretty compelling.

I think exactly the opposite regarding 3dmark (and Futuremark) is true. It is the ONLY benchmark that is actually being checked for driver cheats. Other game benchmarks are not being checked for cheats at all, because the games' developers have neither the time nor the money for it. The fact that Futuremark has detected this has made them MORE valid as a benchmark.

Who knows what other benchmarks out there are being cheated on?
All we know is that 3dmark scores are clean now.
 
Xspringe said:
All we know is that 3dmark scores are clean now.

I think all we know is Futuremark has made an attempt to prevent cheating. We can guess it went well, but we don't KNOW for sure the scores are clean.
 