New 3DMark03 Patch 330

martrox said:
Bottom line is nVidia is GUILTY, ATI may be guilty but, ATM, that's not proven... and there is a pretty good chance that ATI will not be proven guilty.....

And what evidence leads you to assume that? I'm not saying I know something but I think it's way, way too early to make such assumptions.
 
martrox,

Where did I say that ATI was guilty of cheating? I said that there's a drop of more or less 8%, and only in GT4, and that it could be optimisation or cheating. May I quote myself?

Evildeus said:
You will also notice that Ati was also doing some optimisations/cheats
 
John Reynolds said:
And what evidence leads you to assume that? I'm not saying I know something but I think it's way, way too early to make such assumptions.

It's clear to me / has been proven "beyond a reasonable doubt" that SOME of the nVidia "optimizations" are in fact cheats. For example, the cheats that involve knowing the position of the camera (clip planes / some buffer clears). The fact that these cheats rely on a fixed camera path makes it a 95% certainty that they are deliberate cheats. The fact that these cheats turn on based on some sort of app / scene detection moves that to 99.44%
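
To make the clip-plane point concrete, here's a toy sketch (my own illustration, not actual driver code; the plane values, positions, and function names are all made up) of why this kind of cheat only holds up while the camera stays on the canned benchmark path:

```cpp
#include <cstdio>

// Toy illustration only: a hard-coded clip plane chosen offline so that,
// along the benchmark's fixed camera path, nothing visible ever falls behind it.
struct Vec3 { float x, y, z; };
struct Plane { Vec3 n; float d; };   // dot(n, p) + d >= 0 means "keep"

static float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Values are arbitrary; the point is that they never change at run time
// and never look at where the camera actually is.
static const Plane kCannedPlane = { { 0.0f, 0.0f, -1.0f }, 50.0f };

// The "cheat": when the benchmark is detected, skip anything behind the
// canned plane. The camera is not consulted at all -- the plane was tuned
// against the known, fixed camera path ahead of time.
bool drawObject(const Vec3& pos, bool benchmarkDetected) {
    if (benchmarkDetected)
        return dot(kCannedPlane.n, pos) + kCannedPlane.d >= 0.0f;
    return true;  // honest path: draw whatever the application submitted
}

int main() {
    Vec3 distantGeometry = { 0.0f, 0.0f, 80.0f };  // lies behind the canned plane
    // On the default camera path this geometry is never on screen anyway,
    // so the skipped work only shows up as a higher score...
    std::printf("drawn during benchmark run: %d\n", drawObject(distantGeometry, true));
    // ...but run the developer build's free camera and the culled geometry
    // is visibly missing, which is exactly how the cheat was exposed.
    return 0;
}
```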

Replacing shader code with other code that does not EXACTLY reproduce results is also cheating. It is even "highly questionable" to replace shader code with different code that DOES produce the same exact results, though I could at least see an argument for that not being cheating.

I will repeat what I said earlier. This does not mean that I assume that ATI's optimization is a cheat (or that it isn't). Nor does it mean that I assume EVERY nVidia optimization is a cheat. (There can be legitimate optimizations that rely on detection, IMO.)

But in those specific cases at least, it has now been proven to me that nVidia has cheated.
 
I guess what we are going to need here is a better idea of just what is considered a cheat vs. an optimisation. And it's up to FM to establish this and present it to the IHVs. Does Dave's supposition qualify as a cheat?

And, IF FM decides that what ATI has done is a cheat, on what scale should we compare ATI and nVidia? Is what ATI may have done comparable to what nVidia has done? I really don't think that the two can even be compared....
 
martrox said:
I guess what we are going to need here is a better idea of just what is considered a cheat vs. an optimisation. And it's up to FM to establish this and present it to the IHVs. Does Dave's supposition qualify as a cheat?

Well, I think FM is pretty clear about it: If you need to "detect" something (application, scene, shader), then consider it a cheat.

I think that's a valid position to take for a synthetic benchmark.
 
Evildeus said:
martrox,

Where did I say that ATI was guilty of cheating? I said that there's a drop of more or less 8%, and only in GT4, and that it could be optimisation or cheating. May I quote myself?

Evildeus said:
You will also notice that Ati was also doing some optimisations/cheats

optimisations/cheats...... what's that look like to you, ED? Looks like the word "cheats" to me..... And, at this point, can you admit that nVidia is cheating?......
 
I am glad to see the policing of this benchmark is keeping a level playing field. The consumer and gamer don't need to be misled, and I thank Futuremark and its BETA partners for doing the right thing.
 
Joe DeFuria said:
martrox said:
I guess what we are going to need here is a better idea of just what is considered a cheat vs. an optimisation. And it's up to FM to establish this and present it to the IHVs. Does Dave's supposition qualify as a cheat?

Well, I think FM is pretty clear about it: If you need to "detect" something (application, scene, shader), then consider it a cheat.

I think that's a valid position to take for a synthetic benchmark.

I have to agree with it, then....my bad......
 
martrox said:
I guess what we are going to need here is a better idea of just what is considered a cheat vs. an optimisation. And it's up to FM to establish this and present it to the IHVs. Does Dave's supposition qualify as a cheat?

We're very clear on this matter. In fact, it's all documented in the 3DMark help files and license agreement. Benchmark-specific optimizations (i.e. ones which detect that 3DMark is running and change the way things work) are not allowed during a benchmark run.

We have a long (4-6 month) specification phase for each benchmark that we release. Once the benchmark is released, the data sets may not be changed, in order to ensure apples-to-apples comparisons across different hardware. This is a policy we've had in place since we started working on benchmarks in 1997, so I think we've been pretty consistent on this point.

The objective of 3DMark is not to find out who writes the most efficient shaders and can replace them in drivers during a benchmark run. Were that our goal, we'd do an open-source benchmark or construct the whole product completely differently.

Cheers,

AJ
 
I am disappointed if it is true that ATi did something on one of the tests to increase its score--especially since it was so unnecessary.

Here are my '03 scores for my 9800P (128MB) clocked at 445MHz, a stock-configuration card (I run the card continuously @ 455MHz/365.x without difficulty):

3.2 = 5860

3.3 = 5776

Running the 3.4 Catalysts, with 3D Mark '03 in its default configuration, FSAA and AF set to Application Preference in the Control Panel. My CPU is an Athlon XP 2000+ Thoroughbred; the motherboard is a Chaintech 7NJS (nF2 chipset running the latest BIOS and the 2.41 nVidia drivers, excluding the nVidia audio drivers). 1GB of PC2100 RAM running at 133MHz (2x512).

I am not surprised to see the score some ~1100 points higher than the $500 NV35 reference-design review cards from nVidia when the benchmark is rendered correctly by both products. More to the point, it is obvious why nVidia decided to cheat--though it's certainly deplorable.

I also find it hypocritical in the extreme that nVidia would publicly denigrate the benchmark and resign from the 3D Mark beta program, yet still recognize the marketing potential of 3D Mark to the degree that it felt taking a chance on cheating was justifiable in order to post inflated scores it hoped would stimulate sales of the yet-to-ship product. Well, it flipped a coin and lost. Here's hoping nVidia will recognize that not only do its products need work, but so do its ethics.

I think that the variation in the ATi scores is so slight that there might be a simple explanation as to what they did--although I cannot condone a driver "recognizing" a benchmark test and altering its behavior, even slightly. The interesting thing is that whatever ATi did wasn't enough to push the difference in score beyond a statistical norm variation of + or - 3% (the drop from 5860 to 5776 is only about 1.4%). Certainly nothing approaching nVidia's ~24%+ differential, which is of course well outside the range of normal variation.

Kudos to FutureMark for handling this as they have done as the issue is now settled beyond a reasonable doubt.

Edits: typos
 
Joe DeFuria said:
martrox said:
I guess what we are going to need here is a better idea of just what is considered a cheat vs. an optimisation. And it's up to FM to establish this and present it to the IHVs. Does Dave's supposition qualify as a cheat?

Well, I think FM is pretty clear about it: If you need to "detect" something (application, scene, shader), then consider it a cheat.

I think that's a valid position to take for a synthetic benchmark.

I am not sure about this approach. 3DMark is made so that it approximates game performance. We all know that games normally have specific codepaths for specific hardware. Each GPU (nv/ati) does certain things a lot better than the other, so normally a game dev (with guidance from an IHV) would optimise for those things to get the maximum performance possible.
Now 3DMark does most of these things in a fixed way (or the DX9 way) and doesn't have a specific codepath. This in effect means its performance doesn't correspond to what a particular GPU is fully capable of. Hence you might see optimisations from IHVs to show what their GPU is truly capable of in 3DMark. They just can't say "to hell with 3DMark", since a lot of people and OEMs use it to judge a gfx product.
The question here is what the line is between such optimisations and cheating. Should Futuremark also use the approach of game devs and have specific codepaths to remove all doubt from everyone's mind?
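
To be clear about what I mean by an app-side codepath, here is a rough sketch (the vendor IDs shown are the standard PCI IDs for nVidia and ATI, but the paths and function names are made up for illustration). The key difference from driver-side "optimisations" is that the application itself makes the choice, openly:

```cpp
#include <cstdint>
#include <cstdio>

// Illustrative only: in a game, the application itself picks a codepath per
// GPU and does so openly; 3DMark instead runs one fixed DX9 path for everyone.
enum class RenderPath { GenericDX9, NvidiaTuned, AtiTuned };

// 0x10DE and 0x1002 are the standard PCI vendor IDs for nVidia and ATI;
// everything else here is a made-up example of app-side path selection.
RenderPath pickRenderPath(uint32_t pciVendorId) {
    switch (pciVendorId) {
        case 0x10DE: return RenderPath::NvidiaTuned;
        case 0x1002: return RenderPath::AtiTuned;
        default:     return RenderPath::GenericDX9;
    }
}

int main() {
    // A benchmark that wanted per-IHV paths would have to do this in the
    // open, rather than let a driver swap shaders behind its back.
    std::printf("path chosen for vendor 0x1002: %d\n",
                static_cast<int>(pickRenderPath(0x1002)));
    return 0;
}
```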
 
OK...I am definitely wrong here for saying ATI may not be guilty......
I am now supping on a small black bird that makes "caw" sounds.....

Edited to make sense......hehe!
 
Evildeus said:
Joe,
I think John was speaking of that:
and there is a pretty good chance that ATI will not be proven guilty.....

Yes, I was. Thanks for pointing that out... I wasn't being clear enough even with the added italics.

And, for the record, I'm not saying ATi is guilty. I'm only saying let's not assume anything yet.
 
What happens to the output if you make very slight changes (like making the water red) to these detected shaders and re-run?
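
Presumably either the driver stops recognizing the edited shader (the red tint shows up and the performance gain disappears), or it keeps substituting its own code and the edit has no visible effect at all; both outcomes would be telling. A rough sketch of the comparison step for such a test, assuming you can capture the framebuffer from both runs (the names and threshold are illustrative only):

```cpp
#include <cstddef>
#include <cstdio>
#include <cstdlib>
#include <vector>

// Illustrative only: compare a frame captured with the original shader
// against one captured after a small, deliberately visible edit (e.g. the
// water tinted red). If the two frames are essentially identical, the
// driver is clearly not running the shader the application supplied.
double meanAbsDiff(const std::vector<unsigned char>& a,
                   const std::vector<unsigned char>& b) {
    if (a.empty() || a.size() != b.size()) return -1.0;  // not comparable
    double sum = 0.0;
    for (std::size_t i = 0; i < a.size(); ++i)
        sum += std::abs(static_cast<int>(a[i]) - static_cast<int>(b[i]));
    return sum / static_cast<double>(a.size());
}

int main() {
    // Stand-ins for real framebuffer captures from the two runs.
    std::vector<unsigned char> originalFrame(4, 100);
    std::vector<unsigned char> editedFrame(4, 100);  // suspiciously unchanged
    double diff = meanAbsDiff(originalFrame, editedFrame);
    // The threshold is arbitrary; in practice you would eyeball the images too.
    std::printf("mean abs diff = %.2f -> %s\n", diff,
                (diff >= 0.0 && diff < 0.5) ? "edit had no visible effect"
                                            : "edit is visible");
    return 0;
}
```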
 