New 3DMark03 Patch 330

Myrmecophagavir said:
In addition, the drivers are replacing code written by the developer without their knowledge. What's the point of programmers slaving away over their code when it's going to be over-ruled by the drivers? It's as if the companies are admitting their developer education programs haven't worked and they can still do a much better job than the devs.

Well, I think this point has been made previously by others, but I think that if code can be automatically transformed by a driver into a more efficient but equivalent form, then that's just fine. Actually, I could even consider it desirable -- but it would have to be something that can be controlled by the developer or end user for obvious reasons.

I don't see much difference between that and, say, a compiler performing various types of optimization on code.

Now, the problem arises when the code is transformed into something which is *not* equivalent... And that appears to be the case here, which IMHO can be safely labelled as cheating.
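
To make "equivalent" concrete with a toy sketch (plain Python arithmetic standing in for shader code; this has nothing to do with real driver internals, it just shows the principle that an optimization must produce the same output for every possible input):

```python
# Toy illustration only: Python arithmetic standing in for a shader.

def original(x):
    return (x * 2.0) + (x * 2.0)      # naive form: two multiplies and an add

def equivalent_optimization(x):
    return x * 4.0                    # same result for every x, less work

def non_equivalent_replacement(x):
    return round(x * 4.0, 3)          # runs "faster" by quietly dropping precision

for x in [0.1, 1.23456, 1e-5, 42.0]:
    assert original(x) == equivalent_optimization(x)      # always holds
    if original(x) != non_equivalent_replacement(x):
        print(f"output differs at x={x}: "
              f"{original(x)} vs {non_equivalent_replacement(x)}")
```

The first transformation is the compiler-style optimization I'd welcome; the second is the kind of thing that deserves the cheating label, no matter how much faster it runs.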
 
Ok, now Blue's News jumped in proclaiming that there's a "feud" between Futuremark and nVidia, quoting the same "We didn't want to pay them money so they made the benchmark perform bad on our hardware" allegation by nVidia.

Sheesh. Increasingly, the problem is not nVidia's PR dribble, it's incompetent web journalists who can't see the facts even when they are given to them as point-by-point explanations in clear English.
 
PurplePigeon said:
Well, I think this point has been made previously by others, but I think that if code can be automatically transformed by a driver into a more efficient but equivalent form, then that's just fine. Actually, I could even consider it desirable -- but it would have to be something that can be controlled by the developer or end user for obvious reasons.

I don't see much difference between that and, say, a compiler performing various types of optimization on code.

Now, the problem arises when the code is transformed into something which is *not* equivalent... And that appears to be the case here, which IMHO can be safely labelled as cheating.
I'm all for instruction re-ordering and similar optimisations if they work in general, on all input shaders. But I was specifically referring to recognising a particular shader known to come from a specific app and replacing it entirely with one of the driver writers' own design, even if it always produces the same output. There's nothing commendable about a compilation process that does that, whereas the general case is fine.
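
Concretely, the difference I'm getting at might look something like this rough sketch (Python pseudocode; the names and the hash-lookup scheme are my own invention for illustration, not anything from an actual driver):

```python
import hashlib

# Hypothetical sketch only: nothing here is taken from a real driver.

# Hand-written replacements keyed by the hash of one specific app's shaders.
KNOWN_APP_SHADERS = {
    "placeholder-hash-of-benchmark-shader": "<hand-tuned replacement shader>",
}

def generic_optimizer(shader_source):
    """The defensible path: transformations valid for ANY input shader
    (dead-code removal, instruction re-ordering, register allocation, ...)."""
    # ... real passes would go here; this sketch just returns the input
    return shader_source

def compile_shader(shader_source):
    # The questionable path: recognise one particular application's shader
    # and swap in something the driver writers authored themselves.
    key = hashlib.sha256(shader_source.encode()).hexdigest()
    if key in KNOWN_APP_SHADERS:
        return KNOWN_APP_SHADERS[key]
    # Otherwise fall back to optimisations that apply to everything.
    return generic_optimizer(shader_source)
```

Even if the substituted shader always produces the same pixels, only the generic path says anything about how the hardware and driver handle code they have never seen before -- which is what a benchmark is supposed to measure.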
 
It just looks to me as though NVidia is going to try and bluff their way out of the problems they've gotten themselves into.

Logically, I can't see any reason why Futuremark would deliberately want to fall out with the largest video chip manufacturer around. That is, unless you believe they are being Machiavellian to get their own back on NVidia for pulling out of the 3DMark development programme. This would seem to be contrary to everything their business stands for.

Therefore, I believe that NVidia are just trying to squirm their way out of poor decisions made about the cheating/'optimisation' of the drivers for 3DMark. NVidia are sticking to the old adage:

"If you throw enough mud around some will stick".

My worry is that NVidia will do a 'Microsoft'. They have such power in the market that they might be able to out-stare Futuremark or at least ride the storm enough to discredit them. DR-DOS, anybody?
 
That's not going to happen with the powerful beta partners Futuremark has, including OEMs like Dell. Nvidia is the only one getting mud thrown in their face here.
 
nelg said:
Funny how Nvidia has not responded to ET, FutureMark or Beyond3D but has done so to other outlets.

It's quite predictable, though... because ET, FutureMark and, to a lesser extent, B3D have all presented the "evidence" that needs to be responded to. So for nVidia to respond to them means responding to the evidence, which, apparently, they don't want to address.

Care to guess why? :)

Instead, all they can do is "respond to the allegation", and do so without actually addressing the direct evidence.
 
My early take (from Page 5, haven't caught up yet):

The NV35's 330 score is most likely using floating point; the NV30's 330 score is most likely not. The NV35's 330 score would then be a really big improvement, because it is actually following the spec, though just looking at the number would not tell that story. If benchmarks bear out similar results between the cards, that would fit with the distinction I recall of the 44.03 drivers being WHQL-certified for the NV35 only...the "register combiner" usage would only be up to DX 9 spec for that particular card (out of those released now).

It seems clear that nVidia is cheating on a large scale, in a way that doesn't seem compatible with the term optimization at all.

It seems clear that ATI is doing a "bad optimization", i.e., a benchmark-specific one. Given the architecture, it seems easy to believe that it is absolutely identical in output. But it would still remain that ATI needs to improve their low-level parser to handle the optimization opportunity, so that they can do this "legally" within Futuremark's rules. Kudos to Futuremark, whose standards provide the incentive to do so.

I.e., it is quite easy for me to believe it is a 100% completely legitimate optimization done in a 100% completely illegitimate way for 3dmark 03. What they did wrong would then not be lying about their hardware, but lying about their current drivers' ability to adapt low level shader code to it. I'd be curious for a technically detailed explanation of whether this is beyond what ATI can accomplish, or whether they have plans to introduce improved analysis in the future.
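
For what it's worth, here's a minimal sketch of the kind of generic low-level re-ordering pass I mean -- it interleaves independent instructions so dependent ones aren't issued back-to-back, works on any instruction list fed to it, and never changes the result (the instruction format is made up for illustration; it's not ATI's parser):

```python
from dataclasses import dataclass

# Made-up three-operand instruction format, purely for illustration.
@dataclass
class Instr:
    op: str
    dst: str
    srcs: tuple

def must_stay_after(a, b):
    """True if instruction a cannot legally be moved ahead of instruction b
    (read-after-write, write-after-write, or write-after-read hazard)."""
    return b.dst in a.srcs or a.dst == b.dst or a.dst in b.srcs

def reorder(instrs):
    """Generic, output-preserving pass: no application detection involved."""
    scheduled, remaining = [], list(instrs)
    while remaining:
        # Instructions that can be issued now without changing any result.
        ready = [ins for i, ins in enumerate(remaining)
                 if not any(must_stay_after(ins, earlier)
                            for earlier in remaining[:i])]
        # Prefer one that doesn't depend on the instruction just issued.
        pick = next((ins for ins in ready
                     if not scheduled or not must_stay_after(ins, scheduled[-1])),
                    ready[0])
        remaining.remove(pick)
        scheduled.append(pick)
    return scheduled

prog = [
    Instr("mul", "r0", ("t0", "t1")),
    Instr("mul", "r1", ("r0", "r0")),   # depends on the first instruction
    Instr("mul", "r2", ("t2", "t3")),   # independent
]
print([i.dst for i in reorder(prog)])   # ['r0', 'r2', 'r1'] -- same results, fewer stalls
```

If the driver could do that kind of analysis at shader-load time, the speedup would show up everywhere, 3DMark included, without any application detection at all.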

Call it a cheat? For 3DMark, at present it is, even though such optimizations might be quite easy to deliver for games through developer relations, with absolutely identical output and workload. The problem for ATI would be that 3DMark's definition of identical workload seems to preclude application-specific detection.

Call it the same thing as what nVidia is doing? I don't think there is any way to justify that by any rational means, as it seems clear that the 44.03 drivers drastically alter the workload by any remotely reasonable metric.


Let the battle be between hardware and runtime shader analysis by the drivers for each vendor, so the benefits can be universally delivered to all DX shader applications. Thumbs up to Futuremark for working to make sure that their application facilitates IHVs delivering that. :D
 
nVidia has made no legitimate response because there is none to make. It is their intention, and apparently they believe it their God-given right, to cheat and lie to whomever they can in their ill-fated effort to convince people they have recaptured the performance crown they lost last August and obviously have yet to regain.

I mean, can they honestly expect people to say:

"Awwwww, poor nVidia! Isn't it sad how that great big gorilla FutureMark is jumping all over their scrawny, underfed, malnourished carcass? Isn't it so sad how FutureMark kicked little nVidia out of its beta program and now poor little nVidia has no 'tools' so that it can tell the difference between cheating in 3D Mark and not cheating? Awww-w-w, poor little nVidia--they just get a raw deal all the way around from FutureMark--and since they quit the program they don't have anyone to HELP THEM STOP CHEATING. Aw, it's so sad!"

Yea, right!...;) Listen to the violins play! *guffaw*

nVidia is fast digging its own grave with this crap. It will be a long, long time before I'll believe a word the company says about anything it makes--I won't be buying it. nVidia is too stupid a company to read the FutureMark indictment and make an intelligent response. The company seems capable of doing nothing more than making Kyle-esque personal attacks on tiny companies like FutureMark which it doesn't bother to attempt to support with evidence of any kind.

"Market leader," my rear end--they're already half-way to being a has-been and it certainly looks like they're not bright enough to have figured that out, either.

I can't decide which is more unbelievable--nVidia's cheats--or nVidia's comments on being found out. Incredible.
 
What I don't get is this talk that 3Dmark03 is not a legit benchmark. For any fixed camera demo, can't you do the same cheats?

If Nvidia has the right to do massive cheats to try and discredit 3DMark and stop it being used (since it can be hacked, isn't real world, blah blah blah), what is to stop any company from busting any benchmark they don't like? If ATI doesn't like the way Doom III runs on their cards, can they dump in as many cheats as they want and say it's not a legit benchmark and shouldn't be used?
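
To spell out why a fixed camera path invites this, here's a purely hypothetical sketch -- everything below is invented for illustration, and the actual 3DMark03 cheats were reported by Futuremark as things like static clip planes and shader replacement:

```python
# Hypothetical sketch: not how any driver or the actual cheats were implemented.

# With a canned camera path you can work out offline, frame by frame, which
# draw calls never contribute a visible pixel -- and simply skip them.
PRECOMPUTED_SKIPS = {
    0: {"sky_backdrop", "offscreen_terrain"},
    1: {"offscreen_terrain"},
    # ... one entry per frame of the scripted demo
}

def render(object_name):
    print(f"drawing {object_name}")

def submit_draw_call(frame_index, object_name, camera_is_scripted):
    if camera_is_scripted and object_name in PRECOMPUTED_SKIPS.get(frame_index, set()):
        # Only "safe" because the camera never leaves the known path; a
        # free-look camera would immediately expose the missing geometry.
        return
    render(object_name)
```

Any timedemo with a fixed flight path is open to the same trick, which is the point: "it can be cheated" is not an argument against this benchmark in particular.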

Man, we'll end up with two totally different sets of benchmarks (OGL for Nvidia and D3D for ATI) and no way to compare the two. Futuremark has to stick to its guns and reviewers have to use it, because if we allow the card makers to pick the demos they want, it's the end of any useful review.

T.
 
Firstly, HardOCP who?? I removed them from my list after the last debacle (can't remember what it was; I think Intel/AMD, but can't be sure). So sorry to Brent.

Nvidia have lost a lot of my support now, firstly with the NV30 and now with this. My next card will definitely be an ATI. I mean, even Valve prefer the 9800 Pro for HL2 (the GameSpot clips had Gabe talking about the 9800 Pro's features).

I'm not interested in getting Doom 3, firstly 'cause it's only single-player, but also because it seems I'll have to upgrade to something ridiculous in order to play it, whereas HL2 will require a lot less (although the ATI 9800 Pro seems like a fine choice for me atm).

I always was an Nvidia fan, but now I think it's time to change for the better.
 
Unknown Soldier said:
Firstly, HardOCP who?? I removed them from my list after the last debacle (can't remember what it was; I think Intel/AMD, but can't be sure). So sorry to Brent.

^_- Any chance that Beyond3D would have a place for Brent? He has done some nice, unbiased reviews over at [H]ard... I think it is a shame for him to be there rather than at a more deserving site... on the other hand... at least the reviews he does curb the damage that [H] ends up doing to its readers...
 
tontoman said:
What I don't get is this talk that 3Dmark03 is not a legit benchmark. For any fixed camera demo, can't you do the same cheats?
Not necessarily the same cheats, but yes.
 
I think Anand needs Brent more than B3D does. But by that token, it would be [H] itself that needs him most of all. ;)
 
Heh, I agree with Pete. I think Brent's article focus is a bit divergent from B3D, and also that we need good examples for that focus out there. People express opinions on his articles all the time here, but I don't think we should be trying to tell him where to go.

The issues with [H] will progress as they will, and I'm sure he and his work can speak for themselves if that results in there being a need.

Oh, and I find Pelly willing to listen and learn so far, and I'll mention that because no one seems to be making that distinction with him either.
 
In all fairness, Kyle is out of town right now; he left last night,

so he probably doesn't know about this stuff yet
 
demalion said:
My early take (from Page 5, haven't caught up yet):

It seems clear that ATI is doing a "bad optimization", i.e., a benchmark specific one.

When you catch up, what's your take on the 4X AA result [p. 10 of this thread] that was lower with build 320?

Now that ATI has admitted optimising for 3DM03, was it a poorly conducted optimisation (for not working when AA is on)? Or did ATI intentionally optimise for non-AA speed (for FM ORB), allowing AA speed to suffer? Or was it an optimisation just to render 3DM03 in a smarter way, not directly shooting for an increase in the score (hence non-AA happened to get faster, AA happened to get slower) but just doing it smarter like they'll optimise in patches for the most popular games?

I'm confused.

Can somebody with a R9800P repeat HotHardware's AA tests?

***

It'll be interesting to see how fast ATI gets all the specific stuff outta there and whether Nvidia has time to optimise for build 330 before 3DMarkJune03 appears :LOL:
 
Brent, while I'm among the crowd that appreciates your work, I nevertheless think "Benchmark Slayer" from a [H] crew member is slightly prone to misunderstandings just now :LOL:

/joke
 
Was gonna say something about bandwidth and the 9800 scores, but then realized it shouldn't have an effect.

But while looking at HotHardware, I saw something else...

I'm assuming the numbers at HotHardware are correct. That means with 4xAA and 4xAF and using patch 330, the 5900 is still slightly faster than the 9800 Pro.

5900U - 3098
9800P - 2940

Anyone else see that or did some smoke from the people in the apartment downstairs drift up into mine? lol
 