New 3DMark03 Patch 330

There's a subtle but very important difference between saying that a widely accepted industry benchmark doesn't reflect how real games are made and flat out saying that it is biased against the market leader and thus useless. In other words: Futuremark has just come under an attack that could be lethal for them.
Um... how about it's the other way around? Like, Futuremark could sue Nvidia for slander. After all, it's pretty clear that NVidia is outright cheating. Nvidia are the ones that should be looking out after making statements like that.
 
I have to agree. Replacing a shader routine imo is exactly the same thing as inserting clip planes.
I am actually reevaluating this position.

If it were a game then it would be more than desirable for the shader work to be tweaked for the hardware. Thus, I wonder if it is really that big a deal if ATI, Nvidia or anyone else tweaks the shaders to the hardware on an application basis.

IMO it makes for a truer, real-world benchmark environment.

However, inserting clip planes is another story...

(PS: I am still rethinking this and forming new ideas on this whole subject, so I'm open to everyone's logical input before I personally come to a real conclusion.)
 
IMO for a benchmark like 3dmark03:

Replacing shaders is not OK.
Lowering image quality, i.e. reducing LOD or precision, is not OK.
Doing specific optimizations is not OK, i.e. not clearing the screen (or whatever it was :p) and clipping planes.

Basically the workload should be the same for ALL competing products.

However, I believe that if, for example, a driver had a couple of different presets which worked more efficiently with different workloads, it would be OK to use app detection to set the driver to process the code more efficiently.

A good example of this is WickedGL; if I remember correctly, it had two versions, one optimized for high resolutions and one for low resolutions. Now, would it be unfair for a driver to detect whether a program was using settings that would benefit more from its "high" setting or from its "low" setting? :?:

Edit: "high" and "low" not referring to image quality, just different ways of processing the code.
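
To make that concrete, here is a minimal sketch of the kind of driver-side preset selection described above. Everything in it (the app list, preset names and values, the resolution threshold) is hypothetical and purely illustrative; a real driver would do this internally, not in Python. The key point is that the preset changes how the work is processed, not what gets rendered.

```python
# Hypothetical "preset by app/workload detection" sketch; names and values are made up.
PRESETS = {
    "high": {"batch_size": 4096, "scheduling": "latency_hiding"},
    "low":  {"batch_size": 512,  "scheduling": "throughput"},
}

KNOWN_APPS = {"3DMark03.exe", "SomeGame.exe"}  # hypothetical detection list

def pick_preset(app_name, width, height):
    """Choose how the driver processes the work, without changing the output."""
    if app_name not in KNOWN_APPS:
        return PRESETS["low"]              # generic default for unknown apps
    if width * height >= 1280 * 1024:      # heavier workload -> "high" path
        return PRESETS["high"]
    return PRESETS["low"]                  # lighter workload -> "low" path

print(pick_preset("3DMark03.exe", 1024, 768))    # -> "low" preset
print(pick_preset("3DMark03.exe", 1600, 1200))   # -> "high" preset
```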
 
Thanks nelg for the HotHardware pointer!

Noticed a very interesting thing in their results:

R9800P, Cat 3.4, no AA, 320 -> 330 build: 5834 -> 5747
R9800P, Cat 3.4, 4X AA & 4X AF, 320 -> 330 build: 2899 -> 2940

AA performance went up in build 330 with Cat 3.4!

For the record, the unsurprising part:

FX5900U, Det 44.03, no AA, 320 -> 330 build: 6067 -> 4886
FX5900U, Det 44.03, 4X AA & 4X AF, 320 -> 330 build: 3749 -> 3098
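
For scale, here is the build 320 -> 330 change computed from the scores quoted above (just arithmetic on those four numbers, nothing else assumed):

```python
# Percentage change from build 320 to build 330, using the scores quoted above.
scores = {
    "R9800P, no AA":        (5834, 5747),
    "R9800P, 4X AA/4X AF":  (2899, 2940),
    "FX5900U, no AA":       (6067, 4886),
    "FX5900U, 4X AA/4X AF": (3749, 3098),
}

for config, (b320, b330) in scores.items():
    print(f"{config}: {(b330 - b320) / b320 * 100:+.1f}%")

# R9800P, no AA:        -1.5%
# R9800P, 4X AA/4X AF:  +1.4%
# FX5900U, no AA:       -19.5%
# FX5900U, 4X AA/4X AF: -17.4%
```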
 
RussSchultz said:
I agree. But consider: suppose that whatever change you make breaks the optimizer (I'd presume due to poor testing). If you were changing the shader to try to expose 'detection' routines, did you just expose a detection routine? Or did you break the optimizer?

In either case, nVidia is full of hypocrisy.

If "all that happened" was "breaking an optimizer", then by definition, this would make 3DMark an asset. Because by nvidia "fixing" (making more robust) their optimizer to handle the next case (build 330), that would clearly have benefits beyond 3DMark.

But nVidia originally argued that they can make "optimizations" that don't have any real impact except for 3DMark...and therefore, it's just a waste of resources.

nVidia can't have it both ways.

Now, let's assume it was a bug in the optimizer and next week a driver comes out that fixes it. From the outside, you can't tell--it just looks like they've gone and developed a new detection routine that hasn't been exposed yet. I'm not saying this is what's going on, but it is a possibility.

I agree that's a possibility...but then that scenario makes 3DMark an asset of real value not only to consumers, but to nVidia, by exposing "real weaknesses" such that they can be fixed. So nVidia shouldn't be complaining.

And that, partially, is what I perceive to be NVIDIA's complaint:

I perceive their "complaint" as being largely hypocritical. Furthermore, their complaint is now not really open to perception...they are complaining that FM is deliberately trying to make them look bad. :rolleyes:

(FYI...That rolleyes was for nVidia...not you, Russ!)

Does that make the benchmark a valid means of comparing real world performance?

Here's a Matrix-esque question for you: define "real world performance". I hear that term bandied about by many nVidia sympathizers.

What should 3DMark performance "reflect", in your opinion, to be representative of the "real world"? I contend that HONEST (non-cheating) scores in 3DMark in part DEFINE what the "real world" performance is for these parts.
 
Hellbinder[CE] said:
However, inserting clip planes is another story...
Inserting clip planes into a racing game, or a flight simulator, or a tank simulator, or something like that doesn't seem like it's necessarily a bad thing.
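
A quick sketch of why a hard-coded clip plane can be harmless in a game that genuinely constrains the camera, yet counts as a cheat in a benchmark with a fixed camera path. The plane and objects below are invented purely for illustration:

```python
# Conceptual sketch: a pre-computed clip plane culls work the fixed camera can never see.
def behind_plane(point, plane):
    """plane = (a, b, c, d); a point with a*x + b*y + c*z + d <= 0 is culled."""
    a, b, c, d = plane
    x, y, z = point
    return a * x + b * y + c * z + d <= 0

HARD_CODED_PLANE = (0.0, 0.0, 1.0, -50.0)  # hypothetical: drop everything with z <= 50

objects = [(0, 0, 10), (0, 0, 120), (5, 2, 40)]
visible = [o for o in objects if not behind_plane(o, HARD_CODED_PLANE)]
print(visible)  # [(0, 0, 120)] -- the culled work is simply never done

# Fine if the camera can truly never see the culled geometry; in a benchmark, the
# moment the camera leaves the canned path (free look), geometry goes missing --
# which is reportedly how the 3DMark clip planes were spotted in the first place.
```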
 
RussSchultz said:
Xspringe said:
All we know is that 3dmark scores are clean now.

I think all we know is Futuremark has made an attempt to prevent cheating. We can guess it went well, but we don't KNOW for sure the scores are clean.

Reasonably clean might be a better wording. Doesn't change my point, though :)
 
Joe DeFuria said:
What should 3DMark performance "reflect", in your opinion, to be representative of the "real world"? I contend that HONEST (non-cheating) scores in 3DMark in part DEFINE what the "real world" performance is for these parts.

To some degree, vendor specific paths, if necessary. If everybody knows that on hardware A you should group X, Y, then Z, and on hardware B you should group Y, Z, then X, then maybe you should have two paths if both parties can benefit?

Or, conversely, if product A suffers when doing X,Y,Z and does much better when doing Y,X,Z--and product B does OK either way, wouldn't the real world performance be best represented by choosing Y,X,Z? Granted, not all games would do it that way, but NVIDIA (for example) does a pretty good job of evangelizing what best fits their hardware.
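
As an illustration of the "two paths" idea being debated here, consider an application that reorders the same passes depending on the detected hardware. The vendor names and orderings below are placeholders, not real recommendations:

```python
# Illustrative only: same work, same output, different ordering per detected vendor.
PASS_ORDER = {
    "vendor_a": ["Y", "X", "Z"],   # hypothetical ordering that suits A better
    "default":  ["X", "Y", "Z"],   # B is fine either way, so it gets the generic order
}

def render_frame(vendor):
    for render_pass in PASS_ORDER.get(vendor, PASS_ORDER["default"]):
        print(f"executing pass {render_pass}")

render_frame("vendor_a")   # Y, X, Z
render_frame("vendor_b")   # X, Y, Z
```

The disagreement in the posts that follow is over whether a synthetic benchmark should carry such paths at all, or deliberately run one common path for every product.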
 
dan2097 said:
IMO for a benchmark like 3dmark03:

Replacing shaders is not OK.
Lowering image quality, i.e. reducing LOD or precision, is not OK.
Doing specific optimizations is not OK, i.e. not clearing the screen (or whatever it was :p) and clipping planes.

Basically the workload should be the same for ALL competing products.

Agreed.

Though we don't really know exactly what ATI is doing in GT4 yet, if the shader replacement guess comes true, it's definitely also cheating IMO.

After reading the audit I was :oops:, I mean 8
 
RussSchultz said:
To some degree, vendor specific paths, if necessary.

But that's what we get from some game benchmarks. And that's on a game-by-game basis, as it should be.

3DMark has DX9 specific paths. It is purposely not vendor specific, because it is a synthetic benchmark.

If everybody knows that on hardware A you should group X, Y, then Z, and on hardware B you should group Y, Z, then X, then maybe you should have two paths if both parties can benefit?

What if everybody knows that hardware A should group X, Y, and Z, and not use W at all...but hardware B doesn't care about W/X/Y/Z groupings?

Should the benchmark not use W? Should hardware B in effect be penalized for being more robust? If hardware A's performance relies on more education / evangelism from the IHV, is that not a "real world detriment" to that product?

Or, conversely, if product A suffers when doing X,Y,Z and does much better when doing Y,X,Z--and product B does OK either way, wouldn't the real world performance be best represented by choosing Y,X,Z?

Again, on a game by game basis.

Granted, not all games would do it that way, but NVIDIA (for example) does a pretty good job of evangelizing what best fits their hardware.

And it's up to nVidia to evangelize that to Futuremark, as they do to game developers.

That doesn't mean FM or the game developer will agree.
 
Hellbinder[CE] said:
I am actually reevaluating this position.

If it were a game then it would be more than desirable for the shader work to be tweaked for the hardware. Thus, I wonder if it is really that big a deal if ATI, Nvidia or anyone else tweaks the shaders to the hardware on an application basis.

IMO it makes for a truer, real-world benchmark environment.

However, inserting clip planes is another story...

(PS: I am still rethinking this and forming new ideas on this whole subject, so I'm open to everyone's logical input before I personally come to a real conclusion.)
It would be desirable if the companies informed the developers when they could produce a more efficient version of the shader. But it's not fair to silently slot it in when run on their cards only - developers want their programs to run as well as possible on all supporting hardware. If only one company were doing it, users of other companies' cards would lose out, so the morally good step would be to let everyone enjoy the benefits.

In addition, the drivers are replacing code written by the developer without their knowledge. What's the point of programmers slaving away over their code when it's going to be overruled by the drivers? It's as if the companies are admitting their developer education programs haven't worked and that they can still do a much better job than the devs.
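
For readers wondering how a driver could even do this silently, here is a rough sketch of the kind of substitution being objected to: recognize a known shader (for example by hashing the bytecode the application submits) and hand back a pre-tuned replacement. It is purely hypothetical; real drivers are closed source and nothing here describes any vendor's actual implementation.

```python
# Hypothetical shader-substitution sketch; the hash and replacement are placeholders.
import hashlib

REPLACEMENTS = {
    # hash of the application's shader -> vendor's hand-tuned version (both made up)
    "d41d8cd98f00b204e9800998ecf8427e": b"<hand-tuned shader bytecode>",
}

def compile_shader(app_bytecode):
    digest = hashlib.md5(app_bytecode).hexdigest()
    if digest in REPLACEMENTS:
        # The developer's code is silently dropped here -- the step the post objects to.
        return REPLACEMENTS[digest]
    return app_bytecode  # otherwise, run what the developer actually wrote
```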
 
Joe DeFuria said:
That doesn't mean FM or the game developer will agree.

Well, there's the crux of the problem. I'm of the opinion the game developer will agree--since he wants his engine to run as fast as possible on as many cards as possible.

FM may take the opinion that you do: "why penalize the other product for being more robust".

But that doesn't make it more representative of what the real world is, now does it?
 
But that doesn't make it more representative of what the real world is, now does it?

Again, you are supposing a definition of "real world".

The fact that some developers may cater to CERTAIN high-profile IHVs doesn't change the fact that, in the real world, their hardware has limitations that won't, and in some cases can't, be worked around.

What about a brand-spanking new company that comes to market with a quirky architecture?

In your "real world", because the hardware is not popular, and therefore devs probably won't optimize for it much if at all, THAT hardware shold gets treated differently (should NOT get hand optimizations in synthetic benchmarks), just because it's not popular in the "real world?"
 
Bjorn said:
Tim said:
No it has not. This quote only addresses the issue of replacing shader code, and it does not contradict what I wrote. Replacing code is not rendering what the developer intended even if it looks exactly the same, and I agree that would be cheating, especially in synthetic benchmarks.

But you said:

There is no image quality degradation with ATI's drivers and they have no problems with the free look mode. Other than the performance improvement there seems to be no difference, which indicates that these are optimizations, not cheats. (ATI could of course be using some kind of free look mode detection.)

And? These two quotes are in no way contradicting each other. I never said that "no image quality degradation"=no cheating.
 
Joe DeFuria said:
In your "real world", because the hardware is not popular, and therefore devs probably won't optimize for it much if at all, THAT hardware shold gets treated differently (should NOT get hand optimizations in synthetic benchmarks), just because it's not popular in the "real world?"

That about sums it up. It's hard to translate that into rules, but I do think that some sort of consideration should be given to market leaders.
 
RussSchultz said:
I'm of the opinion the game developer will agree--since he wants his engine to run as fast as possible on as many cards as possible.

To be clear, I'm of the opinion that I don't know if the game developer will agree, as it depends on how much of their own resources it takes to support "every single card as optimally as possible."

Yes, devs want their code to run as well as possible. But they don't have unlimited resources to do this for all hardware. And nVidia isn't going to care much unless it's a high-profile game.
 
Joe DeFuria said:
Yes, devs want their code to run as well as possible. But they don't have unlimited resources to do this for all hardware. And nVidia isn't going to care much unless it's a high-profile game.

There's a ton of information out there for a developer that wants to optimize for either ATI or NVIDIA. Some of the stuff overlaps, some is vendor specific. You don't have to be a tier one to get access to it, and I think that developers do cater to the majority of the market.
 
RussSchultz said:
That about sums it up. It's hard to translate that into rules, but I do think that some sort of consideration should be given to market leaders.

Again, your "rules" are already reflected in actual real-game benchmarks. 3DMark is not a real-game benchmark and quite deliberatley has a different purpose.

What you are saying is that 3DMark should be "yet another timedemo benchmark." This would just be redundant.

That wouldn't add any more value to the benchmarking scene.

BTW....who is the "market leader" for DX9 parts, anyway? Seems to me even by your logic, ATI should be getting the "preferential treatment", if anyone, in the game test 4 and PS 2.0 tests....
 