ExtremeTech Article Up

Reverend said:
Uttar said:
1. This isn't as big of a deal as the forums, news sites, ... seem to indicate - although it's still important.
It is a big f**king deal, whether ET came up with this article or not, or whether forums heat it up. It challenges every preconception we may have had about reading and trusting reviews.

(...)
This really isn't about how X is rendered before Y or whether a game is following an optimized route. It's about what an IHV can do wrt a benchmark that uses a fixed camera path as its basis. You don't appear to grasp the gravity of the situation.

Exactly my thoughts, my concerns - well said.
 
BoardBonobo said:
I think we should just ditch 3DMark as a benchmark. It's just a glitzy way of showing off what your shiny new card can do, and how good the driver teams are at optimising for it. Nothing more and nothing less.

It allows flagrant abuse to occur, and if it is going to come back as a serious application then some more thought needs to go into how it actually performs and scores its benchmarks.
Anything that runs on a rail - demos, flybys, etc. - could have this kind of attack performed upon it. Is Quake3 a bad benchmark because demo1 runs on a rail?
 
Dio said:
Anything that runs on a rail - demos, flybys, etc. - could have this kind of attack performed upon it. Is Quake3 a bad benchmark because demo1 runs on a rail?
Potentially, yes -- and especially since people have been using the same Quake 3 timedemos for benchmarking for years. IMHO, hardware sites need to create their own timedemos.
 
Reverend said:
It is a big f**king deal, whether ET came up with this article or not, or whether forums heat it up. It challenges every preconception we may have had about reading and trusting reviews.

Well, right now, it only challenges 3DMark scores.
IF it was proved nVidia used similar cheats in other things, such as Serious Sam timedemos and Codecreatures, then it would become a fricking, damn huge deal. Then I'd be really disgusted.
If they could apply such a thing to real games too and get similar performance boosts, then I'd be delighted. But it is 100% obvious those cheats are specifically for static paths, so it's obviously impossible.
Thus there'd be no way I could be happy about this.

However, considering it only influences 3DMark and some major websites didn't even use it during their NV35 review, I wouldn't make it that much of an issue. That is, as I said, unless it was applicable to other timedemos too - then it'd become truly disgusting.


Uttar
 
Uttar said:
Well, right now, it only challenges 3DMark scores.
IF it was proved nVidia used similar cheats in other things, such as Serious Sam timedemos and Codecreatures, then it would become a fricking, damn huge deal. Then I'd be really disgusted.
If they could apply such a thing to real games too and get similar performance boosts, then I'd be delighted. But it is 100% obvious those cheats are specifically for static paths, so it's obviously impossible.
Thus there'd be no way I could be happy about this.

Uttar

Well, let's put it this way: take a 5800 Ultra and run ShaderMark, 3DMark, and this high dynamic range demo: http://www.daionet.gr.jp/~masa/rthdribl/index.html

If it is getting 8-10 fps in these synthetic tests, what can a DX9 developer using PS 2.0 with that card do to increase performance?
The only way is to optimize (cheat) cleverly, and if it's being done here to increase performance, it's for sure being done in games...especially anything requiring PS 2.0.
Carmack gets decent speed by using vendor specific paths in OGL.
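
As a rough illustration of what a vendor-specific path amounts to in practice, here is a minimal sketch (not Carmack's actual code; the detection-by-vendor-string idea and the path names are illustrative assumptions) of how an OpenGL engine might pick a back end:

[code]
// Minimal, hypothetical sketch of vendor-specific path selection in OpenGL.
// Assumes a current GL context; on Windows, include <windows.h> before <GL/gl.h>.
// Path names are illustrative, not Doom 3's real back ends.
#include <GL/gl.h>
#include <cstring>

const char* pickRenderPath()
{
    const char* vendor = reinterpret_cast<const char*>(glGetString(GL_VENDOR));
    if (!vendor)
        return "no current GL context - cannot choose a path";
    if (std::strstr(vendor, "NVIDIA"))
        return "NV30-style path (vendor extensions, lower precision where acceptable)";
    if (std::strstr(vendor, "ATI"))
        return "R300-style path";
    return "generic ARB2 path (full-precision ARB fragment programs)";
}
[/code]

The relevant contrast with the 3DMark situation is that this kind of divergence happens openly, inside the application, rather than silently inside the driver.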
 
Uttar said:
...
Any serious game isn't going to render it before everything else. Saying it's supposed to stress the GPU makes no sense, because this is supposed to be a GAME test, not a stress test!...

It is supposed to be a stress test...that's the purpose of a benchmark. Stress tests are good, if they stress important aspects and stress them properly. And what it is trying to stress primarily, for the scoring tests, is the GPU. I continue to be puzzled that people maintain otherwise.

A game test is applicable to a game and its algorithms. You can call it a game benchmark, but it can easily not be a good video card benchmark. Benchmarking functions in games are both a name recognition tool and a leveraging of the game development effort into a datapoint representing the particular demands of that one game's goals (a win/win situation).

A (dedicated) benchmark is applicable to what is being stressed...it needs to aid in isolating the applicable components. 3dmark is a gamer's stress test for video cards (as far as the scoring tests go)...it stresses the functions of video cards used for 3D games, and tries to represent a wide array of those functions in pursuit of that goal.

The name "gamer's benchmark" is a marketing term that does not mean it is a game benchmark or a game test, or it would be reduced in applicability to the scope of that one game goal. The name does cover the full spectrum of the particular components tested...but please note that the sound card tests (relevant to gamers) and the CPU tests (also relevant to gamers) are not the main focus (everything uses the CPU, and sound usage is far simpler than 3D graphics...it doesn't warrant as much focus) and don't contribute to its scoring system, and that therefore the game tests concentrate on 3D graphics. That's why it is called 3DMark.

Why do you maintain that testing the GPU is invalid? :oops:
 
Uttar:
IF it was proved nVidia used similar cheats in other things, such as Serious Sam timedemos and Codecreatures, then it would become a fricking, damn huge deal. Then I'd be really disgusted.

Why would Codecreatures disgust you? It's no more a real game than 3DMark.
 
Archaeolept said:
Uttar:
IF it was proved nVidia used similar cheats in other things, such as Serious Sam timedemos and Codecreatures, then it would become a fricking, damn huge deal. Then I'd be really disgusted.

Why would Codecreatures disgust you? It's no more a real game than 3DMark.

I think nVidia has succeeded in stigmatizing 3dmark for many people. The mindset currently encouraged is "anything but 3dmark", and the technical validity of that proposition seems not to matter at all.

This is my opinion on the reason for that list...I think the campaign against 3dmark explains why calls for an Open Source benchmark are repeated so commonly (without addressing that an Open Source benchmark might still be just as predictable, and might be unable to step outside of "rail" benchmarking without being re-written), why complaints against Code Creatures being used as a benchmark are so rare, and why there is an expectation (without any supplied information that it would be immune from this) that the next Aquamark would just be "better".

I also think it is an extreme shame that, to my perception, it works on so many people to varying degrees.
 
Actually, it is true I don't have a very high view of 3DMark. But I don't particularly consider Codecreatures or Aquamark better anyway. I generally prefer either 100% synthetic benchmarks (a fullscreen quad with a 96-instruction-long PS 2.0 program, for example) or 100% real-world benchmarks (that is, real games).

I'm saying I'd be disgusted if they did it in Codecreatures too because the more scores are falsified, the more of a problem it is. That's why with one benchmark being falsified, I don't find it ridiculously problematic - that is, unless proof arises that other known benchmarks would use similar cheats.

My reaction would be the *exact* same if it was Codecreatures where the cheating was observed and 3DMark that "might have been infected, but we don't know for sure".

Also, I insist on my POV that 3DMark's "sky before everything" approach is not acceptable.
Of course 3DMark's goal is to test the GPU! I personally see it as a benchmark which tries to emulate a GPU-limited game.
But my point is that this just doesn't make sense. If they wanted to stress the GPU, they could use more instructions for the sky instead of always drawing all of it. Or they could put more detail into other things. There are a billion great ways to stress the GPU - this just isn't one of them.

Heck, let's imagine a GPU which would be much faster than the competition when using 50+ PS instructions. Wouldn't it be disadvantaged by 3DMark using fewer instructions on more pixels instead of more instructions on fewer pixels? Of course, such a GPU doesn't exist, but I believe both ATI & nVidia agree that fewer, more complex pixels are the way of the future, so 3DMark doesn't really respect that vision there.

I'm not saying 3DMark is using too many small programs. Heck, for that, their benchmark is quite good & balanced IMO. But in the specific case of the sky, there are much better ways to stress the GPU.
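
To put rough numbers on that trade-off (every figure below is an assumption invented purely for illustration, not measured from 3DMark), pixel-shader load is roughly the number of shaded pixels times the instructions run per pixel, so drawing only the visible sky with a longer program can load the shader units just as much as drawing all of it with a short one:

[code]
// Back-of-the-envelope cost model: pixel-shader work ~ shaded pixels *
// instructions per pixel. All numbers are made-up assumptions for illustration.
#include <cstdio>

int main()
{
    const double frame_pixels = 1024.0 * 768.0;
    const double sky_coverage = 0.40;   // assumed: sky fills ~40% of the frame
    const double sky_occluded = 0.50;   // assumed: half of it is later covered by geometry

    const double full_sky_pixels    = frame_pixels * sky_coverage;
    const double visible_sky_pixels = full_sky_pixels * (1.0 - sky_occluded);

    const double short_shader = 20.0;   // assumed instruction count
    const double long_shader  = 40.0;   // assumed instruction count

    // "Sky before everything" with a short shader...
    std::printf("all sky, short shader    : %.0f instruction-slots\n",
                full_sky_pixels * short_shader);
    // ...versus only the visible sky with a richer shader.
    std::printf("visible sky, long shader : %.0f instruction-slots\n",
                visible_sky_pixels * long_shader);
    return 0;
}
[/code]

Under these made-up numbers both cases come out identical, which is the point: the GPU can be stressed just as hard without shading pixels that end up occluded.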

Now, maybe I'm all wrong and Futuremark is doing this for a good reason, such as simplification of a specific effect. I'm unaware of any such thing, however.


Uttar
 
Uttar said:
I'm saying I'd be disgusted if they did it in Codecreatures too because the more scores are falsified, the more of a problem it is.

Well, isn't that obvious?

When is the line crossed from being "OK" to "not acceptable?" One, two, 5 benchmarks?

What if the ONE benchmark that is being cheated on is likely among the top two, if not the single most relied-upon one, for sales?

That's why with one benchmark being falsified, I don't find it ridiculously problematic - that is, unless proof arises that other known benchmarks would use similar cheats.

I find that very short sighted, and morally objectionable.

My reaction would be the *exact* same if it was Codecreatures where the cheating was observed and 3DMark that "might have been infected, but we don't know for sure".

My reaction would be quite the opposite, because Codecreatures, last time I checked, wasn't anything in the same league as 3DMark in terms of industry acceptance. (And by industry I mean the customers of ATI and nVidia...the OEMs.)

Also, I insist on my POV that 3DMark's "sky before everything" approach is not acceptable.

Actually, what you or nVidia "think" of 3DMark is completely irrelevant and no excuse. They had their say as part of the Beta program. Just because THEY disagree with some methodology does not give them the right to cheat, any more than it gives nVidia the right to implement an ARB OpenGL extension as they see fit if they disagree with the ARB's decision on how to implement it.

If nVidia doesn't like it, they can create their own benchmark (like they can create their own extension), and try and sell that.
 
Uttar said:
Reverend said:
It is a big f**king deal, whether ET came up with this article or not, or whether forums heat it up. It challenges every preconception we may have had about reading and trusting reviews.

Well, right now, it only challenges 3DMark scores.

I don't quite think you are seeing the true implications here.

If this is a cheat, and let's be honest, it seems too coincidental for it to be anything but, then the amount of work and effort that has to go into it is pretty huge. They have probably got some poor fool sitting there analysing every frame to see when the buffers need to be cleared and which frames can have clip planes inserted into them. This type of thing takes a lot of work, and if they are willing to invest it in a benchmark they publicly slate, who knows what they are willing to do on the rest out there. The problem is that no benchmarks, even game benchmarks, are safe from this type of optimisation - there are numerous ways to detect an application other than from the .exe string, from vertex formats to texture sizes and uploads and all kinds of other methods. The simple fact of the matter is that NVIDIA have more money and resources to invest in this type of thing than any of the others out there.
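
Just to illustrate the mechanics being alleged here (this is not anyone's actual driver code, and every identifier and value in it is invented), the fixed camera path is what makes frame-keyed "optimisations" feasible at all: once the application is recognised, by .exe string or otherwise, hand-tuned state changes can be attached to specific frame numbers because the driver already knows what those frames will contain:

[code]
// Purely hypothetical sketch of the alleged mechanism - not anyone's
// actual driver code. Every name and value here is invented.
#include <string>
#include <vector>
#include <unordered_map>

struct Plane { float a, b, c, d; };          // plane equation ax + by + cz + d = 0

struct FrameTweaks {
    std::vector<Plane> extraClipPlanes;      // cull geometry the fixed camera never shows
    bool skipColorClear = false;             // clear judged unnecessary for this frame
};

struct AppProfile {
    // Hand-tuned per-frame tweaks; only valid because the camera path never changes.
    std::unordered_map<int, FrameTweaks> tweaksByFrame;
};

// Detection need not rely on the executable name alone; vertex formats,
// texture sizes/uploads or shader fingerprints could identify the app just as well.
bool looksLikeKnownBenchmark(const std::string& exeName)
{
    return exeName == "3DMark03.exe";        // naive .exe-string check, for illustration only
}

const FrameTweaks* tweaksForFrame(const AppProfile& profile, int frame)
{
    auto it = profile.tweaksByFrame.find(frame);
    return it == profile.tweaksByFrame.end() ? nullptr : &it->second;
}
[/code]

It also shows why nudging the camera off the rail, as ExtremeTech did with the developer version, exposes the whole thing: the precomputed clip planes and skipped clears no longer match what is actually in view.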
 
I have a hard time objecting to any supposed cheating in 3DMark.

3DMark is a useless benchmark to start. "Cheating" can't make it any less useful.

The way I see it, whether the benchmark shows the GeForce FX in far too negative a light (by using shaders that the FX is particularly bad at), or the benchmark shows the FX in too positive a light, it's just as invalid.
 
Chalnoth said:
I have a hard time objecting to any supposed cheating in 3DMark.

3DMark is a useless benchmark to start. "Cheating" can't make it any less useful.

The way I see it, whether the benchmark shows the GeForce FX in far too negative a light (by using shaders that the FX is particularly bad at), or the benchmark shows the FX in too positive a light, it's just as invalid.
You can say and honestly believe that (and I do, too) because you have enough experience with computers to know that performance increases in a synthetic benchmark do not necessarily translate to performance increases in games. However, consider that there is a good portion of consumers who are largely affected by 3dMark scores in determining their purchase. Also, realize that NVidia would not have done this if they truly did not believe 3dMark scores would affect consumers' choices.
 
Being unethical is very bad for a company, and nVidia is very unethical in how they do things. Knowing you from nVnews.net, you would quickly jump all over ATI if the positions were reversed.
 
Chalnoth said:
I have a hard time objecting to any supposed cheating in 3DMark.

3DMark is a useless benchmark to start. "Cheating" can't make it any less useful.

This is a startlingly useless statement. Let's examine the criteria of support offered for it...

The way I see it, whether the benchmark shows the GeForce FX in far too negative a light (by using shaders that the FX is particularly bad at), or the benchmark shows the FX in too positive a light, it's just as invalid.

The set of shaders is not "shaders that the FX is particularly bad at"; it is shaders representing and highlighting performance to the DX standard. If the FX is particularly bad at that standard, that is the fault of the GeForce FX!
The way in which 3dmark is showing the GF FX in "far too negative a light" is by highlighting its actual shortcomings, compared to other architectures, in meeting the applied standard...that's the job of a benchmark. :oops: Can't you simply accept that the NV3x cards have performance problems, and that it is possible that they have significant comparative performance issues?

By your reasoning, a benchmark is only valid when everything is represented to your liking...but the job of a benchmark is to represent things fairly and accurately to the set standard. Your problem appears to be that the standard is one that doesn't suit the GeForce FX. :!:

It's not ChalnothMark! :-?
 
I have the drivers (44.03) installed now and am getting the 'no clip' artifacts in the sky of Serious Sam (the first one, not SE). I have a screenshot if anyone can host it.
 