Uttar said:
Okay, so let me get this clear...
That's ALL there is to it?
There better be more, otherwise the hype about those cheats is vastly unjustified
Vastly unjustified?! Uttar, the concern is not that the cheat method is excessively complicated, but that it produces an unrepresentative comparison.
This is a very disappointing post on several levels. Not because of your cheating analysis, but because of the proposition you start off with, which you are attempting to support with it.
Please consider a quote from the article:
Extremetech said:
If this were a "3D Guided Tour" and the goal was to make the scene render as quickly as possible, with no regard to any real-world performance correlation, then this type of optimization would be fine. But that's not the goal of 3DMark. It's a synthetic benchmark that measures performance, designed to indicate how well a 3D GPU can render DX8/DX9 game code. In a game where you have six degrees of free movement, with a user controlled (vs. fixed) camera path, static clip planes would not work at all.
This is not a general-case optimization, and it seems to have been progressively worsening alongside the performance increases. The trend has already been established, and the improper representation has already been done. The "hype" is not "unjustified": nVidia is in the position of having clearly shown that their optimizations take inappropriate shortcuts, in a clearly dedicated way that is uniquely dependent on "rail" demo playback. That much seems clearly established. Tell me, how do you accidentally optimize in a way that cannot benefit a gamer's experience? The goal of the optimization matters, Uttar, and this set of optimizations is aimed at benchmark results that influence buying decisions but do not represent what the user will get in actual gameplay.
What is not being established is the actual general-case performance of their cards in 3DMark03, and the article discusses exactly this as well: the 3DMark03 performance results do not represent the general case, even for other shader-limited workloads. The problem here is that this misrepresentation is achieved intentionally by nVidia; it is not the fault of 3DMark03, except inasmuch as the benchmark is not completely cheat-proof.
What is established, and what is not, is the problem, because nVidia presented the first as the second. That seems to be a rather blatant deception.
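To make the clip-plane point from the article concrete, here is a minimal sketch (toy code with invented names, not anything resembling the actual driver): a static plane chosen for a fixed "rail" camera path culls geometry correctly along that path, but keeps culling the same geometry when a free camera turns to look at it.

```python
def behind_plane(point, plane):
    """True if the point is on the culled side of plane (a, b, c, d),
    i.e. a*x + b*y + c*z + d < 0."""
    a, b, c, d = plane
    x, y, z = point
    return a * x + b * y + c * z + d < 0

# A static plane hand-fitted to the benchmark's camera path: everything
# with z < 0 is assumed invisible and is skipped.
rail_plane = (0.0, 0.0, 1.0, 0.0)

# An object at z = -5 is never on screen along the rail, so skipping it
# is "safe" for the canned demo playback.
hidden_object = (0.0, 0.0, -5.0)
assert behind_plane(hidden_object, rail_plane)  # culled, frame looks fine

# But a free camera can turn around. The plane does not move with the
# camera, so the object is still culled and simply vanishes from view.
# This is why the shortcut cannot survive six degrees of camera freedom:
# the cull decision ignores where the camera is actually looking.
```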
Let's face it, the sky problem WILL be gone in the next driver revision. And I mean, really gone. It's so darn fricking easy to fix it isn't even funny anymore. As I explained in another post, it should be possible to *cache* a DIP call and draw it after everything else. And then, it's perfectly undetectable.
The problem is that its special-case nature would still make it unacceptable.
Prior view on the topic.
However, it is conceivably possible to cache draw calls and evaluate draw order as a general-case optimization. The problem is that we don't have performance results for that optimization being used to represent the product; we have what is already established to be done instead. It is your reliance on "they can do something different" and "only this many percent performance increase, I'd guess" to argue that this doesn't matter that is uniquely disappointing.
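For what a legitimate, general-case version of the draw-order idea would buy, here is a minimal sketch (a toy one-dimensional depth-buffer simulation with invented names; an illustration of the principle, not anyone's driver code): drawing the sky after opaque geometry lets the depth test reject already-covered pixels, so the saving comes from the scene itself rather than from knowledge of a canned camera path.

```python
def render(draw_order, coverage, width=8):
    """Count shaded pixels across a 1-D 'framebuffer' of `width` pixels.
    `coverage` is the set of pixels the foreground geometry covers; the
    sky sits at a far depth and covers every pixel."""
    depth = [float("inf")] * width
    shaded = 0
    for obj in draw_order:
        for x in range(width):
            z = 1.0 if (obj == "geometry" and x in coverage) else None
            if obj == "sky":
                z = 1000.0  # far away
            if z is not None and z < depth[x]:  # depth test passes
                depth[x] = z
                shaded += 1  # pixel shader runs
    return shaded

covered = {0, 1, 2, 3, 4}  # geometry hides 5 of the 8 sky pixels

sky_first = render(["sky", "geometry"], covered)  # 8 sky + 5 geometry = 13
sky_last = render(["geometry", "sky"], covered)   # 5 geometry + 3 sky = 8
assert sky_first == 13 and sky_last == 8
```

Done this way, the reordering works from any viewpoint, because the depth test, not a precomputed camera path, decides which pixels are skipped.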
The buffer clearing problem surprises me most however. From the looks of the screenshot, it seems like they're still clearing Z, but not color. Odd, considering the NV3x got Fast Color Clear. So yes, they're cheating, but I'd be surprised if they got more than a 3% boost from that.
OK, but your surprise wouldn't change that they're cheating.
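To illustrate why skipping the color clear is a correctness gamble, here is a minimal sketch (a toy framebuffer with invented names; an assumption about the mechanism, not disassembled driver code): the shortcut is only safe when every pixel is overwritten every frame, which a benchmark's fixed scene can guarantee but a free-roaming game cannot.

```python
CLEAR_COLOR = 0

def new_frame(previous, covered_pixels, frame_color, do_color_clear):
    """Render one frame: optionally clear the color buffer, then write
    `frame_color` to every pixel the scene covers."""
    fb = [CLEAR_COLOR] * len(previous) if do_color_clear else list(previous)
    for x in covered_pixels:
        fb[x] = frame_color
    return fb

# Frame 1 covers the whole (4-pixel) screen, so both paths agree so far.
frame1 = new_frame([CLEAR_COLOR] * 4, {0, 1, 2, 3}, 7, do_color_clear=True)

# Frame 2 covers only half the screen. With the clear skipped, pixels
# 2 and 3 still show frame 1's color: visible garbage in any scene that
# does not guarantee full coverage every frame.
cheat = new_frame(frame1, {0, 1}, 9, do_color_clear=False)
honest = new_frame(frame1, {0, 1}, 9, do_color_clear=True)
assert cheat == [9, 9, 7, 7]
assert honest == [9, 9, 0, 0]
```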
Oh, sure, there may always be other things, but I guess I was right. I'm not particularly shocked by all this. The performance boost they're getting is probably lower than 10%, which would indicate they might still be able to be on par with the Radeon 9800.
The problem is having to guess by how much they misrepresented, Uttar.
While it does prove nVidia is cheating, and it would indeed be better if they stopped, this ain't THAT much when you think about it.
I don't think that is an objectively valid statement.
And heck, anyone drawing the sky before everything shouldn't expect GPU companies not to cheat IMO. It's just too unoptimal.
That's not the only cheat, Uttar, and if it could successfully be done in the general case, or were openly presented as an option (also discussed in the article, which is a balanced piece, IMO. EDIT: hmm... except that the Quack mention, while necessary, was significantly incomplete), it would indeed be more valid.
But that's "coulda, woulda, shoulda, if, but, maybe", Uttar, and still leaves the rest of what they did.
Hasty example:
You purchase a tool that says it offers a certain specification of...let's say...torque and rpm.
You come to find out that you can't use it well for tasks that should be within the capabilities of its rated spec.
The reason is that the manufacturer used a motor that delivered the rated spec only under specific circumstances, ones that are not realistic or representative of any common usage.
Do you blame yourself for not having the tools to immediately find out that the rated spec was not correct?
The store for allowing the tool to have that spec printed on the box?
The testing lab for not having enough power to dictate that everyone must adhere to their standards for specifications?
The manufacturer for circumventing the standards in ways that other manufacturers do not, in order to state a specification that is inflated?
This manufacturer has dedicated considerable resources to having people do the third.
You seem to propose that anything but the fourth is suitable, and in fact that the fourth is "vastly unjustified", even in view of what this manufacturer is dedicating resources to convince people to do.
Why is that?
EDIT: Fixed link.