nVidia vs Futuremark Continued - Guess what nVidia's doing!

For Dx, IHVs work very closely with MS during the specification of the API, so nothing that ends up in the API should really be a surprise; some discussion relating to PS2.0 had started before Dx8.1/PS1.4 had seen the light of day.

John.
 
John, that all is certainly true.

But following a spec doesn't mean that all items are created equal--unless performance measurements are included in those specs.

The fact of the matter is, you can accomplish the same thing several different ways using fully compliant DirectX code. Some paths will favor one set of hardware, some the other. In particular, the way you group primitives makes a large difference. (For example, 3dfx hardware had no problem with lots of texture changes, while every other piece of hardware did. It's perfectly legal code to change the texture all the time--but it isn't the best way to go for good performance.)
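Russ's point about primitive grouping can be sketched in code: sorting draw calls by texture before submission cuts redundant state changes on hardware that pays for them, while rendering exactly the same scene. This is a purely illustrative C++ sketch, not real D3D code; the `DrawItem` struct and the counting helper are invented for the example.

```cpp
#include <algorithm>
#include <vector>

// Hypothetical draw-call record: one textured primitive batch.
struct DrawItem {
    int textureId;   // which texture this batch samples
    int meshId;      // which geometry to submit
};

// Count how many times the driver would have to switch textures
// if the items were submitted in this order.
int countTextureChanges(const std::vector<DrawItem>& items) {
    int changes = 0, current = -1;
    for (const DrawItem& it : items) {
        if (it.textureId != current) {
            ++changes;
            current = it.textureId;
        }
    }
    return changes;
}

// Reorder draw calls so batches sharing a texture are adjacent --
// equally legal under the API, but far cheaper on hardware that
// stalls on texture changes.
void sortByTexture(std::vector<DrawItem>& items) {
    std::stable_sort(items.begin(), items.end(),
        [](const DrawItem& a, const DrawItem& b) {
            return a.textureId < b.textureId;
        });
}
```

With six items alternating between two textures, the unsorted order costs six texture changes and the sorted order only two, yet both produce identical output--which is exactly why neither ordering is "the" spec-mandated path.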

There is no shining path (not even in Peru) that is dictated by Microsoft--simply a set of APIs and expected rendered output. The recommendations and performance suggestions don't come from Microsoft but from competing IHV advocates who are attempting to make sure the world uses the path that best suits their hardware. Some of them are universal; others are more vendor-specific.
 
JohnH said:
For Dx, IHVs work very closely with MS during the specification of the API, so nothing that ends up in the API should really be a surprise; some discussion relating to PS2.0 had started before Dx8.1/PS1.4 had seen the light of day.

John.

You're right in an ideal world, but how often do you see ATi and nVidia pulling in the same direction?

If you look over the last few generations of hardware/DirectX, I don't think ATi and nVidia have ever both gone in exactly the same direction, and both companies have suffered from changes to the DirectX spec brought about (directly or indirectly) by the opposing IHV.

For example, I seem to remember the original Radeon having Vertex Shader 1.0 support, only to find that the DirectX spec was changed to 1.1 at the last minute. Then of course there was Pixel Shader 1.4, which nVidia seemed none too keen on, and most recently the omission of partial precision from the initial DirectX 9 release despite nVidia wanting it included.
 
RussSchultz said:
The fact of the matter is, you can accomplish the same thing several different ways using fully compliant DirectX code. Some paths will favor one set of hardware, some the other. In particular, the way you group primitives makes a large difference. (For example, 3dfx hardware had no problem with lots of texture changes, while every other piece of hardware did. It's perfectly legal code to change the texture all the time--but it isn't the best way to go for good performance.)

And that's why there's a range of tests in 3DMark, to exercise a range of rendering techniques with different elements. It's not just doing one test or one technique.
 
JohnH said:
The idea behind a standard API is that you code your HW to work well with it.
I won't argue with you on this because it's true.

Note that I never said that the default test should be anything other than standard API rendering paths. I only said that it would be interesting to see the results (performance and IQ fidelity, the latter of which should have the standard API path as the basis for comparison) of IHV-specific paths. And I never said that IHV-specific paths would, could or should contribute to officially acceptable scoring. Maybe we can have different scores, but with the standard API path's score as the only acceptable one?
 
Reverend said:
...
I think I said in some thread that I wished FM had implemented what I suggested to them when I read their audit report draft, which was not to use the word "cheat"--that should be left to the discretion and decisions of the public (and websites)--because I recognized the potential legal danger... but the audit report was scheduled to go public the day after I read the draft, so things were already in motion before anything I had to say carried any, er, importance. I even gave FM my edited draft, substituting all instances of the word "cheat" with descriptions of what was actually going on. I respected, though, that what to call it officially ("cheat" or statements of fact about what's actually happening) was entirely in FM's hands.

The result is not one which I wished for, for all parties concerned.


Rev, I never had a doubt that the company was split over how to handle this matter. Indeed, had the ones who wrote the cheat audit report been the ones making the decision, the company would not have folded like a house of cards simply on account of the threats nVidia's legal department was making during the bluff stage. As you know, no lawsuit of any description was ever filed, so nVidia got what it wanted through a simple bluff. Despite what some individuals in the company may have wanted to do, they were overruled by the people who wanted to fold, and so FM folded at the first hint of opposition from nVidia--on the mere threat of legal action; that's all it took. The official actions the company took had nothing to do with the opinions of the people within it who opposed giving up so easily. I certainly agree with them--but that is not what FM did; it folded on the basis of threats alone.

I disagree with your opinions about "cheating" versus "optimization." The differences in the English language are clear and profuse. When nVidia chopped down the benchmark workload by implementing clip planes, unrendered frame segments, etc., they were clearly "cheating" and not "optimizing" at all. I would not characterize everything nVidia did as a cheat relative to the benchmark, but these things are clearly cheats--the word "optimize" is a gross misnomer when applied to them.

For that reason I applaud the brave souls within the company who had the backbone to stand up for their software and defend it against this kind of wilful manipulation and attack--the ones who drafted the original audit report. It's just too bad they weren't the ones to make the final decisions (obviously.)
 
RussSchultz said:
John, that all is certainly true.

But following a spec doesn't mean that all items are created equal--unless performance measurements are included in those specs.

Making something that is in a spec perform well would seem to be a good idea; concentrating on the performance of something that isn't would seem counter-intuitive, unless you have some other agenda. Through a standard API you can only create a valid comparison using the features of that API; if there's no generic way of using a non-standard feature to enhance performance, then it probably shouldn't ever be benchmarked against.

The fact of the matter is, you can accomplish the same thing several different ways using fully compliant DirectX code. Some paths will favor one set of hardware, some the other. In particular, the way you group primitives makes a large difference. (For example, 3dfx hardware had no problem with lots of texture changes, while every other piece of hardware did. It's perfectly legal code to change the texture all the time--but it isn't the best way to go for good performance.)

Thing is, many optimisations apply to all HW, or if they don't benefit a given architecture they don't actually hurt it. PS1.1 vs PS1.4 complicates the whole paths-favouring-specific-vendors argument, but ultimately these are standard API paths.

There is no shining path (not even in Peru) that is dictated by Microsoft--simply a set of APIs and expected rendered output. The recommendations and performance suggestions don't come from Microsoft but from competing IHV advocates who are attempting to make sure the world uses the path that best suits their hardware. Some of them are universal; others are more vendor-specific.

I don't disagree with this; as I said above, many optimisations are good for all vendors and generally fit within a defined standard. However, if someone starts trying to push something that isn't in that standard as the way forward, is it really a good idea?

My main "beef" with this whole episode is that it doesn't help the industry move towards a good solid standard, which isn't good for anyone.

John.
 
Hanners said:
JohnH said:
For Dx, IHVs work very closely with MS during the specification of the API, so nothing that ends up in the API should really be a surprise; some discussion relating to PS2.0 had started before Dx8.1/PS1.4 had seen the light of day.

John.

You're right in an ideal world, but how often do you see ATi and nVidia pulling in the same direction?

If you look over the last few generations of hardware/DirectX, I don't think ATi and nVidia have ever both gone in exactly the same direction, and both companies have suffered from changes to the DirectX spec brought about (directly or indirectly) by the opposing IHV.

For example, I seem to remember the original Radeon having Vertex Shader 1.0 support, only to find that the DirectX spec was changed to 1.1 at the last minute. Then of course there was Pixel Shader 1.4, which nVidia seemed none too keen on, and most recently the omission of partial precision from the initial DirectX 9 release despite nVidia wanting it included.

Being pedantic, if "vendor-specific code paths" were changed to "Standard Version N code paths" then I'd see no problem with various VS/PS versions. Dx9 PS2.0 is a very clean standard, and parts from both major vendors support it, so what's wrong with using it? Why do we need to be pulled towards something else?

Partial precision was in the original Dx9 release--why do you think it wasn't?

John.
 
digitalwanderer said:
I agree that FM shouldn't have used the word "cheat" in the first document. They could have avoided giving nVidia any ammunition to use against them in the first place, and it wouldn't have been hard to get the gist across without using that particular word with all its various legal implications. :(

As I did with the Rev, I have to disagree. It seems to me the word "cheat" perfectly and accurately describes what nVidia did in certain respects relative to the benchmark, and I think this is why the people who drafted the report ignored the Rev's exhortations not to use it. There was probably no other monosyllabic word that would have described it better or more accurately.

Sure, they might have said "illegal optimization," possibly--but, let's face it, you cannot under any conceivable frame of reference call the insertion of clip planes (and the rest of the similar things they did, visible only when the camera goes off the track) "optimization" of any type. It was done specifically and expressly to reduce the benchmark workload below the default the benchmark mandates, simply to achieve higher scores, and thus to present the illusion that the hardware was running the default workload faster than it could in fact do so.
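For readers unfamiliar with the mechanism: a clip plane lets a driver discard everything on one side of a plane before rendering it. If the camera path is known in advance, a plane can be placed just outside what the scripted camera ever sees, silently skipping work the benchmark mandates--which is also why the trick falls apart the moment the camera moves off the track. A rough illustration, with the plane and point types invented for the sketch:

```cpp
#include <vector>

// A plane ax + by + cz + d = 0; objects with negative signed
// distance are "behind" it and get culled before rendering.
struct Plane { float a, b, c, d; };
struct Point { float x, y, z; };

float signedDistance(const Plane& p, const Point& q) {
    return p.a * q.x + p.b * q.y + p.c * q.z + p.d;
}

// Count how many objects survive a static clip plane. Tuned to a
// scripted camera, this quietly shrinks the workload; move the
// camera and the culled objects are visibly missing.
int visibleCount(const Plane& clip, const std::vector<Point>& objs) {
    int n = 0;
    for (const Point& q : objs)
        if (signedDistance(clip, q) >= 0.0f) ++n;
    return n;
}
```

The point of the sketch is the asymmetry: the rendered frames along the scripted path look identical, but the work done per frame is less than what the benchmark specifies.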

The definition of optimize is: to make as perfect, effective, or functional as possible.

The definition of cheat is: an act of deceiving.

The words are not even close to being synonymous. Indeed, this entire sad event has done little more than butcher the English language rather than clarify anything. Cheat is certainly the appropriate word to have used, as the original audit report demonstrated beyond doubt, IMO.
 
Reverend said:
JohnH said:
The idea behind a standard API is that you code your HW to work well with it.
I won't argue with you on this because it's true.

Note that I never said that the default test should be anything other than standard API rendering paths. I only said that it would be interesting to see the results (performance and IQ fidelity, the latter of which should have the standard API path as the basis for comparison) of IHV-specific paths. And I never said that IHV-specific paths would, could or should contribute to officially acceptable scoring. Maybe we can have different scores, but with the standard API path's score as the only acceptable one?

But don't you think the presence of IHV-specific paths leads to confusion in benchmarks? As you say, maybe if the results were generated in an un-reportable form--no idea how you'd do that; you know how things inevitably get massaged!

John.
 
WaltC said:
I disagree with your opinions about "cheating" versus "optimization." The differences in the English language are clear and profuse. When nVidia chopped down the benchmark workload by implementing clip planes, unrendered frame segments, etc., they were clearly "cheating" and not "optimizing" at all. I would not characterize everything nVidia did as a cheat relative to the benchmark, but these things are clearly cheats--the word "optimize" is a gross misnomer when applied to them.

Huh? I never said that I saw "cheats" as "optimizations" (or vice versa). I said that I saw the potentially hazardous (business-wise) legal implications if FM used the word "cheat" (officially) in their audit report and I suggested that it may be better to leave out the word "cheat" in all instances actually used in the official audit report, and leave it to the public/websites to determine if such were "cheats". I suggested that the audit report should stick to reporting the facts ("NVIDIA drivers are inserting clip planes") instead of using the word "cheat" ("NVIDIA drivers are cheating by inserting clip planes").

You misunderstood me. I called what NVIDIA drivers are doing as cheats, like so :

Rev via email said:
Personally, if FM wants to call this a "cheat" and mention it as many times as they have in this draft, they are entitled to it -- this is clearly a matter of declaring their position and views on this issue (and very clearly FM views this as "cheating", regardless of any other alternative word to describe it) and as a party who is no more than a beta-member, B3D really shouldn't be trying to influence FM's view of this by word substitutions.

However, my position is the same as FM's -- they're cheating; there are no two ways about it.

For that reason I applaud the brave souls within the company who had the backbone to stand up for their software and defend it against this kind of wilful manipulation and attack--the ones who drafted the original audit report. It's just too bad they weren't the ones to make the final decisions (obviously.)
These kinds of folks exist everywhere. Even folks that sit on the BOD of any one company.
 
JohnH said:
But don't you think the presence of IHV-specific paths leads to confusion in benchmarks? As you say, maybe if the results were generated in an un-reportable form--no idea how you'd do that; you know how things inevitably get massaged!
There can be no confusion if what I suggested is implemented and the person presenting the analysis in a publicly-readable form knows exactly what is going on.

Will you doubt any reviewer that benches DOOM3 and presents his results in some media form? How do you know he didn't make a mistake and presented charts that really are ARB paths instead of ATI or NV paths (or vice versa)?

The fact is that Carmack et al. are doing it because the options are there. Benchmarking DOOM3 should be no different from benchmarking something like 3DMark03... because "benchmarks" of any kind, no matter how they were originally perceived (as a game or an actual benchmark), are used for one reason--to determine which card runs something (game, benchmark, whathaveyou) best given the choices available--and these "benchmarks" (game or synthetic) are presented in the media and hence almost universally accepted as a basis for purchasing decisions.

For someone working for an IHV, you don't need something like 3DMark03 to tell you if your particular piece of hardware is performing well in a "standard API test"... you can make your own benchmarks, I'm sure.

I don't disagree that a standard exists and that it should be followed. As long as someone knows that something is performing with results XYZ running the standard way of doing things, there should be no confusion if other options are available and exercised. It is the application's duty to inform, and inform in a very distinct way, why there are such options and which option is the one that matters for the very purpose of that application's existence.
 
Reverend said:
Huh? I never said that I saw "cheats" as "optimizations" (or vice versa). I said that I saw the potentially hazardous (business-wise) legal implications if FM used the word "cheat" (officially) in their audit report and I suggested that it may be better to leave out the word "cheat" in all instances actually used in the official audit report, and leave it to the public/websites to determine if such were "cheats". I suggested that the audit report should stick to reporting the facts ("NVIDIA drivers are inserting clip planes") instead of using the word "cheat" ("NVIDIA drivers are cheating by inserting clip planes").

You misunderstood me. I called what NVIDIA drivers are doing as cheats, like so :

Rev via email said:
Personally, if FM wants to call this a "cheat" and mention it as many times as they have in this draft, they are entitled to it -- this is clearly a matter of declaring their position and views on this issue (and very clearly FM views this as "cheating", regardless of any other alternative word to describe it) and as a party who is no more than a beta-member, B3D really shouldn't be trying to influence FM's view of this by word substitutions.

However, my position is the same as FM's -- they're cheating; there are no two ways about it.


Heh-Heh...;) I guess we've misunderstood each other...I didn't think that you agreed that the things nVidia did were "optimizations" instead of cheats--heck, I read your comments out here at the time it happened and you were very clear about your position that they had in fact cheated.

Where we disagree is that I think it's poor communication to use several words to say what you can say with a single word...;) I think you are putting too much emphasis on the word "cheat," which is only used a few times in the audit report--by contrast there are hundreds of words in the report other than "cheat" which illustrate why it was perfectly accurate to use it in the first place. I.e., FM did not allege a cheat without proof; it stated a cheat had occurred, and the bulk of the report was dedicated to proving it. The strength of the cheating pronouncement lay not in the use of the word "cheat" but in all of the other words the audit report used to prove it.

I think that had FM simply said "nVidia is cheating our benchmark" and left it at that--I would agree with you. But since 98% of the audit report was dedicated to illustrating the problem with words other than "cheat" I think calling a spade a spade here was A-OK.

Take for instance the fact that although the wording has been changed to "application optimization," nobody on any website anywhere has concluded, "Oh, well, nVidia wasn't cheating after all"--have they (intelligent websites, of course...;)? So if FM had left out the word "cheating" it would never have stopped anyone else from using it, right? That's because the strength of the argument comes from all of the other words apart from "cheat" that FM used in its audit report. I think they were right to use the word "cheat" because they proved their point beyond question. I would agree with you had they not been able to do so.

These kinds of folks exist everywhere. Even folks that sit on the BOD of any one company.

But my point is only that regarding this particular case it was unfortunate that the controlling interest in the company didn't stick to its guns. The problem is that whatever nVidia erroneously believes it may have accomplished by slapping FM around--it has not accomplished anything that I can see. The issues raised by the audit report remain and have not been resolved. nVidia has yet to coherently acknowledge them. Thus, whether FM elects to malign the word "optimization" or not--it's still wide public knowledge (among the group that cares) that nVidia did indeed cheat.

nVidia is the only one with the power to change that perception looking toward the future--and that will only happen when the company stops running from the issue and addresses it cogently. As long as nVidia continues to cheat the issue will dog the company relentlessly, I believe.
 
John, you're completely missing my point.

Code that is completely valid and to the spec in DirectX, PS2.0, or whatever can perform quite differently on two different architectures.

To repeat: I'm not talking about vendor specific instructions, or anything non-standard or that even approaches going outside the specification.

In that case, what is the "vendor neutral" path to use?
 
It's called 'divide & conquer' & the best way to do that is from w/in 'the fold'. ;) NV seems to know that lesson well. "Keep your friends close & your enemies closer". :devilish:

FM is a business intent on making $$$ so the 'top dogs' can buy new cars, homes, saunas, etc. Keep that in mind & all will become self-evident, IMO. :rolleyes:

You're being used for your 'services' B3D, & when those become un-needed (or unwanted) you'll be relegated to a 'silent' BETA partner, IMO. I think you're seeing that already. You & ET were brought in as 'the muscle', & now that there is a 'truce' ... ;) The fact that the word 'cheat' was used should have opened your eyes on that > you weren't asked into the BETA program for your expertise on legal wording issues, but for your expertise in discovering 'cheats'. Since there are no more cheats ... ;)

My worthless .02,
 
just me said:
It's called 'divide & conquer' & the best way to do that is from w/in 'the fold'. ;) NV seems to know that lesson well. "Keep your friends close & your enemies closer". :devilish:

What about your *customers*, though?...;) This is what I think nVidia should be concerned about--not the FM employees.

FM is a business intent on making $$$ so the 'top dogs' can buy new cars, homes, saunas, etc. Keep that in mind & all will become self-evident, IMO. :rolleyes:

Which won't happen if their customers lose faith in the credibility of their products...right? This is what FM should be pondering.

You're being used for your 'services' B3D, & when those become un-needed (or unwanted) you'll be relegated to a 'silent' BETA partner, IMO. I think you're seeing that already. You & ET were brought in as 'the muscle', & now that there is a 'truce' ... ;) The fact that the word 'cheat' was used should have opened your eyes on that > you weren't asked into the BETA program for your expertise on legal wording issues, but for your expertise in discovering 'cheats'. Since there are no more cheats ... ;)

My worthless .02,

I think the guys at B3D can manage OK, for their problems, whatever they may be, are not those of either nVidia or FM...;)
 
WaltC said:
Which won't happen if their customers lose faith in the credibility of their products...right? This is what FM should be pondering.
Who are the biggest clients ;)? ATi and nVidia card makers? 8)
 
Reverend said:
JohnH said:
But don't you think the presence of IHV-specific paths leads to confusion in benchmarks? As you say, maybe if the results were generated in an un-reportable form--no idea how you'd do that; you know how things inevitably get massaged!
There can be no confusion if what I suggested is implemented and the person presenting the analysis in a publicly-readable form knows exactly what is going on.
Of course, given a uniform level of intelligence and integrity among reviewers this works fine; otherwise anything can be quoted out of context.

Will you doubt any reviewer that benches DOOM3 and presents his results in some media form? How do you know he didn't make a mistake and presented charts that really are ARB paths instead of ATI or NV paths (or vice versa)?
Well, I'd certainly question anything that is at odds with my expectations; other than that, see the above comment.

The fact is that Carmack et al. are doing it because the options are there. Benchmarking DOOM3 should be no different from benchmarking something like 3DMark03... because "benchmarks" of any kind, no matter how they were originally perceived (as a game or an actual benchmark), are used for one reason--to determine which card runs something (game, benchmark, whathaveyou) best given the choices available--and these "benchmarks" (game or synthetic) are presented in the media and hence almost universally accepted as a basis for purchasing decisions.
Even JC has stated that he'd prefer to use a single path for all HW. You'd have to ask him, but I guess he was only forced down the multiple-paths route due to the lack of a ratified standard at the time he was doing most of the dev work. Most ISVs you ask will say they'd prefer a single path.

For someone working for an IHV, you don't need something like 3DMark03 to tell you if your particular piece of hardware is performing well in a "standard API test"... you can make your own benchmarks, I'm sure.
It's very easy to determine how individual parts of an API will perform on your HW; however, interactions resulting from all the various ways an ISV may combine those features often uncover inefficiencies in the way the HW is being driven (by the driver). Basically, the more (well-written) benchmarks are available to us the better.

I don't disagree that a standard exists and that it should be followed. As long as someone knows that something is performing with results XYZ running the standard way of doing things, there should be no confusion if other options are available and exercised. It is the application's duty to inform, and inform in a very distinct way, why there are such options and which option is the one that matters for the very purpose of that application's existence.
Agree.

John.
 
RussSchultz said:
John, you're completely missing my point.

Code that is completely valid and to the spec in DirectX, PS2.0, or whatever can perform quite differently on two different architectures.

To repeat: I'm not talking about vendor specific instructions, or anything non-standard or that even approaches going outside the specification.

In that case, what is the "vendor neutral" path to use?
I don't disagree that you can get surprising performance differences on different HW from a valid piece of code. However, often the reason why something runs badly on architecture A vs B is that the code is actually doing something stupid, and fixing it for B will make A run faster as well. A good example is locking the back buffer: it's very bad for a TBR but, although not good, isn't so much of a problem on an IMR--remove the lock and both go faster. The fact is most sensible optimisations work across all architectures.

That said, where architectures offer widely different sets of features it probably isn't possible to generate a vendor-neutral code path that gets the best out of all HW. So I guess we end up with specific paths, but they should be selected, where possible, by exported capabilities, not device ID.
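John's suggestion--selecting paths from exported capabilities rather than device ID--might be sketched roughly as follows. The capability bits and path names here are invented for the illustration; real code would query something like the caps structure a driver exposes, never a vendor or device ID.

```cpp
#include <cstdint>
#include <string>

// Invented capability flags, standing in for what a driver exports.
enum Caps : std::uint32_t {
    CAP_PS_1_1 = 1u << 0,
    CAP_PS_1_4 = 1u << 1,
    CAP_PS_2_0 = 1u << 2,
};

// Pick the best shader path from what the hardware reports it can
// do, not from who made the chip. Any part exposing the same caps
// gets the same path.
std::string selectShaderPath(std::uint32_t caps) {
    if (caps & CAP_PS_2_0) return "ps_2_0";
    if (caps & CAP_PS_1_4) return "ps_1_4";
    if (caps & CAP_PS_1_1) return "ps_1_1";
    return "fixed_function";
}
```

The design point is the one John makes: a fallback chain keyed on capabilities stays valid when a new device ships, whereas a device-ID switch has to be patched for every new chip.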

To be honest, my main concern is at a lower level, where, for example, ISVs appear to be being forced to generate different VS and PS code paths for different devices that supposedly expose the same capabilities. This is nothing but bad for the industry and only moves us further away from the justifiable goal of unified code paths...

Could probably go on about this indefinitely, but I need to do some work!

Later,
John.
 
Evildeus said:
WaltC said:
Which won't happen if their customers lose faith in the credibility of their products...right? This is what FM should be pondering.
Who are the biggest clients ;)? ATi and nVidia card makers? 8)

Heh-Heh...;) Good point. But what I meant is that if the general 3D-chip-buying public no longer believes the benchmark has any merit, ATi and nVidia will both tire of the program, as they will be achieving nothing with it.
 