ExtremeTech Article Up

I think there are several ways for nVidia to address this consistent with their past behavior:

  • Call attention to an instance of cheating by their competitors.
    The most effective tactic that occurs to me would be to bring up Trident and/or SiS, and then implicate their most significant competitor (ATI) by mentioning Quack and/or some characteristic of their drivers, real or implied. (With the recent 3DMark03 performance increase in the Cat 3.4, it is possible that ATI has started following nVidia down this road to compete in the "big number" race; if so, nVidia may have at least something minor to magnify for distraction and to shift the burden of proof onto someone else.)
    The success of this depends on their creativity and what they are able to find for distraction.
  • Blame 3DMark03.
    This seems to be their set course, followed in particular by Kyle Bennett, in disappointing contrast to his behavior over Quack. It disgusted me to have defended him on Quack only to find out that nVidia had indeed aimed him, handing him the program and the specific scenes with the affected textures to present as if they were universal, and it disgusts me that he displays the same inconsistency even more dramatically right now, on top of other dramatic displays in the recent past. Also, the rather personal and prevalent forum editing at the HardForums seems focused on dismissing the validity of points under labels like "flamebait", "trolling", and "linking without providing attribution", with threads closed at Kyle's discretion. That lends itself to purging strong disagreement, by imposing a cost in time and effort, while leaving avenues for purging open under concepts defined solely by Kyle.
    The problem is that Kyle has already established that he values his preconceptions and bias (apparently a personal bias rather than a financial one, the only redeeming factor I know of) over objective presentation, and his final stance will depend on the demands of his self-perception and ego in that regard (i.e., he might come out against nVidia, but he'll persist in his own concurrent agenda while doing it).
    (I wonder if Kyle will stop by today? I'd enjoy a discussion at length with him once I get back from my doctor's appointment.)
    The only direct response to this (i.e., if it turns out to be the main factor of nVidia's success), given people's eagerness to be led in what to believe, is to highlight occurrences of this in games, and to ask about nVidia's penchant for providing timedemos (the performance figures for the NV30 launch, trying to provide the demo for the Doom 3 showing, etc.).
    Since they seem to have at least HardOCP already applying a double standard between IHVs, this seems like their most promising avenue, based on what their past behavior indicates they are willing to do.
  • Find another method of performance increase that isn't so damning.
    It doesn't have to be hard to figure out, only less visible, and it has to maintain appealing benchmark results.
    With the time delay in print magazines, this could easily be enough to FUDify exposure there, which is where the impact would be most damaging. It might not work too well with Ziff Davis publications, but then again there is some financial pressure available to apply as encouragement. A journalistic integrity ethic could easily defeat this, depending somewhat on whether they try honey or vinegar for the FUD, but some publications don't have much of one apparent (what was the name....Steve...something-or-other...? :-?).
    Might also not work too well on the internet, but the integrity ethic for each site depends on relatively few people.
    Might easily work for television (Tech TV, for example, might conceivably cover this), since the medium is more ephemeral and allows more "base covering" with regard to saying "something" about it, but softly. This also depends on personal integrity.
    This works out well from their perspective, depending on personal integrity, since, aside from Ziff Davis (to my knowledge), most big news companies don't cover this market with this level of focus and technical savvy. It really boils down to how nVidia handles them.
  • Come clean, or at least appear to.
    This is their most effective tactic in combination with one of the above, if they are willing to do so. It depends on ego, really, since it seems to me the most obvious way to try to control the spin of the fallout from this. There is a whole lot of creativity they could apply to leverage at least superficially doing this alongside other approaches, to deflect the actual brunt of the loss of trust.
    There is, of course, the possibility that they are "clean" wrt this issue. I just don't think my observation of recent history leaves much room for that.

Only the last option, taken by itself, could possibly manifest what I'd evaluate as "class" (actually, I'd term it "sanity").
This last is actually my view of what ATI did with "Quack": in actuality, the reduction in detail applied only to specific textures (though HardOCP's presentation encouraged thinking otherwise :( ), and the performance gain came primarily from engine-specific (not just timedemo-specific) optimizations that are still in use (though no longer applied by executable name), rather than from the reduction of detail in those textures.
 
Ichneumon said:
OpenGL guy said:
Reverend said:
1) Run a game benchmark using a recorded gameplay demo
2) After running the game benchmark, play the game at the same scene as the recorded demo and check if the benchmark results translate into real gameplay results, with the reviewer varying his POV both slightly as well as wildly, to check for inconsistencies
You can simplify this by just creating custom demos for each game you want to benchmark. It would make results between sites hard to compare though.

I had just been about to post the same thing myself.

While it makes result comparisons difficult, reputable reviewers/sites can always have their own set of demos for the games they bench, which they do not make publicly available.

Heck, they can have one or more they don't make available and one or more they do, or whatever... then anything like this would become rather obvious, as any performance anomalies due to "optimizations" in the publicly available demos would be readily apparent against the in-house demos.

While I definitely understand Rev's take on things, it seems a little on the extreme side... reviewers just need to have their own set of demos that they know well, used in conjunction with other more readily available demos, so they can keep their finger on performance gains vs. anomalies.
Well then, the question now is who are the reputable/respected reviewers and does *everyone* agree with such an illustrious list? :)

Sure, I can record a game demo to use for benchmarking and also make it available for download publicly, while also recording another demo with minute changes in POV compared to the public one, the latter to verify that no IHV has messed around. But it is just a terribly depressing thing to think about, that we reviewers now have to do this. The trust is lost.
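For what it's worth, the public-vs-private demo comparison described here could be sketched in a few lines. This is purely an illustrative sketch: the function names, the sample numbers, and the 10% threshold are assumptions for demonstration, not anyone's actual review methodology.

```python
# Hypothetical sketch: flag a suspicious gap between a public demo's
# results and an unreleased in-house demo of a near-identical scene.
# The 10% threshold is an illustrative assumption.

def anomaly_ratio(public_fps: float, private_fps: float) -> float:
    """Relative speedup of the public demo over the private one."""
    return (public_fps - private_fps) / private_fps

def looks_optimized(public_fps: float, private_fps: float,
                    threshold: float = 0.10) -> bool:
    """True if the well-known public demo runs suspiciously faster
    than the near-identical private demo (more than `threshold`)."""
    return anomaly_ratio(public_fps, private_fps) > threshold

# A driver that special-cases the public demo vs. normal run variance
suspicious = looks_optimized(public_fps=180.0, private_fps=140.0)
normal = looks_optimized(public_fps=143.0, private_fps=140.0)
```

The point is only that the private demo serves as a control: a driver can't special-case a demo it has never seen.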

Would anyone oppose the idea of using my own recorded demos for benchmarking in video card reviews that I won't make available for download? Would anyone question me? Or Dave? How important is it that these demos be made publicly available for download, for the public to verify my own results? Should I have to point out to someone who doesn't know about this NV/3DMark03 incident (it's possible... they could be new to the Internet) the reason why I can't/won't provide him with such recorded game demos if he asks? Should every review be prefaced with a sentence like "We're using a custom recorded gameplay demo that we will not provide to the public for <link to this issue... and there has to be a few links! :) >the following reason</link>"? It is just depressing and shouldn't have been necessary. NVIDIA and 3DMark03 are the unfortunate debut examples for the whole concept of trusting reviews. I am more worried about this in general terms than about "NVIDIA" and "3DMark03" being the keywords.
 
Rev,
While I (and a few others) may have issues with your posture concerning nVidia as a whole, I would never believe for one second that you have any bias where reviewing is concerned. In fact, the same goes for everyone here at B3D....hell, that's why I'm here. :D
 
Actually, after checking around a few of the well-known websites, I'm left wondering if NVidia haven't already 'gotten away with it'.

ExtremeTech have 'called out' NVidia but have not yet received a response. B3D have noted the image errors and are also awaiting a response from NVidia.

HardOCP have posted the news story, but given it an incredibly pro-NV spin with mud-slinging and other distraction techniques. NVNews have noted the story but then given no opinion, except to ask whether the pictures should have been released in the first place ( :?: ). The Inquirer has mentioned the ExtremeTech story.

On the other hand, well-known sites such as Anand, Tom, FiringSquad, xbit and the Register haven't even mentioned this on their news pages! Surely this is, at the very least, newsworthy?

Perhaps NVidia are using their influence to keep some of these sites quiet in the hopes of riding out the storm? It will be interesting to see how NVidia responds to ET's article and B3D's questions.
 
Rev,
If you've had a chance to look at my suggestion, consider that you could make new timedemos with such a tool for each new review (or maybe just each new driver set), making them downloadable at the same time.
What remains nasty is that measures like preventing disk access, or even rebooting, might also be necessary if the drivers are programmed to watch for recording in such a "demo creation" mode of a benchmark tool and to build new timedemo culling lists while it runs. :!: :-?

(off any minute, back tonight...so I won't be responding further for a bit).
 
Dunno if it's been mentioned before (7 pages are difficult to get through in one lunch break ;) ), but my take on this is that reviewers should place less emphasis on raw numbers and more on the feel of a game during gameplay at different resolutions/detail settings. Most reviews these days (especially in printed media) base their "score" almost solely on whether a card beats the previous benchmark champion, which to me is nonsense given the heated debate on the legitimacy of optimisations etc.

I know that the "feel" of a game is subjective, and therefore direct comparisons between different systems might be difficult, but surely some numbers plus good-quality descriptions of how it looked and felt during gameplay would give consumers a much better appreciation of the strengths and weaknesses of a particular card.

As an example, if card X had a higher benchmark score in UT2003 but card Y gave a smoother experience overall (higher minimum fps etc.), then to me card Y would be the better choice.
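That "smoothness over averages" idea can be put in numbers, for instance by reporting minimum fps and a slow-frame percentile alongside the average. A minimal sketch with made-up frame-time data (all names and figures here are illustrative, not measurements from any real card):

```python
# Hypothetical sketch: summarize a benchmark run by minimum fps and a
# worst-case frame-time percentile, not just the headline average.
# Frame times are assumed to be in milliseconds.

def summarize_run(frame_times_ms: list[float]) -> dict[str, float]:
    fps = [1000.0 / t for t in frame_times_ms]
    n = len(frame_times_ms)
    avg_fps = n * 1000.0 / sum(frame_times_ms)
    # Roughly the 99th-percentile (near-worst) frame time
    p99_ms = sorted(frame_times_ms)[int(0.99 * (n - 1))]
    return {"avg_fps": avg_fps, "min_fps": min(fps), "p99_frame_ms": p99_ms}

# Card X: high average but occasional 50 ms stutter spikes (20 fps dips)
card_x = summarize_run([10.0] * 95 + [50.0] * 5)
# Card Y: lower average but perfectly steady pacing
card_y = summarize_run([14.0] * 100)
```

In this toy data, card X wins on average fps while card Y never dips, which is exactly the UT2003 scenario described above: the raw average alone picks the wrong card.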
 
BenSkywalker said:
I have the drivers (44.03) installed now and am getting the 'no clip' artifacts in the sky of Serious Sam (the first one, not SE). I have a screenshot if anyone can host it.

ssclip.jpg


MuFu.
 
Reverend said:
Would anyone oppose the idea of using my own recorded demos for benchmarking in video card reviews that I won't make available for download? Would anyone question me? Or Dave? How important is it that these demos be made publicly available for download, for the public to verify my own results? Should I have to point out to someone who doesn't know about this NV/3DMark03 incident (it's possible... they could be new to the Internet) the reason why I can't/won't provide him with such recorded game demos if he asks? Should every review be prefaced with a sentence like "We're using a custom recorded gameplay demo that we will not provide to the public for <link to this issue... and there has to be a few links! >the following reason</link>"? It is just depressing and shouldn't have been necessary. NVIDIA and 3DMark03 are the unfortunate debut examples for the whole concept of trusting reviews. I am more worried about this in general terms than about "NVIDIA" and "3DMark03" being the keywords.

I have no problem at all with this, Rev, for the reasons others have stated above. Nobody questions your bias. In fact, it could be an opportunity to "customize" some benchmarks to target certain extreme circumstances when reviewing a new card.

-Chris
 
Disregard this post, I was just being blind.


Another thing: the news post regarding the ET article seems to have been removed at www.futuremark.com. I find that a bit disturbing; does that mean that Futuremark has decided to ignore the whole issue?
 
Well, I believe two major games/benchmarks showing such issues should be sufficient to consider it a very major issue.

Considering the screenshot MuFu just posted, I'm updating my POV to "very major issue, nVidia is cheating badly and should be ashamed of themselves."

Bad nVidia, bad!


Uttar

EDIT: Replaced "games" by "games/benchmarks" for clarity.

Also, Rev, it's fairly obvious that using private recorded demos is 100% fine. It wouldn't be so fine from sites that have no idea what they're talking about, but from B3D, it's a really good thing :)
 
Tim said:
Another thing: the news post regarding the ET article seems to have been removed at www.futuremark.com. I find that a bit disturbing; does that mean that Futuremark has decided to ignore the whole issue?

Just go to FutureMark's home page, and under the "Latest News", click on "All News Headlines". Check the headlines for Thursday: "Driver Irregularities"
 
Joe DeFuria said:
Just go to FutureMark's home page, and under the "Latest News", click on "All News Headlines". Check the headlines for Thursday: "Driver Irregularities"

Sorry, I don't know how I missed that. I even went back to double-check before I made the post and still couldn’t find the news post.

It is a good thing that I have an appointment at the optician tomorrow, I really need new glasses.
 
A few years back, ZD's 3D WinBench was invalidated in the eyes of many by an IHV optimising their driver for the benchmark at the expense of 'real' application performance. Anyone care to name the IHV?

Basically nothing new here, can we move on now please ?
 
JohnH said:
Basically nothing new here, can we move on now please ?
The fact that it isn’t anything new does not make it any better; in fact, it makes the whole issue worse. The problem has existed for years; it is about time something was done about it.
 
JohnH said:
A few years back, ZD's 3D WinBench was invalidated in the eyes of many by an IHV optimising their driver for the benchmark at the expense of 'real' application performance. Anyone care to name the IHV?

Basically nothing new here, can we move on now please ?

Sorry, but in my eyes WinBench wasn't invalidated--the products of the companies who cheated on it were. I see no difference today. All benchmarks can be cheated. Period. That does not invalidate them; it invalidates the companies who cheat them, IMO.
 