Gamersdepot has a review using 3DMark03 and the 53.03 drivers

worm[Futuremark] said:
{Sniping}Waste said:
Nick put a thread in the Futuremark 3DMark03 forum for people to post sites that are using unapproved drivers, so others can help you out.
That might help us to find more reviews using non-approved drivers with 3DMark03, but I am still sure that we need to prevent the use of non-approved drivers with 3DMark03 rather than waiting for reviews to be posted and then contacting them. Don't you agree?
YES! But we've kind of been telling you that for a few months now and nothing has been done about it, nothing at all. :(

Too little, too late; and ya have no one to blame but yourselves and the big, evil "N". :(
 
MrGaribaldi said:
worm[Futuremark] said:
Bouncing Zabaglione Bros. said:
All the more reason to put this function directly into the application instead of just crossing your fingers and hoping people will follow the rules. Heck, your own 3DMark members don't follow your guidelines - maybe you should start cleaning that up before worrying about reviewers?
Putting some new anti-detect system into 3DMark would be possible, but as we have discussed several times before, there is no 100% foolproof system. :? Until someone comes up with a perfect solution which works 100%, it is not a very feasible option. If 3DMark would detect driver versions and inform the user if the driver is approved or not, it would mean that we would need to patch 3DMark every time a new driver is released and approved..

Not necessarily...

3DMark already uses internet access to publish the results from testing. Why not have it download a list of approved/disapproved drivers when it encounters an unknown one?

So when you start 3DMark you'll get a window telling you that 3DMark doesn't recognize the driver being used and that it wants to connect to the internet to download the latest list of drivers from Futuremark's site.

If the computer doesn't have net access, let the user download the list manually and copy it onto the test computer. You'd have to encrypt the list, of course, to avoid any tampering.

The only problem with such a solution would be that you'd have to keep abreast of every new driver being released, but you already do that.

This would also help inform your users about the problems with cheating drivers.
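A minimal sketch of the kind of check being proposed here, written in Python purely for illustration; the URL, file names, and list format are all hypothetical, not anything Futuremark actually shipped:

import json
import urllib.request

# Hypothetical driver-validation flow: look the driver up in a local list,
# refresh the list from the benchmark vendor's site when the driver is
# unknown, and fall back to a manually supplied copy with no net access.

LIST_URL = "https://example.com/approved_drivers.json"  # hypothetical URL
LOCAL_LIST = "approved_drivers.json"

def refresh_list() -> dict:
    """Download the latest list; fall back to asking for a manual copy."""
    try:
        with urllib.request.urlopen(LIST_URL, timeout=10) as resp:
            data = resp.read()
    except OSError:
        path = input("No net access - path to a manually downloaded list: ")
        with open(path, "rb") as f:
            data = f.read()
    with open(LOCAL_LIST, "wb") as f:
        f.write(data)  # cache for the next run
    return json.loads(data)

def driver_status(vendor: str, version: str) -> str:
    """Return 'approved', 'not-approved', or 'unknown' for a driver."""
    try:
        with open(LOCAL_LIST) as f:
            # e.g. {"nvidia": {"52.16": "not-approved", "44.03": "approved"}}
            drivers = json.load(f)
    except FileNotFoundError:
        drivers = refresh_list()
    status = drivers.get(vendor, {}).get(version)
    if status is None:
        # Unknown driver: fetch the newest list before giving up.
        status = refresh_list().get(vendor, {}).get(version, "unknown")
    return status

The benchmark would then show the warning window (or watermark the result) whenever driver_status() returns anything other than "approved".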

This is the second time this has been suggested on B3D - and it is such a good idea.

The most important part is it informs the 3DMark user which drivers are/are not acceptable before they run the benchmark.


Worm,

While I applaud your current efforts (benchmarking best practices/acceptable rules), you've been very reactive in this situation, acting only when it's too late and the invalid review scores/results are already online.

This 'driver validation' before the benchmark would be a useful proactive step to increase awareness (of 3DMark users) and stop actual illegal so-called "optimisations" (driver cheats).

The disadvantage of requiring an Internet connection (for driver validity checks) is much less of a problem now, as many people have 'always-connected-24-hours-a-day' broadband internet connections.

People don't view updates as such a problem as they did in the past.

They are already automatically downloading MS Windows Updates and virus definition updates almost every day. Requiring an Internet connection for a 'driver validation check' in 3DMark03 is no longer the problem it would have been in the days of slow dial-up internet connections.
 
What I would like to see is a list of detected cheats in every driver that is not approved. It's getting annoying, the same old: "We can't use 52.xx because of cheats in the PS2.0 test, so we are using a later driver"
 
digitalwanderer said:
YES! But we've kind of been telling you that for a few months now and nothing has been done about it, nothing at all. :(
Have you checked the ORB lately? We have clearly posted information about approved drivers there, and the results are also "labelled" either as non-approved or approved. Also the search parameters have FM-Approved as default. Also, we did send out emails to our press contacts about the 3DMark03 Reviewers Guide and about Approved drivers. We also contacted some sites about their reviews. Still think we have done nothing? As far as I can see, we are the only ones even trying to make the situation better.

What we will do in the near future about reviewers using non-approved drivers is still in the works. If all goes well and as planned, we'll have something done this week. Collecting all possible ideas, reviewing them etc. takes time and resources. Really, things don't happen as quickly as you like to think. We don't want to rush out with some half-semi-maybe-working solution which we then need to revise after 2 weeks of use.

An "autoupdater" to 3DMark sounds nice, and would be great IMO, but still it would require "extra work" for reviewers to download any possible updates, and AFAIK they don't even have their systems connected to the net while testing. Now, the real problem has been, or is, that some feel that reverting to older (approved) drivers is too much work. Ok, so what good does the autoupdater then do? Inform that "hey, you are using non-approved drivers"? Nice and dandy, but if the reviewer isn't going to use the approved drivers anyhow, what good would it really do? I am just asking as I know that creating such a feature to the existing 3DMark series would require quite a lot of work. Would it really make a difference how reviewers look at using 2 sets of drivers? Sure they might be kept informed more often than some monthly newsletter (or something), but would it change the end result? :? I personally would like to have an autoupdater for 3DMark and PCMark. You know, update the system info etc. whenever needed, and possibly something else, but at the end of the day it is up to the reviewers what they do and use. We can only inform them so much, and the rest is totally up to them.

vb said:
What I would like to see is a list of detected cheats in every driver that is not approved. It's getting annoying, the same old: "We can't use 52.xx because of cheats in the PS2.0 test, so we are using a later driver"
If any drivers have 3DMark03 specific optimizations, they are already against the Optimization Guidelines. If you want more information about the guidelines, check this out:

http://www.futuremark.com/companyinfo/Enforcement_Process.pdf
 
worm[Futuremark] said:
Have you checked the ORB lately? We have clearly posted information about approved drivers there, and the results are also "labelled" either as non-approved or approved.
Last time I checked the ORB all I noticed was a "WHQL" or "driver status uncertain" in a tiny little box that I had to hunt for. :(

Also the search parameters have FM-Approved as default. Also, we did send out emails to our press contacts about the 3DMark03 Reviewers Guide and about Approved drivers. We also contacted some sites about their reviews.
Woo-hoo. :rolleyes: The sites you e-mail are probably the big3 and I've heard of more sites that you have NOT contacted about their reviews compared to ones you have. :(

Still think we have done nothing?
Yup.

As far as I can see, we are the only ones even trying to make the situation better.
Well, I suggest your company pulls its head out of its anal orifice so they can see a bit better. :rolleyes:

Seriously Worm, it's getting to be a bad joke and you're trying to say "Well give us a little time, things don't change overnight"....when you should be saying either "Things don't change" or "We'll look into it real soon".

Contact me when you guys give as much of a shit about your benchmark as I used to, then I might be interested in hearing what you have to say. :devilish:
 
digitalwanderer said:
Last time I checked the ORB all I noticed was a "WHQL" or "driver status uncertain" in a tiny little box that I had to hunt for. :(
Hmm, interesting, as all the personal feedback I have got has been more than good. You are the first one to say that you need to hunt for it.. But there will be a better info box very soon. Maybe even today?

digitalwanderer said:
Woo-hoo. :rolleyes: The sites you e-mail are probably the big3 and I've heard of more sites that you have NOT contacted about their reviews compared to ones you have. :(
Err, no. I am not sure how many sites (or which ones) and magazines the email was sent to last time, but I think there were more than plenty. I am still surprised that so many sites didn't get it, so I have been collecting a bunch of sites to be added. Hopefully we'll have even better coverage next time.
 
My apologies in advance, Worm, since I don't intend anything mean/rude, but I simply cannot ignore the funnay.

worm[Futuremark] said:
Err, no. I am not sure how many sites (or which ones) and magazines the email was sent to last time, but I think there were more than plenty.
(Bolding mine)
So, is "more than plenty" equal to "a lot" or "a whole bunch"...these technical terms confuse me. :|
 
TO WORM

Irrespective of what YOU THINK, your benchmark is becoming a laughable commodity.

It's your livelihood, for Christ's sake - the following steps should do it:

STEP 1. DO NOT ALLOW ANY UNOFFICIAL DRIVER SCORES TO BE PUBLISHED FULL STOP

STEP 2. WATERMARK YOUR PRODUCT TO SAY WHEN UNOFFICIAL DRIVERS ARE USED

It's that simple really, or can't you do this due to nvidia? Seriously, everybody here would like to know why you can't do the above two simple steps.
 
Actually, the more I think about this whole fiasco...

Why do Futuremark allow nvidia to run the benchmark AT ALL?
Nvidia have already stated they will continue to CHEAT (they use the word OPTIMISE) while ATI don't.

I'm seriously awaiting a reply from worm as to why NVIDIA have any official drivers at all -- FM themselves say the only official nvidia driver optimises for the PS 2.0 test.

If nvidia can't produce a clean official driver, then surely, under the terms of FM's EULA, they can't run the bench FULL STOP.

People are waking up to the fact that NVIDIA IS DESTROYING THIS BENCHMARK

Why isn't FM????
 
I don't have any answer for the problems here, only questions -

How long do you think a benchmark will survive in regular use that disallows running comparative tests on hardware by any of the top IHVs - given that the purpose of its existence is to show comparative performance?

Given the reactions to events that have been seen so far, how likely do you think it is that people will choose to point the finger at anyone other than Futuremark if their benchmark will not run on any of the major IHV's hardware?

Where is the finger of blame for this situation being pointed currently? People keep saying that 'something must be done about the situation', but at the same time who are they saying should be doing something? Is this the right target in terms of blame, and if not then why are they being targeted at all?

Who is it that seems to be taking the brunt of the blame for the fact that question marks have been raised over the validity of the results from the benchmark? And this despite the fact that the 'Rules of the Competition' are laid out clearly and concisely for all involved, IHVs and review sites alike, to see and follow, if everyone actually wants to have a fair competition.

What other 3D gaming benchmark in existence today has any rules or standards at all either suggested or appropriately enforced by anyone? Meanwhile, at the same time as people perceive some inability to carry out direct comparisons using 3DMark03, and complain about the manipulation of the benchmark results, it seems that the results of other benchmarks are being accepted as being de-facto 'correct' despite the fact that if no concerted attempt has been made to uncover problems then naturally no problems will be known.

How confident are people that any given reviewer or site that finds it too much trouble to wind back to a driver from a pre-approved list for a benchmark comparison (where the entire approval process has been done for them, without any additional work on their part) is at the same time doing enough legwork to prevent manipulation of other benchmark results? Remember that detecting possible driver 'short-cuts' is likely to require a great deal of effort - at a minimum, a highly detailed image quality analysis and comparison between driver revisions, across multiple different pieces of hardware.
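To give a concrete sense of that legwork, here is a rough Python sketch of just the first step: a per-pixel comparison of the same frame captured under two driver revisions. It assumes the Pillow imaging library and hypothetical file names; a thorough job would still need perceptual analysis and repetition across many frames and many cards:

from PIL import Image, ImageChops

def frame_difference(path_a: str, path_b: str) -> float:
    """Mean per-channel pixel difference between two same-size screenshots."""
    a = Image.open(path_a).convert("RGB")
    b = Image.open(path_b).convert("RGB")
    diff = ImageChops.difference(a, b)  # requires equal resolutions
    pixels = list(diff.getdata())
    return sum(sum(px) for px in pixels) / (len(pixels) * 3)

# e.g. frame_difference("frame340_44.03.png", "frame340_52.16.png")
# A clearly non-zero result flags the frame for closer manual inspection.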
 
andypski said:
Given the reactions to events that have been seen so far, how likely do you think it is that people will choose to point the finger at anyone other than Futuremark if their benchmark will not run on any of the major IHV's hardware?
True dat, but then the burden still lies on FM to make people aware of who is really responsible and why....caving in to one IHV's demands/whims isn't exactly going over really big either it seems. :(

Where is the finger of blame for this situation being pointed currently? People keep saying that 'something must be done about the situation', but at the same time who are they saying should be doing something? Is this the right target in terms of blame, and if not then why are they being targeted at all?
nVidia apparently doesn't have anything to gain from running 3dm2k3 fair, so why would they go out of their way to co-operate? FM & nVidia are in a bit of an adversarial position until nVidia starts abiding by FM's rules, and enforcing those rules is FutureMark's job and responsibility to keep the credibility of their benchmarks valid.

How confident are people that any given reviewer or site that finds it too much trouble to wind back to a driver from a pre-approved list for a benchmark comparison (where the entire approval process has been done for them, without any additional work on their part) is at the same time doing enough legwork to prevent manipulation of other benchmark results? Remember that detecting possible driver 'short-cuts' is likely to require a great deal of effort - at a minimum, a highly detailed image quality analysis and comparison between driver revisions, across multiple different pieces of hardware.
I hate to tell you this Andy, but I'd really have to say that most of the people who read these reviews wouldn't even understand what you said or why and they just aren't interested. If it's difficult to understand most people just won't bother.

That's why FM has to make it impossible for their results to be misinterpreted, or they're dead in the water, period.
 
worm[Futuremark] said:
An "autoupdater" to 3DMark sounds nice, and would be great IMO, but still it would require "extra work" for reviewers to download any possible updates, and AFAIK they don't even have their systems connected to the net while testing.

I'm not quite sure how much extra work it would be for reviewers to download a list of approved/unapproved drivers, considering they're already downloading the latest drivers from the different IHVs...

And if they don't have the computer they test on connected to the net, they still have to copy the drivers for the card they're testing, at which point a manual download of the list of drivers shouldn't be too hard to do at the same time...

worm[Futuremark] said:
Now, the real problem has been, or is, that some feel that reverting to older (approved) drivers is too much work. Ok, so what good does the autoupdater then do? Inform that "hey, you are using non-approved drivers"? Nice and dandy, but if the reviewer isn't going to use the approved drivers anyhow, what good would it really do?

Herein lies the real problem, imo: the reviewers feel it's too much work to test with 2 different driver versions when only one set is partially approved...

"Why bother to test with an older driver set, when the result still won't be compareable to cards from another IHV?" That is what I think many reviewers feel about it, even though it's only the PS2.0 test that is incomparable.

To help with this, maybe change the text from
The 52.16 drivers have 3DMark03 specific optimization for the Pixel Shader 2.0 test and that score is solely comparable between nvidia cards.
to
The 52.16 drivers have 3DMark03 specific optimization for the Pixel Shader 2.0 test and that score is solely comparable between nvidia cards.
All the other tests have been verified to be free of optimizations, and can thus be compared to cards from other IHVs.

At least this way there won't be any confusion on the part of reviewers who didn't read the text properly...


worm[Futuremark] said:
I am just asking as I know that creating such a feature to the existing 3DMark series would require quite a lot of work. Would it really make a difference how reviewers look at using 2 sets of drivers?

To be very blunt, what would be so hard about it?
"All" you'd have to do is create a method which checks if the driver being used is on a list of approved/disapproved drivers. If not on the list go online and download the latest list. If no net access can be found, prompt the user for a location to find the file (along with where the user can download the list manually). Then if the drivers is unapproved, pop up a big window saying such, and list the approved drivers for that vendor.

It doesn't have to tie in with (almost) anything else in the program, and will have no effect on testing & such, unless you choose to implement a watermark when the driver is not approved.

I can see that there would be quite a bit of work if it had to be done from scratch, but wouldn't you be able to reuse part of the code used for the online result browser?
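On the tamper-proofing point raised earlier in the thread (the list would have to be protected against editing), a short illustrative Python sketch of one way to do it: a detached signature checked against a public key baked into the benchmark. The key bytes and file names are placeholders, and the third-party 'cryptography' package is assumed:

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

# Placeholder: the real 32-byte public key would be embedded in the app.
EMBEDDED_PUBLIC_KEY = bytes(32)

def load_verified_list(list_path: str, sig_path: str) -> bytes:
    """Return the raw list bytes, refusing them if verification fails."""
    with open(list_path, "rb") as f:
        data = f.read()
    with open(sig_path, "rb") as f:
        signature = f.read()
    key = Ed25519PublicKey.from_public_bytes(EMBEDDED_PUBLIC_KEY)
    try:
        key.verify(signature, data)  # raises InvalidSignature on tampering
    except InvalidSignature:
        raise SystemExit("Driver list failed verification - not using it")
    return data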


As for whether it would make a difference, yes, I think it would. You would be told every time which drivers are approved, and in an "in your face" kind of way that would be hard to ignore.

worm[Futuremark] said:
... it is up to the reviewers what they do and use. We can only inform them so much, and the rest is totally up to them.

Yes, you can only do so much up to a point, but I think most of this board (those who still care about 3DMark) feel that that point is far from being reached, and won't be reached until it's blatantly obvious to any person running 3DMark...


On another note, it'll be interesting to see what "special" thing you're talking about that might be released this week...
If it's another patch, it'll hopefully remove all optimizations in at least one driver revision so we won't be in the position we're in with 52.16.
 
digitalwanderer said:
True dat, but then the burden still lies on FM to make people aware of who is really responsible and why....caving in to one IHV's demands/whims isn't exactly going over really big either it seems. :(

...

nVidia apparently doesn't have anything to gain from running 3dm2k3 fair, so why would they go out of their way to co-operate? FM & nVidia are in a bit of an adversarial position until nVidia starts abiding by FM's rules, and enforcing those rules is FutureMark's job and responsibility to keep the credibility of their benchmarks valid.

...

I hate to tell you this Andy, but I'd really have to say that most of the people who read these reviews wouldn't even understand what you said or why and they just aren't interested. If it's difficult to understand most people just won't bother.
I hate to tell you this Digi (;)) but your position would seem to essentially advocate unregulated benchmarks. Although this might not be your intent, since you seem to want more regulation rather than less, it seems nevertheless to me to be a logical result of your argument.

Consider - a benchmark company tries to stand up for what is 'right' and demands certain standards of use for their benchmark. According to your stated philosophy they then become the focal point for all responsibility in enforcing those guidelines:
enforcing those rules is FutureMark's job and responsibility to keep the credibility of their benchmarks valid.
This places them in the highly unenviable position of taking all the responsibility for any flak that comes from their attempts to enforce the rules.

Meanwhile, again from your perspective, they can apparently expect little real support from the review sites, who let us not forget are the people who wield the information and decide what people read. In fact it seems that they may even encounter active opposition "I don't have to make as much effort to run my other benchmarks - what's wrong with yours?" - In this scenario, by taking a stand it seems that all they achieve is to go head to head against other companies, with no support from the reviewers whose readers apparently "Don't even understand and just aren't interested."

How can they possibly get their message across effectively in these circumstances?

Now - the alternative is that from day one they just sit back, take no position at all and allow whatever manipulation goes on without batting an eyelid or enforcing any rules. In this case they might find things far easier, as no one would really know if any manipulation was taking place, and they would not have to stick up for anything. Instead, just like every other benchmark, they can just shrug their shoulders and go "Maybe something bad is happening, but we just make the tool - it's up to reviewers to figure out if the scores are OK."

This is effectively the case for every other benchmark out there at the moment, and people seem quite happy with this - no one is demanding, for example, that Epic enforce 'correct' rendering in UT2003, or that ID do so in Quake III.

And so we're back to square one.

Unless the community actively supports initiatives to have regulated benchmarks then it seems to me that all you will ever get is unregulated ones.
 
andypski said:
I hate to tell you this Digi (;)) but your position would seem to essentially advocate unregulated benchmarks. Although this might not be your intent, since you seem to want more regulation rather than less, it seems nevertheless to me to be a logical result of your argument.
Then either your logic is screwy or my explanation of what I think sucks, because that was NOT my intent nor my belief.

Consider - a benchmark company tries to stand up for what is 'right' and demands certain standards of use for their benchmark. According to your stated philosophy they then become the focal point for all responsibility in enforcing those guidelines:
enforcing those rules is FutureMark's job and responsibility to keep the credibility of their benchmarks valid.
This places them in the highly unenviable position of taking all the responsibility for any flak that comes from their attempts to enforce the rules.
Don't go in the kitchen if you can't stand the heat. When you claim you have an impartial benchmark you best be ready to back that claim up, otherwise don't make it.

Meanwhile, again from your perspective, they can apparently expect little real support from the review sites, who let us not forget are the people who wield the information and decide what people read. In fact it seems that they may even encounter active opposition "I don't have to make as much effort to run my other benchmarks - what's wrong with yours?" - In this scenario, by taking a stand it seems that all they achieve is to go head to head against other companies, with no support from the reviewers whose readers apparently "Don't even understand and just aren't interested."
Then find a solution that forces them to run the benchmark correctly or it won't run. If the reviewers are too ill-informed or lazy to run the benchmark right FORCE them to.

How can they possibly get their message across effectively in these circumstances?
Well, if they can't find a way they should get out of the business, because the way they're currently going is bound to fail without some drastic changes. :(
 
digitalwanderer said:
andypski said:
This places them in the highly unenviable position of taking all the responsibility for any flak that comes from their attempts to enforce the rules.
Don't go in the kitchen if you can't stand the heat. When you claim you have an impartial benchmark you best be ready to back that claim up, otherwise don't make it.
The benchmark is impartial, but that doesn't prevent people from distorting the results through driver hacks. This applies to most benchmarks that are around: They are only impartial if the participants play fair.
How can they possibly get their message across effectively in these circumstances?
Well, if they can't find a way they should get out of the business, because the way they're currently going is bound to fail without some drastic changes. :(
Futuremark has created an EULA for its software. Futuremark has described what are valid uses of its software and what are not. Obviously, Futuremark doesn't have the financial resources to track down every person who misuses the benchmark. Futuremark's best position is to continue as they have done: Release updated versions of the software that disable driver hacks and notify people what the approved driver versions are.

People say that Futuremark is doing a terrible job with their benchmarks, yet they are the only ones trying to ensure that the benchmark gives accurate results by working to disable driver cheats.
 
digitalwanderer said:
Well, if they can't find a way they should get out of the business, because the way they're currently going is bound to fail without some drastic changes. :(
If that's true, then nVidia should have suffered much more greatly than it has as well, but as it stands the people who are affected most--the enthusiast community--are still far outweighed by other business concerns. OEMs trundle along in the paths they're used to, and the bulk buyers have no idea what's going on. Futuremark recently signed deals with Microsoft, and seems to have upcoming projects for the developing mobile arena... "Business concerns" still far outweighs (and hardly intersects) the likes of B3D and [H] and The Reg--even Anand's or Tom's, though they carry the most weight in this sector.

In the meanwhile, we still need to keep our eyes on the ball--all the balls--as they're being juggled, and try to coax things along as best we can. And that always means looking at and talking about the Big Picture(TM) in the proper frame of reference.
 
andypski said:
Consider - a benchmark company tries to stand up for what is 'right' and demands certain standards of use for their benchmark. According to your stated philosophy they then become the focal point for all responsibility in enforcing those guidelines:

Unfortunately, FM are not demanding the same high standards of their members that they expect from their users. When is FM going to sanction Nvidia for continually cheating in 3DMark2003 and bringing the benchmark into disrepute? When is FM going to kick Nvidia out of the program for cheating?

Pointing at end users and saying they are in the wrong for using unapproved drivers is not addressing the source of the problem. Pointing at FM and saying "they are doing half a job, so that's better than no job at all" is not an adequate response.

Nvidia are pathologically cheating on the benchmark, and yet FM give them carte blanche and implicit approval of that cheating by not kicking them out of 3DMark, publicly admonishing Nvidia, or otherwise sanctioning Nvidia for their continued cheating.

Tell me Andy, how do the people at ATI feel knowing that Nvidia cards are beating ATI cards in the ORB, on websites and in magazines by cheating, and FM is doing nothing about it on your behalf? How does it feel to know you are being cheated out of the lead in an industry standard performance benchmark, and that FM is complicit in that lie?
 
digitalwanderer said:
Don't go in the kitchen if you can't stand the heat. When you claim you have an impartial benchmark you best be ready to back that claim up, otherwise don't make it.
Making that sort of claim seemed to work OK for 3DMarks 99, 2000 and 2001, when there were no major attempts at regulation. What exactly is it that changed for 03 again? Oh yes, there was some minor campaign launched against the benchmark, which was reported in various forms on various websites (some of which had no doubt been using Futuremark's tools in the past without either complaint or any significant concern as to whether they were 'impartial'), and also Futuremark had the audacity to actually try to make some guidelines for how the benchmark should be run, to attempt to protect its integrity.

Then find a solution that forces them to run the benchmark correctly or it won't run. If the reviewers are too ill-informed or lazy to run the benchmark right FORCE them to.
So, as well as alienating any IHV who desires to be, shall we say, 'creative' with your benchmark, also alienate all the reviewers who you want to have using it?

Sorry to be blunt, but that sounds like a winning strategy if ever I heard one. :oops:

People generally do not respond well if they feel like they are being forced into anything - even the far gentler strategy of saying "Use this driver please, because we've verified it's ok" is not exactly getting a warm welcome, now is it?

It has to be cooperative, otherwise it isn't going to work.

And that's why I believe that saying that the responsibility for making this work rests on only one set of shoulders, in this case Futuremark's, is rather naive.

Well, if they can't find a way they should get out of the business, because the way they're currently going is bound to fail without some drastic changes. :(
I don't see many alternative solutions being suggested here except the "Force reviewers to comply" one, which I'm afraid I think is untenable, and the "Keep patching forever" one, which is similarly untenable because, as I've explained before, there are only so many ways to try to alter the 'fingerprint' of the rendering to defeat detection before you start affecting the rendering performance itself and hence invalidate the benchmark scores.

Sticking with one patch and verifying the drivers seems like a much more workable strategy, but reviewers cannot then be forced into accepting it. They have to want correct, impartial results enough to actually make some effort themselves, and if they don't then there's really very little that can be done.

And if the only other option is to get out of the business, then I guess, once again, we're back to the "Advocating unregulated benchmarks" problem.
 
Real quick upfront: I hope none of you take my position on this personally, and I don't mean to insult or offend in debating it. (With my rep I just thought it best to mention that. ;) )

OpenGL guy said:
Futuremark has created an EULA for its software. Futuremark has described what are valid uses of its software and what are not. Obviously, Futuremark doesn't have the financial resources to track down every person who misuses the benchmark.
No, but they have an awful lot of community volunteers who cruise most of the net regularly and who'd be more than willing to help if they had some form of reporting mechanism and FM actually followed up when notified of infractions.

Futuremark's best position is to continue as they have done: Release updated versions of the software that disable driver hacks and notify people what the approved driver versions are.
No argument from me on that one; I'd think that's the best of all solutions. The only problem is one of timeliness: they'll have to start putting out a new patch within about a week of an "irregular" driver release for it to be at all effective, I think.

Can you make any comments about how the people at ATi feel about nVidia's cheating on 3dm2k3 and FM's failure to enforce their own rules? Is it a non-issue or something that's discussed?

Heck, does ATi take 3dm2k3 very seriously anymore? :|

People say that Futuremark is doing a terrible job with their benchmarks, yet they are the only ones trying to ensure that the benchmark gives accurate results by working to disable driver cheats.
Yeah, but they're going to have to be a whole lot more aggressive/timely about it if it's going to be effective. One patch isn't going to cut it.

andypski said:
digitalwanderer said:
Don't go in the kitchen if you can't stand the heat. When you claim you have an impartial benchmark you best be ready to back that claim up, otherwise don't make it.
Making that sort of claim seemed to work OK for 3DMarks 99, 2000 and 2001, when there were no major attempts at regulation. What exactly is it that changed for 03 again? Oh yes, there was some minor campaign launched against the benchmark, which was reported in various forms on various websites (some of which had no doubt been using Futuremark's tools in the past without either complaint or any significant concern as to whether they were 'impartial'), and also Futuremark had the audacity to actually try to make some guidelines for how the benchmark should be run, to attempt to protect its integrity.
So it's a trivial thing that we just shouldn't ever have known about? I don't get your point. :(

Then find a solution that forces them to run the benchmark correctly or it won't run. If the reviewers are too ill-informed or lazy to run the benchmark right FORCE them to.
So, as well as alienating any IHV who desires to be, shall we say, 'creative' with your benchmark, also alienate all the reviewers who you want to have using it?

Sorry to be blunt, but that sounds like a winning strategy if ever I heard one. :oops:

People generally do not respond well if they feel like they are being forced into anything - even the far gentler strategy of saying "Use this driver please, because we've verified it's ok" is not exactly getting a warm welcome, now is it?

It has to be cooperative, otherwise it isn't going to work.

And that's why I believe that saying that the responsibility for making this work rests on only one set of shoulders, in this case Futuremark's, is rather naive.
I'm sorry you think it's naive; I find it to be rather logical and right. If nVidia has proved anything to me over the past year, it's that they really ain't interested in being cooperative over this, and I don't see how anyone is going to make 'em do anything. :(

Sorry Andy, but I think in a perfect world your points would be valid; unfortunately, we don't live in one. :(
 