nelg said:
I would like to see a form of punishment in place whereby, if an IHV is found to be cheating, its drivers containing the cheats would be disqualified. Furthermore, the next driver release from that company should automatically have its score reduced by 10%-20% for a period of time. Consider it a form of probation. This would give Futuremark and others time to investigate and would prevent IHVs from continually releasing drivers for the purpose of circumventing the guidelines.
Well, I don't think this is a good idea. The penalty idea just gives out bad info to pre-empt other bad info. That adds confusion, dilutes the foundation of good results on Futuremark's part, and gives people valid reasons to continually view Futuremark's results with doubt, when the goal should be exactly the opposite.
Joe's and Tommy's comments in the other thread cover the basic idea of enforcement pretty completely, except that I think some of it is not realistically sustainable against the types of tactics nVidia has shown a willingness to use (and achieved some success with, even while Futuremark had a hard-line enforcement approach).
...
I'm all for pointing out which drivers have failed Futuremark's guidelines, and I'd propose it may well be a required first step to regain some measure of consumer trust, given the amount of bad information about 3DMark03 that is already out there.
What I think is important to make distinct (both when discussing and proposing this to Futuremark, and for Futuremark's dealings with IHVs going forward) is what to do *now* to start to recover trust, which would be a temporary and directed action to correct the current situation, versus what to do *afterwards* as a policy to defend and maintain that recovery.
First, I think we need Futuremark to recognize the situation from the individual consumer's perspective. At the moment they seem to view many recommendations in terms of ongoing policy only, instead of distinctly considering a more drastic "course correction" separate from a "steady course" from then on. Much of what has been proposed about the rules might benefit from being addressed more clearly as directed towards one or the other.
My thoughts directly relating to ongoing policy:
It doesn't seem feasible to maintain an ongoing policy of pointing to bad drivers and results, but what might be feasible is the much less onerous (and less "political") policy of auditing a card's result ranges for the current driver revision, and then certifying them (perhaps quarterly, though monthly would be better if possible). It is much easier to defend granting approval than to actively pursue condemnation...let consumer confidence in what "certification" means give consumers the knowledge to decide when they have reason to condemn IHVs.
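To make the auditing idea a bit more concrete, here's a minimal sketch of the kind of lookup I have in mind. Every name and example value is hypothetical; this is just illustration, not anything Futuremark actually runs:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AuditedRange:
    """A score range Futuremark has audited for one card/driver pair."""
    card: str     # e.g. "Radeon 9800 Pro" -- hypothetical entry
    driver: str   # e.g. "Catalyst 3.4" -- hypothetical entry
    low: int      # lowest score observed under audit conditions
    high: int     # highest score observed under audit conditions

def find_audit(card: str, driver: str,
               ranges: list[AuditedRange]) -> Optional[AuditedRange]:
    """Look up the published audit, if any, for this card/driver pair."""
    return next((r for r in ranges
                 if r.card == card and r.driver == driver), None)
```

A submitted score would only be certifiable if an audit exists for its exact card/driver pair and the score falls inside that audited range.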
The comments about Futuremark's discretion in evaluation need to apply to this, and doing so places Futuremark's interest in such discretion more perceptibly (to consumers) in line with defending the honesty and reputation of the benchmark. Futuremark needs to work to formalize, and convey reason for confidence in, that discretion...I think a certification methodology achieves that without adding the further burden of trying to go toe-to-toe with billion-dollar companies and "penalize" them head-on on an ongoing basis. Also, an option for audits conducted on request, along with extended testing for those audits (perhaps involving certain BETA members without a conflict of interest, which I think would make sense), limits the expenditure on thorough investigation (when significant departures in performance results occur) and all the attendant effort that might entail, while still protecting the consumer even when those efforts aren't presently under way.
I'd initially hoped for exactly this type of certification system, but self-maintaining (by consumers) and limited to image quality, in the image quality/driver version database idea I proposed for 3DMark at the beginning of the year. In light of the education I (and most of us, I believe) have received about the measures an IHV may use for deception in benchmarks, extending it to drivers and performance result ranges seems the simplest additional step.
Of course, offering better tools in the benchmark could still make self-regulation by users feasible again. That is a factor I think it is important for Futuremark to consider as well.
In any case, I think the movement towards "certified" and "uncertified" results has already been demonstrated as necessary, that the ORB database could be modelled on the concept with limited modification, and that it could facilitate this mechanism without impeding consumer usage for timely hardware comparison (a foundation of Futuremark's "enthusiast" popularity which I don't think should be forgotten). Perhaps there could even be an additional label, such as "overclocked" (or something a bit more "inspired"/"catchy", like "extreme", or whatever phrase conveys a positive connotation without implying the results are "verified"), for results that exceed the certification criteria while higher-than-stock card frequencies were recorded at benchmark time; that would address the enthusiast appeal that serves (or served) to maintain the benchmark's popularity.
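As a rough sketch of how that three-label scheme might sit on top of the audit lookup above (again, the names and the clock comparison are my guesses at a mechanism, not how the ORB actually works):

```python
def label_result(card: str, driver: str, score: int,
                 core_mhz: int, stock_mhz: int,
                 ranges: list[AuditedRange]) -> str:
    """Assign one of the three labels discussed above (sketch only)."""
    audit = find_audit(card, driver, ranges)
    if audit is None:
        return "uncertified"            # no audit published yet
    if audit.low <= score <= audit.high:
        return "certified"
    if score > audit.high and core_mhz > stock_mhz:
        # Exceeds the audited range with above-stock clocks recorded
        # at benchmark time: a positive label that deliberately
        # avoids implying the score was verified.
        return "overclocked"
    return "uncertified"
```

The point of the default "uncertified" branches is that the label grants approval rather than pursuing condemnation: anything outside an audited range simply isn't vouched for, without Futuremark having to accuse anyone of anything.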
...
Thoughts?