GamersDepot has posted a review using 3DMark03 and the 53.03 drivers.

digitalwanderer said:
OpenGL guy said:
Futuremark has created an EULA for its software. Futuremark has described what are valid uses of its software and what are not. Obviously, Futuremark doesn't have the financial resources to track down every person who misuses the benchmark.
No, but they have an awful lot of community volunteers who cruise most of the net regularly who'd be more than willing to help if they had some form of reporting mechanism and they actually followed up when they were notified of infractors.
Sure, but what could Futuremark do? As I said, I doubt they have the financial resources to take the offender to court. Posts on this forum have brought misuses of the benchmark to light, and that's a good start. If people keep sending feedback to the infractors, maybe things will improve.
Futuremark's best position is to continue as they have done: Release updated versions of the software that disable driver hacks and notify people what the approved driver versions are.
No argument from me on that one, I'd think that's the best of all solutions. The only problem is one of timeliness, they'll have to start putting out a new patch within about a week of an "irregular" driver release to have it be at all effective I think.
It's a tough problem. Think of it as an arms race (as one IHV put it). It takes time to come up with the counter to a new strategy.
Can you make any comments about how the people at ATi feel about nVidia's cheating on 3dm2k3 and FM's lack of enforcing their own rules? Is it a non-issue or something that's discussed?
No, I can't comment on this.
Heck, does ATi take 3dm2k3 very seriously anymore? :|
Of course! OEMs still use the benchmark, so you can bet that we keep an eye on things.
People say that Futuremark is doing a terrible job with their benchmarks, yet they are the only ones trying to ensure that the benchmark gives accurate results by working to disable driver cheats.
Yeah, but they're going to have to be a whole lot more aggressive/timely about it if it's going to be effective. One patch isn't going to cut it.
One? I think Futuremark has done more than that already...
 
OpenGL guy said:
Sure, but what could Futuremark do? As I said, I doubt they have the financial resources to take the offender to court. Posts on this forum have brought misuses of the benchmark to light, and that's a good start. If people keep sending feedback to the infractors, maybe things will improve.
All the more reason for a proactive approach that eliminates the situation in the first place. :)

Futuremark's best position is to continue as they have done: Release updated versions of the software that disable driver hacks and notify people what the approved driver versions are.
No argument from me on that one, I'd think that's the best of all solutions. The only problem is one of timeliness, they'll have to start putting out a new patch within about a week of an "irregular" driver release to have it be at all effective I think.
It's a tough problem. Think of it as an arms race (as one IHV put it). It takes time to come up with the counter to a new strategy.
Well in this particular arms race someone is WAAAAAY behind.

Can you make any comments about how the people at ATi feel about nVidia's cheating on 3dm2k3 and FM's lack of enforcing their own rules? Is it a non-issue or something that's discussed?
No, I can't comment on this.
Oh c'mon, we're all friends here and we promise we won't tell anyone.... ;)

Heck, does ATi take 3dm2k3 very seriously anymore? :|
Of course! OEMs still use the benchmark, so you can bet that we keep an eye on things.
Ok, good...then there is still a reason to care. With all the debate/arguing/disagreements lately I'd kind of lost touch with that.

One? I think Futuremark has done more than that already...
Nothing lately, and more specifically nothing to address this new rash of sites that not only are unknowingly benchmarking with unofficial drivers, but sites that KNOW better and are doing it anyway.

Timeliness is essential. I know things take time, but if they can't figure out a way to ensure the integrity of their benchmark in a consistent and timely manner then it just isn't going to matter much.

nVidia will win. It's not fair and it ain't right, but the team who cheats generally gets an unfair advantage. :(
 
digitalwanderer said:
So it's a trivial thing that we just shouldn't ever have known about? I don't get your point. :(
My point is that at the same time that you are arguing for them being more proactive in enforcing rules for their benchmark it is clear that they have done better in terms of public opinion in the past by doing nothing of the kind.

I'm sorry you think it's naive, I find it to be rather logical and right. If nVidia has proved anything to me over the past year it's that they really ain't interested in being cooperative over this and I don't see how anyone is going to make 'em do anything. :(
Why do you think that it's IHVs that have to cooperate with Futuremark? That's not what I said at all. What I said was that if regulated benchmarks are going to work, then the responsibility lies with the maker of those benchmarks and with the community.

If everyone wanted to cooperate then naturally there would be no need for any regulation at all. If there are parties that wish to manipulate benchmarks then it's quite simple -

- Can the makers of a benchmark, by themselves, force a party who wishes to manipulate their results in some way to comply with their benchmark regulations?

No.

- Can the makers of a benchmark, in association with support from the press and community, force a party to comply with the regulations of the benchmark?

Maybe.

They provide the tools, and then the community has to give active support so that the creators of the benchmark aren't left out on a limb. What pressure can they bring to bear on people to comply if they do not get support? You can't fight a war on two fronts.

This means that it is in the hands of the people writing the reviews to make it clear if they think that regulation of the quality of their results is a good thing.

Saying "I can't be bothered changing driver version to get valid benchmark scores" puts pressure on the benchmark maker, and greatly relieves the pressure from any parties who either (a) want the benchmark to fail for whatever reason, or (b) are manipulating the benchmark, because it tacitly supports their position.

Saying "X's current drivers are not approved because they break the guidelines for the benchmark and we have therefore instead used the latest approved drivers", instead of putting extra pressure on the benchmark maker, directs the pressure of public opinion firmly towards a possibly more productive target.

Sorry Andy; but I think in a perfect world your points would be valid and unfortunately we don't live in one. :(
On the contrary - in a perfect world I think your points would be valid, and the benchmark maker could force everyone to comply and we would get nice even playing fields.

In the imperfect world that we live in I don't believe for a minute that they can do it without help.
 
Here's my suggestion on how to "solve" the problem:

(1) 3DMark should check the driver version.
(2) If the version is known the benchmark may be run without going online.
(3) If the version is not known, 3DMark should go online and download information about that driver version from FutureMark's server.

The information about the driver should look like this:

driver nVidia 56.65
game test 1: 5% too high
game test 2: 25% too high
game test 3: 30% too high
etc...

This way 3DMark can run the benchmarks and later correct the results by reducing the scores according to the driver information.

What do you think?
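The three-step scheme above could be sketched roughly as follows. This is only an illustrative sketch: the driver entry, the inflation figures and the function names are hypothetical, and the online-lookup step is stubbed out (nothing like this ever shipped in 3DMark03).

```python
# Hypothetical sketch of the proposed correction scheme. The table stands
# in for the data that would be downloaded from Futuremark's server:
# (vendor, driver version) -> fraction by which each test is inflated.
INFLATION_TABLE = {
    ("nVidia", "56.65"): {
        "game test 1": 0.05,
        "game test 2": 0.25,
        "game test 3": 0.30,
    },
}

def corrected_scores(vendor, version, raw_scores):
    """Deflate each measured score by the known inflation for that test."""
    table = INFLATION_TABLE.get((vendor, version))
    if table is None:
        # Step (3) of the proposal: unknown driver, go online for its data.
        raise LookupError("driver not in local table; fetch from server")
    return {test: score / (1 + table.get(test, 0.0))
            for test, score in raw_scores.items()}

raw = {"game test 1": 2100.0, "game test 2": 1500.0}
print(corrected_scores("nVidia", "56.65", raw))
```

As the replies below note, the hard part is not this arithmetic but obtaining trustworthy inflation figures for every chipset in the first place.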
 
madshi said:
Here's my suggestion on how to "solve" the problem:

(1) 3DMark should check the driver version.
(2) If the version is known the benchmark may be run without going online.
(3) If the version is not known, 3DMark should go online and download information about that driver version from FutureMark's server.

The information about the driver should look like this:

driver nVidia 56.65
game test 1: 5% too high
game test 2: 25% too high
game test 3: 30% too high
etc...

This way 3DMark can run the benchmarks and later correct the results by reducing the scores according to the driver information.

What do you think?

That would be horribly convoluted, if not nigh-on impossible. Apart from the fact that it would require FutureMark to test with every single board in an IHV's product range, how would you allow for overclocking, or simply different clock speeds between two different AIBs' products?
 
andypski said:
I don't have any answer for the problems here, only questions -

How long do you think a benchmark will survive in regular use that disallows running comparative tests on hardware by any of the top IHVs - given that the purpose of its existence is to show comparative performance?

Given the reactions to events that have been seen so far, how likely do you think it is that people will choose to point the finger at anyone other than Futuremark if their benchmark will not run on any of the major IHV's hardware?

Where is the finger of blame for this situation being pointed currently? People keep saying that 'something must be done about the situation', but at the same time who are they saying should be doing something? Is this the right target in terms of blame, and if not then why are they being targeted at all?

Who is it that seems to be taking the brunt of the blame for the fact that question marks have been raised over the validity of the results from the benchmark, and this despite the fact that the 'Rules of the Competition' are laid out clearly and concisely for all involved, IHVs and review sites alike, to see and follow if everyone actually wants to have a fair competition?

What other 3D gaming benchmark in existence today has any rules or standards at all either suggested or appropriately enforced by anyone? Meanwhile, at the same time as people perceive some inability to carry out direct comparisons using 3DMark03, and complain about the manipulation of the benchmark results, it seems that the results of other benchmarks are being accepted as being de-facto 'correct' despite the fact that if no concerted attempt has been made to uncover problems then naturally no problems will be known.

How confident are people that any given reviewer or site that finds it too much trouble to wind back to a driver from a pre-approved list for a benchmark comparison (where the entire approval process has been done for them, and without any additional work on their part) is at the same time doing enough legwork to prevent manipulation of other benchmark results, remembering that to detect possible driver 'short-cuts' is likely to require a great deal of effort - at a minimum a highly detailed image quality analysis and comparison is needed between driver revisions, across multiple different pieces of hardware?

Hear, hear.

However, I would say that the lack of support has a lot to do with the about-face performed last year, followed by months of dithering. That's not the way to win community support. I'm not sure FutureMark will ever win it back without nVidia swinging behind them (which they won't do unless their card wins without cheats).
 
Hanners said:
That would be horribly convoluted, if not nigh-on impossible. Apart from the fact that it would require FutureMark to test with every single board in an IHV's product range
Hmmm... You're right. I didn't think about the driver possibly behaving differently depending on the chipset.
Hanners said:
how would you allow for overclocking, or simply different clock speeds between two different AIBs products?
I suggested correcting the score by a percentage. E.g. if 56.65 inflates the score of game test 4 by 25%, divide the measured score of the benchmark run by 1.25. That should give correct results at any clock speed.
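The clock-speed point can be seen in a tiny sketch, assuming the cheat inflates scores by the same fraction regardless of clock (illustrative numbers only):

```python
# Illustrative numbers only. If a driver inflates a test by a fixed
# fraction, dividing the measured score by (1 + fraction) recovers the
# true score at any clock speed, because the correction is multiplicative.
INFLATION = 0.25  # e.g. game test 4 inflated by 25%

def correct(measured):
    return measured / (1 + INFLATION)

print(correct(1000.0))  # stock-clocked card: true score 800.0
print(correct(1250.0))  # overclocked card: true score 1000.0
```

The assumption that the inflation fraction is constant across clock speeds and chipsets is exactly what the replies below question.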
 
madshi said:
I suggested correcting the score by a percentage. E.g. if 56.65 inflates the score of game test 4 by 25%, divide the measured score of the benchmark run by 1.25. That should give correct results at any clock speed.

I don't think that Futuremark has anything to gain by tampering with tampered scores. It would only give fanboys something to cry foul about, since it's an added penalty.

But what could be possible is that, in the scenario I outlined in my previous post, the popup window could tell the user approximately how much the driver offsets the score in each test. This way the reviewer is not only informed of the fact that (s)he is using an unapproved driver, but also of the effect that driver has on the score...

They could also add another "corrected" score to the result page to show how much the real score might be offset, but it should only be an extra, and not the only score visible...

If 3DMark finds a way to get a good approximation of how much the scores are inflated for each chipset, this might be a good way to put pressure on Nvidia through its consumers...
I'm sure that most of Nvidia's users have no idea about the cheating they do, based on experience from another board where non-tech-savvy people come to ask for help on what card to buy, but this way they too would be told in an "in your face" way.
 
andypski said:
Bouncing Zabaglione Bros...

Don't use the 'C' word. You should know what can happen if you use the 'C' word...

I call it like I see it. In my opinion (and many others) these Nvidia cheats are not valid optimisations. What would you call it when hand tuned low quality shaders are hacked into a benchmark based on specific application detection?
 
andypski said:
Bouncing Zabaglione Bros said:
I call it like I see it.
While you may have that luxury, it does not necessarily follow that others do.

There are ways and means of saying the same thing even if you are not allowed to use certain words. Since FM's infamous climbdown, we've seen nothing from them, and certainly not since they started taking Nvidia's money again. We've not heard any comment from other IHVs who are part of the Futuremark programme.

How does it feel to know that Nvidia can put pressure on Futuremark and get away with whatever it wants? When is ATI going to start putting pressure on FM to stop Nvidia and other IHVs from cheating? When are you going to make FM stand up for *your* interests in the same way that Nvidia does?
 
andypski said:
My point is that at the same time that you are arguing for them being more proactive in enforcing rules for their benchmark it is clear that they have done better in terms of public opinion in the past by doing nothing of the kind.
Ahhhh! I get your point now, thanks for explaining it. :)

But of course I don't really agree... ;)

FM has done better in the past by doing nothing because we, the consuming public, had no clue what was going on. Now that we do have an idea of the lengths some desperate IHVs will go to in order to cheat at the benchmark, we hold FM to a higher standard to find a way to keep the playing field level in their benchmark.

I guess what I'm trying to say is it was ok to do nothing before because no one knew anyone was doing anything, but now that we know what we know our expectations are higher that they'll correct the problem. (I know, I really shouldn't post before my first couple pots of coffee get a chance to kick in! :rolleyes: )

They provide the tools, and then the community has to give active support so that the creators of the benchmark aren't left out on a limb. What pressure can they bring to bear on people to comply if they do not get support
The community HAS been actively supporting FM, but FM hasn't even taken action on that! What more can we do?

You can't fight a war on two fronts.
Sure you can, it's just victory isn't always that easy. ;)

in a perfect world I think your points would be valid, and the benchmark maker could force everyone to comply and we would get nice even playing fields.

In the imperfect world that we live in I don't believe for a minute that they can do it without help.
Well they'd best get a mechanism in place to utilize the community's help and they'd better darned best start to DO SOMETHING when the community points out problems or else the community will abandon FM and spit on 'em and stomp 'em. :(
 
digitalwanderer said:
So, is "more than plenty" equal to "a lot" or "a whole bunch"...these technical terms confuse me. :|
It should equal to "a lot", as that's what my online dictionary says. ;)

kkevin666 said:
I'm seriously awaiting a reply from Worm as to why NVIDIA have any official drivers at all -- FM themselves say the only official NVIDIA driver optimises for the PS 2.0 test
We have tested 52.16 drivers in-house and have determined that with 3DMark03 Build 340 they yield a valid and comparable 3DMark03 score in all tests other than the theoretical PS2.0 test. This is why those drivers are on the approved drivers list.

MrGaribaldi said:
I'm not quite sure how much extra work it would be for the reviewers to download a list of approved/unapproved drivers, considering they're already downloading the latest drivers from the different IHVs...
..or just check the Approved Drivers page for information on which drivers are approved? If that's something some find irritating, why would some update which tells them the same thing only in the software itself be any different?

MrGaribaldi said:
And if they don't have the computer they test on connected to the net, they still have to copy the drivers for the card they're testing, at which point a manual download of the list of drivers shouldn't be too hard to do at the same time...
But when they start the benchmark (assuming it would have the auto-updater) it checks for updates. Then they already have all drivers installed, and most likely would have unplugged the net. As I said above, I am not sure what big difference there would be if they would A) need to download manually an update + install it just to get the info on what's approved and what's not, or B) visit 1 page to see that same info. :? IMHO the "A" option is even more work than the option "B".

MrGaribaldi said:
Herein lies the real problem imo, as the reviewers feel, imo, it's too much work to test with 2 different driver versions, when only one set is partially approved...

"Why bother to test with an older driver set, when the result still won't be comparable to cards from another IHV?" That is what I think many reviewers feel about it, even though it's only the PS2.0 test that is incomparable.
Partially? IMO that sounds like only a small fraction is approved. We are talking about 1 theoretical test (out of 14 default performance tests). Besides, most sites use the 3DMark score in their reviews. That result is comparable.

MrGaribaldi said:
To be very blunt, what would be so hard about it?
"All" you'd have to do is create a method which checks if the driver being used is on a list of approved/disapproved drivers. If not on the list, go online and download the latest list. If no net access can be found, prompt the user for a location to find the file (along with where the user can download the list manually). Then if the driver is unapproved, pop up a big window saying so, and list the approved drivers for that vendor.

It doesn't have to tie in with (almost) anything else in the program, and will have no effect on testing & such, unless you choose to implement a watermark if the driver is not approved.

I can see that there is quite a bit of work if it had to be done from scratch, but wouldn't you be able to reuse part of the code used for the online result browser?

As for if it would make a difference, yes I think it would. You would be told every time what drivers are approved, and it would be in an "in your face" kind of way it'd be hard to ignore.
You make it sound so easy. ;) In all honesty, I am not a coder so I cannot say how hard it would be to implement. However, I know that it would require a lot of extra work. We would need to not only change the software itself, but we would also need to cook up the online service too. I am not sure how the program (3DMark03) was built, but I have a hunch that if we added anything new to it (like the auto-updater) we would need to open up more than 1 piece of code in order to get it to work properly. Sure it would be possible (hey, anything is possible!) but any changes we have to do in the software & online are pretty big tasks. If this feature had been in the initial version of 3DMark, the whole deal would have been designed for it. Now adding it "on top" of everything, it easily becomes a "gum and string tuning" (dunno if that's the correct phrase) and that's something we don't really want. I personally think that an auto-updater would be very useful for many things in 3DMark (system info etc) but as the current design of 3DMark doesn't really support such a feature, adding it might cause too much trouble and grey hair. But never say never...
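For what it's worth, the driver-list check described above can be sketched in a few lines once the list itself is available. This is a hypothetical illustration only (the list entries and function names are made up, and the online refresh of the list is left out):

```python
# Hypothetical sketch of a startup driver check. The real proposal would
# refresh this list from Futuremark's server before checking; here it is
# plain local data.
APPROVED = {
    "nVidia": ["52.16", "53.03"],   # illustrative entries only
    "ATI": ["CATALYST 3.9"],
}

def is_approved(approved, vendor, version):
    """True if the installed driver version is on the approved list."""
    return version in approved.get(vendor, [])

def startup_check(approved, vendor, version):
    """Build the 'in your face' message shown before a benchmark run."""
    if is_approved(approved, vendor, version):
        return "OK: approved driver"
    return ("WARNING: {} driver {} is not approved. Approved versions: {}"
            .format(vendor, version, ", ".join(approved.get(vendor, []))))

print(startup_check(APPROVED, "nVidia", "56.65"))
```

As Worm says above, the check itself is the easy part; the cost lies in retrofitting an updater and an online service into software that was not designed for them.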

MrGaribaldi said:
Yes, you can only do so much up to a point, but I think most of this board (who still cares about 3DMark) feel that that point is far from reached, and won't be reached until it's obviously blatant to any person running 3DMark..
Hmm, I still believe that we have done quite a lot. We have enforced our guidelines, started the Approved Drivers testing etc. We have informed the media about this, posted it clearly on our website, made notifications in the ORB about it etc. I doubt that many who use 3DMark03 more than once have missed it by now. Just submit 1 result to the ORB, and you can see & read about it. Anyway, we will of course keep working on this, and hopefully all for the better.

MrGaribaldi said:
On another note, it'll be interesting to see what "special" thing you're talking about that might be released this week...
If it's another patch, it'll hopefully remove all optimizations in at least one driver revision so we won't be in the position we are with 52.16
Sorry, no patch. It isn't something that "special", and is targeted foremost at the online & offline media.

madshi said:
Here's my suggestion on how to "solve" the problem:

(1) 3DMark should check the driver version.
(2) If the version is known the benchmark may be run without going online.
(3) If the version is not known, 3DMark should go online and download information about that driver version from FutureMark's server.

The information about the driver should look like this:

driver nVidia 56.65
game test 1: 5% too high
game test 2: 25% too high
game test 3: 30% too high
etc...

This way 3DMark can run the benchmarks and later correct the results by reducing the scores according to the driver information.

What do you think?
It would be next to impossible, as we would have to test all cards (chipsets) separately, and possibly with various motherboards, CPUs, memory, clock speeds etc. just to be absolutely sure of the consistency. And I mean all IHVs' cards, not only one company's. Besides, giving out any % numbers might only confuse users.
 
worm[Futuremark] said:
digitalwanderer said:
So, is "more than plenty" equal to "a lot" or "a whole bunch"...these technical terms confuse me. :|
It should equal to "a lot", as that's what my online dictionary says. ;)
Thanks for the clarification Worm. :)

Please don't take this personally, but if it ain't a patch you're putting out it ain't going to be enough and I'll probably have a field day writing up an editorial that's very tongue-in-cheek about pointing that out and all of FM's other latest failures/blunders. :(
 
digitalwanderer said:
Please don't take this personally, but if it ain't a patch you're putting out it ain't going to be enough and I'll probably have a field day writing up an editorial that's very tongue-in-cheek about pointing that out and all of FM's other latest failures/blunders. :(
Well, it is not a patch for 3DMark03 or any other software. It is something else that we (you and me and others) have been talking about.
 
worm[Futuremark] said:
Well, it is not a patch for 3DMark03 or any other software. It is something else that we (you and me and others) have been talking about.
Message received and understood, my lips are sealed until then. :)

Thanks Nick. :D
 
If dig is happy about the answer then it must be good. Thanks Nick. It would be nice to know a time frame for when things are going to happen!
 
worm[Futuremark] said:
MrGaribaldi said:
I'm not quite sure how much more extra work for the reviewers it would be to download a list of approved/unapproved drivers, considering they're allready downloading the latest drivers from the different IHV's...
..or just check the Approved Drivers page for information on which drivers are approved? If that's something some find irritating, why would some update which tells them the same thing only in the software itself be any different?

Worm, you're not that dense are you? The whole point is that there shouldn't be a need to go to your Approved Drivers page every time somebody uses your benchmark (I would, but that's just me). The update in the software is there to officially notify them that they may or may not be using approved drivers and to allow them the opportunity to have the 3DMark03 software check via the Internet for the latest list. This wouldn't require the actual user to open up their browser and go to the page itself. Remember, they have already started the software and are preparing to benchmark. So, what's so hard about the software checking the net, provided they have a connection? They're already using the ORB, right?

worm[Futuremark] said:
MrGaribaldi said:
And if they don't have the computer they test on connected to the net, they still have to copy the drivers for the card they're testing, at which point a manual download of the list of drivers shouldn't be too hard to do at the same time...
But when they start the benchmark (assuming it would have the auto-updater) it checks for updates. Then they already have all drivers installed, and most likely would have unplugged the net. As I said above, I am not sure what big difference there would be if they would A) need to download manually an update + install it just to get the info on what's approved and what's not, or B) visit 1 page to see that same info. :? IMHO the "A" option is even more work than the option "B".

Huh, why is A even an option? You're not making sense. If they're not connected to the Internet (which I'm not sure everybody is when testing), then the notice the software gives is at least a last warning before they proceed with testing. Let's say somebody downloads the latest drivers, latest benchmarks and their updates and just happens to forget to see if the latest drivers are approved. They then disconnect from the Internet, install their drivers, benchmarks and updates, and then proceed to test using 3DMark03. After 3DMark03 starts it says "Your installed drivers are not on the approved list". It then gives 3 options:

1. Proceed Anyway
2. Connect to the Internet to get latest list of approved drivers
3. Quit

More than likely a reviewer that's not connected to the Internet isn't going to mess with quitting, connecting to the Internet and checking your Approved Drivers web page unless they have a machine nearby that is connected. They're just going to proceed anyway and hope that they're approved. Though they may check later when they're connected to the Internet to make sure. And that's what the notice is for. If there was no notice, then they would have no clue whatsoever that they possibly used drivers that were not approved. Hopefully the person that gets that notice will have some kind of nagging feeling that maybe they should check the list either through the software or by actually visiting the page.
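The three-option flow described above can be sketched as follows (hypothetical names; 3DMark03 had no such dialog). The user's choice is passed in so the flow can be exercised without a real UI:

```python
# Hypothetical sketch of the three-option prompt for a test machine that
# is offline when the benchmark starts.
PROCEED, CONNECT, QUIT = 1, 2, 3

def handle_unapproved_driver(choice, fetch_list=None):
    """Decide what the benchmark does when the driver is not on the list."""
    if choice == PROCEED:
        # Run anyway; the result should at least be flagged as unverified
        # so the reviewer can re-check against the list later.
        return "run-flagged"
    if choice == CONNECT:
        # Download the latest approved-drivers list, then re-check.
        if fetch_list is not None:
            fetch_list()
        return "recheck"
    return "quit"
```

The "run-flagged" branch reflects the point made above: most offline reviewers will proceed anyway, so the warning itself is what matters.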

worm[Futuremark] said:
You make it sound so easy. ;) In all honesty, I am not a coder so I cannot say how hard it would be to implement. However, I know that it would require a lot of extra work. We would need to not only change the software itself, but we would also need to cook up the online service too. I am not sure how the program (3DMark03) was built, but I have a hunch that if we added anything new to it (like the auto-updater) we would need to open up more than 1 piece of code in order to get it to work properly. Sure it would be possible (hey, anything is possible!) but any changes we have to do in the software & online are pretty big tasks. If this feature had been in the initial version of 3DMark, the whole deal would have been designed for it. Now adding it "on top" of everything, it easily becomes a "gum and string tuning" (dunno if that's the correct phrase) and that's something we don't really want. I personally think that an auto-updater would be very useful for many things in 3DMark (system info etc) but as the current design of 3DMark doesn't really support such a feature, adding it might cause too much trouble and grey hair. But never say never...

I understand and agree. Providing this feature in the next version of 3DMark makes more sense, but only if it's coming out soon (next month or so). If the next version isn't coming till late this year or with the release of DirectX 10/Next, then that's too long and they should go ahead and add it to 3DMark03 ASAP.

worm[Futuremark] said:
madshi said:
Here's my suggestion on how to "solve" the problem:

(1) 3DMark should check the driver version.
(2) If the version is known the benchmark may be run without going online.
(3) If the version is not known, 3DMark should go online and download information about that driver version from FutureMark's server.

The information about the driver should look like this:

driver nVidia 56.65
game test 1: 5% too high
game test 2: 25% too high
game test 3: 30% too high
etc...

This way 3DMark can run the benchmarks and later correct the results by reducing the scores according to the driver information.

What do you think?
It would be next to impossible, as we would have to test all cards (chipsets) separately, and possibly with various motherboards, CPUs, memory, clock speeds etc. just to be absolutely sure of the consistency. And I mean all IHVs' cards, not only one company's. Besides, giving out any % numbers might only confuse users.

It wouldn't be so difficult. You're already using the ORB database results to give something similar for the Performance Analyzer and Windows XP Game Advisor. If there's not enough data to give a comparison then just say "Sorry, we do not have enough data to give you an approximation of your test results. Please run again with approved drivers." Either way, they should only get official results with approved drivers. You shouldn't later correct the results based on somebody else's machine and test results. That would open up a totally different can of worms.

Tommy McClain
 
{Sniping}Waste said:
If dig is happy about the answer then it must be good. Thanks Nick. It would be nice to know a time frame for when things are going to happen!
I can't say for sure as it is not in my hands atm, but hopefully this week, or the beginning of/during next week.

AzBat said:
Worm you're not that dense are you? The whole point is that there shouldn't be a need to go to your Approved Drivers page every time somebody uses your benchmark(I would, but that's just me). The update in the software is there to officially notify them they may or may not be using approved drivers and to allow them the opportunity to have the 3DMark03 software to check via the Internet for the latest list. This wouldn't require the actual user to open up their browser and go to the page itself. Remember they have already started the software and are preparing to benchmark. So, what's so hard for the software to check the net provided they have a connection? They're already using the ORB right?
Ok, it seems that I wasn't clear enough in my comment, sorry for that. I was merely referring to IF the user doesn't use the auto-updater, but rather downloads the "approved drivers list" and installs it manually. If the user has a net connection on his test system (which I have been told very few have; I might be wrong though) then the auto-updater might be useful for the approved driver listing. But that is just an if. Am I making more sense now? I mean, downloading a patch manually, installing it manually and then starting the benchmark to see if the drivers are approved or not, is IMHO much more hassle than simply heading to one webpage to see the same thing.

Besides, I personally find it a tad odd that so few reviewers include compare URLs in their reviews. I mean, they usually have the Pro version, which means they can upload an unlimited number of results, so why not use it? I know users would love to see that too as they can view the result in detail without even having to register with the ORB. So you could say that most reviewers do not use the ORB, or if they do, they don't talk about it in their reviews.

But the point is that no matter what auto-updaters or whatnot we would do, it is still up to the reviewer what he is going to do. If someone has set his mind on NOT using our approved drivers, then no notifications or the like will do any good. As I posted earlier, we will try to improve the flow of information, but at the end of the day, it is up to the reviewer/user what he uses. And before anyone jumps to any wrong conclusions, this does not mean that we would stop working on this! As said, we will continue to work on the ORB to inform users about the drivers, we will start sending out more information to the press (DW ;) ) and we will work on other solutions.
 