Joe DeFuria
Legend
http://www.hardocp.com/article.html?art=NDMwLDE=
At this moment in time, I find it hard to place any real-world value on the 3DMark03 score, as it does not represent anything but specific tests that FutureMark deems valuable.
And yet, you have not demonstrated that you understand what the 3DMark score represents. Furthermore, I believe that you have an issue not with what FutureMark deems valuable (which got input from all IHVs), but with what nVidia deems NOT valuable.
To put it simply, current synthetic benchmarks overall, do a disservice to the hardware community and everyone that will ever buy a 3D video card or a computer that has one installed.
Not necessarily true. To the extent that IHVs concentrate on "artificially" raising synthetic benchmark scores, you are right. However, this applies to ANY benchmark, be it a "game" benchmark or a synthetic one.
I seem to recall a big stink made by HardOCP when some vendor had a GAME SPECIFIC path in their drivers that led to, gasp, optimization in a game benchmark....
ATI has had advance copies of the benchmark, where NVIDIA has not,
You really believe that? Or did the new "miracle 3DMark drivers" from nVidia magically appear the same day that 3DMark was released....without nVidia having prior access to some form of the benchmark?
this is because NVIDIA will no longer pay FutureMark a “subscription fee”.
Right. Not through any forcible measure from FutureMark.
The short answer to the question is that it does not benefit us at all, but rather harms us. While NVIDIA and ATI are slaving away to make sure that optimizations are built into their drivers so they get better benchmark scores as illustrated above, the gamer's true experience gets ignored.
Again, gross oversimplification. (Just because drivers run a synthetic benchmark better, does not mean they do that to the exclusion of "real games" getting better performance as well.)
I certainly agree that there is a very skewed over-reliance on certain benchmark scores. Where I disagree is whose PROBLEM this really is, and what the remedy is.
We have been reaching out to the game developers for a long time to ask them to include the right tools to allow us to natively benchmark popular games.
This is a good thing.
Ask a GPU/VPU maker and they will tell you that they have to optimize for synthetic benchmarks. That is how their products get rated in print and web publications.
Bingo.
The fault lies with PRINT AND WEB PUBLICATIONS that mis-use the benchmarks. IGNORING synthetic benchmarks is not the solution. Recognizing what the synthetic benchmarks represent, and using them accordingly, is the answer.
This is why we see “point and click” benchmarks that give you a one number result have so much impact. Sometimes simplicity seems to outweigh data value in benchmarking.
Again, agreed. But again, the solution is not to IGNORE the "one number results", but to understand what they represent, and use the results in proper context.
Overall the ultimate responsibility of giving benchmarks traction falls into the laps of every computer hardware reviewer and editor in the world. We are the ones that give benchmarks credence and value in the community.
Agreed.
If we use bad tools, so will everyone else. If editors and other decision makers are using these tools and basing conclusions on their results, companies like ATI and NVIDIA have no choice but to allocate huge resources to make sure they score well.
Again, the question is whether the tool is "bad", or if it's being used improperly. Using a tool improperly is just as bad or worse than using a bad tool in the first place.
Taking a tool, and just "not using it" because it can be misused is nonsense. ANY benchmark tool...even those based on actual, real, shipping games, can be misused.
Maybe separate benchmarks to pick from that are based on games that are shipping or are currently in development is what is needed for a proper evaluation process.
See GameGauge.
Now of course, you have developers "optimizing" for these specific games to squeeze out that extra 2 FPS, even if there are other games that need more attention.
You're also going to get a LOT of "pushback" from most developers concerning using "games in development" for testing. Games in "development" do not represent the final performance of the game (they are likely not yet fully performance-optimized), and no developer wants their game's performance "pre-judged" based on pre-release benchmarks.
Maybe we need an organization with a logo that can be included on game boxes so you know the game you are buying is part of the solution in getting better products to your hands.
Sure...how about "Nvidia...the way it's meant to be played!"
Basically, you have not given any solution to the problem of "how do we get an indication of how this hardware runs TOMORROW'S games?" All the benchmarking of "today's games" doesn't mean squat if "tomorrow's games" are significantly different.
Having an "independent 3rd party" (like FutureMark), come up with a BEST GUESS, based on IHV input, seems like a reasonable course of action.
In short, just IGNORING synthetic benchmarks is not the solution; that does a disservice to the consumers you claim you are out to protect. Putting synthetic benchmarks in their proper context in a review is.