Bjorn, that last post seems to me to encapsulate things well enough, so I'll leave it there.
Nagorak,
Nagorak said:
The point is: 3DMark is just an example of the IHVs hardware and driver coding ability.
Coding ability for the goal of generating big numbers in 3dmark. You think that goal is fine, I do not. To expand:
Although time spent optimizing for 3DMark may to some extent be a "waste", the same results there should apply across the board to other games since IHVs are known to optimize for all major games and hopefully provide minor developers the information to optimize their games for their cards.
? What are you basing this "should" on? I have no complaint about any optimization that affects games as well as 3dmark, why would I?
Saying that the card at the top of 3DMark isn't the fastest is just picking nits. Nothing in this world is black and white.
Eh? So if a GF3 Ti 200 placed above a 9700 on 3dmark, that wouldn't be a problem with the benchmark? No, wait, the 9700 is popular and ATI has clout, so you don't mean that. Perhaps you are referring to the Parhelia and its placement not being a problem, because as far as you are concerned you don't like the card. It gives me a bit of a twilight zone feeling to have to point out that that is a popularity contest, not benchmarking.
If you are just agreeing with my post you could quote my response to you before and just say "Yes". Though I'd still like a pointer in the direction of some info regarding your comments on the Kyro series cards.
Since you don't mention who you are replying to, it would be helpful if you quoted some representative text to respond to, so that your statements yield more information. The above response is based on my best guess as to what you meant.
The problem is complexity is always balanced against ease of use. You can't make a benchmark that is both uber-complex and also user friendly.
How difficult something is to program does not speak to how difficult it is to use. In fact, difficulty in programming is often directly related to making something easier to use. And what I'm talking about is for the makers of the benchmark to expend the effort, not the user...since they are the ones making the benchmark. Is reading more numbers and having tools to facilitate image quality comparison "uber-complex"? If you assert that, could you provide your reasoning?
For example, image quality comparison can already be done, but spending the effort to facilitate it within a benchmark would make it easier for the user to do. It would also make the benchmark more meaningful. Your statements read as if you are asserting this is not the case (or at least that is my guess; they are non-specific, so you could mean something else).
It would be nice if 3dmark offered image quality equivalency profiling or introduced image quality databases as part of their service. There are several approaches that could be tried...since they make money off of this, it seems to me this is something they should be expending effort towards if they are really trying to put out a quality product for their stated goal (benchmarking).
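To illustrate that automated image quality comparison is not "uber-complex", here is a minimal sketch (in Python) of the sort of objective metric such a service could compute between a reference render and a driver's output: mean squared error and PSNR over the pixel values. The function names and sample pixel values are hypothetical, not anything 3dmark actually does.

```python
import math

def mse(img_a, img_b):
    """Mean squared error between two equal-sized pixel buffers (0-255 values)."""
    assert len(img_a) == len(img_b), "images must be the same size"
    return sum((a - b) ** 2 for a, b in zip(img_a, img_b)) / len(img_a)

def psnr(img_a, img_b, max_value=255):
    """Peak signal-to-noise ratio in dB; higher means the renders are closer."""
    error = mse(img_a, img_b)
    if error == 0:
        return float("inf")  # identical images
    return 10 * math.log10(max_value ** 2 / error)

# Hypothetical data: a reference frame and a driver-rendered frame,
# each flattened to a short list of grayscale pixel values.
reference = [10, 50, 200, 128]
rendered  = [12, 48, 198, 130]
print(round(psnr(reference, rendered), 1))  # → 42.1
```

A real tool would also need to align frames and handle legitimate rendering differences (dithering, filtering), but the core comparison is straightforward enough that a benchmark vendor could reasonably provide it.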
If my understanding of your comments is flawed, could you please clarify what you are addressing specifically, without undefined hyperbole like "uber-complex", and perhaps with some reference to the discussion for a clearer picture?