Nagorak,
What you describe is running a popularity contest, not benchmarking. I would, however, like to see benchmarks from the many games where it "ran like total crap," for reference, to see how well 3DMark's problems were reflected there.
Himself,
Himself said:
I did, wasn't impressed; I'm sure there is something 3D folks will find fascinating and impressive technically in there somewhere. Novamark especially gets my yawn award: to me it's an array of trees and some simple terrain rendering, with some acne trees.
Strange how your criterion for a good benchmark, which you've also stated before this, is based solely on entertainment value. Or maybe not so strange; it seems Futuremark has a similar set of priorities.
Himself said:
I have no idea about the Parhelia, but I bet many of those Ti200s overclocked to Ti500 speeds. Meanwhile, I guess the Parhelia didn't overclock much, if at all. Stock, the Parhelia is faster, so maybe that isn't completely accurate, but even so I'd probably recommend the Ti200 over it anyway.
Can you see how little sense that chain of logic makes in a discussion about the validity of a benchmark? Again, this is benchmarking, not a popularity contest. 3DMark does not measure the cost of the card as part of its testing suite.
Himself said:
I guess my point is, for all that 3DMark is supposedly so inaccurate, its results list is identical to how I'd rank the cards on the market today. Although I'd just leave the Parhelia off completely, because it is overpriced by an insane amount.
Entropy's response to Nagorak fits here so well, I'll refer you to it.
Himself said:
Love it or hate it, all IHVs can optimize their drivers for 3DMark, so it's a level playing field.
No, it is an optimization contest, not a level playing field. The problem is that it is an application-specific optimization contest, not a general driver optimization contest. Do we have to go over every instance where products' relative 3DMark performance, compared to their game performance, has exposed this problem? Or is the jump in 3DMark performance for the GF4 that "happened" to occur around the 9700's release, for example, reflected in games? (An honest question; I don't own a GF4, but comments have led me to believe otherwise.)
The only hope I have for 3DMark 2003, or whatever it will be called, is that with the right mix of shaders and with aids to image quality comparison, optimizing for it will actually be likely to benefit games in general.
Himself said:
IHVs optimize their drivers for games too, so I don't see why optimizing makes a comparison invalid.
Since the assertion is "invalid as an indicator of relative performance," I suggest you complete your sentence with that phrase and see if it still fits what you are trying to say. To me, when I do that, it reads as something indefensible.
Himself said:
Honestly, both ATI and Nvidia have optimizations for Quake3...GOOD, it actually runs better!
Are you deliberately missing the point, or do you just not care if new driver releases are driven by the goal of optimizing for a benchmark you sit and watch instead of the games you play? You seem to enjoy the benchmark a lot, so perhaps you just don't. I hope you'll note my comments don't attempt to tackle Quake III...or at least they don't here; I've made comments about that elsewhere before. For the purposes of this discussion, I'll simply state 3DMark != Quake III.