Late, noisy, HUGE!!! - makes 17k Mark...

Status
Not open for further replies.
Are you deliberately missing the point, or do you just not care if new driver releases are driven by the goal of optimizing for a benchmark you sit and watch instead of games you play? You seem to enjoy the benchmark a lot, so perhaps you just don't. I hope you'll note my comments don't attempt to tackle Quake III...or at least they don't here; I've made comments about that elsewhere before. For the purposes of this discussion, I'll simply state 3dmark != Quake III.

The only point I tried to make is that the same problems with "driver optimization contests" with regard to 3DMark apply to UT2003 and Quake 3 as well.
 
The point is: 3DMark is just an example of the IHVs hardware and driver coding ability. Although time spent optimizing for 3DMark may to some extent be a "waste", the same results there should apply across the board to other games since IHVs are known to optimize for all major games and hopefully provide minor developers the information to optimize their games for their cards.

Saying that the card at the top of 3DMark isn't the fastest is just picking nits. Nothing in this world is black and white. The problem is complexity is always balanced against ease of use. You can't make a benchmark that is both uber-complex and also user friendly.
 
Nagorak said:
I'm sorry to say but the truth is the Kyro ran like total crap in many other games. It should have been faster than the MX, but it was ignored by many developers and thus performed poorly. Also the lack of a TnL unit killed it, because although it was a faster "straight up" renderer than the MX, it had no TnL and when TnL games started hitting the market it died a quick death. So, 3DMark isn't as inaccurate as you might think.

Yes, the lack of hardware T&L on the Kyro is a problem, but the bad fillrate of the GeForce 2 MX (TNT2 Ultra level...) is even more of a problem.
In games too T&L-intensive for software T&L, the GeForce 2 MX will almost every time perform worse than the Kyro II because of its abysmal effective fillrate.

With today's CPUs, software T&L is more often than not enough
(even in apps that won't run without hardware T&L, using 3danalyzer),
but for evident reasons even the best CPU can't make up for the bad fillrate of the GeForce 2 MX.
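The fillrate argument above can be sketched with some back-of-envelope arithmetic. A minimal sketch in Python: the ~175 MHz clocks and 2-pipeline counts are the commonly quoted specs for these two chips, but the efficiency and overdraw factors are purely illustrative assumptions, not measurements:

```python
# Back-of-envelope fillrate comparison (illustrative only).
# Both chips have a similar theoretical peak fillrate; the difference is
# how much of it survives overdraw and memory-bandwidth limits.

def effective_fillrate_mpixels(core_mhz, pipelines, efficiency):
    """Peak fillrate (Mpixels/s) scaled by an assumed real-world efficiency."""
    return core_mhz * pipelines * efficiency

# GeForce 2 MX: immediate-mode renderer, heavily memory-bandwidth limited,
# and it still shades pixels that are later overdrawn (assume ~3x overdraw).
gf2mx = effective_fillrate_mpixels(175, 2, efficiency=0.5) / 3.0

# Kyro II: tile-based deferred renderer; hidden pixels are rejected before
# shading, so overdraw costs almost nothing (efficiency factor assumed).
kyro2 = effective_fillrate_mpixels(175, 2, efficiency=0.8)

print(f"GeForce 2 MX effective: ~{gf2mx:.0f} Mpixels/s")
print(f"Kyro II effective:      ~{kyro2:.0f} Mpixels/s")
```

The point the numbers illustrate is that identical paper specs can diverge sharply once overdraw enters the picture, which is exactly why the deferred renderer pulls ahead in fillrate-bound games.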

Yes, it's sad that bad developer support is limiting Kyro users, but I understand developers won't spend too much time and money on this marginal chip.

For me the Kyro II isn't dead; it's the card I use for my gaming, and I still enjoy it... for example in MOHAA... and I don't have a top-performing CPU, just an Athlon XP 1500+.

I sometimes play on another PC with a GeForce 2 MX in it, and games are a lot less enjoyable due to its rather limited fillrate.

Because I am able to use both cards, I can say that the vast majority of games perform far better with a Kyro II than with a GeForce 2 MX, which is not exactly what you would expect according to 3DMark...

You can see a lot of game benchmarks with the Kyro II and the GeForce 2 MX here:
http://www.mitrax.de/display-review.php?file=benchmarks.htm

And you can see the Kyro II literally thrashing the GeForce 2 MX, except in Giants.

And here you can see that even the 115 MHz Kyro I wasn't so bad:
http://www.mitrax.de/display-review.php?file=benchmarks.htm

Depending on the configuration, it could thrash the GeForce 2 MX in the supposedly T&L-intensive Dagoth Moor Zoological Gardens demo...

Look here too for even more Kyro benchmarks:
http://www.3dcenter.de/artikel/kyro/testsysteme2.php

From what I know, I would say that most of the time the Kyro was equivalent to the GeForce 2 MX and the Kyro II equivalent to the GeForce 2 GTS. That's what real performance in real games shows...
 
Bjorn, that last post seems to me to encapsulate things well enough so I'll leave things there.

Nagorak,
Nagorak said:
The point is: 3DMark is just an example of the IHVs hardware and driver coding ability.

Coding ability for the goal of generating big numbers in 3dmark. You think that goal is fine, I do not. To expand:

Although time spent optimizing for 3DMark may to some extent be a "waste", the same results there should apply across the board to other games since IHVs are known to optimize for all major games and hopefully provide minor developers the information to optimize their games for their cards.

? What are you basing this "should" on? I have no complaint about any optimization that affects games as well as 3dmark, why would I?

Saying that the card at the top of 3DMark isn't the fastest is just picking nits. Nothing in this world is black and white.

Eh? So if a GF3 Ti 200 were above a 9700 in 3dmark, that wouldn't be a problem with the benchmark? No, wait, the 9700 is popular and ATI has clout, so you don't mean that. Perhaps you are referring to the Parhelia and its placement not being a problem, because as far as you are concerned you don't like the card. It gives me a bit of a twilight zone feeling to have to point out that that is a popularity contest and not benchmarking.
If you are just agreeing with my post you could quote my response to you before and just say "Yes". Though I'd still like a pointer in the direction of some info regarding your comments on the Kyro series cards.

Since you don't mention who you are replying to, it would be helpful if you used some representative text to respond to so your statements yielded more information. The above response is based on my best guess as to what you meant.

The problem is complexity is always balanced against ease of use. You can't make a benchmark that is both uber-complex and also user friendly.

How difficult something is to program does not speak to how difficult it is to use. In fact, difficulty in programming is often directly related to making something easier to use. And what I'm talking about is for the makers of the benchmark to expend the effort, not the user...since they are the ones making the benchmark. Is reading more numbers and having tools to facilitate image quality comparison "uber-complex"? If you assert that, could you provide your reasoning?
For example, image quality comparison can already be done, but spending the effort to focus on facilitating that for a benchmark would make it easier for the user to do so. It would also make the benchmark more meaningful. Your statements read as if you are asserting this is not the case (or at least that is my guess; they are non-specific, so you could mean something else).
It would be nice if 3dmark had image quality equivalency profiling or introduced image quality databases as part of their service. There are several approaches that could be tried...since they make money off of this, to me it seems like this is something they should be expending effort towards if they are really trying to put out a quality product for their stated goals (benchmarking).
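The kind of image quality tooling being asked for could start from something as simple as a per-pixel difference metric between two captured frames. A minimal sketch, assuming frames are handed over as flat RGB byte buffers (a simplification for illustration; a real tool would decode actual screenshot files):

```python
import math

def rms_difference(frame_a, frame_b):
    """Root-mean-square difference between two equal-length RGB byte buffers.

    0.0 means the frames are pixel-identical; larger values mean the two
    cards (or driver revisions) rendered visibly different images.
    """
    if len(frame_a) != len(frame_b):
        raise ValueError("frames must have the same dimensions")
    total = sum((a - b) ** 2 for a, b in zip(frame_a, frame_b))
    return math.sqrt(total / len(frame_a))

# Toy example: two 2x1-pixel "frames" as flat RGB byte lists.
reference = [255, 0, 0,   0, 255, 0]   # red pixel, green pixel
candidate = [250, 0, 0,   0, 255, 10]  # slightly off on both pixels
print(f"RMS difference: {rms_difference(reference, candidate):.2f}")
```

A benchmark vendor could run a metric like this against a reference rasterizer's output for each test and publish the score alongside the frame rate, making "fast but wrong" driver optimizations visible at a glance.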

Could you clarify please, without undefined hyperbole like "uber-complex" and perhaps some reference to the discussion for a clearer picture of what you are addressing specifically, if my understanding of your comments is flawed?
 
Chalnoth said:
Only problem is, that's with FAA enabled, which is buggy. I'd take comprehensive 2x AA any day over 16x FAA that misses some edges.

This coming from a person who does not mind that MSAA has known areas where it won't do any AA. Sorry, this logic does not make any sense. Pot...kettle...black here.
 
demalion said:
Strange how your criterion for a good benchmark, which you've also stated before this, is based solely on entertainment value. Or maybe not so strange; it seems Futuremark has a similar set of priorities.

Being entertained is absolutely a requirement: if what is being shown is not entertaining, then it is not representing what the whole point of the exercise is about, better-looking graphics at good frame rates. How well balanced a video card's architecture is matters more to test than anything else. Isolating hotspots abstractly has its place as a troubleshooting tool, but it is meaningless outside of the actual conditions it is designed for. That is why there is a fillrate dependency in something like the nature test; it's most likely how it will be used in games. I'm sure they could have tried to make it even more about shaders and have 5 fps being displayed, but that's not realistic. Knowing how fast a torus spins doesn't tell you much about T&L performance in actual use, so they now use rotating dragons; that has somewhat more realism, adding some fillrate to the equation with a more dynamic view of objects. The whole point of all the shader business is better-looking graphics on the screen while still being playable; if you are testing that, it will automatically end up being eye candy.

I don't have a problem with abstract tests isolating performance bottlenecks; they can be as boring as you like. But you know benchmarks that are not entertaining overall will get no exposure and hence won't be targeted for optimization by video card vendors, primarily because they just won't have heard of them. Optimization, not hacking in shortcuts for marketing because the product sucks. (I think ATI and NVIDIA have enough pride in their own work not to resort to that, despite the default cynical assumptions routinely made.) I don't think it's necessarily a nefarious and cynical thing; people use the best tools available, including developers, and there are very few quality ones like 3dmark for game-like testing. Quality... well, let's say by far the best of a poor bunch. You think optimizing for SPECviewperf is going to give you faster games? Games themselves are really too random for good overall testing, other than testing the game itself. Vendors do optimize for games, obviously, but there is a priority of games: they can't all be tested, all at the same time; what works around one game's bug breaks another game's bug; and they can only have dialogues with so many people to discuss issues. 3dmark is a consistent test point: it's small and quick, you don't have to edit a million config files, and it actually uses some of the features of the APIs they have to support.

Like I said before, the more benchmarks the better. Make them all so good they will be popular in reviews; they should strive to be as consistent and well thought out as 3dmark, and if they can do better, fabulous. Use the fact that vendors will try to make them run better: have a bug in your code due to some driver, create a popular benchmark, and it will get addressed. :devilish:
 
jb said:
This coming from a person who does not mind that MSAA has known areas where it won't do any AA. Sorry, this logic does not make any sense. Pot...kettle...black here.

And yes, I think there's a huge difference. The areas where MSAA does not do FSAA are known, predictable, and avoidable. The areas where FAA fails are not.
 
True,

but I thought Matrox was able to fix some of the issues we saw in the first reviews with FAA...hmm Type you want to comment about this?
 
Just in time for the Geforce Fx release..........

Yup...(conspiracy theory hat on)....

Why do I get the feeling that "3DMark03" will be released simultaneously with the lifting of the GeForce FX review NDAs? And remarkably, the GeForce FX reviewers will have had access to 3DMark03 a couple of weeks prior, so that they could include those benchmarks in the first GeForce FX reviews?
 
Note: The 3DMark03 Teaser uses Windows Media 9 video and audio codecs, and therefore it does not work with older media players, such as Windows Media Player 6.4. It is recommended (not required) to upgrade to Windows Media Player 9:


It works fine with WMP-8. It'll just download and install the codec on the fly.
 
Joe DeFuria said:
Why do I get the feeling that the "3D Mark 03" will be released simulataneously with the lifting of GeForceFX review NDAs?

twimtbp.jpg


:p
 
KnightBreed said:
Note: The 3DMark03 Teaser uses Windows Media 9 video and audio codecs, and therefore it does not work with older media players, such as Windows Media Player 6.4. It is recommended (not required) to upgrade to Windows Media Player 9:
Nuts to that.

Bah, the teaser sucks anyway. It's just a hallway with neat lighting.
 
Yeah but the lighting is AWESOME.

And BTW, I just ran the teaser on WMP 7. Don't worry about the requirements.
 
You'd have to put the knife to my throat before I install WMP 7 or later (or Quicktime for Windows, or Real Player). I can't imagine any teaser worth it.

Basic - People's Front Against Bloated Interfaces.
 