Nvidia Against 3DMark 2003

mczak said:
ET said:
Really? I haven't seen how results change with the resolution so my speculation may of course be wrong. Does changing from 1024x768 to 1280x1024 drop the score significantly on any card?
Here are some nice numbers: http://www.tech-report.com/etc/2003q1/3dmark03/index.x?pg=3
So the benchmarks don't appear to be completely vertex bound (except maybe Test 1 on the GF4MX, which doesn't have vertex shaders and may therefore be CPU limited?), but at least it looks like vertex shader performance is indeed somewhat important.

I stand corrected. Seems like Futuremark did a good job of balancing.

On second thought, for cards with less than 128 MB, part of the reason for lower frame rates could be textures, and perhaps vertex data, being pushed into AGP memory as the resolution gets higher.
 
Just one question for those who think of this particular bench as "forward looking".

For the best VC of 18 months ago (GF3 Ti?), is there, right now, one game that this particular VC can't run at 1024x768 on a current platform at, let's say, 20 FPS?

If not, then I think 3DMark is not forward looking, it's science fiction :D
 
For the best VC of 18 months ago (GF3 Ti?), is there, right now, one game that this particular VC can't run at 1024x768 on a current platform at, let's say, 20 FPS?

I'd guess that the latest Unreal engine games come pretty close. I know on my Radeon 8500, I have to turn down image quality and physics to get it to run acceptably at 1024x768.

However, it's an apples-to-oranges comparison, because we're talking about "next gen" games making significant use of pixel shaders, not just "more polys and more textures." In other words, the next-gen games are likely to be a step change in GPU dependency. The only question is when they will arrive. Doom3 should be one of the first ones.
 
Joe DeFuria said:
For the best VC of 18 months ago (GF3 Ti?), is there, right now, one game that this particular VC can't run at 1024x768 on a current platform at, let's say, 20 FPS?

I'd guess that the latest Unreal engine games come pretty close. I know on my Radeon 8500, I have to turn down image quality and physics to get it to run acceptably at 1024x768.

However, it's an apples-to-oranges comparison, because we're talking about "next gen" games making significant use of pixel shaders, not just "more polys and more textures." In other words, the next-gen games are likely to be a step change in GPU dependency. The only question is when they will arrive. Doom3 should be one of the first ones.
Well, I think no game will come out in the next 18 months that drags a 9700 Pro down to 20 FPS at 1024x768 without AA and AF. So, yes, everyone can say "forward looking", but nobody can say how far ahead ;)
 
Evildeus,

I noticed you chose the UT "flyby" benchmark, which is by design NOT indicative of GAMEPLAY performance, which is what 3DMark is specifically shooting for.

Try finding some "botmatch" benchmarks...

So, yes, everyone can say "forward looking", but nobody can say how far ahead

AGREED! AND THE FOLLOWING IS IMPORTANT!

THIS is what individual reviewers should take into account when using the 3DMark score. They need to

1) Understand that it represents the "ability to play future games" (and ability means performance / features, not just performance).

2) At the same time, they need to make their own estimate of when such games will start to appear... so that they can incorporate a combination of the 3DMark score along with "current actual game" benchmarks, to reach an overall conclusion about the value of a product.
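
Just to illustrate what I mean by "incorporate a combination" (the weights, normalization, and numbers in this little Python sketch are purely my own invention, not Futuremark's method or anything a real review site uses):

```python
# Purely illustrative sketch -- not Futuremark's method or any reviewer's actual practice.

def overall_rating(card, baseline, months_until_shader_games):
    """card / baseline: dicts with 'game_fps' (average fps in current games)
    and 'mark03' (3DMark03 score). Returns a rating relative to the baseline card."""
    current = card["game_fps"] / baseline["game_fps"]   # how it handles today's games
    forward = card["mark03"] / baseline["mark03"]        # the "future games" estimate
    # The further off the reviewer thinks shader-heavy games are,
    # the less the synthetic score counts toward the conclusion.
    w = 0.5 / (1.0 + months_until_shader_games / 12.0)
    return (1.0 - w) * current + w * forward

# e.g. overall_rating({"game_fps": 95, "mark03": 4900},
#                     {"game_fps": 80, "mark03": 3300},
#                     months_until_shader_games=12)
```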
 
Joe DeFuria said:
Try finding some "botmatch" benchmarks...

I'll try

Found this ;)
Athlon XP 1800+, 512 MB DDR, GF3 Ti200
Flyby: 84.446106
Botmatch: 41.923172
http://www.gamerz.be/forum/viewtopic.php/t-9158/start-0.html

Joe DeFuria said:
THIS is what individual reviewers should take into account when using the 3DMark score. They need to

1) Understand that it represents the "ability to play future games" (and ability means performance / features, not just performance).

2) At the same time, they need to make their own estimate of when such games will start to appear... so that they can incorporate a combination of the 3DMark score along with "current actual game" benchmarks, to reach an overall conclusion about the value of a product.
1- The problem with introducing "features" is that you can't say how much they count toward the final score. Plus, the bench is then no longer useful for comparison.

2- Want my guess? 2008 :)
 
Found this ;)

Yes, so like I said, not too far away from how the best 3D card today runs the most advanced test in 3DMark03 (about 30 FPS).

1- The problem with introducing "features" is that you can't say how much they count toward the final score.

Correct, you must make an assumption (as always when trying to make predictions!). Futuremark documented their methodology, and again, it seems a perfectly reasonable methodology to use. (That can be argued, of course... but I don't see anyone making those arguments!)
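
For illustration only, here's roughly the kind of weighting assumption being discussed. The weights below are made up for the sketch; they are NOT Futuremark's published formula (the real one is in the 3DMark03 white paper):

```python
# Minimal sketch of the weighting assumption under discussion -- invented weights,
# not Futuremark's published formula.

GAME_TEST_WEIGHTS = {
    "gt1_wings_of_fury": 0.1,      # DX7-class test: counts least toward "future games"
    "gt2_battle_of_proxycon": 0.3,
    "gt3_trolls_lair": 0.3,
    "gt4_mother_nature": 0.3,      # PS/VS 2.0 test: this is where "features" bite
}

def synthetic_score(fps_by_test, scale=100.0):
    """Collapse per-test fps into a single number. A card that lacks the features
    needed for a test simply contributes 0 fps for it, which is the assumption
    that makes features count toward the final score."""
    return scale * sum(weight * fps_by_test.get(name, 0.0)
                       for name, weight in GAME_TEST_WEIGHTS.items())
```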

Plus, the bench is then no longer useful for comparison.

? Don't follow you there. It's not useful for a direct "which card is the fastest" comparison, but again, that's not the point.

2- Want my guess? 2008

Ok, fine. ;)

I could respect a decision to "not consider 3DMark03" scores at all, if you are of the position that no games will ever look like that or use "global" pixel / vertex shading until 2008. I would disagree with that, however.
 
What this all comes down to in the end is that Nvidia has lost on all counts with its NV30 design. It seems to me that everything they have done lately has been a response to the R300. Let's go back in time: NV30 rumours suggested 300-400 MHz; it ships at 500 MHz but needs crazy cooling to do it. Rumours suggested 400 MHz DDR-II RAM; it ships at 500 MHz, with the same cooling problem. They extol the virtues of the CineFX architecture (help me with my spelling here, Walt C) with its 32-bit precision for lifelike rendering. On paper this card sounds like the best thing in the world. In reality it blows (no pun intended). The card is crippled by its 128-bit bus, its 32-bit shader performance is too slow to be practical, it is too loud, and it has questionable FSAA and aniso image quality. Now here comes a benchmark that clearly shows this card in its true light. Look at the PS/VS 2.0 performance difference. The FX Ultra certainly does not look good here, and IMO that is the true reason Nvidia is trying to discredit 3DMark03. All the talk about PS 1.4 usage is to distract from the NV30's most glaring shortcomings.

Having said all this, [H] does have a point in saying that it is more important to optimize for games than for benchmarks, but if some games are optimized for certain GPUs or drivers, then how do they provide any clearer picture of a card's true performance? If 3DMark (any version) can provide an unbiased means to help evaluate video card performance, then great. If you think there are problems, suggest changes. Don't discredit the merits of such an approach, though.

Sorry for the long post and the poor spelling.
Glen
 
Joe DeFuria said:
Found this ;)
Plus, the bench is then no longer useful for comparison.

? Don't follow you there. It's not useful for a direct "which card is the fastest" comparison, but again, that's not the point.

2- Want my guess? 2008

Ok, fine. ;)

I could respect a decision to "not consider 3DMark03" scores at all, if you are of the position that no games will ever look like that or use "global" pixel / vertex shading until 2008. I would disagree with that, however.
1- Well, if it's not direct and I need to go to the white paper to be able to compare... not really useful, is it?

2- No, I think we will find games using VS/PS 2.0 long before then, but we won't find a game that brings the 9700 Pro down to 25-30 FPS at 1024x768 before 2008 ;)
 
Joe DeFuria said:
Try finding some "botmatch" benchmarks...
There are two problems with posting botmatch benches from UT2k3:

1. They're mostly CPU-bound, making them less useful for a video card review (though great for a CPU review).
2. They're not nearly as reliable in framerate as the flyby scores, meaning that you really need to take at least 3-5 runs at each setting to get an accurate score (if not more...haven't done a statistical analysis on this), something which few sites seem to do.

Of course, the best performance benchmark would be to record average framerates over a few specific levels for a number of hours at each setting, doing some sort of statistical analysis to ensure that each test has a small standard deviation. But is anybody going to do this? Probably not.
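
For what it's worth, the statistical analysis being suggested doesn't need to be anything fancy. A rough sketch (plain Python over fps numbers you collect yourself; the function name and the 3% threshold are arbitrary choices of mine, not any real benchmarking tool's API):

```python
# Rough sketch of the multi-run analysis -- plain statistics, arbitrary threshold.
from statistics import mean, stdev

def summarize_runs(fps_runs, max_rel_spread=0.03):
    """fps_runs: per-run average framerates at one setting (ideally 3-5+ runs).
    Flags the result as unreliable if the run-to-run spread is too large."""
    avg = mean(fps_runs)
    sd = stdev(fps_runs) if len(fps_runs) > 1 else 0.0
    return {"mean_fps": avg,
            "stdev": sd,
            "reliable": len(fps_runs) >= 3 and sd / avg <= max_rel_spread}

# e.g. summarize_runs([41.9, 43.5, 40.7, 42.2])
```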
 
1. They're mostly CPU-bound, making them less useful for a video card review (though great for a CPU review).

That doesn't matter! All that means is that actual gameplay is heavily CPU bound as well. We're not looking for benchmarks to purposely stress the GPU. We're looking for benchmarks that mimic "actual game" situations.

And today, especially with the latest video cards, we all expect many games to be mostly CPU limited unless you start turning on AA and running at 1600x1200 resolution...

Of course, the best performance benchmark would be to record average framerates over a few specific levels for a number of hours at each setting, doing some sort of statistical analysis to ensure that each test has a small standard deviation. But is anybody going to do this? Probably not.

Agreed on all counts. ;)
 
Hi there,
Joe DeFuria said:
That doesn't matter! All that means is that actual gameplay is heavily CPU bound as well. We're not looking for benchmarks to purposely stress the GPU. We're looking for benchmarks that mimic "actual game" situations.
But the supplied botmatches are not actual game situations, unless all you do is play botmatches.

We recorded a couple of MP demos with UT2003 (should be online soon), and these demos are far more VGA bound than CPU bound. The botmatch scores are "closer" to the measured in-game performance than the fly-bys, yes, but still not an indication of actual gameplay performance. Again, unless all you do is play botmatches with the AI cranked up to the max (as the benchmark does).

ta,
-Sascha.rb
 
NVIDIA
"One thing we did like about 3DMark03 is it disallows the user from disabling the logo screens between tests. This affords us a completely seamless way to trigger benchmark specific optimizations and with no way to get caught in the act, so to speak."

Okay, so it's not a real quote, but it surely made you look twice. :)

Perhaps if NVIDIA looked long and hard enough at 3DMark03, they might find some good things to say about it for their cause. :)
 
But the supplied botmatches are not actual game situations, unless all you do is play botmatches.

True, but the point is, it IS an actual game situation! The fly-by is not like any game situation at all. The purpose of the fly-by is to make the benchmark purposely more GPU limited than the botmatch, in order to make differences in GPU performance more visible.

As for on-line recorded demos... yes, they will be more indicative of on-line play without bots, another type of game situation. However, "playing back" such demos removes any frame rate issues related to networking... they simply aren't there. So I suspect demo recordings will also show higher FPS numbers than you are likely to see when playing on-line, especially over dial-up connections.

In any case, UT2K3 did not ship with an on-line demo recording facility, so all that was available was fly-by and botmatch. ;)

And ANY web site that uses fly-bys in its benchmarks because they stress the GPU, and then turns around and talks about how other benchmarks aren't valid because they're "synthetic and aren't like games", is completely hypocritical. UT2K3 fly-by tests are in fact synthetic.
 
Seems there is quite a bit of smoke clouding the issue. The question was: "For the best VC of 18 months ago (GF3 Ti?), is there, right now, one game that this particular VC can't run at 1024x768 on a current platform at, let's say, 20 FPS?"
First, flybys don't factor into this particular question (unless you play flybys). Second, botmatches are more indicative of some game situations, but not gameplay in general. Third, the real question in this thread, as to whether 3DMark03 is a viable tool, is "can it test 3D scenarios that current or soon-to-be-released games may/will exhibit?" I would imagine the answer is yes, and that both flybys (for general GPU performance) and botmatches (for how well the card handles raw CPU data) should be tested, as that knowledge is often indicative of expected performance, though not actual performance. Also, only simulated gameplay can ever be tested in any benchmark or review, since no one can play a game exactly the same way twice and record the scores. These are the tools we have to work with, so we use them to make buying decisions.
Since there are games coming out with PS 1.4 and PS 2.0 in the future, there's no problem using benchmarks that test this (unless some of us don't want to know what performs best).
 
Unless I'm mistaken, a certain company has NO DX9 part on the market presently, the lower-end products being released by them may not be DX9 compliant, and previous generations of hardware never supported the highest level of DX 8.1 support and downplayed it.

http://www.tech-report.com/etc/2001q4/kirk/index.x?pg=1

Q: Why is it that the GeForce3 runs specific tests in 3DMark so well, most notably the Nature scene, when compared to other cards?

A: GeForce3 is the only hardware that implements all of the DX8 vertex and pixel shading instructions in hardware at full speed.

Q: There has been some public debate as to whether or not GeForce3-based products are DirectX 8.1 cards. What is the truth?

A: GeForce3 cards are DX8.1 cards, period. NVIDIA has Microsoft WHQL certified DX8.1 drivers for GeForce3. I believe that the new pixel shader versions introduced in DX8.1 do not offer significant new functionality over the original DX8 pixel shader versions; they are simply different. I'm not really sure what the value of these new versions really is. We certainly don't hear a lot of interest in them from game developers.

Now whose fault is it if they didn't feel PS 1.4 was worth implementing in the GeForce 4?
 
Doomtrooper said:
Unless I'm mistaken, a certain company has NO DX9 part on the market presently, the lower-end products being released by them may not be DX9 compliant, and previous generations of hardware never supported the highest level of DX 8.1 support and downplayed it.
Be real.
Who could have been that short-sighted??? That alone could force any company to withdraw support from a benchmark that supported those features... I just can't see that happening to a conscientious graphics card company.
 