Anandtech UT2003 Benchies and the ATI 8500

jb

Veteran
http://www.anandtech.com/video/showdoc.html?i=1608&p=5


Hmmm, I've taken a closer look at the 8500 performance and it just does not add up.

800x600x32 = 40.9
1024x768x32 = 21.0
1280x1024x32 = 20.6


Now I don't want to claim a driver bug, but does anybody here have a valid reason why the 8500 would lose half its FPS score going from 800x600 to 1024x768? Then why does it lose only 0.4 fps going to the next highest resolution? The 8500LE is in the same boat. Notice that none of the other cards show this effect. I have yet to see any other game show this type of behavior (even Codecreatures scales well on the 8500). I just can't see it being fill-rate or memory limited, and I doubt it's hitting a T&L bottleneck. It almost looks like it's hitting another bottleneck that shouldn't be there. Again, a 0.4 fps drop is just way too small going from 1024x768 to 1280x1024; in that case it should have fallen below the GF cards, as the 8500 doesn't have that efficient bandwidth-saving stuff.
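For what it's worth, here's a quick back-of-envelope sketch (plain Python, my own arithmetic using the three numbers above) of what pure fill-rate scaling would predict:

# If a card were purely fill-rate/bandwidth limited, fps should scale roughly
# with 1 / pixel count. Numbers below are the Anandtech 8500 results quoted above.
resolutions = {"800x600": 480_000, "1024x768": 786_432, "1280x1024": 1_310_720}
measured = {"800x600": 40.9, "1024x768": 21.0, "1280x1024": 20.6}

base_res = "800x600"
for res, pixels in resolutions.items():
    expected = measured[base_res] * resolutions[base_res] / pixels
    print(f"{res}: expected ~{expected:.1f} fps, measured {measured[res]}")

# -> 1024x768 should be ~25 fps (measured 21.0) and 1280x1024 ~15 fps (measured
#    20.6), so the 800->1024 drop is too big and the 1024->1280 drop too small
#    for a simple fill-rate limit; something else caps it near ~21 fps.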

So, anyone have another explanation other than a driver bug?
 
You can't really compare the two Anandtech articles (as per Doomtrooper's links above) - different UPT builds. Also, discounting all other cards, there's a difference in NVIDIA driver versions used.

I'm just guessing that it must be something to do with AGP or system memory allocation options.
 
Look at Jedi Knight 2 in the same article - the LE does 99.8fps at 1024x768, slower than the GF4 Ti, slower than the GF3s, slower than the GF4 MXs. At 1280x1024 it does the same 99.8fps; now only the GF4 Ti is faster. At 800x600 the LE did only 99.3fps, and the 64MB Radeon was also slower than at the next step up. Maybe CPU limitations and the 128MB of memory explain all this, maybe not.

But regarding Unreal2, the Radeon has gone from 51.7fps in build 848 at 1024x768 to 21fps in build 918 here. The Ti 200 has dipped some as well, from 36 to 29.3, but that's ridiculous. My theory is that nVidia got to Sweeney and Co., laid out a little "incentive", and now suddenly the ATi hardware is fully "optimised". As the article says, using this test "helps the guys at Epic during the development stage as they can work out driver bugs with the hardware vendors", or something along those lines...
 
Jedi Knight 2: Jedi Outcast is an interesting benchmark. Until you enable FSAA and anisotropic filtering, the benchmark doesn't in fact slow down much, if at all, across resolutions. Take my runs on a GeForce4 Ti4600 and Ti4400 on an Athlon XP 1800+ with nForce and 256MB RAM, sound disabled, High Quality settings (aniso turned off when appropriate in the graphics menu):

Ti4400 JK2 JO
============
1024x768
no FSAA 95.5
2x aniso 95.4
4x aniso 94.4
8x aniso 93.1


2x FSAA 95.2
2x aniso 94.8
4x aniso 93.1
8x aniso 91.2

4x FSAA 85.7
2x aniso 83.4
4x aniso 78.2
8x aniso 75.0

1280x1024
no FSAA 95.6
2x aniso 83.1
4x aniso 72.6
8x aniso 68.3

2x FSAA 91.9
2x aniso 76.6
4x aniso 67.4
8x aniso 63.3

4x FSAA 51.1
2x aniso 47.2
4x aniso 44.9
8x aniso 43.2


Ti4600
1024x768
no FSAA 95.6
2x aniso 95.6
4x aniso 95.0
8x aniso 94.7

2x FSAA 95.5
2x aniso 95.2
4x aniso 94.5
8x aniso 93.4

4x FSAA 92.5
2x aniso 89.6
4x aniso 84.5
8x aniso 81.7

1280x1024
no FSAA 95.5
2x aniso 87.8
4x aniso 77.9
8x aniso 73.4

2x FSAA 94.8
2x aniso 83.6
4x aniso 73.9
8x aniso 69.6

4x FSAA 60.0
2x aniso 55.7
4x aniso 52.4
8x aniso 50.2
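
A trivial sketch (my own arithmetic, plugging a few of the numbers above into plain Python) of how I'd quantify the hit - percentage drop of each setting versus the no-FSAA/no-aniso baseline at the same resolution:

# Percentage fps drop of a setting relative to the plain baseline.
def drops(baseline_fps, results):
    return {name: round(100 * (1 - fps / baseline_fps), 1)
            for name, fps in results.items()}

# Ti4400 numbers copied from the run above.
print(drops(95.5, {"4x FSAA": 85.7, "4x FSAA + 8x aniso": 75.0}))  # 1024x768
print(drops(95.6, {"4x FSAA": 51.1, "4x FSAA + 8x aniso": 43.2}))  # 1280x1024

# -> roughly 10% / 21% at 1024x768 versus 47% / 55% at 1280x1024, which fits
#    the idea that the lower resolution stays mostly CPU limited until FSAA and
#    aniso start to bite.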
 
In the review, Anand mentions that ATi's scores dropped due to a driver problem:

Earlier we mentioned the performance anomalies at higher resolutions with the ATI cards and here you'll see exactly what we're talking about. Using the latest drivers just released last week the performance on all of the ATI cards is hurt pretty significantly here at higher resolutions. Even the GeForce4 MX 460 is able to pull ahead of the Radeon 8500 and the Radeon 8500LE. We'll be working with Epic and ATI to see if we can resolve these issues.

Luckily there are no visual artifacts with any of the cards/drivers.
 
Tagrineth said:
I think in their first comparison they mentioned some flicker problems on the 8500.


Yes they mention a fog flicker issue that was resolved in the next test, which was the Geforce 4 review.

Link Here:
http://anandtech.com/video/showdoc.html?i=1583&p=12

In this review the tables seem to turn on the 8500; in the first build the Radeon 8500 was getting 51 fps @ 1024x768.
In the link above the 8500 is actually faster @ 58 fps, yet they state the 8500 is slower with the fog fixed... that comment still confuses me.
This is also where all the Nvidia cards jump 30-40% in performance; be it the build or the driver set they used, it seems Unreal 2 is gonna run better on Nvidia hardware.
Hopefully one of the ATI guys can give us an answer as to why the 8500 is losing to a GeForce4 MX :-?
 
What I find funny is that Anand recommends the Ti4200 128MB over the 64MB. Their reasoning is that you could potentially OC the 128MB version to match the memory speed of the 64MB version, and that eventually "only" having 64MB will hurt you.

Hogwash, except under a few narrow circumstances. By the time you need more than 64MB for games, 2-3 generations of video cards will have come and gone. Yes, the 128MB may be useful if you want to run at 1600x1200 with FSAA, but come on, let's get real.

I wonder if the reason they didn't include the 64MB version was that it outperforms the 128MB version in 90% of the tests--which would tend to contradict their recommendation.

99% of the cards out there have 64MB or less and I would bet 95% of the cards to be purchased will continue to have 64MB or less.

I have yet to see a single real reason why having more than 64MB is so important that it's worth going backwards in performance in everything we can play today (comparing the two flavors of the Ti4200).

My guess is that in the next 12-18 months we might see one or two games that under certain "high-quality" settings with FSAA would like more than 64MB but by then we'll be drooling over much newer cards anyway.

Steve
 
Assuming bandwidth is still more of an issue than raw memory size, would there be some way of dividing up the 128MB similar to how 3dfx did with the VSA-100, or is that really only beneficial when used with multi-chip SLI? Rather than, say, offering dedicated memory to a select number of pipes each... it seemed amazing how easily one could put out such high bandwidth numbers simply by scaling the RAM like they did.
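
For the sake of argument, here's a rough sketch of the kind of scaling I mean (plain Python; the bus width and clock are assumptions from memory for the VSA-100, not quoted specs). Each chip/partition gets its own memory bus, so aggregate bandwidth scales with the partition count, but textures have to live in every partition:

# Aggregate bandwidth when each partition has its own bus (VSA-100-style).
def bandwidth_gb_s(bus_bits, mem_clock_mhz, partitions, data_rate=1):
    bytes_per_clock = bus_bits / 8 * data_rate  # per partition
    return bytes_per_clock * mem_clock_mhz * 1e6 * partitions / 1e9

print(bandwidth_gb_s(128, 166, 1))  # single VSA-100: ~2.7 GB/s (assumed specs)
print(bandwidth_gb_s(128, 166, 2))  # two-chip Voodoo5 5500: ~5.3 GB/s

# The catch: each partition only sees its own copy of the textures, so 2x32MB
# behaves more like 32MB of unique texture space rather than a true 64MB pool.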
 
Windfire said:
...
By the time you need more than 64MB for games, 2-3 generations of video cards will have come and gone. Yes, the 128MB may be useful if you want to run at 1600x1200 with FSAA, but come on, let's get real.
...
My guess is that in the next 12-18 months we might see one or two games that under certain "high-quality" settings with FSAA would like more than 64MB but by then we'll be drooling over much newer cards anyway.

Have you tried Jedi Knight II: Jedi Outcast? On my GeForce2 64MB card I can set all detail levels to max and it still runs smoothly, as long as I only use 16-bit textures. If I switch to 32-bit textures, some parts of it become totally unplayable. And I've read that Soldier of Fortune II would also require a 128MB card for max texture sizes...
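
Some rough texture-memory arithmetic (just my own illustration in plain Python of why the bit depth matters so much - not numbers from the game):

# Approximate size of one mip-mapped texture; a full mip chain adds about 1/3.
def texture_mb(width, height, bytes_per_texel, mipmapped=True):
    base_bytes = width * height * bytes_per_texel
    return base_bytes * (4 / 3 if mipmapped else 1) / (1024 * 1024)

print(texture_mb(1024, 1024, 4))    # 32-bit RGBA: ~5.3 MB
print(texture_mb(1024, 1024, 2))    # 16-bit:      ~2.7 MB
print(texture_mb(1024, 1024, 0.5))  # DXT1/S3TC:   ~0.7 MB

# So a dozen or so large 32-bit textures in view already approach 64MB before
# counting frame/depth buffers, while 16-bit or compressed textures fit easily.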
 
Thowllly, yes, and on my 32MB Geforce2 GTS. Works great--and in 32bit color. Are you using texture compression? What resolution? With FSAA?

There are two choices:

1 - The Quake3-based game, with compression, indeed needs more than 32MB, even more than 64MB of video memory...

2 - There is something else going on here on your system

Option <2> is a lot more likely. I doubt any major game company would require more than 32MB in a game today with 32bit color. That is financial suicide.

We're back to the least-common-denominator here. With so many sub-32MB cards and with Geforce2 MX as the realistic target these days, companies that want to make $$$ shoot for full playability on mid-range systems.
 
People with 64MB Ti500s are reporting occasional stutters in certain RtCW maps, so 128MB may be prudent. The [H] JK2 benches are pretty persuasive, too. Though the 64MB may beat the 128MB in most benches, it isn't by a very significant margin. I'd gladly trade 10% in memory speed for no immense fps dips.

Reverend said:
I see nothing in that quote that indicates it's a driver problem. Hopefully I'm wrong.
Yeah, I'm assuming it's yet another driver problem. It seems like almost every AnandTech review mentions the latest drivers fixing something but breaking something else.
 
People with 64MB Ti500s are reporting occasional stutters in certain RtCW maps, so 128MB may be prudent.

I've just finished benching RtCW on the Ti 4200 (64MB) and there are some odd stutters - however, it appears to get worse the second time a demo is loaded. It's almost as though the textures are not getting cleared down from the first run and it's trying to reload them all again (it takes a horrible amount of time to load the second time) - I'd say that this particular issue is a driver glitch rather than a 64MB issue, though.
 
Pete, you bring up a good point. Frame rate stability is very important--one of the things I really liked about 3dfx when it was king--it may not have had as high a framerate but it was smooth all the time.

Heck, even with the slower RAM on the 128MB version, the board equals or surpasses the GF3 TI500 in terms of performance.

With this in mind, I don't think you could go wrong with either board.
 
I'd be real interested to see some Serious Sam (FE and SE) ProfileStats with a 1 second interval graphed on the various cards (DumpProfileStats() if I remember correctly).

I'm undecided on the UT2 demo benchmarks. It's impossible to do more than speculate about what might be afoot with them, as Anand offers no commentary or extra information. Most normal sources would question the massive variance, and the silence, I think, speaks louder here than such a comprehensive look at the scores. There is obviously something afoot, or a grievous error somewhere, as those scores make absolutely no sense at all.

There is nothing cryptic, unreasonable or unusual about 3D graphics for the savvy. I believe the source/cause of this rather curious behavior would be fairly easy to identify by just about any other source - be it a driver bug, human error or some sort of coding issue in the demo that creates platform favoritism...
 
First of all, can someone possibly explain why the UPT test hasn't been publicized yet? Or do the developers underestimate the average user as not being able to run it himself, get a score and have a taste of what's coming?

Speaking of a taste: I fell for the scores at their very first appearance, taking them as indicative of how UT2003 or any other U2-derived game will behave this year. At this point in time I doubt that this year's games using that very same engine will show poly counts as high as those in the test.

If I'm even close in the above two assumptions, then releasing performance values from such a test isn't exactly so "innocent" after all, because it can mislead the public quite a bit on more than one occasion. Someone might say that Anand pointed out along the way that it's to be considered an extreme stress test, but apparently that wasn't made clear enough. And I don't like seeing benchmark numbers from it constantly reappearing when I as a user cannot actually d/l it and verify the results.
 
Ailuros said:
First of all, can someone possibly explain why the UPT test hasn't been publicized yet? Or do the developers underestimate the average user as not being able to run it himself, get a score and have a taste of what's coming?

Hi Ailuros,

I will take a stab at this and suggest that it's because they have not finalized the engine yet, and giving out the test now could very well make it harder to compare scores for cards today vs a few months from now when UT2003 is released to the public. I know they were trying to integrate some more stuff into their latest build (perhaps the Karma physics code). Tweaking the engine here and there could lead to a difference in performance and thus leave no way to compare numbers, so I don't think publicly displaying numbers from non-final code is a good idea. Furthermore, I would want to see benchmarks from actual gameplay. Yes, stress tests are all fine and whatnot, but it's not the same thing. I guess I would like to see both an in-game test and an extreme test. But wait till it's final.

BTW Ailuros, did you happen to see recently that a UT mod team released their long-awaited update, including a map that you helped test? hehehehehe
 