UT 2003 GPU Shootout Part II at Anandtech

OH! Sorry :oops: !

I thought it was the first UT 2003 Anandtech article! I didn't realize there was already a CPU-scaling test online!

Interesting..... :)

CU ActionNews
 
I thought it was a little amusing that his conclusion was pretty close to our conclusion in the recent P4 test.
 
What gets me is why the performance is so low on an 8500. You can see that it loses to a GF3 across the board. I know how Epic worked with nVidia, so is there any other logical reason why this is happening? We know that UT2003 is powered by a more or less DX7 engine, so some of the advanced features of the 8500 are not being used. Could it be that the game is just not taking advantage of the 8500's texture capabilities, as it is a heavily multitextured game?
 
UT2003 actually isn't that multitextured. The 8500 should be able to handle it quite well.

As for why the Nvidia cards run faster: well, I think Epic "sold out" to Nvidia. If you look at the US Army game, which is rather similar to UT2003 in terms of the rendering engine, it has a BIG Nvidia logo before you start the game, reading "Nvidia, the way it's meant to be played" or something like that.

I'm pretty certain Epic did a fair bit of Nvidia-specific optimization, or hasn't yet gone beyond Nvidia-based optimizations. Similarly, Bioware's MDK2 T&L engine was quite fast on Nvidia cards and not the Radeon because it was designed and tested solely on Nvidia cards.

Arguments that this comes down to ATI's poor developer relations would be moronic, considering the Under-developed... Uncool... I mean Unreal engine is a pretty big deal, and thus ATI would be spending as much time with Tim and the gang as possible.
 
UT2003 and NV / ATi hardware, optimisations

Hi there,

Saem said:
As for why the Nvidia cards run faster: well, I think Epic "sold out" to Nvidia. If you look at the US Army game, which is rather similar to UT2003 in terms of the rendering engine, it has a BIG Nvidia logo before you start the game, reading "Nvidia, the way it's meant to be played" or something like that.

I'm pretty certain Epic did a fair bit of Nvidia-specific optimization, or hasn't yet gone beyond Nvidia-based optimizations. Similarly, Bioware's MDK2 T&L engine was quite fast on Nvidia cards and not the Radeon because it was designed and tested solely on Nvidia cards.

Arguments that this comes down to ATI's poor developer relations would be moronic, considering the Under-developed... Uncool... I mean Unreal engine is a pretty big deal, and thus ATI would be spending as much time with Tim and the gang as possible.

The same questions regarding "optimisations" for NV hardware in the latest build of the Unreal Engine cropped up in our forums over at 3DC. You could tell Epic's Daniel Vogel was . . . irritated about such implications and claims:

Ich verstehe nicht ganz wie man "auf NVIDIA" optimieren kann ohne gleichzeitig "auf ATI" zu optimieren... auf jeden Fall mit D3D. Es ist ja jetzt nicht so als ob die Karten grob unterschiedliche Features haetten oder Performancecharakteristiken. ATI, NVIDIA, ... haben immer die neuesten builds und source code.

Feedback von IHVs wird beruecksichtigt wo es geht und deshalb geht mir immer der Hut hoch wenn jemand uns unterstellt, dass wir speziell fuer einen Hersteller optimieren. Wenn 'ne Radeon 8500 schneller gewesen waere haette ich mir bestimmt anhoeren muessen, dass wir auf ATI optimieren und bla bla bla :)

<< I don't see how you are supposed to optimise "for NVIDIA" without optimising code "for ATI" at the same time . . . at any rate using the D3D API. It's not as if the cards had completely different feature sets or performance characteristics. ATI, NVIDIA . . . both always have the latest build and source code [of the engine] put at their disposal.

Feedback from IHVs is considered whenever possible, and that's why it really pisses me off every time I hear that we're specifically optimising for one vendor or another. If the Radeon 8500 had been faster, surely people would tell me that we're obviously optimising for ATI and blablabla :) >>

It's a rather free translation on my part, so please don't sue me. ;)

http://www.forum-3dcenter.org/vbull...readid=21967&perpage=20&pagenumber=27

Regarding multitexturing in UT2003: the game uses three texture layers at high settings (base, lightmap, detail) as well as pixel shaders to speed up texture operations, if available. There will be no "special" pixel shader effects, though; everything can be done on traditional texture stages--as long as cube maps are supported, that is.
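For readers unfamiliar with fixed-function multitexturing, here is a minimal sketch of how three such layers could be cascaded across Direct3D 8 texture stages. This is purely illustrative--the helper function, the texture names and the 2x lightmap scale are my own assumptions, not Epic's actual code:

#include <d3d8.h>

// Hypothetical helper: combine base * lightmap * detail on fixed-function
// texture stages in a single pass (illustrative only, not Epic's code).
void SetupSurfaceLayers(IDirect3DDevice8* dev,
                        IDirect3DTexture8* baseTex,
                        IDirect3DTexture8* lightmapTex,
                        IDirect3DTexture8* detailTex)
{
    dev->SetTexture(0, baseTex);                                       // stage 0: base texture
    dev->SetTextureStageState(0, D3DTSS_COLOROP,   D3DTOP_SELECTARG1);
    dev->SetTextureStageState(0, D3DTSS_COLORARG1, D3DTA_TEXTURE);

    dev->SetTexture(1, lightmapTex);                                   // stage 1: modulate by lightmap
    dev->SetTextureStageState(1, D3DTSS_COLOROP,   D3DTOP_MODULATE2X); // 2x scale is a common lightmap trick
    dev->SetTextureStageState(1, D3DTSS_COLORARG1, D3DTA_TEXTURE);
    dev->SetTextureStageState(1, D3DTSS_COLORARG2, D3DTA_CURRENT);

    dev->SetTexture(2, detailTex);                                     // stage 2: modulate by detail texture
    dev->SetTextureStageState(2, D3DTSS_COLOROP,   D3DTOP_MODULATE);
    dev->SetTextureStageState(2, D3DTSS_COLORARG1, D3DTA_TEXTURE);
    dev->SetTextureStageState(2, D3DTSS_COLORARG2, D3DTA_CURRENT);

    dev->SetTextureStageState(3, D3DTSS_COLOROP,   D3DTOP_DISABLE);    // terminate the cascade
}

On cards with fewer than three texture units the same result simply costs an extra pass, and where pixel shaders are available the same math can be collapsed into a short shader.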

ta,
-Sascha.rb

P.S. I am only quoting Daniel's statement for information's sake, not as "my" argument for or against the points made in this thread by others.rb
 
I don't know whether Epic sold out or Anandtech did. For this test he used DM-Asbestos, in which the Radeon 8500 performed below the GeForce3. The DM-Antalus test was the other way around, though. DM-Antalus seems to be more stressful on the video card, however, so I guess it doesn't suit the CPU-scaling aspect of this article. I guess Radeon owners can take consolation in the fact that they are good in high-stress situations.
 
I already got a UT2003 T-shirt with a big nV logo on it. But I still don't think they went out of their way to favor anyone. At least that's not the impression I got after talking to them. Again, I just don't understand the big difference. Even if it was optimized, I would not think that the 8500 would score so "low".
 
I don't think that the 8500 is optimized any more than the GF3, but I am puzzled about the GF4's huge advantage in the high detail tests. You'd expect that the GF3 would also get some advantage because of the similar architecture.

Being quite cynical here, but maybe this is an attempt by Nvidia to get people to buy GF4s, especially with the R300 due soon.

I'm also disappointed that Antalus was not tested. In this map and Flux II on my system (1073 MHz T-Bird/143 FSB, 384 MB SDR RAM and a 64 MB 8500) it makes no difference to the speed whether I set low or max details. I don't actually know how to put a framerate counter up, but looking across these levels is just as slow (it feels like around 20 fps, possibly less) whether I'm at 1024x768 max or 640x480 low details.
 
Hi Bambers,

When benching, please keep in mind that the leaked UT2003 versions are in no way conclusive regarding performance of the shipping product--or the current build of the Unreal Engine, for that matter. (Anand used a newer build, too, IIRC.) According to Daniel Vogel, a LOT has changed, performance-wise, from the last leak to the current version, especially regarding CPU usage.

ta,
.rb
 
Yeah, I did know that it's only an early leak, and I'm not judging anything by this at all, except that Antalus looks more like something seen in a tech demo, and it's nice to see this level of detail in a game, especially an FPS :).

I just thought that, due to the high vertex count that Antalus has, it would be an interesting map for a CPU scaling test.
 
Yeah, the leaked 927 build looks freakin' sweet on my GeForce3. Runs pretty decently, as long as there aren't too many bots onscreen at once. The geometry is pretty serious, and the texture detail with the layered effects is solid as well.

But you should see it on Kyro. Looks and runs terribly :(
 
I second LittlePenny; crying "sell out" and bringing out one X-File after another is a rather pathetic way to deal with such results. I seriously doubt that either Epic or Anand has gone out of their way to make Nvidia look good.

Daniel Vogel from Epic specifically stated on the 3DCenter.de forums that he worked closely with Anand to make sure the test reflects true game performance as closely as possible, and that they are trying to get every important card running as well as they can. Besides revising the code to allow it to run on TNTs and Voodoo3s, the latest optimisations for Kyro should be the best proof of that, considering Sweeney's history of anti-tiler sentiments. I personally guess the R8500 has some kind of performance problem with one of the things activated at high detail settings, like detail textures, higher texture resolutions, model complexity or some of the other modifications; it would be really useful to see a full list of things deactivated for medium detail. It might just be a driver bug, although ATI should have long since fixed such an issue considering their level of access and the importance of the engine...
 
Few things:

- From this scaling analysis, with high detail, unless you have a GF4 the GPU is the total bottleneck in UT2003.

- With a GF4, when you double the CPU frequency you get only about 1/3 higher framerate, so it is still partly GPU limited.

Using Amdahl's law, I guesstimate the following for the GF4 when scaling from an 800 MHz to a 1.6 GHz system:

TotalSpeedup = 1 / ((1 - FractionEnhanced) + (FractionEnhanced / PartialSpeedup))

1.33 = 1 / ((1 - FractionEnhanced) + (FractionEnhanced / 2))

FractionEnhanced = 0.5, or 50%

So at 800x600x32 on a GF4/Athlon 800 MHz, half of the frame time is CPU and half is GPU.
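If you want to check the numbers yourself, here is a tiny sketch of that calculation (the function and variable names are mine; the figures are just the ones above):

#include <cstdio>

// Amdahl's law: overall speedup when only a fraction f of the work
// benefits from a partial speedup of s.
double amdahl(double f, double s) { return 1.0 / ((1.0 - f) + f / s); }

int main() {
    double s = 2.0;         // CPU clock doubled: 800 MHz -> 1.6 GHz
    double observed = 1.33; // roughly 1/3 higher framerate was measured
    // Solve 1.33 = 1 / ((1 - f) + f/2) for f:
    double f = (1.0 - 1.0 / observed) / (1.0 - 1.0 / s);
    printf("CPU-bound fraction f = %.2f\n", f);             // ~0.50
    printf("check: speedup(f, s) = %.2f\n", amdahl(f, s));  // ~1.33
    return 0;
}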

At 1024x768 the situation will probably be 60% GPU and 40% CPU. Now if we scale the CPU to 1.2 GHz (at 1024x768), the situation will be 70% GPU and 30% CPU.

That was for Asbestos; my guess is that with Antalus the GF4 at 1024x768 will show totally flat scaling.

Summing up, with 1.2 GHz+ CPUs and 1024x768 high detail, the GPU is the bottleneck.

edited: minor parenthesis correction.
 