Is TruForm on the Radeon 9700 working to full potential?

rubank said:
No, I didn't miss that you're running an 8500 as opposed to a 9700, but you said HB's scores are too close to yours taking just that and his faster CPU into consideration.
That's why I wondered if you, like HB, are running @ 4xFSAA and 16xAF: otherwise your comparison (of your scores vs. HB's) is invalid.

Just a simple question, deserving a simple answer.

Err...no, it is not a "simple question", because he is running at 1024x768x32 on a card with a 256-bit memory interface at 320 MHz DDR and improved HyperZ, versus mine with a 128-bit memory interface at 275 MHz DDR. Therefore I do include the fact that he is running that in my evaluation that his scores are "too close to mine" for CPU limitation to be a factor in the performance hit from enabling TruForm, as I stated. You seem to be implying that I'm mistaken, and I'd ask what your reasoning is?

To answer directly: no, I'm not running at 4x FSAA (supersampling, not multisampling, btw) and 16x AF (bilinear, as opposed to his trilinear, or so I'd presume), and I discount those details as being relevant to what we are observing so far. You mean you do not?
 
Demalion,

it's not that easy for someone outside to know what you include in your reasoning if details aren't given.
Now I've got the answer, and I thank you for that.

That leaves the question of whether there is something not right with your setup, or if this demo/benchmark is in fact more of a CPU/motherboard/memory-type/memory-timings kind of benchmark.

The fact that my score with an original Radeon 32DDR, Athlon XP 1800+, and 512 MB DDR hovers around 37 fps (1024x768x32, max settings, no AA or AF) seems to indicate the latter.

Given these circumstances, the impact of CPU/motherboard/memory seems somewhat out of place for a video card benchmark, I think, since scores from different machines, albeit "equally" set up, would not be directly comparable.

Or what do you think?

Edit: typo
 
Doing some benchmarks with different physics settings would probably determine whether it is a CPU-limited benchmark with mid-GHz CPUs. I believe that is the case here: CPU limited under those conditions with the Radeon 8500. Comparing 4x FSAA and 16x AF with trilinear filtering against no FSAA and 16x AF with bilinear filtering, assuming a bunch of stuff, and then comparing the Radeon 8500 and the Radeon 9700 is not the best way to go about it. Hopefully HB will come back and do some more tests here.
 
rubank said:
Demalion,

it's not that easy for someone outside to know what you include in your reasoning if details aren't given.
Now I've got the answer, and I thank you for that.

Hmm...I just considered it implied. Perhaps I'm too brief in my posting.

That leaves the question of whether there is something not right with your setup, or if this demo/benchmark is in fact more of a CPU/motherboard/memory-type/memory-timings kind of benchmark.

The fact that my score with an original Radeon 32DDR, Athlon XP 1800+, and 512 MB DDR hovers around 37 fps (1024x768x32, max settings, no AA or AF) seems to indicate the latter.

Heh, in turn, if you'd specified that you had this in mind instead of mentioning the 9700, this would have been clearer. ;)

Well, I won't say my setup is perfect ;) , but I will mention one thing you didn't: the game should be doing significantly more on my system.

D Vogel has already stated that the game scales automatically to the hardware, regardless of settings (though I'm left with a lot of assumptions as to what this means). I presume this relates to level-of-detail polygon count determination, shadow detail (what do shadows look like on your card?), and the like. To address your concern in general, look at some settings from the ut2003.ini file:

SuperHighDetailActors=True (with a dynamic LOD system, I doubt this setting means the same thing for both our cards)
MaxPixelShaderVersion=255 (I assume enabling full pixel shader use...still an odd specification method)

Probably some more that I don't know about.

This is just to illustrate that my card is doing more. This point would be more relevant to your flyby results, though, not to a comparison of "in game" or "botmatch" frame rates. I do wonder what your Asbestos flyby looks like at the pool of water?

Given these circumstances, the impact of CPU/motherboard/memory seems somewhat out of place for a video card benchmark, I think, since scores from different machines, albeit "equally" set up, would not be directly comparable.

Or what do you think?

Edit: typo

I think the botmatch benchmark and the in-game play are pretty much completely CPU dependent (for "in game", read "primarily" instead of "completely"), and your CPU is faster than mine, so no real surprises for the lower results there. Run "benchmark.exe" (EDIT: and look at the "flyby" fps) and compare your results to mine (EDIT: about 76 for the final report). Keep in mind your card is doing less, and I think the results should be more what you would expect?

I think Bambers illustrates clearly that it can be a GPU-limited benchmark when tessellation is turned up (the multiplication of the polygon count for each successive level is rather significant)...I'd realized this (hence why I never end up using a TruForm level > 1, except maybe for DoD or some other older game with limited GPU demands), but I had just assumed everyone was using a tessellation level of 1 when they didn't answer my query about it. The confusion seems to have stemmed from the apparent (HB and Luminescence haven't replied yet) misconception that Tesselation=1 is the same as having N-Patches off.
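To put a rough number on that multiplication, here is a sketch. The subdivision scheme is an assumption (not stated anywhere in this thread): uniform N-patch tessellation, where level n splits each triangle edge n ways and so emits n*n triangles per source triangle. The base triangle count is made up.

```python
# Sketch of why each TruForm tessellation level multiplies geometry load.
# Assumption (not from this thread): level n subdivides each triangle edge
# n ways, so every source triangle becomes n*n triangles -- the usual
# uniform subdivision scheme. The base count below is hypothetical.

def tessellated_triangles(base_triangles: int, level: int) -> int:
    """Triangles sent down the pipeline at a given tessellation level."""
    return base_triangles * level * level

base = 10_000  # hypothetical per-frame triangle count for N-patched meshes
for level in (1, 2, 3, 4):
    print(f"level {level}: {tessellated_triangles(base, level)} triangles")
```

Under that assumption, level 4 means sixteen times the N-patched geometry of level 1, which is plenty to turn a CPU-limited run into a GPU-limited one.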
 
noko said:
Doing some benchmarks with different physics settings would probably determine whether it is a CPU-limited benchmark with mid-GHz CPUs. I believe that is the case here: CPU limited under those conditions with the Radeon 8500. Comparing 4x FSAA and 16x AF with trilinear filtering against no FSAA and 16x AF with bilinear filtering, assuming a bunch of stuff, and then comparing the Radeon 8500 and the Radeon 9700 is not the best way to go about it. Hopefully HB will come back and do some more tests here.

I agree, for a meaningful benchmark comparison, but the initial premise was a concern about why the 9700 had such a "large" performance hit. Bambers, with those 9700 results, seems to show that it doesn't (unless you expect multiplying a portion of the poly count arbitrarily not to have an effect at some point ;) ).
 
demalion said:
. . .but the initial premise was a concern about why the 9700 had such a "large" performance hit.

Maybe because, when tested without TruForm, the FPS was still limited by the CPU, so the limiting component both with and without TruForm on the Radeon 8500 was the CPU and slow RAM, resulting in a very small difference in FPS between the two tests. If you bumped the resolution up to 1600x1200 and ran the same test, your percentages would be based more on GPU/card limitations than on CPU limitations. I hope that makes sense. Still, more data needs to be obtained before a conclusion is drawn, is what I am saying.
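The point can be put as a toy model: the frame rate you see is capped by whichever of the CPU-side and GPU-side rates is slower, so a GPU-only cost like TruForm stays invisible while the CPU is the bottleneck. Every number below is invented for illustration.

```python
# Toy bottleneck model: reported FPS is the slower of the CPU-side and
# GPU-side rates. All numbers here are made up for illustration.

def observed_fps(cpu_fps: float, gpu_fps: float) -> float:
    return min(cpu_fps, gpu_fps)

cpu = 35.0                    # hypothetical CPU-limited rate
gpu_off, gpu_on = 90.0, 60.0  # hypothetical GPU rates at 1024x768

print(observed_fps(cpu, gpu_off))  # 35.0 -> TruForm cost hidden
print(observed_fps(cpu, gpu_on))   # 35.0 -> still hidden

# At 1600x1200 the GPU rates fall and the TruForm hit shows up:
gpu_off_hi, gpu_on_hi = 40.0, 27.0
print(observed_fps(cpu, gpu_off_hi))  # 35.0
print(observed_fps(cpu, gpu_on_hi))   # 27.0 -> now GPU limited
```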
 
Bambers, can you rerun your test, but at a much higher resolution? That way we can hopefully minimize CPU restrictions. Also, turning physics down to low would probably help in determining how TruForm affects overall performance.
 
Bambers said:
Hmm, thought I'd better point out my results are on an 8500.

Doh! *Bad* Bambers. ;) I looked again...you didn't say so in the post, right? Or am I just blind today?

I guess it's back to waiting for HB and Lum, and back to where we were before your post. :-?
 
Demalion,
you're not too brief, you just seem to get too little information into all those words.

Here's an excerpt from my ut2003.ini file, for your convenience, covering the items you mention:

[D3DDrv.D3DRenderDevice]
DetailTextures=True
HighDetailActors=True
SuperHighDetailActors=True
UsePrecaching=True
UseTrilinear=False
AdapterNumber=-1
ReduceMouseLag=True
UseTripleBuffering=False
UseHardwareTL=True
UseHardwareVS=True
UseCubemaps=True
DesiredRefreshRate=60
UseCompressedLightmaps=True
UseStencil=False
Use16bit=False
Use16bitTextures=False
MaxPixelShaderVersion=255
UseVSync=False
LevelOfAnisotropy=1
DetailTexMipBias=0.800000
DefaultTexMipBias=-0.500000
UseNPatches=False
TesselationFactor=1.000000
CheckForOverflow=False
DecompressTextures=False
UseXBoxFSAA=False
DescFlags=0
Description=

I don't see why your card would "do more" with the same settings; it should just do it faster.

Oh, just remembered, you wanted the flyby result: 64 (with the browser running in the background; what effect that might have, I don't know).
Btw, shadows look just fine, and water looks like, err, water.
 
noko said:
Bambers, can you rerun your test but at a much higher resolution? That way we can hopefully minimize CPU restrictions. Also if you turn physics down to low would probably also help in determining how TruForm affects overall performance.

Now that Bambers has mentioned using an 8500, his question about my 23 fps becomes clearer. But why do you want him to run at a higher resolution? That will make the limitation bandwidth related, and that won't tell us anything about TruForm (at least on the 8500, bandwidth should not be hit significantly by TruForm, if at all...I'm under the impression the tessellation performs like "on the fly" decompression: the extra polygons don't get written back to video memory, they just get textured and lit/shaded and then put into memory as pixel/z-buffer values).

To answer Bambers...well, you are referring to the botmatch, and that is CPU bound, though you may be right at the edge...just look at your higher average barely budging when turning TruForm on at a tessellation level of 1. The question wasn't whether the CPU was my limiting factor for the botmatch, but why it did not seem to be for Hellbinder with a MUCH faster graphics card, and why his MUCH faster card on a faster CPU scored so close to mine when both of our cards were using TruForm.
My conclusions were reached based on his "33.072227" average compared to my "29.778038" average, both with TruForm on...regardless of my CPU limitation, his dropping to that from "58.005371" indicates a problem for a card with that much more geometry processing power than mine.
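The arithmetic behind that comparison, using only the averages quoted above, can be checked in a couple of lines:

```python
# Percentage hit from enabling TruForm, from the averages quoted above.

def pct_drop(off_fps: float, on_fps: float) -> float:
    """Performance hit, as a percentage of the TruForm-off average."""
    return 100.0 * (off_fps - on_fps) / off_fps

hb_off, hb_on = 58.005371, 33.072227  # Hellbinder's 9700 averages
my_on = 29.778038                     # my 8500 average, TruForm on

print(round(pct_drop(hb_off, hb_on), 1))  # ~43% hit on the 9700
print(round(hb_on - my_on, 2))            # only ~3.29 fps above the 8500
```

A roughly 43% drop on a card with that much more geometry throughput, landing barely 3 fps above an 8500, is what makes the result look anomalous.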

Noko, I'd think you'd want me to turn DOWN the physics and graphical settings to address the issue this thread is about...and I'll do that in a bit, though my evaluation above, about the 29 I get and his drop from 58 to 33 (even if he has sound on), is still the same.

Back to my "theories":

1) There is some sort of problem with the performance impact of TruForm as exposed on the 9700, at least if your expectations are based on the 8500. Whether this is because the drivers are treating the 9700 the same way as the 9000, or some other factor, I'm not sure.

The CPU speed issue doesn't fit, because (and I presume HB's With/Without labels are just switched at this time) HB's average results with TruForm activated are just too close to mine for a CPU that much faster...also, they are too close to my average given the performance advantage the 9700 should have. Then again, he doesn't say what physics setting he is using. Too many questions right now to be sure what his figures mean.

2) The game is responding to something about the 9700 and increasing polygon load out of proportion to what it is doing for my card. The net result is that Truform is having a much more significant impact because much more work is being added by enabling it.

3) Bambers made me realize that, when my earlier question about tessellation never got answered, presuming the answer was "1" in the absence of a response was probably foolish. Another possibility is that the performance hit they are experiencing is simply from having the tessellation value set too high.
 
rubank said:
Demalion,
you're not too brief, you just seem to get too little information into all those words.

Here's an excerpt from my ut2003.ini file, for your convenience, covering the items you mention:


UseHardwareVS=True (don't know how I missed that ;) ).
MaxPixelShaderVersion=255

These mean absolutely different things for my card than they do for yours. You are simply not going to see the images I see on my card...well, I suppose they might have been able to pull off some sort of magic, lighting and rendering the water with Radeon 7x00 features, but I really do doubt that at this time.

I don't see why your card would "do more" with the same settings; it should just do it faster.

Well, it has features that your card doesn't have. We can settle that with some screen shots in a bit.

Oh, just remembered, you wanted the flyby result: 64 (with the browser running in the background; what effect that might have, I don't know).
Btw, shadows look just fine, and water looks like, err, water.

Hmm...could you provide screenshots? Heh, "just fine" and "looks like water". I do think your shadows could look as good as mine if they pulled off some really efficient CPU-based lighting routines and all shadows were done using those, but I doubt that, and the water rendering, at this time. I've added posting some screenshots of what I mean to my list, after the benchmark results above.
 
Benchmarks, with physics on "Lowest" and with some graphical detail cut down to reduce the load from things besides geometry. Note that these are not directly comparable to my other scores, due to a completely different botmatch occurring with the changed settings.

Detail settings FALSE said:
DetailTextures=False
HighDetailActors=False
SuperHighDetailActors=False
UsePrecaching=True
UseTrilinear=True
AdapterNumber=-1
ReduceMouseLag=True
UseTripleBuffering=False
UseHardwareTL=True
UseHardwareVS=True
UseCubemaps=False
DesiredRefreshRate=60
UseCompressedLightmaps=True
UseStencil=False
Use16bit=False
Use16bitTextures=False
MaxPixelShaderVersion=255
UseVSync=False
LevelOfAnisotropy=1
DetailTexMipBias=0.800000
DefaultTexMipBias=-0.500000
UseNPatches=False
TesselationFactor=1.000000
CheckForOverflow=False
DecompressTextures=False
UseXBoxFSAA=False
DescFlags=0
Description=

EDIT: Oops, that was a silly omission...I didn't include my results with high-detail actors.

Detail settings TRUE said:
HighDetailActors=True
SuperHighDetailActors=True


.bat file, with sound OFF, Detail settings FALSE
dm-asbestos?spectatoronly=true?numbots=12?quickstart=true -benchmark -seconds=77 -exec=..\Benchmark\Stuff\botmatchexec.txt -nosound

Truform ON
5.215018 / 40.774055 / 78.046669 fps rand[10995]
Score = 40.801422

Truform OFF
6.703735 / 40.650120 / 77.837860 fps rand[10995]
Score = 40.678169

.bat file, with sound OFF, Detail settings TRUE
dm-asbestos?spectatoronly=true?numbots=12?quickstart=true -benchmark -seconds=77 -exec=..\Benchmark\Stuff\botmatchexec.txt -nosound

Truform ON
7.137811 / 39.591167 / 79.942940 fps rand[27729]
Score = 39.609997

Truform OFF
7.252662 / 38.462662 / 80.081848 fps rand[27729]
Score = 38.485172


By the way, the random value is different, but the botmatch was identical as far as I could determine (obviously it actually wasn't identical, but the difference isn't as significant as the difference between these and my results with physics on "normal"). There were many key points to watch for similarity, so please accept my assurance that these results are at least comparable.

So now we have my system, with a slower CPU than his, getting a higher average when TruForm is on, and still being CPU limited even when the CPU load is decreased. This highlights the significance of the hit he reports. I again observe that his average with TruForm off is higher than mine, so his TruForm score is not a matter of CPU limitation.

Also, the similarity of results with high-detail actors on and off indicates that geometry load is a good distance away from being a limitation on my card.
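That claim is easy to check from the four scores reported above: the total spread across every combination of TruForm and high-detail actors works out to under 6%, which is consistent with a CPU-limited run.

```python
# Spread across the four botmatch scores reported above.

scores = {
    ("detail FALSE", "TruForm ON"):  40.801422,
    ("detail FALSE", "TruForm OFF"): 40.678169,
    ("detail TRUE",  "TruForm ON"):  39.609997,
    ("detail TRUE",  "TruForm OFF"): 38.485172,
}

lo, hi = min(scores.values()), max(scores.values())
spread_pct = 100.0 * (hi - lo) / hi
print(round(spread_pct, 1))  # total spread, in percent
```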
 
demalion said:
UseHardwareVS=True (don't know how I missed that ;) ).
MaxPixelShaderVersion=255

These mean absolutely different things for my card than they do for yours. You are simply not going to see the images I see on my card...well, I suppose they might have been able to pull off some sort of magic, lighting and rendering the water with Radeon 7x00 features, but I really do doubt that at this time.

There's nothing spectacular going on with the water that requires special features; it's just a cubemap reflection, something that a GF1/Radeon handles just fine.
 
Humus said:
demalion said:
UseHardwareVS=True (don't know how I missed that ;) ).
MaxPixelShaderVersion=255

These mean absolutely different things for my card than they do for yours. You are simply not going to see the images I see on my card...well, I suppose they might have been able to pull off some sort of magic, lighting and rendering the water with Radeon 7x00 features, but I really do doubt that at this time.

There's nothing spectacular going on with the water that requires special features; it's just a cubemap reflection, something that a GF1/Radeon handles just fine.

And the ripple deformation is done on the CPU, then? I guess I've been brainwashed by all the "pixel shaded water" games. What do you think is being done with the pixel shaders, then? I already have some screenshots of some key comparison points, but I had the OpenGL renderer selected by accident (I took the shots before I ran the benchmarks above). I'll redo them now, after I restore my settings, and post them here.

EDIT: Screenshots, for rubank.

Until I figure out how to do "3rd person", this is the best I can do to illustrate the shadows at this time:

ut2k3shadows1.jpg


Are all these effects implemented fully on your card, rubank? I'd expect that if they are, it would be slower, even though they should be possible. Not that this particular effect is "all that"...

ut2k3shadows3.jpg


Here is the water I thought wouldn't be rendered the same on your card, but Humus reminds me, as I tried to remind BioWare a short while ago, that this can be done without pixel shaders...

ut2k3water1.jpg


And I didn't think this would be rendered the same on your card either...you'd probably have to zoom in to spot the deformation...I'll have to hunt to see if I have the video where they turned something on to illustrate the liquid effects, to see what I can experiment with. It is most visible in the upper left (and that is barely visible at all).

ut2k3water2.jpg


But maybe I'm just without a leg to stand on...luckily I seem to have a spare one lying around...

ut2k3shadows2.jpg


Just thought I'd inflict that joke on you while sharing with you all my humiliation at actually being killed by an "Average" skill level bot. :oops: Having one finger over the screenshot key at the time is the excuse I'm going to use...

As your flyby score is lower than mine, what with my CPU and its memory being so much slower, I think we can agree that my card is faster, at least. Athlon XP 1800+ with 512 MB DDR, versus an Athlon Thunderbird @ 1.377 GHz (106 MHz FSB) with 512 MB "SDR" PC133...I don't think the comparisons are surprising, at least in the absence of screenshots from you.
 
OK, these are very weird results.

I am currently running a 2100+ on a SOYO Dragon Ultra KT333 Platinum Ed. with 512 MB of DDR 2100 and the 128 MB version of the ATI 8500. I am getting really weird results when benchmarking. This is no joke about my scores; they are NOT modified. My ini files have NOT been modified. All settings in the game are set to the highest they will go.

dm-antalus
14.111842 / 35.604176 / 67.342720 fps
Score = 35.618332

dm-antalus
22.839127 / 95.701920 / 378.031281 fps
Score = 95.729218

dm-asbestos
29.554932 / 139.115067 / 395.695953 fps
Score = 139.229004

ctf-citadel
16.947264 / 39.733650 / 87.680244 fps
Score = 39.789528

dm-antalus
0.696592 / 322.511200 / 3365.509766 fps
Score = 323.235229

dm-antalus
25.606262 / 96.038780 / 429.102661 fps
Score = 96.065208

The two lower benchmark scores, around 30 fps, are from a botmatch, but the other four are from flybys. Now look at the one that says I got 300+ fps, and then look at the one under it: they are the same benchmark. Now, I have been doing a lot of photo editing today, and my system is acting very slow and is very fragmented. But how the HECK DID I GET 300 FPS?

THIS IS NO JOKE EVERYTHING WAS SET TO THE MAX WHEN I RAN THAT BENCHMARK. I SWEAR THIS WAS NOT MODIFIED.

PLEASE HELP,
Raystream
 
Why don't you reboot your computer and rerun them, if your computer is slow from having done photo editing all day? How does it look to you while the benchmarks are running?

vogel might be interested in tackling why this produced such odd benchmark results; I certainly can't offer any useful guesses right now.
 