Anand's HL2 benchmarks

Must ... read ... faster!

Me: " Brain, you’ve got to give me more power! "
My brain: " Aye, Captain, I’m givin’ ya all she’s got! "
Me: " Brain, I’ve got to have more power! "
My Brain: " Captain, I cannae give ya any more power! "
(Adapted from here. :D )

Edit: Holy cow, the 9200 fares pretty well! Here's hoping those results translate to my 9100. Note that the 9200 is also running a better-looking mode than the 4600, PS1.4 vs. PS1.1. Pity there are no screenshots...

Edit the Second: Worth noting:
One of the benefits of moving away from memory bandwidth limited scenarios is that enhancements that traditionally ate up memory bandwidth will soon be able to be offered at virtually no performance penalty. If your GPU is waiting on its ALUs to complete pixel shading operations then the additional memory bandwidth used by something like anisotropic filtering will not negatively impact performance. Things are beginning to change and they are beginning to do so in a very big way.
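To make that argument concrete, here's a minimal back-of-the-envelope sketch (Python, with made-up numbers, so treat it as an illustration rather than anything measured): if frame time is roughly the slower of the ALU work and the memory traffic, then the extra bandwidth eaten by AF is effectively free as long as the ALUs stay the bottleneck.

```python
# Minimal sketch of the "shader-bound means AF is cheap" argument.
# All numbers below are made up for illustration, not taken from the article.

def frame_time_ms(alu_ms: float, mem_ms: float) -> float:
    """A frame can't finish before its slowest unit does."""
    return max(alu_ms, mem_ms)

base    = frame_time_ms(alu_ms=20.0, mem_ms=12.0)        # hypothetical shader-bound DX9 frame
with_af = frame_time_ms(alu_ms=20.0, mem_ms=12.0 * 1.4)  # assume AF adds ~40% memory traffic

print(base, with_af)  # 20.0 20.0 -> AF costs nothing while the ALUs are the limit
```

Obviously the moment the memory side grows past the ALU side, the old bandwidth-limited behaviour comes right back, which is why this only holds for genuinely shader-heavy workloads.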
 
:oops:

What I found most interesting was the Radeon 9200 beating the 5600 Ultra. (This is using the same DX 8.1 rendering path.)

There were apparently some rendering errors on the 9200 though, so we can't accept that as legitimate for the moment.

I wonder what quality differences there are between the 8.1 and 8.0 path, and if the 9600 would gain any appreciable performance by using the 8.0 path? (8.0 path doesn't seem to help the 5200 much though...)
 
No surprises. To sum up: if you really want to enjoy HL2, buy an ATI card. I wish Anand would elaborate on this, though:
Over the coming weeks we'll be digging even further into the NVIDIA performance mystery to see if our theories are correct; if they are, we may have to wait until NV4x before these issues get sorted out.
 
I have never known nVIDIA to be slack with their drivers in the past, even when they had no competition.
I believe that in this case there is a hardware reason why the PS performance is slow.
 
A good read - what I found most interesting was that ATi's mid-range solution (R9600 Pro) was beating the 5900 Ultra :oops:. Man, nVidia is in a shitload of trouble if all DX9 games perform like this on their hardware. What is the probability of the NV36/38 fixing this? Unlikely/nil I guess. *shrugs*.

Why the hell is the NV3x lacking so much in pixel shader power, though? You'd think they would have understood that this is one of the most important things for DX9 performance.

The other things that this article brought to my attention are:

1. nVidia may actually have been correct in using a 128-bit memory bus on NV30, if things are moving from memory-bandwidth limited to computationally limited. Pity nVidia doesn't have the GPU power to back up the lesser bandwidth of the NV30, though. (And I guess the extra bandwidth will be used for AF and FSAA instead, which can only be a good thing.) Either way, at the end of the day, they screwed the pooch, plain and simple.

2. It looks as though this really may be the dawn of a new era in the graphics industry (we've heard it has been coming for a while, but it is now upon us - yay!). I see this as a good chance for new players, or old players with new hardware, to get back into the industry and actually make things semi-exciting again!
 
Joe DeFuria said:
:oops:

What I found most interesting was the Radeon 9200 beating the 5600 Ultra. (This is using the same DX 8.1 rendering path.)

There were apparently some rendering errors on the 9200 though, so we can't accept that as legitimate for the moment.

I wonder what quality differences there are between the 8.1 and 8.0 path, and if the 9600 would gain any appreciable performance by using the 8.0 path? (8.0 path doesn't seem to help the 5200 much though...)

8500 owners should be happy, and even faster... the 9200 is a stripped-down 8500 :D
 
martrox said:
Payback's a bitch... nVidia is now reaping just what it has sown!
Cripes, is there no end to the schadenfreude online? Seriously, I hate nV's recent tactics as much as the next informed guy, but I don't think comments like this benefit either B3D or its readers.

/me falls off high horse
 
1. nVidia may actually have been correct in using a 128-bit memory bus on NV30, if things are moving from memory-bandwidth limited to computationally limited. Pity nVidia doesn't have the GPU power to back up the lesser bandwidth of the NV30, though. (And I guess the extra bandwidth will be used for AF and FSAA instead, which can only be a good thing.) Either way, at the end of the day, they screwed the pooch, plain and simple.

Anand is wrong :LOL:

Memory will always be a factor; FSAA and other goodies need it... big time.
 
Took me a second to realize all their graphs are the same, i.e. from the tech demo. I was wondering why the numbers looked so familiar. Well, that and the fact that the comments don't match the graphs either. :LOL:
 
Doomtrooper said:
8500 Owners should be happy, and faster....9200 is a stripped down 8500 :D

I'm not sure how well known this is, but the 9000 and 9200 have some pixel shader performance fixes compared to the 8500, especially when dependent texture reads are involved, so that's not a guarantee.

Also, if you remember early UT2003 beta benchmarks, the 8500 did quite well, but fell back when the final release came out.

We'll just have to wait and see, I guess.
 
The Nvidia owners are going to be using the Det 50s and see a good-sized speed increase. Whether or not the drivers are full of hacks won't matter to them. They certainly won't like hearing about all of the optimizations from people in the ATi camp.
 
Interesting. But I'm missing some FSAA/AF benchmarks and screenshots of the different modes :) I would also really like some benchmarks with an older CPU (since I have a puny 1.3 GHz Athlon myself).

Things aren't looking good for Nvidia, though. The 5200 ran at half the speed of the 9200 when both were running the DX8 paths. Ouch... Although Anand mentioned some problems with the 9200, and it will of course be interesting to see what happens with the Det 50s, so I won't make any final judgments yet.

It also seems that all of you who bought the R9700 can safely hold on to your cards for quite some time. IMO, it accomplishes more than the Voodoo2 did as a long-lasting card, because it not only has the raw speed to do it, it also adds major features: features that are usable and, looking at their main competitor, will probably remain usable for quite some time.

And for a response from Nvidia, click here.
 
Yep, didn't know that, but they removed one texture unit from each pixel pipeline, AFAIK. Plus only 230 MHz clock speeds... vs. 275-300 MHz for 8500s.

You are correct, though; I was just speculating.
 
Joe DeFuria said:
:oops:
What I found most interesting was the Radeon 9200 beating the 5600 Ultra. (This is using the same DX 8.1 rendering path.)

If you remember some of ATI's sales pitches for the 9200 way back, they were saying that the 5200 is just too slow for DX9, and that, with the DX9 transistor load forcing cutbacks elsewhere, it ended up slower than the 9200 in DX8.

I guess the 5600 also shares that flaw to a certain extent. Looks like it's back to good old 512x384 for 5200/5600 owners. :D


Something else that's quite interesting is that the 5600 performs at far less than half of the 5900's performance, while the 9600 performs at far greater than half of the 9800's performance. Both budget GPUs are essentially half of their parent GPUs in terms of bandwidth and pipelines. Any speculation as to why we're seeing this?
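One way to frame it (purely a sketch: the 0.5 ratio below just restates the "half the pipes, half the bandwidth" premise above, and the clock ratio is a placeholder I haven't checked against actual specs): under a naive shader-bound model, performance scales with pipes times clock, so a halved part at a similar clock should land near 50% of its parent, and anything far from that points at some other limit.

```python
# Illustrative only: the 0.5 pipe ratio restates the "half the pipes" premise
# above, and the clock ratio is an assumed placeholder, not a checked spec.
# Naive shader-bound scaling says performance ~ pipes * clock.

def expected_fraction(pipe_ratio: float, clock_ratio: float) -> float:
    """Fraction of the parent card's throughput under naive pipes*clock scaling."""
    return pipe_ratio * clock_ratio

naive = expected_fraction(pipe_ratio=0.5, clock_ratio=1.0)
print(f"Naive prediction: ~{naive:.0%} of the parent card")

# The 9600 landing above that mark and the 5600 falling well below it suggests
# pipes*clock isn't the whole story on NV3x -- per-pipe shader efficiency,
# register pressure, or drivers must be eating into it.
```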
 
Wasn't a bad read, but it would have been MUCH better with some screenshots to show the difference between the various DX modes, particularly the difference between DX9 and mixed mode. Hint, hint. Dave? Anything coming up at Beyond3D?
 