AMD: R7xx Speculation


That's not too surprising, given that the 512MB 8800 GT in SLI (a card which is currently selling for as low as ~$145 WITHOUT rebate on newegg) can "outperform" the GTX 280 in average framerate in many games depending on the settings:

http://www.anandtech.com/video/showdoc.aspx?i=3334&p=11

So sure, a 4850 Crossfire would also "outperform" the GTX 280 in terms of average framerate in many games depending on the settings, as it has even higher performance than 8800 GT SLI. Unfortunately I don't believe that many of these impressions are realistic based on actual gameplay experience and stable framerates.

And on that note, where does it end? A reviewer could argue that the $290 512MB 8800 GT SLI system should easily "outperform" the 4870 in average framerate too. Is that realistic in terms of real world game play? I'm not so sure.
 
HD4850 vs 4870

GT1 : 8.62 - 12.57 - 146%
GT2 : 7.13 - 12.72 - 178%

FT1 : 651.67 - 779.86 - 120%
FT2 : 3.41 - 5.49 - 161%
FT3 : 18.11 - 20.58 - 114%
FT4 : 14.23 - 17.02 - 120%
FT5 : 27.59 - 33.58 - 122%
FT6 : 48.92 - 53.73 - 110%

GPU Score : 2692 - 4316 - 160%
GT2 looks like it's bandwidth limited. Interesting to see Perlin Noise (FT6) scaling by less than 20%.

Those scores make GTX260 sort of dead, don't they? Apart from GPU Cloth, which is a disaster zone on ATI.

http://www.extremetech.com/article2/0,2845,2320125,00.asp

Jawed
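For what it's worth, here's a back-of-the-envelope sketch of how those ratios line up against the rumoured clocks (625MHz core / 993MHz GDDR3 for the HD4850, 750MHz core / 900MHz GDDR5 for the HD4870 -- assumptions, not confirmed specs):

```python
# Sketch: compare the Vantage scaling quoted above against rumoured clock ratios.
CORE_4850, CORE_4870 = 625e6, 750e6      # assumed core clocks, Hz
BW_4850 = 256 / 8 * 2 * 993e6 / 1e9      # 256-bit GDDR3 @ 993MHz -> ~63.6 GB/s
BW_4870 = 256 / 8 * 4 * 900e6 / 1e9      # 256-bit GDDR5 @ 900MHz -> ~115.2 GB/s

scores = {  # (HD4850, HD4870) results quoted above
    "GT1": (8.62, 12.57), "GT2": (7.13, 12.72), "FT1": (651.67, 779.86),
    "FT2": (3.41, 5.49), "FT3": (18.11, 20.58), "FT4": (14.23, 17.02),
    "FT5": (27.59, 33.58), "FT6": (48.92, 53.73),
}

print(f"core clock ratio : {CORE_4870 / CORE_4850:.2f}x")   # 1.20x
print(f"bandwidth ratio  : {BW_4870 / BW_4850:.2f}x")        # ~1.81x
for test, (hd4850, hd4870) in scores.items():
    print(f"{test}: {hd4870 / hd4850:.2f}x")
# GT2 comes out at ~1.78x, right around the bandwidth ratio, while FT6
# (Perlin Noise) at ~1.10x doesn't even keep up with the 1.20x core clock bump.
```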
 
I'm not up on the cloth feature test, but it's described as heavily using vertex and geometry shaders.
So why is it such a problem for AMD?
R6xx would seem to do very well, at least from the description.
 
Apart from GPU Cloth, which is a disaster zone on ATI.
That's annoying, since this test should play to one of the stronger aspects of the R600 architecture. Well, it's not the first time a brand new DX10 code path mysteriously drags -- I still remember the performance woes with D3DRightMark v2.0 on older drivers.
 
HD4850 vs HD4870, Xtreme preset :

GT1 : 8.62 - 12.57
GT2 : 7.13 - 12.72

FT1 : 651.67 - 779.86
FT2 : 3.41 - 5.49
FT3 : 18.11 - 20.58
FT4 : 14.23 - 17.02
FT5 : 27.59 - 33.58
FT6 : 48.92 - 53.73

GPU Score : 2692 - 4316

Source : TweakTown

http://www.tweaktown.com/news/9691/index.html

From where i got the results for comparison with HD 4850 :

http://img101.imageshack.us/img101/3754/vantagextremestock2808tn4.jpg
http://img101.imageshack.us/img101/8016/vantageud4.jpg

Look at the pixel fill rate test: 5.49 GP/s. More than 16 ROPs?

The article says it's a 4850. It seems overclocked, though.
Why does the memory bandwidth show 57.6GB/s? Isn't it supposed to be more?
 
Look at the pixel fill rate test: 5.49 GP/s. More than 16 ROPs?
Huh, why? This is a pure bandwidth test, so no wonder it scales with memory bandwidth. 16 ROPs would be enough for 10 GP/s at 625MHz (IIRC this test does a single texture / alpha blend with an FP16 render target, which RV770 should handle at full rate).
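If anyone wants the rough numbers behind that: a quick sketch, assuming an FP16 render target (8 bytes/pixel) with alpha blending, i.e. roughly one destination read plus one write per pixel, and the same rumoured clocks/bandwidth as above:

```python
# Sketch of the FT2 (colour fill) ceilings under the assumptions above.
def rop_limit_gpix(rops, core_hz):
    """Peak fill rate if the ROPs were the only limit (1 pixel per ROP per clock)."""
    return rops * core_hz / 1e9

def bw_limit_gpix(bandwidth_gbs, bytes_per_pixel=16):
    """Peak fill rate if memory bandwidth were the only limit."""
    return bandwidth_gbs / bytes_per_pixel

print(rop_limit_gpix(16, 625e6))   # ~10.0 GP/s -- far above anything measured
print(bw_limit_gpix(63.6))         # ~4.0 GP/s  vs 3.41 measured on the HD4850
print(bw_limit_gpix(115.2))        # ~7.2 GP/s  vs 5.49 measured on the HD4870
# Both cards sit under the bandwidth ceiling and nowhere near the ROP ceiling,
# which fits the "pure bandwidth test" reading.
```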
 
So sure, a 4850 Crossfire would also "outperform" the GTX 280 in terms of average framerate in many games depending on the settings, as it has even higher performance than 8800 GT SLI. Unfortunately I don't believe that many of these impressions are realistic based on actual gameplay experience and stable framerates.

And on that note, where does it end? A reviewer could argue that the $290 512MB 8800 GT SLI system should easily "outperform" the 4870 in average framerate too. Is that realistic in terms of real world game play? I'm not so sure.
You can't compare SLI & Crossfire. Crossfire works... :p
 
That's not too surprising, given that the 512MB 8800 GT in SLI (a card which is currently selling for as low as ~$145 WITHOUT rebate on newegg) can "outperform" the GTX 280 in average framerate in many games depending on the settings:

http://www.anandtech.com/video/showdoc.aspx?i=3334&p=11

So sure, a 4850 Crossfire would also "outperform" the GTX 280 in terms of average framerate in many games depending on the settings, as it has even higher performance than 8800 GT SLI. Unfortunately I don't believe that many of these impressions are realistic based on actual gameplay experience and stable framerates.

And on that note, where does it end? A reviewer could argue that the $290 512MB 8800 GT SLI system should easily "outperform" the 4870 in average framerate too. Is that realistic in terms of real world game play? I'm not so sure.


Good point. What I care about is MINIMUM fps.
 
I'm not up on the cloth feature test, but it's described as heavily using vertex and geometry shaders.
So why is it such a problem for AMD?
R6xx would seem to do very well, at least from the description.
If you ever read the digit-life reviews (for example), you can see that the DX10 feature tests are all over the map. In several GS tests NVidia clobbers ATI, and in others it loses.

It looks like both architectures really have their quirks with the GS. I think I remember Humus mentioning ATI and NVidia preaching different uses and best practices.
 
If you're referring to the Techreport tests, that remains true... in the flyby.

Apparently, for in-game tests, ATI does a lot better.
That doesn't really answer my question, though. I specifically remember ATI doing fine without AF in the tests I'm talking about, whether they were flyby or in-game.

I'm just wondering if anyone here knows whether AF is automatically enabled in the "High Quality" or "Very High Quality" settings on TR.

EDIT: Never mind. It looks like Crysis has no in-game settings for AF, so the answer is no.
 
The 9800GX2 is also getting beaten or equaled by the single 3870 in 3 out of 4 resolutions there; not sure what to make of that test.

We run Crysis in DX10 on Very High mode, and we're running the regular island batch benchmark (for a variety of reasons).

Needless to say, it looks like the 9800 GX2 has some "issues" with higher resolutions and AA settings running Crysis in DX10/Very High. It's a stuttery mess. My guess is it's a driver issue.

But yeah, you can't compare one site's benchmarks against another's if the methodology isn't the same.
 
Looking at that 4870 GPU-Z screen, the 900MHz actually makes sense; it might not be QDR as such, but rather a GDDR5 feature.

From the extremetech preview of GDDR5: link

Bandwidth first: A system using GDDR3 memory on a 256-bit memory bus running at 1800MHz (effective DDR speed) would deliver 57.6 GB per second. Think of a GeForce 9600GT, for example. The same speed GDDR5 on the same bus would deliver 115.2 GB per second, or twice that amount. Take any GDDR3 bandwidth on a given clock rate and bus width and double it, and you get GDDR5's bandwidth. Of course, the marketing guys love big numbers and would undoubtedly not call it 1800MHz, just as 1800MHz GDDR3 is really running at 900MHz. Expect the marketing guys to call memory at that speed 3200MHz.

Maybe Jason can expand on that a bit more?

GDDR5 sure does have a lot of pretty new features :oops:
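The arithmetic in that quote is easy to check; here's the same calculation as a small sketch (treating GDDR5 as four data transfers per base clock, whatever the marketing ends up calling it):

```python
# Peak bandwidth = bus width (bytes) * base clock * transfers per clock.
def bandwidth_gbs(bus_bits, base_clock_mhz, transfers_per_clock):
    return bus_bits / 8 * base_clock_mhz * 1e6 * transfers_per_clock / 1e9

print(bandwidth_gbs(256, 900, 2))   # GDDR3 @ 900MHz base (1800 effective): 57.6 GB/s
print(bandwidth_gbs(256, 900, 4))   # GDDR5 @ 900MHz base: 115.2 GB/s, i.e. double
```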
 
Expect the marketing guys to call memory at that speed 3200MHz.

Had to laugh at this bit. If GDDR5 is indeed 4x900 (or double 1800), then it should be 3600 and not 3200, unless the GDDR5 was set at 800MHz. But since he's referring to 900MHz...

US
 
Yeah, that looks like a typo or math error, but it does seem that GDDR5 runs at a real base frequency whose effective data rate is double what GDDR3 would give at that same clock.
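Spelling the labels out from a 900MHz base clock (nothing here beyond the arithmetic already quoted):

```python
# Effective "marketing" data rates from a given base clock.
base_mhz = 900
print(base_mhz * 2)   # GDDR3: 1800 "MHz" effective
print(base_mhz * 4)   # GDDR5: 3600 "MHz" effective -- not 3200
print(800 * 4)        # 3200 would only follow from an 800MHz base clock
```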
 