AMD: Southern Islands (7*** series) Speculation / Rumour Thread

[attached image: hybrid CrossFire benchmark chart]


This is a "best-case" scenario, but I don't know how things have progressed in the driver department in hybrid crossfire between APUs and discrete GPUs.
Maybe it's more, maybe it's less, hence the "if".
Nonetheless, if the notebook is paired with 1600MHz DDR3, the jump in the iGPU performance should be considerable, as should be the hybrid crossfire results.

I am not talking about Fps that are read from the end of a Fraps run or minimum Fps during one. I am talking about playability and how much of the improvement actually translates into better gameplay.


(This is from August timeframe, BF BC2, medium details, 2x MSAA, no HBAO)

Not in gaming it isn't.
Depends on your definition of "gaming". Mine: benchmarks must be run within the game itself, not in a canned benchmark that's easy to identify and optimize for. So I'd at least exclude AvP, Battleforge and Far Cry 2 here, possibly more.
I am also not sure about running Crysis 2 and Shogun 2 in DX9, or about using a purely CPU-limited setting for the higher-end cards in StarCraft 2 by not enabling antialiasing in the drivers, which specifically provide that option.

At 1920x1200 the GTX 580 averages 11.88% faster than the HD 6970; at 2560x1600 it is only 8.49% faster ( http://www.techpowerup.com/reviews/Zotac/GeForce_GTX_560_Ti_448_Cores/27.html - using TPU as the source, since they test the largest list of games )
The more pixels that need to be moved, the better for the Radeon and the worse for the GeForce. I think it has something to do with the GeForce's inability to export more than two pixels per clock per SM. That won't hurt gameplay very much in my opinion, because only the fps peaks are capped to a lower rate, but it definitely shows in the benchmarks.
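As a rough sanity check of that premise (my numbers, not from the post): plugging the commonly quoted stock specs into it does leave the HD 6970 with the higher effective pixel rate. The 2 pixels/clock/SM export limit is taken as the poster's premise, not something measured here.

```python
# Back-of-the-envelope pixel-rate comparison for the argument above.
# Assumes stock specs (GTX 580: 16 SMs, 48 ROPs, 772 MHz; HD 6970: 32 ROPs,
# 880 MHz) and takes the 2 pixels/clock/SM export limit as a premise.
gtx580_clock_ghz, gtx580_sms, gtx580_rops = 0.772, 16, 48
hd6970_clock_ghz, hd6970_rops = 0.880, 32

gtx580_rop_rate = gtx580_rops * gtx580_clock_ghz        # ~37.1 Gpix/s on paper
gtx580_sm_export = gtx580_sms * 2 * gtx580_clock_ghz    # ~24.7 Gpix/s if SM export is the cap
hd6970_rate = hd6970_rops * hd6970_clock_ghz            # ~28.2 Gpix/s

print(f"GTX 580 ROP rate:       {gtx580_rop_rate:.1f} Gpix/s")
print(f"GTX 580 SM-export rate: {gtx580_sm_export:.1f} Gpix/s")
print(f"HD 6970 ROP rate:       {hd6970_rate:.1f} Gpix/s")
```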
 
On topic (and therefore a new post):
I am actually very excited about Southern Islands, and that's something I don't say lightly or throw around. :)
 
A 580 is maybe 10-15% faster than a 6970. And a 6970 is maybe 50% faster than a 5870. And a 5870 is maybe 10% faster than my 4870x2. So, yes, only ~100% faster than my 4870x2 three years later is pretty shabby.
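Just to check the compounding of those "maybe" figures (a rough sketch using the midpoint of the 10-15% range, not exact benchmark data):

```python
# Compound the rough generational figures quoted above:
# 580 vs 6970 (+12.5%, midpoint of 10-15%), 6970 vs 5870 (+50%),
# 5870 vs 4870X2 (+10%).
total = 1.0
for step in (1.125, 1.50, 1.10):
    total *= step
print(f"GTX 580 vs 4870X2: ~{(total - 1) * 100:.0f}% faster")  # ~86% faster
```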

I want a return to the days of 9700pro being 5x faster than 4600Ti, after just 1 generation.

I want 90% of 6990 speed from a 7970, realistically.

Yeah, right. The key word is "maybe", and it's not exactly right.
Wow. :oops:
Take a look at the TPU link given by your colleague only one post above:
17-18% at 2560x1600. :LOL:
 
Was going off a game and a resolution I play:

[attached image: Metro 2033 benchmark chart, Very High settings]


6970 = 45 fps vs 5870 = 31 fps = ~50%. In less demanding situations the gap is even smaller, so my point makes even more sense. A 50-100% improvement in over 3 years is a shambles!
 
On topic (and therefore a new post):
I am actually very excited about Southern Islands, and that's something I don't say lightly or throw around. :)

Because you have a good indication of the power requirements and performance ;) :p
 
A 580 is maybe 10-15% faster than a 6970. And a 6970 is maybe 50% faster than a 5870. And a 5870 is maybe 10% faster than my 4870x2. So, yes, only ~100% faster than my 4870x2 three years later is pretty shabby.

Why are you comparing a dual-GPU solution to single-GPU ones? You should be comparing it to the 6990.
 
Because I *have* a dual GPU card atm. My next card will *not* be a dual GPU card. 6970 and 5870 are single GPU cards.
 
That's understandable. But you can't say they made just a 50-100% performance increase over 3 years. The actual performance increase is higher; you just don't include the top-performing parts in your survey. It's also unfair to blame AMD/Nvidia: the biggest brake was TSMC and its 40nm process, which stalled progress for almost 3 years.
 
Was going off a game and a resolution I play:

[attached image: Metro 2033 benchmark chart, Very High settings]


6970 = 45 fps vs 5870 = 31 fps = ~50%. In less demanding situations the gap is even smaller, so my point makes even more sense. A 50-100% improvement in over 3 years is a shambles!

I don't see how exactly you prove your point when it's 6970 = 31 FPS vs 5870 = 22 FPS. :oops: You are using a single application which obviously benefits most (and is the exception rather than the norm) from certain improvements in Cayman over Cypress (tessellation, perhaps). But averaged across all games, the performance difference is nowhere near 50%, only 15-20%.
 
I want a return to the days of 9700pro being 5x faster than 4600Ti, after just 1 generation.
I just laughed. :LOL:
Interesting memory you've got there...

From Guru3D (p)review of Radeon 9700 Pro:

Quake III            1280x1024   1600x1200
GeForce4 Ti 4600        179         141
Radeon 9700 Pro         200         176

AquaMark             1280x1024   1600x1200
GeForce4 Ti 4600        45.1        33.2
Radeon 9700 Pro         61.7        44.9

Other tests showed similar gaps. There was a 50% advantage at the higher AquaMark resolutions when testing on an SDR instead of a DDR system (Athlon XP platform; probably down to R300's wider memory bus), but that was the biggest difference.

I'm aware the gap grew bigger with later games when there was heavier shader use, but still:
at the time of its release, this 20-50% performance gain over the Ti 4600 was all there was.
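For what it's worth, here is the quick arithmetic on the Guru3D numbers quoted in the table above (figures as posted, not re-checked against the original review):

```python
# Radeon 9700 Pro vs GeForce4 Ti 4600, from the Guru3D table above.
guru3d = {
    "Quake III 1280x1024": (200, 179),
    "Quake III 1600x1200": (176, 141),
    "AquaMark 1280x1024":  (61.7, 45.1),
    "AquaMark 1600x1200":  (44.9, 33.2),
}
for test, (r9700, ti4600) in guru3d.items():
    print(f"{test}: +{(r9700 / ti4600 - 1) * 100:.0f}%")
# prints +12%, +25%, +37%, +35%
```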

I assume you're talking about huge differences like the one between an 8800 GTX and a 7900 GTX in "true" Shader Model 3.0 games, but those times are over, because such large gains were only possible due to the huge deficiencies of architectures like NV30 and G70/71. I think the last major bottleneck remaining in more recent architectures is Cypress/Cayman's limited setup/primitive/geometry/tessellation pipeline, which is exactly one of the aspects where GCN looks like a tremendous improvement. In other aspects like raw shader, texture and pixel throughput, both AMD and Nvidia are mostly process-limited nowadays, and that won't change anymore.

 
But averaged across all games, the performance difference is nowhere near 50%, only 15-20%.

Given that I want *higher* performance, your 15-20% only makes my point for me even more. My 50% was a conservative worst case for the 6970. Thanks.

I just laughed. :LOL:
Interesting memory you've got there...

I am admittedly cherry-picking results, but:

Jedi Knight 2 w/ AA & Aniso
'demo jk2ffa' @ 1600x1200
ATI Radeon 9700 Pro - 101.2
NVIDIA GeForce4 Ti 4600 - 21.6

Serious Sam 2 w/ AA & Aniso
Little Trouble @ 1600x1200
ATI Radeon 9700 Pro - 52.2
NVIDIA GeForce4 Ti 4600 - 24.3

Unreal Tournament 2003 w/ AA & Aniso
dm-antalus @ 1600x1200
ATI Radeon 9700 Pro - 31.3
NVIDIA GeForce4 Ti 4600 - 11.1

(not really playable either way, but the numbers hold up)

http://www.anandtech.com/show/970/20

Remember, back in the day AA wasn't really used in reviews (I doubt it's on in the benchies you linked to; edit: yeah, I just re-read the Guru3D write-up, and that's a seriously poor review). But when it was (and AA was why I upgraded to a 9700 over a 4600), the 9700 was easily 2-3x faster than the Ti 4600.
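Same quick arithmetic on those AnandTech numbers (again, figures as posted above):

```python
# Radeon 9700 Pro vs GeForce4 Ti 4600 with AA & aniso at 1600x1200
# (AnandTech figures quoted above).
anandtech = {
    "Jedi Knight 2":           (101.2, 21.6),
    "Serious Sam 2":           (52.2, 24.3),
    "Unreal Tournament 2003":  (31.3, 11.1),
}
for game, (r9700, ti4600) in anandtech.items():
    print(f"{game}: {r9700 / ti4600:.1f}x")
# prints ~4.7x, ~2.1x, ~2.8x
```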
 
I am not talking about Fps that are read from the end of a Fraps run or minimum Fps during one. I am talking about playability and how much of the improvement actually translates into better gameplay.

(This is from August timeframe, BF BC2, medium details, 2x MSAA, no HBAO)

Well, you can't discuss actual "playability" with a Crossfire/SLI setup without presenting V-Synced results, which pretty much invalidates those graphs.

For all the years I used both SLI (2*6800GT) and Crossfire (2*3870, 2*4870) systems, one thing I know for sure is that if I want to play games with a dual-card setup and it's using AFR, turning V-Sync on is pretty much mandatory.

I'm still amazed at how "professional" reviewers keep insisting on showing non-vsynced results for all multi-GPU tests, when the only solution where going without vsync should be passable is Lucid's Hydra.
It shows how little these people know about what they're doing, or at least how little they really care about their supposed customers (readers).
 
Ahhh, yes, I understand. Obviously I misinterpreted the performance numbers you posted above. From the look of it, they seemed to be measured with Vsync off.
 
Ahhh, yes, I understand. Obviously I misinterpreted the performance numbers you posted above. From the look of it, they seemed to be measured with Vsync off.

Irony is useless here.
I simply pointed out a general performance-advantage figure for hybrid CrossFire, as stated by the review in question:
If ACF can provide at least a 30% increase on average, like what we see in TWS2, it could be useful.
You're the one who started the "playability" discussion and proceeded to show vsync-less results (from an image hosted on a webpage filled with awfully annoying ads, TBH).

______________________________________________________________________


VR-Zone says the card probably comes with 1.5GB of GDDR5.
I sure hope it's 3GB, as the card should be able to make use of more than 1.5GB of graphics memory.
That should come in handy for CrossFire and very high resolutions (Eyefinity).
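For context on why the rumour lands on exactly those two capacities: assuming a 384-bit memory interface (a rumoured figure, not confirmed in this thread), the arithmetic only allows 1.5GB or 3GB depending on chip density:

```python
# Memory capacity options for a 384-bit GDDR5 interface (assumed bus width).
bus_width_bits = 384
channel_width_bits = 32
chips = bus_width_bits // channel_width_bits        # 12 memory chips

for density_gbit in (1, 2):                         # 1 Gbit or 2 Gbit GDDR5 chips
    total_gb = chips * density_gbit / 8             # gigabits -> gigabytes
    print(f"{density_gbit} Gbit chips -> {total_gb} GB")
# 1 Gbit chips -> 1.5 GB
# 2 Gbit chips -> 3.0 GB
```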
 
Remember, back in the day AA wasn't really used in reviews (I doubt it's on in the benchies you linked to; edit: yeah, I just re-read the Guru3D write-up, and that's a seriously poor review). But when it was (and AA was why I upgraded to a 9700 over a 4600), the 9700 was easily 2-3x faster than the Ti 4600.
OK, point taken.

But like I said, those differences were mostly down to extreme weaknesses of certain older architectures: AA/AF for the GeForce 4 Ti, SM2.0 for the GeForce 5x00, SM3.0 for the GeForce 6x00 and 7x00, AA/AF for the Radeon 2000 and 3000 series. But starting with G80 for Nvidia and R700 for AMD, both got rid of those extreme weaknesses and bottlenecks (except for tessellation in AMD's case, maybe), which is why it's very unlikely we'll ever see those kinds of jumps again.
 