AMD: Southern Islands (7*** series) Speculation/Rumour Thread

I'm quite sure the 250W figure was with PowerTune at +20%, even though at least some reviews mention it being at 0%.

At least IMO even the official slides suggested it, listing Max Board Power as 250W and "typical gaming power" at a mere 190W.

Can anyone confirm this?

I'm 99% sure that the 250W max board power was with PowerTune at its default, untouched setting of 0%, which left the user the opportunity to lift the ceiling by 20%. I don't know if the card actually used that much in real-life situations or not. Someone should be able to confirm it once and for all though :)
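Just to make the arithmetic explicit (a minimal sketch, assuming the 250W figure really is the 0% PowerTune ceiling):

```python
# PowerTune ceiling arithmetic, assuming 250 W is the default 0% ceiling
max_board_power = 250      # W, "Max Board Power" from the official slide
typical_gaming = 190       # W, "typical gaming power" from the same slide
powertune_offset = 0.20    # user-adjustable +20% slider

print(f"Ceiling at +20%: {max_board_power * (1 + powertune_offset):.0f} W")         # 300 W
print(f"Headroom over typical gaming at 0%: {max_board_power - typical_gaming} W")  # 60 W
```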
 
Depends entirely on Apple pushing (or not) for quadrupling the resolution on Macs. If they do push, the prices could tumble very quickly.

Remember what the original 22" Apple Cinema Display (1999) cost? It was a relatively strange resolution but was the precursor of later 1680x1050 monitors. Hint: it wasn't even remotely cheap. And it would still be over 5 years before those were somewhat affordable.

Apple pushed the 23" Cinema HD Display in 2003. Again, not very cheap or affordable. And again it was over 6 years until 23-24" (1080p or 1200p) monitors were mostly affordable, although at least they hit under $1k within 4 years.

Apple then pushed the 30" Cinema HD Displays in 2004. And... They are still not affordable to most. The most recent 27" version (with lower resolution although higher pixel density) is still 1k USD.

So pardon me if I don't hold my breath for 4k displays to be even remotely affordable in the next 5+ years. Especially with there being so many different "4k" resolutions (3840x2400, 3840x2160, 4096x2160, just to name a few).

Don't take this to mean that I don't hope it happens. I've been wishing for a doubling of panel pixel density for years and years now. And while this wouldn't be a doubling (only 1.5x the current 30" pixel density for 30" 4k displays), it's at least progress in that arena.

Regards,
SB
 
[Attached image: oclabru-7970-2.jpg]

Some quick number crunches:
7970:
2,048 SP * 925 MHz = 1894400 SP*MHz
3.79 TFLOPS / 1894400 SP*MHz ~ 2*10^-6 TFLOPS/SP*MHz (2.000633)

4.31B Transistors * 925 MHz = 3986.75 BTrans*MHz
3.79 TFLOPS / 3986.75 BTrans*MHz = 9.506*10^-4 TFLOPS/BTrans*MHz

6970:
1536 SP * 880 MHz = 1351680 SP*MHz
2.70 TFLOPS / 1351680 SP*MHz ~ 2*10^-6 TFLOPS/SP*MHz (1.997514)

2.64B Transistors * 880 MHz = 2323.2 BTrans*MHz
2.7 TFLOPS / 2323.2 BTrans*MHz = 1.162*10^-3 TFLOPS/BTrans*MHz
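
In script form, if anyone wants to plug in different specs (same figures as above):

```python
# TFLOPS per SP*MHz and per BTrans*MHz, using the figures above
cards = {
    "7970": {"sp": 2048, "mhz": 925, "tflops": 3.79, "btrans": 4.31},
    "6970": {"sp": 1536, "mhz": 880, "tflops": 2.70, "btrans": 2.64},
}

for name, c in cards.items():
    sp_mhz = c["sp"] * c["mhz"]            # SP*MHz
    btrans_mhz = c["btrans"] * c["mhz"]    # BTrans*MHz
    print(f'{name}: {c["tflops"] / sp_mhz:.3e} TFLOPS/SP*MHz, '
          f'{c["tflops"] / btrans_mhz:.3e} TFLOPS/BTrans*MHz')

# 7970: 2.001e-06 TFLOPS/SP*MHz, 9.506e-04 TFLOPS/BTrans*MHz
# 6970: 1.998e-06 TFLOPS/SP*MHz, 1.162e-03 TFLOPS/BTrans*MHz
```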

Also, pixel fillrate is directly proportional to core clock.

So, correct me if I'm wrong, but it looks like GCN itself brought no improvement in raw throughput (going purely by TFLOPS, which I know doesn't correlate directly with game performance), because the increase in FLOPS follows directly from the increase in SP count and clock.

However, the FLOPS number is roughly a 40% increase, and 40% is right around what we're hearing from leaked benches.

So, theoretically, they could have achieved the same thing by shrinking Cayman and scaling the architecture up.

It also seems they are spending more transistors to get the same FLOP performance, so there is obviously some extra functionality packed in there. Power gating, 3D support + other stuff?

I guess the takeaway is that GCN in itself isn't revolutionary in terms of game performance, and this iteration is more about the die shrink, added features, lower idle power and DirectX 11.1 compliance?
 
Well, that's the big part. The new architecture is supposed to be much more efficient, no? I know the leaked benches don't show that every time, but... we will see... Plus I guess they played it "safe": new architecture, new process, a lot of new things...
 
So, correct me if I'm wrong, but it looks like GCN itself brought no improvement in raw throughput (going purely by TFLOPS, which I know doesn't correlate directly with game performance), because the increase in FLOPS follows directly from the increase in SP count and clock.
Your math just calculates the fact that they use FMA units.

So, theoretically, they could have achieved the same thing by shrinking Cayman and scaling the architecture up.
Probably not. The expectation some have for linear improvement is unreasonable. There were signs Cayman's scaling over its predecessors was petering out.
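
Put another way, the headline numbers are just SPs × 2 ops per FMA × clock:

```python
# Single-precision TFLOPS fall straight out of SPs * 2 (FMA = mul + add) * clock
def tflops(sp, mhz):
    return sp * 2 * mhz * 1e6 / 1e12

print(tflops(2048, 925))  # 3.7888  -> the quoted 3.79 TFLOPS (7970)
print(tflops(1536, 880))  # 2.70336 -> the quoted 2.70 TFLOPS (6970)
```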
 
Your math just calculates the fact that they use FMA units.

Yes, I realize that when considering FLOPS alone. I thought it was interesting to calculate nonetheless to show their equivalency.


Probably not. The expectation some have for linear improvement is unreasonable. There were signs Cayman's scaling over its predecessors was petering out.

Can you elaborate on that? (something that shows the arch hitting a brick wall with scaling?)

Well, that's the big part. The new architecture is supposed to be much more efficient, no? I know the leaked benches don't show that every time, but... we will see... Plus I guess they played it "safe": new architecture, new process, a lot of new things...

Sure, it's more efficient in that you get more performance per watt, but that can be attributed to a die shrink. It would seem that GCN on a 40nm node would be less efficient than Cayman simply because it spends more transistors per clock to get the same amount of performance. You can't really compare across process nodes like that, but I don't see what else you can infer from it. I'm not saying I disagree with what AMD did, because they did add relevant features; I just wouldn't call it a performance king.
 
VLIW4/5 was already getting good utilization for most graphics workloads; what you need to look at is things like compute shaders to see the performance advantage of GCN over VLIW. So Civ 5 uses compute shaders: look at its performance improvement there. What else uses them?

edit: quick google says shogun 2 does as well.
 
Most DX11 games use HBAO/HDAO via compute shaders.
 
Can you elaborate on that? (something that shows the arch hitting a brick wall with scaling?)

For one thing, the gap between the 6950 and 6970 is a lot smaller than theoretical FLOPS numbers would suggest. In fact, in practice a 6950 overclocked to the same frequency as the 6970 performs almost as well; which, by the way, was also true for the 5850 and 5870.

So you can't really assess how much of an improvement in efficiency Tahiti is, because you don't know how well—or poorly—Cayman would have scaled. However, there's a very good chance that either Pitcairn or Cape Verde will end up with a FLOPS count very similar to some Evergreen/NI chip (could be Cypress, maybe Barts, Juniper or Cayman). Playing around with clocks could even bring it to perfectly equal numbers. This would allow for more straightforward efficiency comparisons. Until then, calculating flops/transistor ratios won't tell you much.
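
To illustrate the "playing around with clocks" point, here's a small sketch; the non-Cypress SP counts below are made-up placeholders, not leaks:

```python
# Clock at which a chip with sp_new shaders matches the FLOPS of a reference chip.
# Both architectures use FMA units, so FLOPS is proportional to SP count * clock.
def clock_for_equal_tflops(sp_new, sp_ref, mhz_ref):
    return sp_ref * mhz_ref / sp_new

# Cypress is 1600 SP @ 850 MHz; the 1280 and 1408 SP counts are purely hypothetical.
print(clock_for_equal_tflops(1280, 1600, 850))  # 1062.5 MHz
print(clock_for_equal_tflops(1408, 1600, 850))  # ~965.9 MHz
```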
 
However, the FLOPS number is roughly a 40% increase, and 40% is right around what we're hearing from leaked benches.
Looks more like a 45% increase overall to me, with some compute stuff showing way better results, obviously. So from that point of view it seems it really is more efficient per flop (plus, as others mentioned, I don't think you could really scale Cayman up: give it twice the number of SIMDs and performance would probably increase by much less than even 50%).
From another angle, though, it isn't particularly great: 60% more transistors for 45% more performance. But considering the compute focus and the difficulty of scaling Cayman up further, maybe that's really all that was possible.
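
As a quick ratio, using those figures:

```python
# Rough perf-per-transistor comparison, Tahiti vs Cayman, using the ~45% / ~60% figures
perf_gain = 1.45        # ~45% more performance overall
transistor_gain = 1.60  # ~60% more transistors (4.31B vs 2.64B)

print(f"Performance per transistor vs Cayman: {perf_gain / transistor_gain:.2f}x")  # ~0.91x
```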
 
I'm looking at the Skyrim graph, and I'm 100% sure there aren't enough ROPs. Reports are that later this year we'll see 4k monitors @ 30-36", and 2880 @ 17". Giving Tahiti only 32 ROPs is a colossal mistake. With the rumoured price, people on 1080p probably won't go for this card, and we're already seeing graphs where it's no faster than a 580 @ 2560. Failure all round.

But it's OK! WINZIP is accelerated. AMD, pull your head out your arse plz.

Where?
Also, there are games where even the 6970 is faster than the 580 at even lower resolutions than that, let alone at such high resolutions, where the 6970 catches up to the 580 anyway.
There will always be a few games where card X performs better or worse than "it should" or than it does in general.

Oh, and Skyrim is a really CPU-limited game.
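
For what it's worth, a rough back-of-the-envelope on the ROP side (theoretical peak fill only, ignoring bandwidth, blending, MSAA and overdraw):

```python
# Theoretical peak pixel fillrate vs raw pixel output needed at a given resolution.
rops, core_mhz = 32, 925                 # rumored Tahiti ROP count and core clock
peak_fill_gpix = rops * core_mhz / 1000  # Gpixels/s

for name, w, h in [("2560x1600", 2560, 1600), ("3840x2160", 3840, 2160)]:
    output_gpix = w * h * 60 / 1e9       # final pixels per second at 60 fps
    print(f"{name}: {output_gpix:.2f} Gpix/s output at 60 fps "
          f"vs {peak_fill_gpix:.1f} Gpix/s peak fill")

# 2560x1600: 0.25 Gpix/s output at 60 fps vs 29.6 Gpix/s peak fill
# 3840x2160: 0.50 Gpix/s output at 60 fps vs 29.6 Gpix/s peak fill
```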
 
Can you elaborate on that? (something that shows the arch hitting a brick wall with scaling?)
There were a number of games that had the 6970 uncomfortably close to the 5870. In some, the older chip was bracketed in performance by the 6970 and 6950, despite a 20% increase in transistor count. Barts was rather close to the 6950 in some places as well.

Each generation returned less than a 1:1 improvement for increases in execution resources, outside of areas where the previous generation tanked.
RV770 had some very large gains compared to RV630, but it should be noted that there were cases where it more than doubled in resources, and there were notable shortcomings in RV630.

Sure, it's more efficient in that you get more performance per watt, but that can be attributed to a die shrink.
Power improvement for the node jump was given as ~40%, assuming no improvement in transistor performance and no increase in transistor count.
The shrink alone would not be enough to get both the power savings and the increase in overall performance that GCN appears to get.
 
So.....how long do you guys think it will take for AMD to release 1.5GB versions of these babies (if ever)?

What price drop could we expect from a 7970 3GB to a 7970 1.5GB?

Also, will there be 7950s at 1.5GB from the beginning, since it's said that AMD is giving the AIB partners free rein with this one?

thanks
 
So.....how long do you guys think it will take for AMD to release 1.5GB versions of these babies (if ever)?
I doubt they'll ever do that. Some custom AIB 7950 parts might come with only 1.5GB, but I don't think 7970s ever will, not even as custom designs.
 
I doubt they'll ever do that. Some custom AIB 7950 parts might come with only 1.5GB, but I don't think 7970s ever will, not even as custom designs.

There are 1GB 6950s, and they do just fine in most games under "normal" conditions. It might be a good, cost-efficient answer to GK104, unless Pitcairn manages to handle that job, but that would mean either a remarkable achievement on AMD's part or an unfortunate failure on NVIDIA's (assuming GK104 is indeed the 768-SP, ~360mm² part it is rumored to be).
 
Thanks, those answers help put it in perspective.
 
The cinema displays have far less volume than Macs. High dpi on Macs would drag all the PCs into the high-dpi era, driving volumes that would trickle down everywhere else.

Besides, the latest Thunderbolt display integrates a lot of hardware which is quite useful. If Thunderbolt weren't limited to a single vendor, this display would be quite valuable as a dock for tablets, phones and laptops. I believe it does not have a direct competitor as far as features (GigE, audio, USB, camera, Bluetooth??) are concerned.
 