AMD: Southern Islands (7*** series) Speculation/ Rumour Thread

Alexko said:
I mean, if your design has a huge gaming_perf/FLOPS ratio, but doesn't actually outpace the competition, is more expensive to make and draws more power, how does that help you?
I thought the 580 was generally faster than the 6970?
 
I know geometry isn't considered sexy anymore but does anybody know if Tahiti has "full" geometry throughput under normal (non-tessellation) circumstances or is it (artificially) limited in the same way that Fermi is to preserve value of their professional cards?
 
I think looking at gaming_perf/FLOPS is taking "academicness" to the point of silliness. For a given process and similar performance level, the best design is the one that gets the best performance/(size×power).
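To make the performance/(size×power) criterion concrete, here is a quick sketch. The relative performance numbers are normalized placeholders and the die sizes and board powers are approximate public figures, so treat the output as illustrative of the metric only, not as a verdict:

```python
# Illustrative comparison of the performance/(size x power) metric.
# Perf values are normalized placeholders; die sizes (mm^2) and board
# powers (W) are approximate public figures, not measured values.

cards = {
    # name: (relative gaming perf, die size in mm^2, board power in W)
    "GTX 580 (GF110)": (1.00, 520, 244),
    "HD 6970 (Cayman)": (0.95, 389, 250),
    "HD 7970 (Tahiti)": (1.25, 365, 250),
}

def efficiency(perf, size_mm2, power_w):
    """Perf per (mm^2 x W), scaled by 1e5 so the numbers are readable."""
    return perf / (size_mm2 * power_w) * 1e5

for name, (perf, size, power) in cards.items():
    print(f"{name}: {efficiency(perf, size, power):.2f}")
```

With these placeholder inputs the smaller Tahiti die scores best, which is exactly the point of folding size and power into the denominator.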

Do you forget what site you're posting on? :) We like to look under the hood.

I mean, if your design has a huge gaming_perf/FLOPS ratio, but doesn't actually outpace the competition, is more expensive to make and draws more power, how does that help you?

Well in nVidia's case it helped them break into the HPC market and establish themselves as the de facto pioneers of the GPU computing industry. Seems like it helped a whole lot! The big question is what they can do to retain that crown now that AMD has caught up. Notice that AMD's gaming perf/transistor has dropped dramatically with Southern Islands, a price nVidia already paid long ago.
 
Do you forget what site you're posting on? :) We like to look under the hood.

What does that have to do with the genesis of this discussion within the thread? It had nothing to do with looking under the hood; it was a guy blindly criticizing GCN for not having better gaming perf/FLOPS.

Notice that AMD's gaming perf/transistor has dropped dramatically with Southern Islands, a price nVidia already paid long ago.

But Kepler is MIA, so that's not looking too good right now for Nvidia. Until then some can dream of vast performance increases from Kepler, but I'm skeptical. If they pull off, say, a performance doubling, then good for them. Even then AMD should be OK thanks to the usual svelte die and dual-GPU leader business. On top of that, they may have a refresh to counter Kepler.

Gaming perf/transistor compared to the 5870, I don't think it has dropped at all. The 7950 is double the 5870 sometimes.
 
What does that have to do with the genesis of this discussion within the thread? It had nothing to do with looking under the hood; it was a guy blindly criticizing GCN for not having better gaming perf/FLOPS.

What thread? This one was started by AlphaWolf's question on Kepler's potential efficiency compared to GCN.

But Kepler is MIA, so that's not looking too good right now for Nvidia. Until then some can dream of vast performance increases from Kepler, but I'm skeptical. If they pull off, say, a performance doubling, then good for them. Even then AMD should be OK thanks to the usual svelte die and dual-GPU leader business. On top of that, they may have a refresh to counter Kepler.

What has AMD's svelte die and dual GPU leadership gotten them the last 3 generations? Absolutely squat. In any case we don't need to even consider Kepler when talking about GCN vs Fermi. If you're only concerned with die sizes that's fine. Other people will question why Tahiti isn't faster given its theoreticals. You can play the efficiency card many ways.

Gaming perf/transistor compared to the 5870, I don't think it has dropped at all. The 7950 is double the 5870 sometimes.

And compared to Cayman? Btw, where did you get 7950 numbers?
 
Gaming perf/transistor compared to the 5870, I don't think it has dropped at all. The 7950 is double the 5870 sometimes.
Key word here is "sometimes" though. That's more of a best case. Average is more like 65% for twice the transistors (for the 7970 vs 5870; I assume a typo there). Things like the improved geometry throughput or faster tessellation don't come for free though, and won't have any effect in a lot of games, so a drop there in overall efficiency is probably expected. And it's probably more difficult to get good efficiency at the high end; Pitcairn should tell us more.
I don't expect any wonders from Kepler either, though.
 
What thread? This one was started by AlphaWolf's question on Kepler's potential efficiency compared to GCN.



What has AMD's svelte die and dual GPU leadership gotten them the last 3 generations? Absolutely squat. In any case we don't need to even consider Kepler when talking about GCN vs Fermi. If you're only concerned with die sizes that's fine. Other people will question why Tahiti isn't faster given its theoreticals. You can play the efficiency card many ways.



And compared to Cayman? Btw, where did you get 7950 numbers?

I meant 7970...

Your point was something like "AMD is paying now for moving to compute, so they suffered a gaming perf per transistor drop". Yet they really didn't (much, within expected norms) compared to the 5870, which was a brute-force architecture, so, at the least, you've been proved wrong there. And that's not even considering the larger-than-normal driver gains AMD has promised for GCN. It must be something besides the move to compute that caused the decrease vs Cayman. We can assume the 8970 may well ~double Cayman at the same transistor count as Southern Islands.


This sub-discussion within the thread started when somebody denounced GCN for not delivering more gaming perf/FLOPS. It wasn't an architecture deep dive.

I don't know, AMD seems OK. I think their "svelte die" has gotten them, you know, a successful position in the market for several gens now, against pretty strong anti-AMD headwinds too. Has it led them to dominate Nvidia or something? No. But unless we know exactly how profitable each GPU division is on these particular dies, we have no idea what's going on really. Suffice it to say, it would seem AMD should be more profitable per die for several gens. The company bottom line, where some will point, is meaningless here as it depends on countless other factors and markets. Including AMD's now-pitiful CPU division...

You can play the efficiency card many ways.

Yes, but some are not particularly relevant. It's simply wrong to say "GPU X has more flops and less relative game performance to those flops so it's bad". It's not a meaningful metric compared to others, period. Not to say it shouldn't ever be discussed or something.

If anything, more flops/less game perf is probably good. It implies a few things: more untapped muscle with some potential to be tapped eventually, likely slightly better performance in future games the longer you own the GPU, and so on. Those things are somewhat trivial though compared to the biggies, to be sure.

I would rather in general own an AMD GPU with perhaps double the teraflops as its Nvidia performance counterpart. Presumably the Nvidia GPU is humming along at near max efficiency already, so there isn't much upside.

In practice this again is probably not very relevant, since Nvidia GPUs have kept up reasonably well until the next one comes along. More relevant by far than gaming perf/FLOPS, but still not very.
 
Actually I have a question about why they have both GDS and L2. I once thought that as long as the L2 supports write-back, the L2 effectively is a GDS.
 
Do you forget what site you're posting on? :) We like to look under the hood.

Sure, and that's fine, I enjoy it as much as any other B3Der. Looking at the choices and trade-offs architects make is always interesting, but when talking about efficiency, we should still choose criteria that matter to the bottom-line, even if we like to look at other things because they give us more general insight.

Well in nVidia's case it helped them break into the HPC market and establish themselves as the de facto pioneers of the GPU computing industry. Seems like it helped a whole lot! The big question is what they can do to retain that crown now that AMD has caught up. Notice that AMD's gaming perf/transistor has dropped dramatically with Southern Islands, a price nVidia already paid long ago.

Yeah but that's a distinct issue. In games, the high perf/flops ratio is irrelevant. In compute workloads, Fermi is often so much faster than Cypress or even Cayman that it ends up more efficient (power- and area-) but the perf/flops ratio is still irrelevant, perf/(cost×power) is still what matters.

That's not to say that the choices made by NVIDIA were bad, just that they should be assessed in terms of criteria that actually matter to them and their customers, not theoretical figures that have no impact on anything.

As for perf/transistor dropping with Tahiti, it's true, but it was pretty much bound to drop because of the law of diminishing returns alone. I think Pitcairn is likely to have about as many transistors as Cayman, or maybe Cape Verde will be close to Barts. Either of those two pairs should allow for more meaningful comparisons.
 
...AND the 7970 comes reasonably close to 2x 3GB 580s for hella less $ and no CFX/SLI issues on that 3-monitor system.

What really counts is that people will be able to buy New Zealand and put it in their computers. Obviously the 7990 will be the best single graphics card that has been or will ever be sold in the world. When it comes out, the R300 will be considered merely the second-best card of all time. :p

You are of course permitted to think differently if AMD choose to codename their next line of cards after U.S. states and they select Ohio as one of the codenames. :p
 
As for perf/transistor dropping with Tahiti, it's true, but it was pretty much bound to drop because of the law of diminishing returns alone. I think Pitcairn is likely to have about as many transistors as Cayman, or maybe Cape Verde will be close to Barts. Either of those two pairs should allow for more meaningful comparisons.
CV only has a 128-bit memory interface though, so it probably won't come very close to Barts in terms of performance, no matter how close it might be in terms of transistors.

Pitcairn vs. Cayman might be a closer call, though.
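For reference, the bandwidth gap behind the 128-bit point is easy to sketch. The Barts figures below are the commonly cited HD 6870 memory specs; the 8.4 Gbps number is a hypothetical round figure, just to show what a 128-bit Cape Verde would need to match:

```python
# Rough peak-bandwidth arithmetic for the 128-bit vs 256-bit comparison.
# The 8.4 Gbps figure is hypothetical, not a real spec.

def bandwidth_gb_s(bus_width_bits, data_rate_gbps_per_pin):
    """Peak bandwidth = bus width (bits) / 8 * per-pin data rate (Gbps)."""
    return bus_width_bits / 8 * data_rate_gbps_per_pin

# Barts (HD 6870): 256-bit bus, 4.2 Gbps effective GDDR5
print(bandwidth_gb_s(256, 4.2))  # 134.4 GB/s

# A 128-bit part would need twice the per-pin rate to match:
print(bandwidth_gb_s(128, 8.4))  # 134.4 GB/s
```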
 
Your point was something like "AMD is paying now for moving to compute, so they suffered a gaming perf per transistor drop". Yet they really didn't (much, within expected norms) compared to the 5870, which was a brute-force architecture, so, at the least, you've been proved wrong there. And that's not even considering the larger-than-normal driver gains AMD has promised for GCN. It must be something besides the move to compute that caused the decrease vs Cayman. We can assume the 8970 may well ~double Cayman at the same transistor count as Southern Islands.

Sorry, you can't prove me wrong with random unsubstantiated claims. At TPU the 7970 is only 70% faster than the 5870 @ 2560x1600 for over twice the transistor count. It's only 38% faster than the 6970 for 65% more transistors. It just gets worse at lower resolutions. Where exactly was I proven wrong? :)
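The perf/transistor arithmetic here is easy to check. The transistor counts below are the commonly cited figures for each chip; the speedups are the TPU numbers quoted above:

```python
# Sanity-checking the perf/transistor claim. Transistor counts are the
# commonly cited figures (billions); speedups are the quoted TPU results.

transistors = {
    "HD 5870": 2.15,
    "HD 6970": 2.64,
    "HD 7970": 4.31,
}

def perf_per_transistor_ratio(speedup, base, new):
    """Relative perf/transistor of the new chip vs the base chip."""
    return speedup / (transistors[new] / transistors[base])

# 7970 is +70% over the 5870 and +38% over the 6970 at 2560x1600
print(perf_per_transistor_ratio(1.70, "HD 5870", "HD 7970"))  # ~0.85
print(perf_per_transistor_ratio(1.38, "HD 6970", "HD 7970"))  # ~0.85
```

Either way you slice it, Tahiti delivers roughly 15% less gaming performance per transistor than its predecessors at that resolution, which is the drop being argued about.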

I dont know, AMD seems ok, I think their "svelte die" has gotten them you know, a successful position in the market for several gens now, against pretty strong anti-AMD headwinds too.

Where are your numbers to back that up? AMD's graphics division still isn't making money. Just look at their statements.

Yes, but some are not particularly relevant. It's simply wrong to say "GPU X has more flops and less relative game performance to those flops so it's bad". It's not a meaningful metric compared to others, period. Not to say it shouldn't ever be discussed or something.

Meaningful to whom? Die size is a rather mundane topic. Yes, it's relevant to manufacturing cost, but does it provide any insight into architectural details? You're free of course to pretend those questions don't exist while staring blankly at die sizes, but it won't make the questions go away.

I would rather in general own an AMD GPU with perhaps double the teraflops as its Nvidia performance counterpart. Presumably the Nvidia GPU is humming along at near max efficiency already, so there isn't much upside.

How has that worked out for you in the past?

Sure, and that's fine, I enjoy it as much as any other B3Der. Looking at the choices and trade-offs architects make is always interesting, but when talking about efficiency, we should still choose criteria that matter to the bottom-line, even if we like to look at other things because they give us more general insight.

The die size argument is simply sticking your head in the sand. Just looking at the die sizes of say the 580 vs 5870 won't tell you very much about anything.

That's not to say that the choices made by NVIDIA were bad, just that they should be assessed in terms of criteria that actually matter to them and their customers, not theoretical figures that have no impact on anything.

Die sizes don't matter at all to me as a customer. I'm not sure how you can start talking about architectural efficiency without first understanding the architectures you're discussing ;)

As for perf/transistor dropping with Tahiti, it's true, but it was pretty much bound to drop because of the law of diminishing returns alone. I think Pitcairn is likely to have about as many transistors as Cayman, or maybe Cape Verde will be close to Barts. Either of those two pairs should allow for more meaningful comparisons.

We'll see soon enough, I'm not sure how much "compute" Southern Islands can drop for lower cost parts. It seems like everything is already well balanced.
 
 
Actually I have a question about why they have both GDS and L2. I once thought that as long as the L2 supports write-back, the L2 effectively is a GDS.
GDS is not really a cache, per se. It's not even a standard memory construct in any cross-vendor API. It acts much like the LDS, but for global synchronization (hence the name), and it's not intended to cache accesses to global memory; that is where the L2 comes into play.
I guess the GDS has some useful applications, as it can be manipulated at the kernel level, but I'm not really sure what they are in the presence of a coherent L2. Certainly not for bandwidth amplification or data stream-out between pipeline stages.
 