AMD: Southern Islands (7*** series) Speculation/ Rumour Thread

[die shot image: ycxci.png]

[die shot image: Z8KLR.png]



I can count 32 symmetrical structures on the Tahiti die and 10 for Cape Verde.
No cookie monster here. :p
 
In the $100-200 segment, customers got used to a 50-100% performance increase in games every year.
Since 2010 this has failed (the 5770 ≈ GTX 260/HD 4870 = $150 in 2009). Now we have only a ~30% performance increase from 2009 to 2012, i.e. roughly 10% for each of those three years.
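To put those numbers in perspective, here is a quick sketch of the compound-growth arithmetic (the ~30% total and 50-100%/year figures are the poster's estimates, not benchmarks):

```python
# If performance grew ~30% total over three years, the equivalent
# compound annual growth rate is:
total_gain = 1.30          # ~30% total increase, 2009 -> 2012 (poster's estimate)
years = 3
annual = total_gain ** (1 / years) - 1
print(f"~{annual * 100:.1f}% per year")   # compounding, so slightly under 10%

# Compare with the old cadence of 50-100% per year:
old_low, old_high = 1.50 ** years, 2.00 ** years
print(f"old cadence: {old_low:.2f}x to {old_high:.2f}x over {years} years")
```

Compounding at the old 50-100%/year rate would have meant a 3.4x-8x card in the same window, which is why the ~1.3x total feels like stagnation.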
 
Really? So these are no better than the 5770 (and its rebranded sibling, the 6770)? At all? Because that's what they're replacing -- NOT the 6850, or 6870, or 6950, or 6970. Until you realize this, you're just spouting nonsense and comparing entirely unrelated product lines.

Last year's TOP END television may still do things better than today's brand new midrange television -- and likely do it at a lower cost so long as you don't mind it being already a year old. Does that make the new midrange device less competent or marketable? Nope.

Our subjective opinions, or even hard facts about what is replacing what, don't matter when two products stand side by side in the store and the faster card is cheaper and has no clear drawbacks. It is the better buy of the two. The prices I've thrown around aren't some super clearance deals either, but have been standard prices for the cards for a while now.

TVs have been quite stagnant for a while. I'm thinking that a high-end Sony/Samsung/LG TV from 2011 is better in probably every way than their midrange model in 2012, but you are also going to have difficulties finding one cheaper.
 
So then, you just rehash the same argument for every video card release? Because that's basically what you've stated, albeit in more words. Let's be specific: not much has really changed for NVIDIA architecturally since the G80 days, Fermi included. Logically, I should assume that your lambasting of prior versions of the same architecture, when the new MSRP of the new part somehow exceeds the devalued price point of the former top-end part, exists equally between vendors. It should therefore be a trivial matter to search for your prior posts lamenting NVIDIA's similar treatment of prior releases.

Interestingly, I find no such lamenting over similar NV foibles.

Is the 77xx series NOT a different architecture than the devices before it? The new architecture has a laundry list of new features that simply did not exist on the former platforms. Your performance-per-devalued-dollar complaints are really not interesting; I'm sure you can buy a Honda Civic that will do 130mph in the same way that a Hyundai Genesis sedan will. Does that make them functionally equivalent?

I suppose it might, if your only measuring stick was the speedometer.
 
Increased compute performance doesn't benefit current games (except for BF3 and a few others), but as the DX11 API gets used more, we will see more games using DirectCompute for their lighting (deferred lighting) and post processing (screen space ambient occlusion, antialiasing, etc). Later we will surely see many more algorithms moving to the GPU, so the performance gap will become even wider in the future. Midrange GCN-based hardware is a more future-proof choice than the 6850. It might be on par when running existing DX9/DX10 games, but in future games (and with future drivers) it will outperform the old architecture by a wide margin.

The 46% faster LuxMark score is one indication that a future lighting pipeline runs much faster on hardware designed for compute workloads.


As always of course when looking to "future workloads" though, the question is if you'll even own the card by the time that becomes relevant, and the answer is usually "no". But sure, it counts for something, maybe even a lot.

BTW, the 5770 seemed to be AMD's most popular card in a long while, going by the Steam Survey I believe (it was the only AMD card amongst all the Nvidia ones in the 6 or 7 most popular individual SKUs). It was a ~$100 card that Joe Schmoe could buy, and it ran everything current at the time.

Just saying in general, AMD needs something that replicates that.
 
So then, you just rehash the same argument for every video card release? Because that's basically what you've stated, albeit in more words. Let's be specific: not much has really changed for NVIDIA architecturally since the G80 days, Fermi included. Logically, I should assume that your lambasting of prior versions of the same architecture, when the new MSRP of the new part somehow exceeds the devalued price point of the former top-end part, exists equally between vendors. It should therefore be a trivial matter to search for your prior posts lamenting NVIDIA's similar treatment of prior releases.

Interestingly, I find no such lamenting over similar NV foibles.

You keep shouting that devalued stuff a lot, but the 6850 has been at $149 for some time now and didn't even start much higher. Give me some examples where AMD or nVidia has in the past released a card that is more expensive and slower than their old card, preferably an example where it's not some fire sale of the few last units. The 6850, again, has been sold at $149 for quite a bit of time. I suppose there might be some dual-GPU cards that fit the bill... but imo that's slightly different.

You can be sure that if nVidia comes in with a GTX 670 at $399 that performs worse than the GTX 570, or with a $179 GTX 650 Ti that performs worse than the GTX 550 Ti, I'll rain fire on them just as much.
 
Just saying in general, AMD needs something that replicates that.

The three-die strategy looks like a bit of a mistake; even NV is said to be going for 4 Kepler dies (107, 106, 104, 110).

They should have aimed for 4 dies:
  • ~100mm² - 8 CU (~ HD 7750 as HD 7670)
  • ~170mm² - 14-16 CU @ 192-bit, 4.5 Gbps (an HD 7700 series with >6800 performance)
  • ~260mm² - 20-22 CU @ 256-bit
  • Tahiti
 
When focusing on single settings (click on "Erweiterte Optionen...", i.e. "advanced options") instead of a general arithmetic mean over a plethora of undocumented stuff, the picture becomes a little clearer though:

1080p with 4x MSAA and 16:1 AF, for example:
Code:
http://ht4u.net/reviews/2012/amd_radeon_hd_7700_test/index4.php?dummy=&advancedFilter=true&prod[]=AMD+Radeon+HD+5770+%2F+HD+6770&prod[]=AMD+Radeon+HD+7770+%40+5770+Takt&filter[0][]=1920&filter[2][]=4&filter[3][]=16&aa=all
The link does not work the usual way; copy-and-pasting it into the browser does, though.
 
Shifting the goalposts now? Fine, here's mine: HD7770 uses ~45 watts less than GTX 460 (~55 watts if you look at the 1GB GTX 460) (based on average power consumption from http://www.techpowerup.com/reviews/HIS/HD_7750_7770_CrossFire/21.html). It's not even a contest.

It's the price that matters, and I don't think anyone can honestly argue that the 77xx series is well positioned versus its 2.5-year-old forebears. The lower power consumption doesn't even come close to compensating for that. In fact that's expected from a new process.
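For what it's worth, the electricity side of that trade-off is easy to sketch. A minimal back-of-the-envelope, where the usage hours and electricity price are pure assumptions for illustration:

```python
# Back-of-the-envelope: does a ~45 W power advantage pay back a price premium?
# Only the wattage delta comes from the quoted review; the rest is assumed.
watts_saved = 45          # approx. average gaming-load delta vs GTX 460
hours_per_day = 3         # assumed gaming hours per day
price_per_kwh = 0.12      # assumed electricity price in USD

yearly_kwh = watts_saved / 1000 * hours_per_day * 365
yearly_savings = yearly_kwh * price_per_kwh
print(f"{yearly_kwh:.1f} kWh/year -> ${yearly_savings:.2f}/year saved")
```

Under those assumptions it works out to only a few dollars a year, which is the point: efficiency alone doesn't close a meaningful price gap.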

On the other hand I don't blame AMD at all for setting the prices that they have if they think the cards will still sell well. It's time for them to make some money.
 
You keep shouting that devalued stuff a lot, but the 6850 has been at $149 for some time now and didn't even start much higher. Give me some examples where AMD or nVidia has in the past released a card that is more expensive and slower than their old card, preferably an example where it's not some fire sale of the few last units.

8600GTS versus 7950GT is an old example, however that was the last time NV made an architecture change. Fermi hasn't changed much since then. GCN is not the same architecture as the prior VLIW models, so your comparison is about as relevant as Fermi was to the G7x cores.
 
Also, the 123 mm² die area, 4 memory chips, and low power consumption (less complicated PCB) suggest that the cost of the card should be quite small.
People don't realise that the 28nm shrink from the 40nm 5k/6k cards is quite a leap, while the price just follows the usual 5k-6k-7k transition.

Nvidia will probably want the lost market share back and will price the new cards aggressively for much less profit. Only a fool would buy the new AMD cards before Nvidia's release.
 
8600GTS versus 7950GT is an old example, however that was the last time NV made an architecture change. Fermi hasn't changed much since then. GCN is not the same architecture as the prior VLIW models, so your comparison is about as relevant as Fermi was to the G7x cores.

That's a terrible example. The 7950 GT was $299 while the 8600 GTS was $229 and was still widely panned for being crap. You're actually supporting Dr.Evil's point....
 
It's the price that matters, and I don't think anyone can honestly argue that the 77xx series is well positioned versus its 2.5-year-old forebears. The lower power consumption doesn't even come close to compensating for that. In fact that's expected from a new process.

On the other hand I don't blame AMD at all for setting the prices that they have if they think the cards will still sell well. It's time for them to make some money.

Yeh, Cape Verde is disappointing, but I think/hope Pitcairn is where the goodies will be. At least now we know March 6 is the date.

If there aren't price/performance goodies in Pitcairn, we'll just have to wait for Nvidia, which is the smart thing to do before buying a GPU right now anyway.

Edit: Looking at how well GCN shader counts seem to do (the 640 SP 7770 is only 29% slower than the 1120 SP 6870 according to TechPowerUp), the 1408 and 1280 SP Pitcairn parts should be pretty sweet, especially if AMD can get the highest-end one in at $299. In fact it seems like they'd push the 7950, which unfortunately would also suggest a $349 price.
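A rough sketch of what that implies about per-FLOP efficiency (stock clocks assumed: 1000MHz for the 7770 GHz Edition, 900MHz for the 6870; the 29% deficit is the TechPowerUp average cited above):

```python
# How much more work per FLOP does GCN get done in that comparison?
sp_7770, clk_7770 = 640, 1.000   # GHz, stock 7770 GHz Edition
sp_6870, clk_6870 = 1120, 0.900  # GHz, stock 6870

gflops_7770 = sp_7770 * 2 * clk_7770   # 2 FLOPs per SP per clock (MAD/FMA)
gflops_6870 = sp_6870 * 2 * clk_6870

rel_perf = 0.71                        # 7770 is ~29% slower on average
perf_per_flop = rel_perf / (gflops_7770 / gflops_6870)
print(f"{gflops_7770:.0f} vs {gflops_6870:.0f} GFLOPS peak; "
      f"GCN gets ~{(perf_per_flop - 1) * 100:.0f}% more gaming perf per FLOP here")
```

By this crude metric GCN extracts roughly 12% more gaming performance per theoretical FLOP than VLIW5 did, which is what makes the bigger Pitcairn shader counts look promising.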
 
Hardware.fr: http://www.hardware.fr/articles/855-8/performances-theoriques-geometrie.html

The scaling of tessellation performance from Cape Verde to Tahiti looks somewhat worse (@ no/low culling)? :???:
Strange, yes. Tahiti manages 2 tris/clock without tessellation, whether culled or not, but falls back to below 1 tri/clock unculled with tessellation. At least Cape Verde (being limited to 1 tri/clock anyway) has absolutely no problem there and beats the GTX 460's 7 polymorph engines every day of the week :).


Yeh, Cape Verde is disappointing, but I think/hope Pitcairn is where the goodies will be. At least now we know March 6 is the date.
I don't think the chip is disappointing, though pricing certainly leaves something to be desired.
AMD even kept the 2 ACEs it seems, the chip should be quite compute-friendly and it's got all the new features.
The 7750 is quite close to the 5770 (it might catch it with new drivers, though it varies wildly from game to game). I'm still surprised AMD went with such a large gap to the 7770: I guess they disabled 2 CUs so they can really use all chips for the 7750, and chose only 800MHz to make it easily fit into 75W. Plus the cheap-ass voltage regulation seems to prevent much higher clocks, as some cards apparently can't even be overclocked to 850MHz. Too bad 850MHz wasn't possible, as that would have been enough to catch the 5770 (with current drivers). PowerTune already seems to kick in at default clocks in some titles (AvP at least), but in any case that's still way below 75W.
I'm totally surprised they went with the asymmetric CUs though I guess that will further fuel the speculation about the chip being 12 CUs :).
 
I'm totally surprised they went with the asymmetric CUs though I guess that will further fuel the speculation about the chip being 12 CUs :).
Asymmetric CUs are bizarre. But the "die shot" we've already seen rules out 12 CUs.
 
Asymmetric CUs are bizarre. But the "die shot" we've already seen rules out 12 CUs.
Actually I should have said "asymmetric CU groups". But I guess those CU groups really are little more than some structure to save some complexity by sharing some caches.

In any case, I find it remarkable how well GCN does in compute tasks. EG/NI was "hit or miss" depending on the task compared to competition which could easily be explained by the different simds (and cache structure too).
But GCN really excels in all tasks if you look at these results:
http://www.anandtech.com/show/5541/amd-radeon-hd-7750-radeon-hd-7770-ghz-edition-review/21
For instance, in SmallLux the HD 5770 excelled, leaving even the GTX 560 in the dust; you could argue that workload was particularly suited to VLIW5, hence the cards performing roughly in line with their peak FLOP rating. But even the paltry 7750 manages to be faster despite quite a large deficit in FLOPS (compared to both the GTX 560 and the 5770). Granted, in some compute benchmarks (e.g. Civ V) it still trails the GeForces with similar FLOPS, but at least it improved considerably. In things like the DirectX 11 compute SDK sample, the score improved two-fold with 40% fewer FLOPS (for the 7750 compared to the 5770), easily beating GF104 in the process. AMD must have done some things right...
(Though things like pcie bandwidth might play a role in some of these benchmarks, haven't checked.)
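To make the "two-fold with 40% fewer FLOPS" remark concrete, here is a quick sketch using theoretical peak rates at stock clocks (the 2x score ratio is taken from the linked review; everything else is textbook arithmetic):

```python
# Theoretical peak FLOPs: SPs * 2 FLOPs per SP per clock * clock (GHz)
gflops_7750 = 512 * 2 * 0.800   # HD 7750: 512 SPs @ 800 MHz
gflops_5770 = 800 * 2 * 0.850   # HD 5770: 800 SPs @ 850 MHz

flops_ratio = gflops_7750 / gflops_5770
print(f"7750 has {(1 - flops_ratio) * 100:.0f}% fewer peak FLOPS")  # ~40%

score_ratio = 2.0               # ~2x the DX11 compute SDK sample score (from the review)
print(f"-> roughly {score_ratio / flops_ratio:.1f}x the achieved work per peak FLOP")
```

Per peak FLOP, that particular compute sample runs over 3x better on GCN than on VLIW5, which is the kind of utilization jump the architecture change was supposed to deliver.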
 
I'm totally surprised they went with the asymmetric CUs though I guess that will further fuel the speculation about the chip being 12 CUs :).
Have they shown some graph saying they're asymmetric? They could just as well have just 5 CUs in there behind 1 setup pipe?
 