AMD: Pirate Islands (R* 3** series) Speculation/Rumor Thread

No, it most certainly is not. It's a fact of reality. Different generations have different properties, including power efficiency, so comparing a newer and an older generation GPU is never going to be entirely accurate. What I said is true in broad terms, which obviously means taking generational differences into consideration.
Thanks for pointing out these trivial observations.

If that's the case, it's a point which you've yet to prove. :)
What is there to prove? That AMD is more absent from the laptop space than ever? That the power efficiency of Maxwell-based GPUs seems to resonate with consumers? The whole GTX 750 Ti introduction targeted the demographic of those who can power their GPU from the motherboard only (the PCIe slot supplies up to 75W, which the 750 Ti's ~60W TDP fits comfortably). AMD got stomped over the R9 290X's cooling solution, which was a direct consequence of the card's poor energy efficiency. Yes, it was also because they cheaped out on the cooler itself, but that would have been a non-issue had the chip been efficient in the first place.

For example, is nobody at all buying AMD R9 290 cards because their power efficiency is worse than NV 9x0's? If that's the case, it would be news to me at least.
Nice straw man. You know very well that AMD GPUs are great value for money. But I thought you'd also understand that their low price is not a sign that they're in a position of strength?
And why is it always necessary to strip all nuance from an argument to score cheap points? Even a 600W R9 290 would find takers if priced low enough. That doesn't mean power efficiency isn't crucial in today's designs.
 
I'm inclined to believe the loud cooler on the vanilla R9 290 is a lot more responsible for swaying people towards nVidia solutions.

Looking at how much difference it actually makes in yearly expenditure, I think choosing one card over another because of a 100W difference in consumption is a bit silly.
Pro gamers could look at it differently, but for people playing ~5-10h a week it's just silly. We're looking at, what, less than 10€ a year?
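A back-of-the-envelope check bears that out, assuming ~0.20 €/kWh (electricity prices vary a lot by country) and the 10 h/week upper end:

```python
# Yearly cost of a 100W consumption gap, under the assumed usage and pricing above.
extra_watts = 100         # consumption difference between the two cards
hours_per_week = 10       # upper end of the ~5-10h/week figure
eur_per_kwh = 0.20        # assumed electricity price

extra_kwh = extra_watts / 1000 * hours_per_week * 52
print(f"{extra_kwh:.0f} kWh/year -> {extra_kwh * eur_per_kwh:.2f} EUR/year")
# 52 kWh/year -> 10.40 EUR/year
```

So roughly 5-10€ a year over that usage range, right in line with the figure above.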
Noise is a completely different matter, though. Make a card terribly noisy and people will be bothered by it at all times.

As for the poor performance of the add-in cards, I think this was expected. AMD won all three major console designs while being a heavily indebted company. They couldn't go on a hiring spree, so they had to redirect huge chunks of their resources to those projects. This means they didn't have enough manpower to evolve the 28nm designs as much as nVidia did.
20/16nm could be a turning point in that regard.
 

Right with you. Still, the 980/970 were released 3-4 months ago, while the 290/290X were released two years ago (maybe a bit less), at the same time as the 780/780 Ti/Titan (GK110).

Then you have a different "philosophy" between those GPUs. The 980 is a pure gaming GPU: FP16/32 only, nearly no FP64 support, a narrow memory controller, a small die, and really fast clock speeds (up to 1300+ MHz turbo). Compare that to GK110/GCN, aimed at both the pro and gaming markets: a big 512-bit memory controller (which can be coupled with 16GB of memory on FirePro), high FP64 throughput (1/2 rate on FirePro), a big die, and so on.
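To put rough numbers on that split, here's a quick sketch of theoretical peaks (core counts and clocks from the public spec sheets; these are ballpark figures only):

```python
# Theoretical peak: cores * clock (GHz) * 2 FLOPs (FMA); FP64 is a fixed fraction of FP32.
def peak_tflops(cores, ghz, fp64_ratio):
    fp32 = cores * ghz * 2 / 1000
    return fp32, fp32 * fp64_ratio

for name, cores, ghz, ratio in [
    ("GTX 980 (GM204)",        2048, 1.216, 1 / 32),  # gaming part: FP64 mostly fused off
    ("FirePro W9100 (Hawaii)", 2816, 0.930, 1 / 2),   # pro part: half-rate FP64
]:
    fp32, fp64 = peak_tflops(cores, ghz, ratio)
    print(f"{name}: {fp32:.2f} TFLOPS FP32, {fp64:.2f} TFLOPS FP64")
# GTX 980 (GM204): 4.98 TFLOPS FP32, 0.16 TFLOPS FP64
# FirePro W9100 (Hawaii): 5.24 TFLOPS FP32, 2.62 TFLOPS FP64
```

Similar FP32 throughput, but more than an order of magnitude apart in FP64: that's the gaming-versus-pro split in a nutshell.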

I'm pretty sure the next AMD GPUs won't be aimed only at gaming either... and we will need to compare them with GM200 (and maybe AMD's middle-class GPU with the 980).

Lately, to judge an architecture, not just individual GPUs, I look at the whole lineup, as I do with CPUs, so I include the professional parts whenever they use the same architecture.
 
As much as I believe that power efficiency is a critical factor for today's GPUs, it's too easy to use it as the reason AMD isn't selling a whole lot right now. Someone can correct me, but even in the worst of the Fermi days, I don't think Nvidia ever lost the market lead to AMD? The reasons for AMD not being as successful as you'd expect are much more complex than power efficiency alone.
 
But then again, apparently they are bringing a 28nm 2.5D GPU on an interposer with HBM memory, which certainly can't be a "walk in the park" for a team that short on manpower?
 

Of course not. Just having the first mass-produced HBM chip out there in the market (after the Vita?) should be a great feat on its own - and very risky to boot.

From what I understand, the outcome of a new generation of graphics cards depends mostly on what is done during the ~1.5-2 years prior to launch.
The first GCN cards launched in early 2012. Then the new-gen consoles hit in late 2013, almost two years later. So during those two years (probably starting some months earlier, because the chips had to be finished in time for mass production), I guess most of the GPU staff was redirected to the console projects.
This left the "Radeon teams" with a much smaller (skeleton?) crew that made small updates to the GCN 1.0 architecture, which eventually took form in Bonaire, Hawaii and Tonga. But all of these were incremental updates to the architecture at hand, nothing like the Kepler -> Maxwell evolution we saw from nVidia. AMD simply didn't have the manpower to do so.

My guess is that somewhere around Q3 2013, after the consoles were finalized, the "Radeon teams" got back together, and 1.5-2 years later, in Q1/Q2 2015, we'll see their work on the shelves.
I believe the new chips will be really competitive in performance/power and heat this time around.

Let's just hope AMD doesn't screw everything up by pairing the new cards with loud coolers, which is becoming kind of a signature move from AMD...
 
There's a fair amount of feature commonality between Bonaire/Hawaii and the console APUs. The PS4's Onion+ bus found its way into Kaveri as well. At various points, it looks like a lot of this got put into a common bucket. At least some unprofessional leaks from AMD staff comparing the two architectures directly point to portions of the development process being shared.
The process seems very muddled from the outside, and the stuttering/confusing development of APU and GPU products (wtf is up with Tonga, wtf was wrong with Jaguar's turbo, wtf is up with AMD's horrendous geometry performance scaling, wtf happened to Skybridge, wtf is going on with AMD's ARM product, wtf is up with the nixing of integrated Freedom Fabric) may point to significant organizational and technological disruptions as well.

It's also probably not fair to blame the consoles solely for the lack of evolution in AMD's graphics tech. AMD's emphasis on compute in GCN was such that GCN wasn't a major shift on the special-purpose side from prior generations.
 
I wouldn't use the term "blame" on the consoles. At the very least, I consider the consoles a pretty valid excuse for AMD not being able to keep up with nVidia in performance/die-size and performance/power.

As for AMD's confusing development, it seems 90% of it comes from whenever there's a CPU core attached to it (APUs and CPUs). You didn't mention the most puzzling thing to date: wtf happened with Kaveri's memory controller? (BTW, are they preparing to repeat the same mistake with Godavari?)
 
Given that Bonaire and Hawaii contain IP codeveloped with the consoles, what shape would AMD's GPUs have taken without someone paying for it?

I'm not sure exactly what you mean about Kaveri's memory controller.
If you mean the purported loss of GDDR5M, the rumor was that one of the two memory partners AMD was counting on for that memory type imploded. (The implosion itself wasn't a rumor, but AMD's need for a second source was the alleged reason.)

I don't know enough about Godavari. Is it actually a new chip, or is this another Richland scenario?
 
It looks like Godavari is desktop-only with some clock increases but full HSA, while Excavator will be mobile-only.
 
Ta muchly :)

Have not really been paying much attention, but this is seriously what's being expected, huh?
28nm because smaller processes aren't ready yet, sure, but isn't there a significant risk that all this exciting techy stuff winds up using a lot of power at 28nm for limited performance gain, plus possibly hidden clock problems or some other hook?

But even if a smaller process were available, would you want to be messing with that stuff on a new process?
Might it be better to do a mid-range part with all the bells & whistles, and a more conservative high-end refresh, waiting until the new process is available?
 

HBM is actually supposed to save a good bit of power: roughly 50% of the power consumed by GDDR5 while providing about twice the bandwidth, according to some slides, if I remember correctly. Presumably this includes the power drawn by the memory chips, the bus and the PHY on the GPU.
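If those figures hold, the bandwidth-per-watt gain compounds to about 4x. A quick sketch, using a hypothetical 320 GB/s, 30 W GDDR5 memory subsystem as the baseline (both baseline numbers are assumed purely for illustration):

```python
# Bandwidth-per-watt under the "half the power, twice the bandwidth" claim above.
gddr5_bw, gddr5_w = 320.0, 30.0                # GB/s, W: assumed GDDR5 baseline
hbm_bw,   hbm_w   = gddr5_bw * 2, gddr5_w * 0.5

print(f"GDDR5: {gddr5_bw / gddr5_w:.1f} GB/s per watt")  # ~10.7
print(f"HBM:   {hbm_bw / hbm_w:.1f} GB/s per watt")      # ~42.7, about a 4x gain
```

Either way you spend the savings (more bandwidth at the same power, or the same bandwidth at a fraction of the power), it's a big lever for a 28nm flagship.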
 