AMD: Pirate Islands (R* 3** series) Speculation/Rumor Thread

The only real problem AMD has right now is competing with Nvidia at throwing $ at game devs.

For AMD's sake I hope they don't think it's that simple. If it were, they could simply do the same and close the gap. Throwing money around is easy and doesn't require engineers or programmers.
 
I think the biggest problem AMD has right now is finding itself, hardware-wise, in a position similar to the one it put Nvidia in around the 58xx release, while the software situation has stayed the same or even deteriorated.

Maxwell overclocks amazingly well, routinely approaching 1.5 GHz, while Hawaii barely reaches 1200 MHz even after heavy effort. And while not everything else is equal, their die sizes are close, and even closer if you disregard Nvidia's own figure.

Size-wise, the GM204 falls somewhere in between the GK104 and the larger GK110. Where exactly is an interesting question. When I first asked, Nvidia told me it wouldn't divulge the new chip's die area, so I took the heatsink off of a GTX 980 card, pulled out a pair of calipers, and measured it myself. The result: almost a perfect square of 20.4 mm by 20.4 mm. That works out to 416 mm². Shortly after I had my numbers, Nvidia changed its tune and supplied its own die-size figure: 398 mm². I suppose they're measuring differently. Make of that what you will.

While Tonga hasn't improved appreciably compared to Tahiti, I do hope for better results from Grenada (if it's based on Tonga and beyond) vs. Hawaii. But AMD would only have a realistic chance of besting the 9xx series if they can routinely hit 1.25 GHz without setting the world on fire.
 
Also, what is Nvidia doing with all that R&D money?
In the segment where they compete head-on, discrete GPUs, AMD released 7 pieces of discrete silicon on 28nm. Nvidia released 12. (I may be lowballing by one on both sides.)

AMD did only one family. Nvidia did one that roughly matched in terms of efficiency, and a second one that blew it away.

Last time I checked, AMD was (always?) first to adopt a new fab process.
They're roughly the same: they both calibrate their release schedule to coincide with the availability of a new process.

First to have GDDR4, GDDR5 and now HBM.
Was GDDR4 worth having?
GDDR5 would have been the same if Fermi hadn't run into some major mishaps.
Will HBM turn out to be worth having for 28nm?

The only real problem AMD has right now is competing with Nvidia at throwing $ at game devs.
That, and the fact that Nvidia is outperforming AMD in about any metric you can think of.

It's a very interesting topic to compare how both companies are spending their R&D money, but they're really quite different companies, and the metrics that you're using are more than a bit pointless, I'm afraid.

You're probably going to answer that AMD was 2 months earlier on the market than Nvidia with 28nm, but that's arguing about a minute detail while ignoring the big picture. Anyone could come up with a bunch of minor stuff like you did for either company, but none of that provides insight into their spending.

What matters are the big things: x86 CPUs. ARM-based mobile chips. ARM-based servers. Major software products. Systems, hardware or otherwise. Automotive. Semi-custom silicon.

That's a bit more substantial than implementing a particular interface or a 2 month difference in schedule.
 
AMD's problem is that they need an architectural improvement for GCN, and honestly that improvement needs to be included in their first FinFET GPUs or they're going to have a bad time. They also need to "show up to the fight" instead of making Fermi's delays look short.

Take a look at what kind of GPUs (dies) NV and AMD were releasing back in the Tesla / Fermi days. AMD was competing with (slightly slower but still comparable) NV's big dies with their sweet spot 250-350mm^2 dies. AMD could also skimp on PCB and cooler design because their cards were more efficient.

Jump to Kepler and early 2012: suddenly a mainstream NV die that would previously have been used in a GTX x60-level card was on par with the largish sweet-spot Tahiti. On top of that, the NV card could be manufactured more cheaply thanks to NV's now superior memory controllers (less complex PCB needed due to the narrower bus) and lower GPU power consumption (less complex and robust power delivery). NV's profit margins soar.

Jump to 2013 and the GK110 vs. Hawaii battle. Everyone knew GK110 was coming, and at a superficial glance it wasn't much of an issue for AMD: Hawaii did still beat it, and was actually smaller too. However, look at what that took compared to the Fermi days: launched ~7 months after GK110 (compared to ~6 months earlier), a ~440mm^2 die (compared to ~330), the same or higher power (compared to considerably lower with Cypress), more complex PCBs and more memory chips (compared to less complex and fewer chips with Cypress). To compete with GK110, AMD had to give up their sweet-spot strategy.
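As a side note on the PCB-complexity point: a GDDR5 device exposes a 32-bit interface, so bus width alone sets the minimum memory-chip count and routing effort. A quick sketch (bus widths are the public specs; one chip per 32-bit channel is assumed, ignoring clamshell configurations used to double capacity):

```python
# Bus width fixes the minimum GDDR5 chip count: each device exposes a 32-bit
# interface, so a wider bus means more chips and a harder-to-route PCB.
# One chip per 32-bit channel is assumed (clamshell mode is ignored).
GDDR5_CHIP_WIDTH = 32  # bits per device

gpus = {
    "Cypress (HD 5870)":  256,
    "GK104 (GTX 680)":    256,
    "Tahiti (HD 7970)":   384,
    "GK110 (GTX 780 Ti)": 384,
    "Hawaii (R9 290X)":   512,
}
for name, bus_width in gpus.items():
    chips = bus_width // GDDR5_CHIP_WIDTH
    print(f"{name}: {bus_width}-bit bus -> at least {chips} memory chips")
```

Hawaii's 512-bit bus works out to at least 16 chips versus Cypress's 8, which is the cost increase being described above.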

Jump to 2014 and the GM204 vs. Hawaii situation. AMD clearly doesn't have big enough architectural improvements ready to justify a Hawaii replacement (they still don't, since Hawaii will be going into the 300 series). Because of this they're competing against slightly faster cards with cards that have bigger dies, more complex PCBs, more memory chips, higher cooling costs, etc. And because the Nvidia cards are faster, cooler and quieter, AMD is having to play the value route (massive price cuts to the 290 series), pretty much destroying their margins on their first non-sweet-spot die in a long time. Meanwhile NV's margins go even higher.

Now on to mid-2015. GM204 is still selling, GM200 has come out at massive prices without requiring anything exotic technology-wise, and NV's margins are sky high. AMD's newest super-high-end chip hasn't come out yet (late again), but it's almost certain that in order to compete with GM200, AMD has had to come pretty close to matching NV's huge 500-600mm^2 die sizes (up from ~330mm^2 in 2010), while also adopting a completely new memory technology that will actually help them in die size and power budget compared to what Nvidia is using. And it's also extremely likely that AMD has had to resort to watercooling to clock their card high enough to compete properly.

I'm not even going to go into how big dies are a safer bet for NV thanks to their HPC experience in interconnect tech, software, etc. (because HPC means high margins). The real question comes next:

What happens in (mid-late) 2016?

-Nvidia moves to 14/16nm
-Nvidia moves to HBM
-AMD moves to 14/16nm

In mid-2016 (the most likely date), the HBM advantage (die size and power savings) that AMD has had to play to compete in this high-end round is going to be gone. Both NV and AMD will be on HBM, and most likely on the same FinFET process from TSMC.

If AMD does nothing to their architecture before that mid 2016 point arrives they're going to be left with GPUs that are slower, more expensive to make and consume more power than their competitors. And that scenario is just going to feed into the never ending loop of earning less money, spending less on R&D, canning projects, being less competitive and again earning less.

So what AMD sorely needs are architectural improvements to GCN. Not necessarily right now, because right now they can play some cards that give them an advantage NV can't get. But that luxury stops before 2016 is over.

my 2c :D
 
Excellent post Alatar. AMD definitely needs to hit an architectural home run to have any chance of improving their fortunes. Playing the pricing game only works if you have a competitive product and favorable production costs. They have been struggling with both as of late.

Nvidia's aggressive marketing doesn't make it any easier but that's what a company is supposed to do. Complaining about it isn't going to get you anywhere.
 
So what AMD sorely needs are architectural improvements to GCN. Not necessarily right now, because right now they can play some cards that give them an advantage NV can't get. But that luxury stops before 2016 is over.


What advantage are you thinking of? HBM1 is not an advantage at this point, especially with the 4 GB limitation, and it's found only on the high-end cards, in limited quantities and at a high price. There might be a "wow" factor associated with HBM usage that would help sell products (PR), but that would definitely be limited in scope. But I might be missing something ...
 
What advantage are you thinking of? HBM1 is not an advantage at this point, especially with the 4 GB limitation, and it's found only on the high-end cards, in limited quantities and at a high price. There might be a "wow" factor associated with HBM usage that would help sell products (PR), but that would definitely be limited in scope. But I might be missing something ...
HBM1 is still an advantage; the Fiji cards should still be the fastest 1080p and 1440p cards out there. The 4 GB will be a limit for 4K, but depending on price it may still be a better fit than a Titan X. The 980 Ti would be the monkey wrench for them.
 
GCN is behind Maxwell, but not that far behind. Per-shader efficiency of Maxwell should be about (figures taken from hardware.fr):

[(105/96) * (2816/2048)] / [(1240/1040)] = 1.26 times that of GCN.
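To make that arithmetic explicit, here is the same estimate as a short Python sketch. The ratios are the ones quoted above: relative overall performance (105 vs. 96 per hardware.fr), shader counts of Hawaii vs. GM204, and typical clocks of each.

```python
# Per-shader, per-clock efficiency of Maxwell (GM204) relative to GCN (Hawaii),
# using the figures quoted above from hardware.fr.
perf_ratio = 105 / 96        # GTX 980 performance relative to R9 290X
shader_ratio = 2816 / 2048   # Hawaii has 1.375x the shaders of GM204
clock_ratio = 1240 / 1040    # GM204 runs at a higher typical clock

efficiency = (perf_ratio * shader_ratio) / clock_ratio
print(f"Maxwell per-shader, per-clock advantage: {efficiency:.2f}x")
# → prints "Maxwell per-shader, per-clock advantage: 1.26x"
```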

AMD clearly doesn't have big enough architectural improvements ready that it would make sense to release a hawaii replacement.

Look at the synthetics here,

http://www.anandtech.com/show/8526/nvidia-geforce-gtx-980-review/19

Hawaii with Tonga's improvements could easily cut that advantage in half. AMD could easily truck on with that plus HBM until Arctic Islands, if not for their clock-speed disadvantage. Seeing the WCE listed at up to 1050 MHz in those leaked slides was quite disheartening. Nvidia has quite a lot of headroom with the Maxwell cards to release x85 versions.
 
HBM1 is still an advantage; the Fiji cards should still be the fastest 1080p and 1440p cards out there. The 4 GB will be a limit for 4K, but depending on price it may still be a better fit than a Titan X. The 980 Ti would be the monkey wrench for them.

If Fiji beats a Titan X, the advantage will not be due to HBM, but rather to increased power efficiency and/or extreme cooling (at 350-400 W). For HBM to be an advantage, Hawaii would need to be strongly bandwidth bound. Indications are that it is not bandwidth bound (especially with Tonga's color compression, remember that 40% bandwidth savings AMD was touting?). Maxwell also doesn't seem to be very bandwidth bound, but rather shader bound/power bound. Fiji needs more shaders to compete with GM200, not more bandwidth.
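As a rough sanity check on the bandwidth argument: taking the R9 290X's raw bandwidth and reading AMD's quoted Tonga figure as "up to ~40% more effective bandwidth" (a best case, and one interpretation of the marketing number), the math looks like this:

```python
# Effective bandwidth of Hawaii (R9 290X) with Tonga-style delta color
# compression, reading AMD's "40% savings" claim as up to 40% more
# effective bandwidth. Best case only: real savings depend on how
# compressible the frame data is.
raw_bandwidth = 320.0     # GB/s, R9 290X: 512-bit bus at 5 Gbps GDDR5
compression_gain = 0.40   # AMD's quoted best-case figure for Tonga

effective_bandwidth = raw_bandwidth * (1 + compression_gain)
print(f"Best-case effective bandwidth: {effective_bandwidth:.0f} GB/s")
# → prints "Best-case effective bandwidth: 448 GB/s"
```

Even the raw 320 GB/s already sits well above GM204's 224 GB/s, which supports the view that Hawaii is not bandwidth bound.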
 
HBM1 is still an advantage , the Fiji cards should still be the fastest 1080p and 1440p cards out there. The 4gigs will be a limit for 4k but depending on price it may still be a better fit than a titan x. The 980TI would be the monkey wrench for them.

4GB will be a limit for some games, not so much for others.
GTA V, for example, will use more than 4GB if maxed out at 4K, but The Witcher 3 seems to use little memory; even people playing it at 4K don't seem to go over 2GB of VRAM usage.

Fiji needs more shaders to compete with GM200, not more bandwidth.
As far as I know, 64CUs will be more than enough for compute tasks.
What AMD sorely needs is geometry performance, at least to counter the geometry viruses we've been finding in Gameworks titles.
 
If Fiji beats a Titan X, the advantage will not be due to HBM, but rather to increased power efficiency and/or extreme cooling (at 350-400 W). For HBM to be an advantage, Hawaii would need to be strongly bandwidth bound. Indications are that it is not bandwidth bound (especially with Tonga's color compression, remember that 40% bandwidth savings AMD was touting?). Maxwell also doesn't seem to be very bandwidth bound, but rather shader bound/power bound. Fiji needs more shaders to compete with GM200, not more bandwidth.

Increased power-efficiency is precisely the main benefit of HBM.
 
What advantage are you thinking of? HBM1 is not an advantage at this point, especially with the 4 GB limitation, and it's found only on the high-end cards, in limited quantities and at a high price. There might be a "wow" factor associated with HBM usage that would help sell products (PR), but that would definitely be limited in scope. But I might be missing something ...

HBM, smaller form factors, possibly watercooling (we already know that the reference 980 Ti is a normal air-cooled blower card).

The thing with HBM is that even though the 4GB limit is a nasty one, I was talking more about the benefits it gives AMD as far as the Fiji die goes, plus the power-consumption benefits. You get to axe a lot of the memory-controller logic from the GPU die, and the power consumption of the whole memory subsystem goes down drastically, leaving AMD with more power budget for everything else.

Those power and die size advantages gained by going HBM are only going to happen once with Fiji. The competition is going to get them in the next generation, and that's when AMD needs some serious architecture updates. Unless NV completely screws up something that is.
 
Increased power-efficiency is precisely the main benefit of HBM.
Don't be deceived: HBM helps decrease memory interface power, but that's a small fraction of overall GPU power. It will shave 30 W from Hawaii's 300 W budget, but 30 W of extra shaders is not sufficient.
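For concreteness, the fraction argument works out like this (both numbers are the rough estimates from the post above, not measurements):

```python
# Even granting HBM the full assumed saving, it frees only a modest slice
# of the board power budget. Both figures are rough estimates.
board_power = 300.0  # W, Hawaii-class board budget (approx.)
hbm_saving = 30.0    # W, assumed memory-interface saving from HBM

freed = hbm_saving / board_power
print(f"Power budget freed for extra shaders: {freed:.0%}")
# → prints "Power budget freed for extra shaders: 10%"
```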
 