AMD: Pirate Islands (R* 3** series) Speculation/Rumor Thread

It is DDR. I mean... anyone can visit JEDEC's web site and get a copy of the JESD235 HBM DRAM standard. To me, "HBM1" and "HBM2" are likely just labels for SK Hynix's own implementations of HBM.

Upon further review, I agree that I mixed up the signaling rates for the standard. I may have conflated it with the transition from the first to the second Wide I/O memory standard, which does go from SDR to DDR.
The variations may be something the standard, as currently written, does not treat as separable.

A slide from Hynix drew a distinction between HBM1 and HBM2, where HBM1 was 1 Gbps with an access granularity of 32B, while HBM2 was 2 Gbps with a granularity of 64B.
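
For what it's worth, those granularities are consistent with JESD235's 128-bit channel width; here's a quick sanity check (pairing HBM1 with BL2 and HBM2 with BL4 is my own inference, not something the slide states):

Code:
# A 128-bit channel moves 16 bytes per data beat, so access granularity
# is simply bytes-per-beat times burst length.
channel_width_bits = 128
bytes_per_beat = channel_width_bits // 8  # 16 B per beat

print(bytes_per_beat * 2)  # BL2 -> 32 B, the HBM1 figure
print(bytes_per_beat * 4)  # BL4 -> 64 B, the HBM2 figure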

Perhaps this fits in with pseudo channel mode, where Hynix chose to increment capacity, bandwidth, and burst length at the same time.
I think pseudo channel mode works by using the same bank twice: two logical banks map onto one physical bank, with half of the physical bank's columns assigned to each. An activation would appear to activate two half-pages. There would be twice as many pages, since HBM2 starts at double the capacity, so the pages have somewhere to stretch to.
A burst length of 4 doesn't need column addresses on two successive cycles the way back-to-back BL2 bursts would, since the address for the first half of the burst implicitly carries over to the second half.
However, what if the setup were tweaked so that something was supplied between two successive bursts, giving a column address corresponding to the other pseudo channel?
(edit: possible error, the banks are physically split into sub-banks, so this may be feeding the additional address into the other sub-bank, which would be more flexible than reusing the same page)

Legacy mode would keep to the standard, whereas a tweaked HBM2 module and a suitably updated controller would do a little math on the side and fill in a cycle on the column address bus that wouldn't normally be considered necessary.

It may be that the lower-capacity HBM1 modules do not supply enough bits from their arrays to heavily tax the interface before running into their own timing constraints, so it's a DDR interface that simply runs half as fast.
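
To make the mapping I'm picturing concrete, here's a toy sketch; the column count and the scheme itself are my own guesses, not anything from the spec:

Code:
# Two logical (pseudo-channel) banks share one physical bank, each owning
# half of its columns, so a single row activation can serve accesses from
# both pseudo channels. Purely illustrative numbers.
COLUMNS_PER_PHYSICAL_BANK = 128  # assumed

def map_pseudo_channel(bank: int, column: int, pseudo_channel: int):
    """Map a (bank, column) request on a pseudo channel to the physical
    bank and column it would land on under this guessed scheme."""
    half = COLUMNS_PER_PHYSICAL_BANK // 2
    assert 0 <= column < half and pseudo_channel in (0, 1)
    return bank, column + pseudo_channel * half

# The same activated row serves column 5 on both pseudo channels:
print(map_pseudo_channel(bank=2, column=5, pseudo_channel=0))  # (2, 5)
print(map_pseudo_channel(bank=2, column=5, pseudo_channel=1))  # (2, 69)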
 
Looking through the numbers, and assuming the 390X is indeed at about the same performance level as the Titan X in terms of pushing out compute/shaded pixels, then while the 390X may indeed have more bandwidth than is needed for 4K, it might not be by much, if at all.

Considering 640 GB/s is a multiple of the PS4's bandwidth that roughly matches the jump in resolution from 1080p to 4K, and the straight compute power of the Titan X relative to the PS4 is within similar though not linearly scaled bounds, it'll come down to the 390X's ROP count and clock speed to see where the bottleneck is. Of course the "4K!" performance might not last, as we'll see games run below 1080p on the PS4 as time goes on. Better pixels can always be traded for fewer of them, and as with the last generation of consoles, that will probably happen more and more over time. I wouldn't be surprised to see a lot of 900p-and-under games running on the PS4 by the end of the generation.
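
For reference, the back-of-envelope numbers (176 GB/s is the PS4's published GDDR5 bandwidth; 640 GB/s is the rumored 390X figure):

Code:
ps4_bw_gbs = 176             # PS4 GDDR5 bandwidth
rumored_390x_bw_gbs = 640    # rumored HBM bandwidth

pixels_1080p = 1920 * 1080   # 2,073,600
pixels_4k = 3840 * 2160      # 8,294,400

print(rumored_390x_bw_gbs / ps4_bw_gbs)  # ~3.64x bandwidth
print(pixels_4k / pixels_1080p)          # exactly 4.0x pixels

So the bandwidth multiplier actually falls slightly short of the 4x pixel jump, which is why it might not be by much.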

Regardless, depending on the title, the huge bandwidth advantage HBM confers could give the 390X a large advantage at higher resolutions. P.S.: Looking at Hawaii, what sort of silicon savings would AMD see if they cut FP64 on Hawaii from 1/8 to 1/16 rate? They'd want to sell compute from their highest-end card after all, so that seems like an obvious saving.

There's the colour compression technology of GCN 1.2+ to be considered as well. So the bandwidth wouldn't just be 4x higher than the PS4's, it would potentially be a lot more in real-world scenarios. The expected spec of the 390X also suggests something in the 8 TF+ range (4096 shaders at around 1 GHz), which is also comfortably more than 4x the PS4.
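
For reference, the math behind that 8 TF+ figure, with the rumored numbers plugged in:

Code:
shaders = 4096       # rumored Fiji shader count
clock_ghz = 1.0      # rumored clock
flops_per_clock = 2  # one FMA = 2 FLOPs per shader per clock

fiji_tflops = shaders * flops_per_clock * clock_ghz / 1000
ps4_tflops = 1.84    # PS4's published figure

print(fiji_tflops)               # ~8.19 TFLOPS
print(fiji_tflops / ps4_tflops)  # ~4.45x the PS4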

I also don't think 4K requires linearly 4x more processing power than 1080p. Obviously on paper it does, but performance rarely scales that linearly with resolution. In fact, we already see arguably better-than-PS4-at-1080p performance from the 290X at 4K, and the 290X is far less than 4x a PS4.

So in short, I'm expecting the 390X to handle PS4 games at 4K with relative ease, even if they run at 900p on the console.
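
And on the colour compression point, a toy sketch of the general idea; AMD's actual GCN 1.2 scheme is tile-based and proprietary, so this only illustrates why smooth render-target content compresses and noisy content doesn't:

Code:
def delta_compress(pixels):
    """Store the first value plus per-pixel deltas; fall back to raw
    storage if any delta would not fit in a signed byte."""
    deltas = [b - a for a, b in zip(pixels, pixels[1:])]
    if all(-128 <= d <= 127 for d in deltas):
        return ("compressed", pixels[0], deltas)  # ~1 byte per pixel
    return ("raw", pixels)                        # no savings

print(delta_compress([200, 201, 203, 202]))  # ('compressed', 200, [1, 2, -1])
print(delta_compress([10, 240, 3, 190]))     # ('raw', [10, 240, 3, 190])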
 
> Regardless, depending on the title, the huge bandwidth advantage HBM confers could give the 390X a large advantage at higher resolutions. P.S.: Looking at Hawaii, what sort of silicon savings would AMD see if they cut FP64 on Hawaii from 1/8 to 1/16 rate? They'd want to sell compute from their highest-end card after all, so that seems like an obvious saving.

Hawaii is 1/2 FP64 rate; 1/8 was for the gaming product. But the core is the same, the FP64 rate is just disabled, so if they release a FirePro with a 1/2 DP rate again, there will be no silicon saving on the gaming product (assuming it is the same core).
 
> There's the colour compression technology of GCN 1.2+ to be considered as well. So the bandwidth wouldn't just be 4x higher than the PS4's, it would potentially be a lot more in real-world scenarios. The expected spec of the 390X also suggests something in the 8 TF+ range (4096 shaders at around 1 GHz), which is also comfortably more than 4x the PS4.
>
> I also don't think 4K requires linearly 4x more processing power than 1080p. Obviously on paper it does, but performance rarely scales that linearly with resolution. In fact, we already see arguably better-than-PS4-at-1080p performance from the 290X at 4K, and the 290X is far less than 4x a PS4.
>
> So in short, I'm expecting the 390X to handle PS4 games at 4K with relative ease, even if they run at 900p on the console.

What's more, there's the PS4/Xbox One effect. Last gen, PC was held back by engines being designed and tweaked around the limitations of the 2005 hardware in the consoles. This generation, the same will happen. The 390X is only the first of the HBM products.

I don't doubt the 390X will be 4x faster than the PS4. But we aren't even 2 years into the lifespan of the PS4 and we are already looking at 4 times the power (if not more). What will years 3 and 4 look like?

But the question remains... Will this run Star Citizen?
 
I am almost certain AMD is not releasing a full lineup of cards by June. Heck, I think they might have delayed the 3xx series till July or later.

The 2nd quarter guidance shows total revenue shrinking vs the 1st quarter, which itself was horrible (down from 1.24 billion to 1.03 billion). Next quarter, they are expecting a drop to about 990 million. Considering the 2nd quarter is often a big one thanks to tax returns and the release of some big games, how is AMD's revenue shrinking further if they were supposed to launch new cards from top to bottom?
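
In percentage terms, using the same figures (the ~990 million is guidance, not a result):

Code:
prior_q = 1.24    # $B
q1 = 1.03         # $B, reported
q2_guide = 0.99   # $B, guided

print((q1 / prior_q - 1) * 100)   # ~-16.9% into Q1
print((q2_guide / q1 - 1) * 100)  # a further ~-3.9% guided for Q2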

My guess is that because they avoided talking about their next gen on the 1st quarter results conference call, Fiji and co. got delayed till July. Nothing's coming out from AMD.
 
I doubt Fiji alone would be enough to make much of a difference anyway: it's a very high end product (i.e. very low volume) and it's not likely to have any major advantage over GM200.

This doesn't bode very well for Carrizo, however.
 
In any case, AMD will hold an Analyst Day on May 6. I trust they'll shed some light on their plans.

In the words of Lisa Su, "announcement later this quarter." I still think they are on track for a Computex launch.

I'll be honest: in any case, it would be a major error to announce a product 1-2 months before it is released. Competitors would really like that.
 
AMD's primary motivation is convincing investors and creditors that they have a future and might give them some sliver of uplift from their current trajectory.

AMD's position is such a non-factor in all its markets that whatever incremental benefit comes from x% more sales of an evaporating market share is swamped by anything that could get a bone thrown to them by the financiers.

They've politely swallowed every promise AMD has flubbed or delayed by two years on those conference calls, so why not one more try? Give all the upcoming quarter's launch schedules and surprise features, give a roadmap for products in 2016, and we'll see them in 2017 if the designs aren't cancelled. The folks who make those nifty analyst PowerPoints are just about the only money-making group AMD has.
 
To coincide with the Windows 10 release, perhaps. It could be advantageous from a marketing perspective to release a full lineup of feature level 12_0+ cards alongside the official DX12 release.
Windows 10 will be August/September. End of June or July makes sense to get the graphics cards out. You don't want to release and be sold out with nothing to sell, so you get the initial pop out of the way and then restock for the new OS.
 
Each month that passes without Fiji being released adds more weight to AMD's dependence on Pascal being a flop... which is a terrible position to be in.
The Pirate Islands' window of opportunity is growing really thin.
 
> Each month that passes without Fiji being released adds more weight to AMD's dependence on Pascal being a flop... which is a terrible position to be in.
> The Pirate Islands' window of opportunity is growing really thin.

Good thing Pascal isn't going to come out until next year then, and odds are it's just Maxwell on "14nm" with HBM. The "14nm" process from Samsung/TSMC is too complex to allow changing much from the second release of Maxwell; the lead times on design, tape-out, and manufacturing are quite a bit longer than on 28nm.
 
> Each month that passes without Fiji being released adds more weight to AMD's dependence on Pascal being a flop... which is a terrible position to be in.
> The Pirate Islands' window of opportunity is growing really thin.

Well, let's not start with 2016 things already (without knowing if we are talking about the first quarter, half, or end of 2016)... There's already enough to talk about with the 980 Ti. At this point it is like saying the 980 Ti is dangerously approaching the timeframe of the Arctic Islands 2016 GPUs.
 
> Good thing Pascal isn't going to come out until next year then, and odds are it's just Maxwell on "14nm" with HBM. The "14nm" process from Samsung/TSMC is too complex to allow changing much from the second release of Maxwell; the lead times on design, tape-out, and manufacturing are quite a bit longer than on 28nm.
Last I heard, nVidia wasn't the one with declining GPU market share, a terrible fiscal 2014 financial showing, and a GPU lineup mostly made of ancient and/or power-inefficient chips.
nVidia can afford for Pascal to be "just Maxwell on 14nm with HBM". It seems to me that they worked their asses off to achieve that comfort.
AMD is the one playing catch-up.

Not to mention that the 980 Ti may already be tough as hell to compete with on the high end.

> Well, let's not start with 2016 things already (without knowing if we are talking about the first quarter, half, or end of 2016)... There's already enough to talk about with the 980 Ti. At this point it is like saying the 980 Ti is dangerously approaching the timeframe of the Arctic Islands 2016 GPUs.
The Titan X is out there, so the 980 Ti is probably finished and waiting at the door for AMD to release Fiji.
The later nVidia releases the 980 Ti, the more money they'll make on the Titan X.
 
> Last I heard, nVidia wasn't the one with declining GPU market share, a terrible fiscal 2014 financial showing, and a GPU lineup mostly made of ancient and/or power-inefficient chips.
> nVidia can afford for Pascal to be "just Maxwell on 14nm with HBM". It seems to me that they worked their asses off to achieve that comfort.
> AMD is the one playing catch-up.

That does little to change the lead and design times required for 14nm: you can't just "throw more money at the problem" to switch to a new and far more complex patterning, masking, and manufacturing scheme while simultaneously redesigning a large portion of your GPU pipeline and expect it to "just work".
 