AMD: Pirate Islands (R* 3** series) Speculation/Rumor Thread

Well, the logo in the picture points toward 380, not 390.
http://www.techpowerup.com/img/15-04-09/20b.jpg

Does it?
At first it seemed clearly a 390, then when it was said to be a 380 it started looking more like a 380 - but which of these digits is the 8?
[attached image: xfxradeon.jpg]
 
It's 390. The first 9 looks more like an 8 than the second when zoomed in, and I highly doubt that it's an R8 380...
 
The first 9 looks more like an 8 than the second when zoomed in, and I highly doubt that it's an R8 380...
Presenting the new greatly simplified AMD naming system:
R9 390, R8 380, R7 370, R6 360, R5 350, R4 340, and R3 330.

This would be simplified even more on the next iteration:
R4.9, R4.8, R4.7, R4.6, R4.5, R4.4, R4.3

And finally, as simple as:
R9.5, R8.5, R7.5, R6.5, R5.5, R4.5, R3.5.

Anyone in for an AMD R9.10X 128GB graphics card in the year 2025?
 
Point out one thing that doesn't match, taking into account that the "390" card clearly has a VRAM cooling plate attached (it can also be seen in the 2nd picture), covering some components entirely.
 
Point out one thing that doesn't match, taking into account that the "390" card clearly has a VRAM cooling plate attached (it can also be seen in the 2nd picture), covering some components entirely.
The small components (what do you call those? the tiny ones on the PCB) next to the cooling plate, for the VRAM or whatever it is, don't match the R9 290X DD. There are 2x2 of them on the claimed R9 390 and 2x4 on the R9 290X DD.

edit:
http://images.bit-tech.net/content_images/2014/05/amd-radeon-r9-280-review-feat-xfx/xfx280-11b.jpg

Also, that "something" on top of the cooling plate on the R9 290X DD seems to be missing.


The white area on the edge could very well be the edge of an interposer; it seems big enough.
 
That's a 280X / 280 DD (and it actually has some clearly noticeable differences compared to the 290X PCB).

Here's the only 290X DD picture I could find (same as the one in my comparison) http://i.imgur.com/2H1UNOk.jpg

Either way, this is how I see the "390" and the 290X DD: http://i.imgur.com/5WhFr7N.png

It's just so similar (and I still can't really see any differences, maybe I'm blind) that I can't help but feel that it's a Hawaii card, photoshopped name or not.

edit: somewhat ninja'd
 
Uh oh, yeah, my mistake there, wrong picture (which for whatever reason comes up in an R9 290X DD search; I didn't check the obvious marks like the clearly wrong GPU) :oops:
 
I think HBM1 and HBM2 are part of the same lineage,
The more I think about it, the more true that seems to be - they're point-versions apart in terms of technology, e.g. 1.0 and 1.1, but calling them 1 and 2 has a warm and fuzzy marketability to it.

and HBM1 did not come out as early as was hoped. GDDR5 was not succeeded by a GDDR6, and it has hit speed grades that were not originally projected and lasted longer than most graphics memory standards did.
Old threads are so much fun, I get lost in them for hours at a time:

Nvidia GT300 core: Speculation

NVidia never did the memory hub. And Aaron was convinced that GDDR5 would never get to the speeds it's now reached. Differential signalling was the dead certainty. Did XDR2 ever appear in a product? Dare I utter the word "Rambus"? :mrgreen:

Lessened enthusiasm and the continued polishing of GDDR5 could have tamped down the upside of the first gen, so we may need to see who else but AMD may adopt it.
We haven't heard about anyone else using HBM1. With seemingly such a short life ahead of it, I can't help thinking it'll be AMD-only.

Is there some way that HBM1 could be a cheap solution in the HBM2 era? e.g. at the end of 2016 would there be $200 cards introduced with a single 4GB HBM1 stack?

Could we see HBM1 stacks as salvage variants of HBM2? e.g. the stack consists of HBM2 dies, but with large chunks turned off or de-rated, therefore good for only HBM1?
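Just to put rough numbers on the single-stack idea, here's a back-of-envelope sketch in Python using the publicly quoted first-gen HBM figures (1024-bit interface per stack, ~1 Gbps per pin, 1 GB per 4-Hi stack); none of this comes from any confirmed 300-series spec:

# Back-of-envelope: capacity and bandwidth for N first-gen HBM stacks
PINS_PER_STACK = 1024   # DQ width per stack
GBPS_PER_PIN = 1.0      # ~500 MHz DDR, i.e. 1 Gbps per pin (announced HBM1 figure)
GB_PER_STACK = 1        # 4-Hi stack of 2 Gb dies

def hbm1_numbers(stacks):
    bandwidth_gbs = stacks * PINS_PER_STACK * GBPS_PER_PIN / 8
    capacity_gb = stacks * GB_PER_STACK
    return capacity_gb, bandwidth_gbs

for n in (1, 2, 4):
    cap, bw = hbm1_numbers(n)
    print(f"{n} stack(s): {cap} GB, {bw:.0f} GB/s")
# 1 stack(s): 1 GB, 128 GB/s
# 2 stack(s): 2 GB, 256 GB/s
# 4 stack(s): 4 GB, 512 GB/s

On the announced parts a single stack is only 1 GB, so a single-stack 4GB card would need denser stacks than anything shown so far, but 128 GB/s from one stack would still be plenty for a $200 card.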
 
The more I think about it, the more true that seems to be - they're point-versions apart in terms of technology, e.g. 1.0 and 1.1, but calling them 1 and 2 has a warm and fuzzy marketability to it.
It may be more than 0.1, if it turns out that HBM1 is SDR and HBM2 is DDR. That seems like enough of a change for a .5 or more. The legacy mode, if that turns out to be an actual legacy mode, might speak to a planned evolution for the standard. Maybe the original intent was for HBM1 to serve as the initial somewhat-rough effort that would have gone into graphics earlier, and the legacy mode is something to compensate for the time lost.

Old threads are so much fun, I get lost in them for hours at a time:

Nvidia GT300 core: Speculation

NVidia never did the memory hub. And, Aaron was convinced that GDDR5 would never get to the speeds it's now reached. Differential signalling was the dead-certainty. Did XDR2 ever appear in a product? Dare I utter the word "Rambus" :mrgreen:

I am not aware of XDR2 being used in a product. The motivation for the hub concept seems to have been lost, since memory didn't change much in the years since. HMC is something like it, but it uses stacking to eliminate one half of the high-speed IOs that the hub concept presented.
IBM's Centaur memory buffer may mean that HPC installations with Nvidia will have a buffer on the CPU side, although the reasons would be different.

In the face of the physical constraints of transmitting over a PCB, GDDR5 has taken 6 years to hit the 7 Gbps threshold, with potentially some future product nudging it to 8 Gbps.

Perhaps the skepticism stemmed from the expectation of a GDDR6 in the intervening half decade, which could ill afford to start at the top end of GDDR5 with nowhere to go without differential signalling.
Hybrid Memory Cube is the next high-speed over-PCB standard, and it is differential.
HBM gives up on the PCB and the high speed.
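To put that trade-off in round numbers (a sketch assuming a 384-bit GDDR5 bus at 7 Gbps against four 1024-bit HBM stacks at ~1 Gbps per pin; neither figure is tied to any specific rumored card):

# Narrow-and-fast over PCB vs wide-and-slow on an interposer
def bandwidth_gbs(bus_width_bits, gbps_per_pin):
    return bus_width_bits * gbps_per_pin / 8  # GB/s

gddr5 = bandwidth_gbs(384, 7.0)       # 384-bit GDDR5 at 7 Gbps
hbm1 = bandwidth_gbs(4 * 1024, 1.0)   # four HBM1 stacks at ~1 Gbps per pin

print(f"384-bit GDDR5 @ 7 Gbps   : {gddr5:.0f} GB/s")   # 336 GB/s
print(f"4x HBM1 @ ~1 Gbps per pin: {hbm1:.0f} GB/s")    # 512 GB/s

The wide, slow interface comes out ahead even at a seventh of GDDR5's per-pin rate, which is the whole point of giving up the PCB.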

I was uncertain at the time whether another vendor would want to get in on an AMD memory standard, given how unhelpful it turned out to be for the last one. HBM seems to indicate there was still a case for such a collaboration on niche memory, and HBM1 vs HBM2 feels like the same conditional benefits are in play.

Could we see HBM1 stacks as salvage variants of HBM2? e.g. the stack consists of HBM2 dies, but with large chunks turned off or de-rated, therefore good for only HBM1?
If the signalling is different, I am not sure how that works unless legacy mode effectively behaves like SDR HBM2.
 
It may be more than 0.1, if it turns out that HBM1 is SDR and HBM2 is DDR. That seems like enough of a change for a .5 or more. The legacy mode, if that turns out to be an actual legacy mode, might speak to a planned evolution for the standard. Maybe the original intent was for HBM1 to serve as the initial somewhat-rough effort that would have gone into graphics earlier, and the legacy mode is something to compensate for the time lost.
It is DDR. I mean... everyone can visit JEDEC's web site and get a copy of the JESD235 HBM DRAM standard. To me, HBM 1 or 2 is likely just about SK Hynix's own implementation of HBM.
 
It is DDR. I mean... everyone can visit JEDEC's web site and get a copy of the JESD235 HBM DRAM standard. To me, HBM 1 or 2 is likely just about SK Hynix's own implementation of HBM.
That said, after giving it a re-read, it feels like HBM2 might be an update to the specification that is still in the standardization pipeline, or else a vendor-specific version that is still backward compatible with the "legacy mode" (JESD235). At least I saw no mention of a 1KB page size or 64-bit DQ in the document. The webpage for Cadence's HBM Controller IP mentioned pseudo channel mode, though.
 
It is DDR. I mean... everyone can visit JEDEC's web site and get a copy of the JESD235 HBM DRAM standard. To me, HBM 1 or 2 is likely just about SK Hynix's own implementation of HBM.
If both are DDR, then the two gens sound more similar than the marketing revision would imply. The older slides on HBM put the data rate at about half of what was planned for HBM2, so I interpreted that gap as being related to a shift there.

I have not registered an account on the JEDEC site to download the PDF. I admit I should, to get the clearest data, but I am sluggish when it comes to registering for documentation.
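If the gap on those slides is a per-pin clock difference rather than SDR vs DDR (JESD235 already specifies DDR, as noted above), the per-stack arithmetic would look something like this; the 2 Gbps figure for HBM2 is the commonly rumored target, not something out of the published document:

# Per-stack bandwidth at the two per-pin rates (assumed: ~1 Gbps HBM1, ~2 Gbps rumored HBM2)
STACK_WIDTH_BITS = 1024

for name, gbps in (("HBM1", 1.0), ("HBM2 (rumored)", 2.0)):
    print(f"{name}: {STACK_WIDTH_BITS * gbps / 8:.0f} GB/s per stack")
# HBM1: 128 GB/s per stack
# HBM2 (rumored): 256 GB/s per stack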

edit:
spelling
 
Looking through the numbers, and assuming the 390X is indeed at about the same performance level as the Titan X in terms of pushing out compute/shaded pixels, then while the 390X may indeed have more bandwidth than is needed for 4K, it might not be by much, if at all.

Considering 640 GB/s is a good multiple of the PS4's bandwidth, one that roughly matches the jump in resolution from 1080p to 4K, and the straight compute power of the Titan X compared to the PS4 is within similar though not linearly scaled bounds, it'll come down to the 390X's ROP count and clock speed to see where the bottleneck is. Of course the "4K!" performance might not last, as we'll see games run at < 1080p on the PS4 as time goes on. Inevitably, better pixels can just be traded off for fewer of them, and as with the last generation of consoles that will probably happen more and more over time. I wouldn't be surprised to see a lot of games running at 900p or less on the PS4 by the end of the generation.

Regardless, depending on the title, the huge bandwidth advantage HBM confers could give the 390X a large edge at higher resolutions. Post script: looking at Hawaii, what sort of silicon savings would AMD see if they cut FP64 from 1/8th to 1/16th? They'd want to sell compute from their highest-end card after all, so that seems like an obvious saving.
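For what it's worth, the multiplier comparison works out like this (taking the rumored 640 GB/s figure for the 390X and the PS4's 176 GB/s at face value):

# Does the rumored 390X bandwidth track the 1080p -> 4K pixel jump relative to the PS4?
ps4_bw_gbs = 176      # PS4 GDDR5 bandwidth
r390x_bw_gbs = 640    # rumored 390X HBM bandwidth (unconfirmed)

pixels_1080p = 1920 * 1080
pixels_4k = 3840 * 2160

print(f"Bandwidth ratio: {r390x_bw_gbs / ps4_bw_gbs:.2f}x")   # ~3.64x
print(f"Pixel ratio    : {pixels_4k / pixels_1080p:.2f}x")    # 4.00x

So the bandwidth multiple comes in a bit under the 4x pixel multiple, close enough to call it roughly in sync with the resolution jump.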
 