AMD: Pirate Islands (R* 3** series) Speculation/Rumor Thread

I think we can agree that Fermi, Kepler, and big Maxwell were major jumps, but what about GM107?
Big Maxwell is a very small jump - a bit more than an upscaled GM107, yes, but not that much more. GM107 itself, though, was definitely a big jump.
Anyway, in RV740's case I guess it just wasn't destined to fare well from the start; the chip never really saw widespread use, though there might have been other reasons for that.
 
Big Maxwell brought the 2x FP16 performance from Tegra X1.
If a game uses that extensively (ahem Gameworks ahem), then it might show a sizeable performance/area and performance/power boost.
 
Big Maxwell brought the 2x FP16 performance from Tegra X1.
If a game uses that extensively (ahem Gameworks ahem), then it might show a sizeable performance/area and performance/power boost.
In terms of architecture this is a rather small change though.
 
Big Maxwell brought the 2x FP16 performance from Tegra X1.
If a game uses that extensively (ahem Gameworks ahem), then it might show a sizeable performance/area and performance/power boost.
I think this is mistaken. Pascal has the 2x FP16 throughput, but Maxwell doesn't - except for TX1.
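To put the correction in concrete terms: the doubled rate comes from packing two half-precision values into a single 32-bit register and operating on both with one instruction. A minimal CUDA sketch of the idea (the kernel name is mine; these intrinsics require compute capability 5.3, i.e. the TX1's GPU, or newer):

```
#include <cuda_fp16.h>

// Packed FP16: one 32-bit register holds two halves, so a single
// instruction performs two operations per lane -- the source of the
// "2x FP16 rate" on parts with native half2 ALUs.
__global__ void axpy_h2(int n, __half2 a, const __half2 *x, __half2 *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = __hfma2(a, x[i], y[i]); // two fused multiply-adds in one op
}
```

Build with nvcc -arch=sm_53. Big Maxwell (sm_52) doesn't even expose these intrinsics, which lines up with the correction: the packed-math path exists on TX1 and then Pascal, not on the desktop Maxwells.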
 
I really thought big Maxwell had the 2x FP16, since Jen-Hsun Huang kept bragging about it during GDC...
My mistake.


Why do you need to "clear the channel" if you are rebranding existing products?
It is a new family, likely with 3, maybe 4, ASICs.
The rumors saying otherwise sure are persistent, then...
Most websites are saying that:
R9 390 is Fiji
R9 380 is Hawaii
R9 370 is full Tonga
R7 360 is ancient old senile Pitcairn
R7 350 is Bonaire

"Clearing the channel" could be something as simple as getting the graphics cards back, burning a new firmware in their EEPROMs and putting them inside a new box.
 
Getting the graphics cards back and burning new firmware is not a simple operation. It's a logistics nightmare. And it's costly as hell.
 
Getting the graphics cards back and burning new firmware is not a simple operation. It's a logistics nightmare. And it's costly as hell.

I meant "simple" as easily explainable. It should be a logistics nightmare, yes.
I was just saying that delaying the launch of the new "family" because they're clearing the channels of the old cards could mean that they're not waiting for the cards to be sold at all.
It could be that they're just bringing them back to put a different stamp in them and get them back into the shelves.
 
A family rebrand just doesn't make sense.
The only explanation that doesn't consist of clickbait BS is that someone was defining the performance range of the new family and the recipient misunderstood.
 
A family rebrand just doesn't make sense.
The only explanation that doesn't consist of clickbait BS is that someone was defining the performance range of the new family and the recipient misunderstood.
A family rebrand on the same process makes total sense if you don't have major architectural improvements. Maxwell warranted the expense of redoing their full product lineup in the same technology.

The key question is if AMD has something similar as well. If not, what'd be the point of redoing?
 
A family rebrand on the same process makes total sense if you don't have major architectural improvements. Maxwell warranted the expense of redoing their full product lineup in the same technology.

The key question is if AMD has something similar as well. If not, what'd be the point of redoing?

Surely between full DX 12_1 support, the greatly enhanced memory bandwidth efficiency of the new colour compression tech, TrueAudio and (I'm not totally sure on this one) FreeSync support, the answer is a clear yes.

Granted, some of the refreshed parts will support some or all of those technologies, but it's going to be a total mix and match, with the atrociously old 360 not supporting any of them!
 
Surely between full DX 12_1 support, the greatly enhanced memory bandwidth efficiency of the new colour compression tech, TrueAudio and (I'm not totally sure on this one) FreeSync support, the answer is a clear yes.
Of all those, the only thing that could really help performance is the color compression. The rest are non-essential side features of varying importance. FreeSync would be nice, but is that worth redoing a whole chip for when you're desperately short on resources that are needed to make the next gen on a smaller process?
Tough decisions must be made sometimes.
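For what it's worth, here's why the colour compression is the one item that buys raw performance: delta colour compression keeps one anchor value per tile and stores the other pixels as small deltas, so smooth tiles cost far less bus traffic. A toy sketch of the principle - the real DCC tile formats are undocumented, so the tile size and packing below are made up for illustration:

```
#include <cstdint>
#include <cstdio>

// Bits a signed delta needs: one sign bit plus the magnitude.
static int bits_needed(int32_t delta) {
    uint32_t mag = delta < 0 ? (uint32_t)(-(int64_t)delta) : (uint32_t)delta;
    int bits = 1;                        // sign bit
    while (mag) { ++bits; mag >>= 1; }
    return bits;
}

int main() {
    const int TILE = 64;                 // made-up tile size
    uint32_t px[TILE];
    for (int i = 0; i < TILE; ++i)       // smooth gradient, typical of real frames
        px[i] = 0x20100800u + (uint32_t)i;

    int widest = 0;                      // widest delta against the anchor px[0]
    for (int i = 1; i < TILE; ++i) {
        int b = bits_needed((int32_t)(px[i] - px[0]));
        if (b > widest) widest = b;
    }
    // One full 32-bit anchor plus fixed-width deltas for the other pixels.
    int packed = 32 + (TILE - 1) * widest;
    printf("raw tile: %d bits, delta-packed: %d bits\n", TILE * 32, packed);
    return 0;
}
```

On a smooth tile like this one the packed form is a small fraction of the raw size, and since it's lossless the win goes straight into effective memory bandwidth.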
 
Surely between full DX 12_1 support, the greatly enhanced memory bandwidth efficiency of the new colour compression tech, TrueAudio and (I'm not totally sure on this one) FreeSync support, the answer is a clear yes.
Also XDMA Crossfire, HSA improvements, faster tessellation, mixed rotation Eyefinity, 4K VSR, HDMI 2.0, VCE & UVD improvements and H.265 decoding support (if the leaked slides are accurate).
 
Also XDMA Crossfire, HSA improvements, faster tessellation, HDMI 2.0, VCE & UVD improvements and H.265 decoding support (if the leaked slides are accurate).
Bonaire, Tonga, Hawaii and Fiji have XDMA CF; only Pitcairn (and low-end Oland) wouldn't have it. Faster tessellation isn't really needed below Tonga, while Hawaii already has 4 primitive pipes and a larger cache for buffering compared to Pitcairn/Tahiti, so I guess it passes as "good enough" in that regard - certainly not an important enough factor to refresh Hawaii just for better tessellation. And HSA improvements are - let's face it - currently irrelevant for games.

The only chip outdated enough to hypothetically warrant a replacement with Tonga IP (or newer) is Pitcairn, but delta color compression would be pointless here unless they take the Nvidia GM206 route (a 128-bit interface to reduce cost), since 20 CUs lack the raw horsepower to benefit from 256-bit + DCC.
Similarly, Tonga's improved tessellation performance compared to Tahiti is mostly a combination of a larger parameter cache (and a larger L2) and doubled primitive/geometry pipelines. Both of these cost transistors and area, with only minor benefit for a mid-range chip like Pitcairn.

So the only real benefit of a Pitcairn replacement with newer IP would be improved connectivity and media en-/decoding. Is that alone worth the resources needed for making an entirely new chip? I'd say no, not if you are in such a tight spot as AMD is in right now.

Replacing Pitcairn would only make sense if they were able to notably improve performance per Watt and mm² as well, but I wonder whether the latest GCN IP is enough of an improvement in that regard.
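To put rough numbers on the 256-bit + DCC point above: effective bandwidth is raw bus bandwidth divided by one minus the average compression saving. A quick back-of-the-envelope sketch - the ~20% saving and the clock figures are assumptions for illustration, not vendor specs:

```
#include <cstdio>

// Effective bandwidth under lossless framebuffer compression: the bus
// still moves at most its raw rate, but each transferred byte stands in
// for more than one byte of pixel data.
static double effective_gbps(double bus_bits, double gbps_per_pin,
                             double avg_saving /* fraction 0..1, assumed */)
{
    double raw = bus_bits * gbps_per_pin / 8.0;  // GB/s
    return raw / (1.0 - avg_saving);
}

int main() {
    // Illustrative numbers only.
    printf("256-bit @ 5.5 Gbps, no DCC  : %.0f GB/s\n", effective_gbps(256, 5.5, 0.00));
    printf("128-bit @ 5.5 Gbps, ~20%% DCC: %.0f GB/s\n", effective_gbps(128, 5.5, 0.20));
    printf("256-bit @ 5.5 Gbps, ~20%% DCC: %.0f GB/s\n", effective_gbps(256, 5.5, 0.20));
    return 0;
}
```

Which is the GM206 argument in a nutshell: the narrow bus plus compression claws back a good chunk of what a wider raw bus would provide, at lower board and die cost, while a 20-CU chip couldn't feed off a compressed 256-bit interface anyway.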
 
Feature level 12_1 brings some pretty desirable features from a performance perspective, as far as I'm aware. It'll be a real shame if those features don't get taken up by devs because AMD's lower-end chips don't support them.
 
Then there's the age-old moral issue of re-branding something old (sometimes very old) and calling it new. It has never really been ethically acceptable IMO, ever since GPU makers first started doing this.

It's old stuff. None of it is gonna get any newer just because you put the same old shit in a new flashy box and make up some excuses about how it's really not that big a deal. Having a "new" card that doesn't even support your own technologies - ones that were announced, or even released, a year and a half ago if not more - is reprehensible. A new low in a history of lows.
 
Also XDMA Crossfire, HSA improvements, faster tessellation, HDMI 2.0, VCE & UVD improvements and H.265 decoding support (if the leaked slides are accurate).
Hawaii and Tonga already have XDMA. That covers most of the target market for Xfire. HSA and faster tessellation are niche features with limited gaming performance upside. HDMI 2.0: only useful for 4K TVs. Monitors use faster DP.

I'm not saying it's an ideal situation, and maybe they'll have meaningful refresh at 28nm. But if your R&D dept only has enough resources for 1 new family, going all in on 16/14nm is the logical choice. Hell, maybe they'll surprise us with a smaller process much earlier than Nvidia.
 
Then there's the age-old moral issue of re-branding something old (sometimes very old) and calling it new. It has never really been ethically acceptable IMO, ever since GPU makers first started doing this.
Moral this, moral that. If you're bleeding money, the moral thing to do is take the course with the highest odds of survival. Wasting resources on a process with a short remaining lifetime may not be that course.
 