AMD: Pirate Islands (R* 3** series) Speculation/Rumor Thread

That would be the case where the expectation of staying on top for a reasonable time is part of what you buy with a top-priced product. People are paying serious money for an impractical pursuit with an ephemeral status or benefit, which leaves few motivations that aren't close to some fairly basic emotional drives.

Do you really think it's fair to generalize like that?
I think most people buying top-end, $700 to $1000 graphics cards already expect them to lose the crown to the top end of the next generation, which will come out within six months to a year.
In this particular case, people saw their 8-to-12-month-old $1000 graphics cards performing as well as the new $200 mid-range.
Is that all the same to you?


If there happened to be a credible alternative. Some were no doubt burned by the other alternative's multi-GPU implementation, which lost half their frame gains.
Setting aside the ancient Fury MAXX and the 3dfx solutions, we've had dual-GPU cards continuously for nine years, every generation.
Pretty much anyone who reads a single review of a multi-GPU card will know that these cards only deliver a bit over half of their potential until the IHVs release a driver with a dedicated path for each particular game (if ever).
So, burned or not, I think people paying >$600 for a dual-GPU graphics card are already counting on that caveat.


If I'm paying for a top-end card, you're going to be hard-pressed to tell me that I "only" need 4GB.
At the same time they're saying you only need 4GB, they'll also have to explain to you why the lower-range cards, supposedly costing half as much, are getting 8GB.
 
Do you really think it's fair to generalize like that?
I think most people buying top-end, $700 to $1000 graphics cards already expect them to lose the crown to the top end of the next generation, which will come out within six months to a year.
In this particular case, people saw their 8-to-12-month-old $1000 graphics cards performing as well as the new $200 mid-range.
I misread which Titan line you were referencing. I thought you were referring to the grumbling over the distance between Titan X and the 980 Ti.
The complaints about Kepler's high-end versus the more humble Maxwell implementations would be a case of expecting to lose to the next high-end. Special-case software regressions are a problem if the new mid-range cannot reasonably be expected to be a general replacement outside of those cases.

Setting aside the ancient Fury MAXX and the 3dfx solutions, we've had dual-GPU cards continuously for nine years, every generation.
Nvidia made it a marketing point, and a decently successful one.
They made AMD look stupid, and looking stupid by association after paying through the nose for it is a bit of a downer.
 
How about we make a new thread discussing the benefits of >4GB of VRAM in games? It's one thing to discuss the technical limitations of HBM (why they might exist and how to get around them), but debating the current need for >4GB is really a different discussion. Finally, I think we can dial down some of the attitudes. It's getting borderline hostile for a relatively straightforward discussion.
 
At the same time they're saying you only need 4GB, they'll also have to explain to you why the lower-range cards, supposedly costing half as much, are getting 8GB.
I do not share your opinion in this case; there have been a multitude of past examples where "weaker" cards ended up with larger framebuffers. There were 4GB 7750s, which exceeded the 3GB of the original reference 7970.

To the point: I'm educated enough to know that extra VRAM isn't the selling point between two GPUs of disparate performance, but it very well might be the selling point between GPUs of disparate manufacture but similar performance...
 
Wow, 4K performance should be pretty impressive. I wouldn't be surprised to see it comfortably beating the Titan X at that resolution while being even with it or slower at lower resolutions. Here's hoping it's faster across the board, though.
 
That ROP count would have less bandwidth per ROP than Hawaii and Tonga as we know them, and much less relative to Tahiti.
Frame buffer compression might be useful in this case.
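For reference, the bandwidth-per-ROP gap can be sketched with public spec-sheet numbers; the Fiji entry uses the rumored 512 GB/s HBM figure and a speculative 128-ROP count, both of which are assumptions at this point:

```python
# Rough bandwidth-per-ROP comparison.
# Tahiti/Hawaii/Tonga figures are reference spec-sheet numbers;
# the Fiji row is the rumored HBM bandwidth and a speculated ROP count.
gpus = {
    "Tahiti (HD 7970)": (264, 32),    # (GB/s, ROPs)
    "Hawaii (R9 290X)": (320, 64),
    "Tonga (R9 285)":   (176, 32),
    "Fiji (rumored)":   (512, 128),
}

for name, (bw_gbs, rops) in gpus.items():
    print(f"{name}: {bw_gbs / rops:.2f} GB/s per ROP")
```

On these numbers Fiji would sit at 4.0 GB/s per ROP versus Hawaii's 5.0, Tonga's 5.5, and Tahiti's 8.25, which is the ranking described above.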
 
One of the larger single items in the power improvement would likely be the memory interface, but it would indicate other sources of efficiency gain--assuming the headline clock numbers are at least no more misleading than the up-to clocks of the 290X.

Improvements in the physical characterization (unclear whether this is GF or TSMC), better DVFS, dynamic clocking along the lines of Carrizo, and architectural tweaks might add to it. The TDP is 300W, which is a higher number but also a potential sign that their clocking scheme can take things even closer to the limit than before.

edit: closed parenthesis
 
Well, to be blunt, that perf/W may not be a "bigger story than HBM"; rather, it stands to reason that it's truly the story of HBM. Perhaps...
 
That ROP count would have less bandwidth per ROP than Hawaii and Tonga as we know them, and much less relative to Tahiti.
Frame buffer compression might be useful in this case.

I think it's likely that, with frame buffer compression, the effective bandwidth will be at least double Hawaii's, so in conjunction with the doubled ROPs that would likely be a really potent 4K combination. As you say, though, it's still not coming close to the bandwidth per ROP of Tahiti.

With regard to the GFLOPS/watt metric, I think it's far less important than overall performance/watt for any comparison to Maxwell.
 
I think it's likely that, with frame buffer compression, the effective bandwidth will be at least double Hawaii's, so in conjunction with the doubled ROPs that would likely be a really potent 4K combination. As you say, though, it's still not coming close to the bandwidth per ROP of Tahiti.

With regard to the GFLOPS/watt metric, I think it's far less important than overall performance/watt for any comparison to Maxwell.


It's a theoretical metric, a simple calculation, so it doesn't necessarily reflect "real" performance. (Even that is a bit complicated: depending on the panel of games and benchmarks, the metric can move all over the place.)
 
Tonga has way too little fillrate for its bandwidth, just compare with GTX960's fillrate and overall performance.

It is notable, though, that Titan X gives the appearance of being bandwidth bound. Titan X has 3x GTX960's bandwidth, yet only 2.7x its fillrate. This implies that the GTX960 is the more bandwidth-bound of the two, which would in turn imply that, with increased bandwidth, a GTX960 would be substantially faster than the R9 285 even though in absolute terms it has less bandwidth than Tonga.

At which point it's worth noting that Fiji would appear to have 2.9x the bandwidth of the R9 285. Taken at face value, that would imply Fiji has around 3x the fillrate of the R9 285, which I reckon would correspond to 96 ROPs. So it seems very likely to me that AMD will squander HBM with too little fillrate.
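The ratios quoted above can be checked with reference-clock numbers; fillrate is approximated here as ROP count times base clock, which is an assumption, since boost clocks shift the figures somewhat:

```python
# Bandwidths are spec-sheet figures in GB/s; fillrate approximated
# as ROPs * base clock (GHz) -> Gpix/s, ignoring boost behavior.
gtx960_bw, titanx_bw = 112.0, 336.5
gtx960_fill = 32 * 1.126     # 32 ROPs @ 1126 MHz
titanx_fill = 96 * 1.000     # 96 ROPs @ 1000 MHz

print(f"bandwidth ratio: {titanx_bw / gtx960_bw:.1f}x")     # ~3.0x
print(f"fillrate ratio:  {titanx_fill / gtx960_fill:.1f}x") # ~2.7x

r9_285_bw, fiji_bw = 176.0, 512.0   # Fiji figure is the rumored HBM number
print(f"Fiji vs R9 285 bandwidth: {fiji_bw / r9_285_bw:.1f}x")  # ~2.9x
```

Scaling the R9 285's 32 ROPs by roughly 3x is what yields the 96-ROP guess in the post.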

Also GFLOPS/watt is a pretty misleading measure of efficiency, since games aren't ALU bound.
 
Well, to be blunt, that perf/W may not be a "bigger story than HBM"; rather, it stands to reason that it's truly the story of HBM. Perhaps...

No. HBM improves perf/W significantly for the memory interface, but the memory interface is only a small fraction of overall GPU power. The shader array is dominant, and its perf/w is unrelated to the memory interface.
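A back-of-the-envelope illustration of that point, using assumed round numbers rather than measured figures: even if HBM cut the memory-interface power to a third, the board-level saving would stay modest.

```python
# Illustrative only: the 20% memory-interface share and the 1/3
# HBM power ratio are assumptions, not measured data.
board_power = 250.0       # W, hypothetical GDDR5 card
mem_fraction = 0.20       # assume memory interface is ~20% of board power
hbm_power_ratio = 1 / 3   # assume HBM needs ~1/3 the interface power

gddr5_mem_w = board_power * mem_fraction        # 50 W for GDDR5 interface
hbm_mem_w = gddr5_mem_w * hbm_power_ratio       # ~16.7 W with HBM
new_board_power = board_power - gddr5_mem_w + hbm_mem_w

print(f"board power with HBM: {new_board_power:.1f} W")  # ~216.7 W
```

Under these assumptions the saving is roughly 33 W, about 13% of board power: significant, but nowhere near enough to explain a large overall perf/W jump on its own.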
 
Also GFLOPS/watt is a pretty misleading measure of efficiency, since games aren't ALU bound.
Over the last five(?) years, we've seen Fermi stay competitive with vastly fewer GFLOPS than AMD. And we've seen Maxwell significantly outperform Kepler with fewer GFLOPS as well, both still fewer than AMD's. So, yeah, GFLOPS/W doesn't mean a lot for gaming performance. Still, 300W wouldn't be half bad for Fiji...
 