AMD: Pirate Islands (R* 3** series) Speculation/Rumor Thread

R9-380x is a rebranded R9-290x
Hm... rebranded, and with two disabled MCs? :???: Sounds silly. It seems a waste of I/O if it's the same die with the extra 4 CUs enabled. How about a die-shrunk 290X (hence the 384-bit I/O) that's fully operational (48 CUs)? :p

Bleh, rumours.

pirate.gif
 
But actual FPS in games? For cards that are not even close (at least three months away) to being released?

The launch date doesn't coincide with the point in time when the chips are actually born. Many months (if not a year, two, or even three) in advance, AMD's headquarters already knows the target performance of its projects and designs.

Think about it a bit more deeply.

I am not saying in any way that what's written there is true and correct, just that your argument doesn't make sense. ;)
 
The launch date doesn't coincide with the point in time when the chips are actually born. Many months (if not a year, two, or even three) in advance, AMD's headquarters already knows the target performance of its projects and designs.

Think about it a bit more deeply.

I am not saying in any way that what's written there is true and correct, just that your argument doesn't make sense. ;)

Yes, but as far as I know, the guy who wrote that is speculating? It's not someone claiming to actually have inside info? Further, he is speculating about both the supposedly upcoming nVidia and AMD cards. He doesn't have a clue, and he is pulling FPS numbers out of his backside.
 
He also clearly has no knowledge of the capabilities of current ASICs.
Rumors wouldn't be nearly as fun for the readers if the originators actually knew enough to create reasonable rumors. It's the unreasonableness of their predictions that keeps everyone on edge.:p
 
I don't know; AMD has already shown stacked-memory prototypes numerous times, but those were for their APUs, not desktop GPUs (and the first time they showed one was quite a few years ago now, maybe around the 5870 era).

The question is more whether it's possible to push it into mass production right now. To be honest, I don't really trust any of the posted specifications, for either Nvidia or AMD; it looks more like some guys with time to lose tried to calculate the possible improvement from 28 nm to 20 nm and imagined specifications that could fit. (Already, 4224 SPs is a strange number: it means either 96 SPs instead of 64 SPs per CU, for a total of 44 CUs, or 66 CUs at 64 SPs each, and I don't know whether the front end could deal with 66 CUs; that's a massive number.)
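
To make the arithmetic behind that observation explicit (a quick sanity check on the rumored figure, nothing more), here's a minimal sketch:

```python
# How the rumored 4224-SP figure divides into whole CUs.
total_sp = 4224
for sp_per_cu in (64, 96):
    cus, remainder = divmod(total_sp, sp_per_cu)
    print(f"{sp_per_cu} SP per CU -> {cus} CUs (remainder {remainder})")
# 64 SP per CU -> 66 CUs (remainder 0)
# 96 SP per CU -> 44 CUs (remainder 0)
```

So the total only works out to whole CUs as 66 x 64 or 44 x 96, which is why it looks like an odd target either way.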

The next GPU should be on GCN 2.0, and I have absolutely no idea whether it will just be an optimized GCN 1.0/1.1 or something more. We can imagine that Hawaii, or whatever architecture was supposed to be released at that point, was meant to be GCN 2.0, so how many changes have been made for the future GCN 2.0?

Both Nvidia and AMD have had to completely change their plans, or delay them, in 2013 (maybe the reason Nvidia introduced Pascal and delayed Volta by one generation). In general we have a rough idea of what happened, but right now I can't tell what direction it's taking.
There are many things in play right now for every company (Intel included): Mantle, DX12, new OpenGL extensions, new processes, stacked memory that has been projected for a while (remember the 7970 rumors with stacked memory), HSA, 4K monitors becoming affordable...
Computing-wise, many things are evolving.
 
I had to stop by and see what people made of these supposed specs...sounds kind of odd.

I can get behind the plausibility of what's rumored from Nvidia: low clocks and low voltage to maximize the benefits of 20 nm, perhaps an extra unit for yields beyond what seems logical (just like GK110). It makes sense that if a truly efficient 64-ROP part (where 25 SMMs would be, for all intents and purposes, similar to 4000 SPs from AMD, though in reality I would think 24 would be more efficient) were coming on 20 nm, it would be at the minimum-voltage (0.9 V) / yield (-20% clock, i.e. similar to 28 nm clocks) spec, but with room to breathe (230 W with scope up to 300 W would do just that). I can also get behind the possibility of a 256-bit bus (4 GB) because of die size and the aim of 4x the resolution of the console standards (inevitably 1440p vs 720p, but probably touted as 4K, 1440p, and 1080p products).

Not saying it's the correct spec, but it seems plausible within the parameters and past things nvidia has done.

But this... is pretty wonky. I really question how a 512-bit bus would work on 20 nm. On 16 nm, or rather 20 nm FinFET, perhaps (power savings but little in the way of transistor density), but on 20 nm (which is the opposite)? We're talking very small transistors (1.9x quoted scaling?), 30% power savings at the same voltage, and/or somewhere between 20% and 25% clock scaling at a similar voltage, iirc. Now take into consideration that we're talking 0.95 V versus 0.9 V, and also likely stricter tolerances toward the top (1.2 V+).

Big dies seem unlikely imho.

High clockspeed? Maybe, but in theory not optimally. Only if it fits with the many factors AMD has considered in the past (a die just big enough to support a memory controller that can feed a certain number of units at a high clock, on a given process, within the TDP of the market segment the product is aimed at) and everything meshes perfectly. That is what would have to be true of a 1536-SP part (within 150 W?), and that seems a dangerous game to play with Nvidia perhaps shooting for the equivalents of 960 and 1920.

I would figure that ideally they would want whatever is closest to the optimal rate for each ROP set (somewhere around 896-960 SPs for 16 ROPs, 1792-1920 for 32, roughly 2816 for 48, and 3840 for 64), but of course there are other factors (power consumption of lower-voltage but denser memory, ideal performance per die size/TDP). You of course have to consider that each 16-ROP set gets an equal number of CUs, so for 48 ROPs it would be somewhere around 2688/2880/3072, for instance, but ideally probably 2880. To reiterate, nothing is ever as simple as it seems, and I think Pitcairn is a prime example of that. But would AMD go the same route for both mid- and upper-end GPUs that they went with Tahiti, especially after the public mauling they received when Nvidia went under to their over and stayed (at least initially) under 225 W? I understand that keeping good yields (in terms of either good units or lower clocks) on a new process is probably part of it, and that extra instruction power (or even texture-processing power, given the unit structure) over the competition may justify the means... but I just wonder.
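
To put rough numbers on those ratios (my own back-of-envelope sketch, assuming 64 SPs per CU; none of these configurations are confirmed):

```python
# SP-per-ROP ratios for the hypothetical configurations mentioned above,
# assuming 64 SPs per CU and CUs splitting evenly across the 16-ROP sets.
sp_per_cu = 64
configs = [(16, (896, 960)),
           (32, (1792, 1920)),
           (48, (2688, 2880, 3072)),
           (64, (3840,))]
for rops, sp_options in configs:
    for sp in sp_options:
        cus = sp // sp_per_cu
        print(f"{rops} ROPs, {sp} SPs -> {cus} CUs, "
              f"{sp / rops:.0f} SPs per ROP, {cus / (rops // 16):.0f} CUs per 16-ROP set")
```

Read that way, the "ideal" picks in the paragraph above are the ones that land on 15 CUs (roughly 60 SPs) per 16-ROP set, which is why 2880 looks like the natural 48-ROP choice.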

I think Nvidia's Maxwell is pretty black and white: scaling from 6 SMMs, to 12, etc., probably even down to 3 SMMs/8 ROPs for SoCs, which (I hate to say) would seemingly be more efficient than something like Kaveri, especially if they kept the cache unscaled to subsidize the bandwidth available on such platforms. The performance of the 750 Ti gives away how much bandwidth help the cache provides, and I think ideally they're shooting for 6 SMMs (768 SPs + 192 SFUs, or similar to 960 SPs), a very attainable clockspeed, and low-power memory within 75 W, perhaps scaled up exponentially until the point where more logic (even if redundant) makes more sense than trying to get yields on less and trying to be able to push the clock up.
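
Spelling out the "6 SMM ~ 960 SP" comparison above (a rough equivalence based on Maxwell's 128 FP32 lanes plus 32 SFUs per SMM; counting SFUs as extra lanes is the poster's shorthand, not an official metric):

```python
# The rough Maxwell-to-GCN lane comparison used above.
smm_count = 6
fp32_per_smm = 128   # CUDA cores per Maxwell SMM
sfu_per_smm = 32     # special-function units per Maxwell SMM
print(smm_count * fp32_per_smm)                  # 768 shader processors
print(smm_count * (fp32_per_smm + sfu_per_smm))  # 960 "equivalent" lanes
```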

I'm not even going to pretend I know the intricate differences and weights of die size, memory-controller speed/size, cache, RAM density considerations, vdd and vram, unit counts, and yield considerations, and/or how they affect end-product decisions, especially compared to engineers up close and personal with the new tech who have, for the most part, done a great job of maximizing performance relative to die sizes and TDPs while trimming unnecessary excess, typically to our benefit in the form of lower prices. But compared to what Nvidia seemingly may do, this could potentially be a little messy... if it's true.
 
We're gonna need faster CPUs.

Or more games written to Mantle, or to a more CPU-efficient version of an existing API like DirectX or OpenGL. With Mantle you become relatively GPU-limited even with much less powerful CPUs.
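
A crude way to see why that happens (a toy model with made-up numbers, not measurements from any real API or game): the frame rate is set by whichever side, CPU submission or GPU rendering, takes longer, so cutting the per-draw CPU cost moves the bottleneck onto the GPU.

```python
# Toy bottleneck model: CPU cost per frame = draw calls x cost per draw (illustrative only).
draw_calls = 10_000
gpu_frame_ms = 16.0   # assume the GPU needs 16 ms to render the frame
for api, us_per_draw in [("thick driver path", 5.0), ("thin Mantle/DX12-style path", 0.5)]:
    cpu_frame_ms = draw_calls * us_per_draw / 1000.0
    bottleneck = "CPU" if cpu_frame_ms > gpu_frame_ms else "GPU"
    print(f"{api}: {cpu_frame_ms:.1f} ms of CPU submission per frame -> {bottleneck}-bound")
```

With the same made-up numbers, a CPU half as fast would double the submission time and still leave the thin path GPU-bound, which is the point being made.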

In other words, please dear god let Dx9 and Windows XP die already.

Regards,
SB
 
Or more games written to Mantle, or to a more CPU-efficient version of an existing API like DirectX or OpenGL. With Mantle you become relatively GPU-limited even with much less powerful CPUs.

In other words, please dear god let Dx9 and Windows XP die already.

Regards,
SB

From the end of 2015 onwards, thanks to DX12 likely being used widely or even exclusively across new games, it's going to be kind of like we all get a 50%-100% CPU upgrade for free.

I'd still like to see some serious CPU power upgrades though.
 
Or more games written to Mantle
None of the Mantle games right now are really written for Mantle; Mantle was just added as a tacked-on feature at the last minute. So what we get is games that are 95% GPU-limited at max settings. Except for the Star Swarm test, of course, and even that didn't make a strong case for the API, considering the optimized DX11 path (via the driver) can produce faster (or at the very least equal) performance.

The more I see on this matter, the more I become convinced this whole thing is blown out of proportion. Yes, there is a CPU overhead, and a driver overhead too, but they are fairly modest and can be circumvented. What we actually need is more games that are multi-core aware (and an API that doesn't get in the way, of course). The code needs more parallelization to use more cores and exploit more threads.
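
For what that kind of parallelization would look like in structure (a minimal sketch only; `record_chunk` is a hypothetical stand-in for engine-side command-list building, not a real Mantle or D3D call, and a real engine would do this with native threads):

```python
# Split the scene into chunks, build each chunk's command list on a worker,
# then submit the lists in order from the main thread.
from concurrent.futures import ThreadPoolExecutor

def record_chunk(chunk_id, draw_count):
    # Hypothetical: record the draw commands for one slice of the scene.
    return [f"draw {chunk_id}:{i}" for i in range(draw_count)]

chunks = [(i, 2500) for i in range(4)]   # e.g. 4 worker threads, 2500 draws each
with ThreadPoolExecutor(max_workers=4) as pool:
    command_lists = list(pool.map(lambda args: record_chunk(*args), chunks))
# A queue-submit call would then hand the pre-built lists to the GPU.
```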
 
None of the Mantle games right now are really written for Mantle; Mantle was just added as a tacked-on feature at the last minute. So what we get is games that are 95% GPU-limited at max settings. Except for the Star Swarm test, of course, and even that didn't make a strong case for the API, considering the optimized DX11 path (via the driver) can produce faster (or at the very least equal) performance.

The more I see on this matter, the more I become convinced this whole thing is blown out of proportion. Yes, there is a CPU overhead, and a driver overhead too, but they are fairly modest and can be circumvented. What we actually need is more games that are multi-core aware (and an API that doesn't get in the way, of course). The code needs more parallelization to use more cores and exploit more threads.

And as I mentioned, as more games get written to take advantage of Mantle, the CPU burden should decrease even further than it already has.

With rather large gains (reductions) in frame times already observed when you aren't primarily GPU-bound, I really don't see the basis for the assertion you are trying to make.

Yes, when you are already mostly GPU-bound with a high-performance CPU (like a 4770K or whatever), your gains will be limited to 5% or less. But many sites and many benchmarks have shown that the more CPU-bound versus GPU-bound you get, the more dramatically the gains increase.

And as you seem to have blinders on, you completely ignore the fact that other APIs will be adopting something similar to what Mantle brings today; we'll just have to wait a year or two for it in the more widespread APIs like DX.

I understand that you, personally, don't like Mantle. That still doesn't magically make the benefits it brings go away. Nor does it make the benefits disappear for future APIs when they also follow suit in making graphics tasks less reliant on the CPU.

In other words, it's not just "Mantle." Mantle just happens to be the only current solution that shows the future benefits we'll get once more and more games start to take advantage of APIs that make graphics rendering less reliant on the system's CPU.

Or in other words, why buy a 300+ USD CPU to game on when you can get almost the same performance with a CPU that costs 100 USD or less? Translated into future terms, you won't need massively more CPU power to take advantage of massively more powerful GPU hardware. So yes, for current games it's not necessarily something you can do. But in the future, which is what we were talking about, it's far more likely.

And that applies to both Nvidia and AMD hardware. Again with the assumption that DX/OpenGL/whatever move to adopt many of the things that Mantle brings today. This isn't only about Mantle. :p

Regards,
SB
 
None of the Mantle games right now are really written for Mantle; Mantle was just added as a tacked-on feature at the last minute. So what we get is games that are 95% GPU-limited at max settings. Except for the Star Swarm test, of course, and even that didn't make a strong case for the API, considering the optimized DX11 path (via the driver) can produce faster (or at the very least equal) performance.

The more I see on this matter, the more I become convinced this whole thing is blown out of proportion. Yes, there is a CPU overhead, and a driver overhead too, but they are fairly modest and can be circumvented. What we actually need is more games that are multi-core aware (and an API that doesn't get in the way, of course). The code needs more parallelization to use more cores and exploit more threads.
Games already are multi-core aware.
 
Yes. Consoles have been significantly multicore for the past few generations, and with their per-core capabilities games certainly have to be.
 
And as I mentioned, as more games get written to take advantage of Mantle, the CPU burden should decrease even further than it already has...
I am all for games written from the ground up for Mantle to increase graphics fidelity and performance, but let's not kid ourselves here: Mantle in its current state (as a tacked-on feature) offers little more than some free FPS that could otherwise have been obtained through a driver upgrade.

The fact that it lowers CPU requirements (mostly AMD's) at medium or low settings doesn't mean much in the way of pushing game complexity or visual quality further. That can only be obtained through proper use of the available CPU threads.

Mantle right now doesn't even enable games to exploit more CPU cores; all it does is relieve some of the single-threaded overhead (I would love to be proven wrong here). And if DX12 (or whatever) is headed in a similar direction, then its applications will be of limited use as well.

If you are only relying on reducing CPU load, then no matter what you do, it will not get you much further. Next-gen games demand increased simulation complexity, and graphics hardware is becoming more powerful exponentially fast. You can't just depend on a mere reduction of CPU load and call it a day. You need proper multi-core utilization as the first and foremost priority.

And that is the whole point: if reducing overhead becomes the final goal of any API, then this is nothing more than propaganda tech talk, insistent but only superficially relevant. But if the goal is to reduce overhead in order to unlock more core utilization, then this is the right track. This is the future.

Games already are multi-core aware.
Yes, but not properly on PCs. Through a combination of a non-cooperative API and laziness, or simply carelessness, they hammer a single thread while leaving the rest sleeping. And even then, most of them only use three cores.
 