AMD: Speculation, Rumors, and Discussion (Archive)

Aye, the cost per transistor is down only 25% compared to 28nm, I believe. A paltry sum compared to what we used to get from a node shrink, alongside a near doubling in potential density.

And those paltry savings get eaten right away by the generational upgrade, because AMD is not selling transistors. They're selling GPUs, and if customers are going to upgrade, those GPUs had better come with 30%, 50%, or even 100% more transistors in the next gen.
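To make that squeeze concrete, here's a quick back-of-the-envelope sketch. The 25% figure and the 30/50/100% uplifts come from the posts above; everything else is illustrative, not real cost data.

Code:
# Why a ~25% drop in cost per transistor gets eaten by a generational
# bump in transistor count. All numbers are illustrative.
cost_28nm = 1.00   # normalized cost per transistor on 28nm
cost_new  = 0.75   # ~25% cheaper per transistor, per the estimate above

for uplift in (1.3, 1.5, 2.0):  # 30%, 50%, 100% more transistors
    relative_cost = uplift * cost_new / cost_28nm
    print(f"{uplift:.1f}x transistors -> {relative_cost:.2f}x silicon cost")
# Roughly 0.98x, 1.12x, and 1.50x: a 30% bigger transistor budget is already
# break-even, and a doubled budget costs half again as much as the old chip.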
 
So, August it is, I guess. A late release for the Fury X2, or whatever it ends up being called, makes sense with that timing. With Nvidia continuing to bring out new variations of Maxwell, it doesn't sound like they'll be hitting the first half of 2016 with Pascal either, unless they release a compute-focused GP100 first and hold back the binned gaming cards until they have enough stock.

That's just depressing if true. I've been trying to keep my old 670 going (which is faulty and keeps crashing at that) until Pascal arrives, but I'm really not sure I can deal with it for another 8 months. At a minimum they should be trying to beat or match Oculus to market, since that's going to inspire a big surge in upgrades. Upgrades that will go to Maxwell or AMD if Pascal doesn't arrive in time. Although I guess if they're going to Maxwell anyway, why would NV care? In fact they may even want that, so that people upgrade a second time when Pascal arrives (as some no doubt will).
 
I think this supports my theory that Fiji will continue to be AMD's flagship throughout 2016, and that the two new 14/16nm GPUs will be focused on power efficiency: one replacing Hawaii in its performance category, the other approaching Tonga but very notebook-friendly (~75W in the desktop version, ~50W in the notebook version).

I don't think it supports that. It's fair to assume dual Fury X will be their strongest card throughout next year when CrossFire works, since a single 400mm² 16nm chip won't be able to outperform two 600mm² 28nm GPUs. Unfortunately, in a lot of new games CrossFire and SLI don't work very well, or at all. VR could and should be a good showcase for dual GPUs in terms of both performance and compatibility, which is why releasing dual Fiji alongside the VR headsets makes sense.
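As a crude sanity check on that claim, here's the area math, assuming roughly a 2x transistor density gain going from 28nm planar to 16nm FinFET; the 2x is my ballpark assumption, not a vendor figure.

Code:
# Compare transistor budgets using "28nm-equivalent" area as a crude proxy.
density_scaling = 2.0            # assumed 28nm -> 16nm density gain (rough)

single_16nm_mm2 = 400            # the hypothetical new chip
fiji_mm2        = 600            # one Fiji-class 28nm GPU
dual_fiji_mm2   = 2 * fiji_mm2   # 1200 mm^2 of 28nm silicon

equivalent_mm2 = single_16nm_mm2 * density_scaling   # ~800 mm^2 equivalent
print(equivalent_mm2 / dual_fiji_mm2)                # ~0.67

Even granting perfect density scaling, the single chip has only about two-thirds of dual Fiji's transistor budget, so it would need large per-transistor efficiency and clock gains to close the gap; on the other hand, in games where CrossFire doesn't scale, the comparison flips.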
 
> That's just depressing if true. I've been trying to keep my old 670 going until Pascal arrives, but I'm really not sure I can deal with it for another 8 months. …

Worth noting, though, that NVIDIA skipped 20nm at a much earlier date, was very vocal and critical of it, and committed to 16nm quite a bit before AMD. Both may still have their GPUs ready at the same time, but it is much more likely NVIDIA has priority over AMD for 16nm silicon at TSMC, which would mean supply issues for AMD.
How this pans out with Apple gorging themselves on every bit of silicon they can get will be interesting; I wonder if NVIDIA has a strong penalty clause with TSMC this time, and used the possibility of switching to Samsung to get guarantees.
Ah well, we will find out in 2016.
Cheers
 
Do the latest rumors not suggest that AMD is getting 16/14nm from both GlobalFoundries and Samsung (it being the same process and all)? Or is that only for CPUs?
 
> That's just depressing if true. I've been trying to keep my old 670 going until Pascal arrives, but I'm really not sure I can deal with it for another 8 months. …
I upgraded from a 670 to a 970. It was an enormous leap in performance and allows me to use DSR in all but the most demanding games at max settings.

I may even skip Pascal if this 970 continues to dominate everything I throw at it. It's incredible what NVIDIA was able to do at 28nm with Maxwell. For the first time in a while they have simply blown the pants off AMD. It's like the 9800 Pro vs. the FX series, in reverse. I'm hoping this changes, because AMD needs a break, or we'll be in for a $600 GTX 1070.
 
> Do the latest rumors not suggest that AMD is getting 16/14nm from both GlobalFoundries and Samsung (being the same process and all)? Or is that only for CPUs?
AMD has already confirmed that they'll be using 14nm FinFET (so GloFo/Samsung) for CPUs, APUs, and GPUs: all three.
“FinFET technology is expected to play a critical foundational role across multiple AMD product lines, starting in 2016,” said Mark Papermaster, senior vice president and chief technology officer at AMD. “GLOBALFOUNDRIES has worked tirelessly to reach this key milestone on its 14LPP process. We look forward to GLOBALFOUNDRIES' continued progress towards full production readiness and expect to leverage the advanced 14LPP process technology across a broad set of our CPU, APU, and GPU products.”
http://www.globalfoundries.com/news...logy-success-for-next-generation-amd-products
 
> Although worth noting NVIDIA skipped 20nm at a much earlier date … it would be much more likely NVIDIA has priority on the 16nm silicon over AMD with TSMC, which would mean issues of supply there for AMD. …

It's doubtful Nvidia will switch. The TSMC and Samsung processes are too different this node for easy portability. Apple went with both, but then Apple has nearly infinite amounts of money to throw at problems, obviously far more than Nvidia even with Nvidia's profits of late. Nvidia would have to do a new tape-out of each GPU for Samsung in order to use both vendors.

Of course, as you point out, Apple also has priority, and is apparently going with TSMC exclusively for the next iPhone, maybe even as early as the spring. So even though Nvidia has already shown off engineering samples for Pascal, there's no guarantee they'll be out significantly ahead of AMD, if ahead at all.
 
Yeah, they will not switch now that they have been named as one of the early TSMC 16nm clients, but they did put pressure on TSMC a while back by quite seriously evaluating Samsung; that was where I was coming from.
However, no one can assume NVIDIA will have issues at TSMC, for the reasons I mentioned: they were the first to skip 20nm and commit to 16nm, and they put real pressure on TSMC with the genuine possibility that all their 16nm GPUs would be done by Samsung.
I doubt TSMC really wants to lose such a high-profile discrete GPU manufacturer, so it will be interesting to see in 2016 whether they do have a penalty clause with TSMC and some kind of guarantees that really were not a possibility before (although I would say there are still question marks over Samsung's low-power FinFET design, and over moving it to a design suitable for large, high-power GPUs).
So one could say there are still risks for both AMD and NVIDIA, just from different factors.

It will be interesting either way, along with what is now a real differentiation in silicon implementation between AMD and NVIDIA; at the least, we customers should be winners, since any difficulties experienced by one should not hit the other this time (in terms of the same problems).
Cheers and a Merry Christmas :)
 
Notwithstanding the last two big releases from AMD, I was giving them the edge on getting their cards out sooner, but if they are using an entirely separate process then that changes everything.

Any ideas as to how the two processes compare? There were some reports that the Apple SoCs showed better performance with TSMC.
 
> TSMC has 16nm FinFET, which is in no way related to Samsung's process. Samsung and GloFo use the exact same 14nm LPE and LPP FinFET processes. There's no "easy portability" between them.
I understand that.

But "easy portability" is something that went away at least 10 years ago. No matter what you do, even going from, say, TSMC 65nm to TSMC 55nm, you needed to re-tune your analog blocks. So from an analog point of view you have to do the work anyway, and this is not something a company the size of Nvidia couldn't overcome if they wanted to. Furthermore, fabs often go to great lengths to make porting easy. In the end, they're still transistors, with pretty similar characteristics.

And the digital part is obviously a non-issue.

Not that I think Nvidia will move away from TSMC (though they have hinted at it in the past, probably to pressure TSMC), but let's not pretend that only a company the size of Apple could pull this off. It's not nearly as hard as some make it sound.
 
Is it unusual to fab a GPU intended for discrete cards on a LP process? I seem to recall that it is, but then I've read so many B3D threads and comprehended so relatively little of them that I'm not sure. I searched for a list of GPUs and their characteristic LP/HP process but I don't think I know the proper terminology. Related to that, is all fabrication becoming targeted towards low power, due to either market forces or feature size?
 
> Is it unusual to fab a GPU intended for discrete cards on a LP process? … Is all fabrication becoming targeted towards low power, due to either market forces or feature size?
Yes, but there are no other options anymore; all the processes are more or less "low power" processes now, thanks to the huge demand for mobile (and now IoT) chips.
 
> Is it unusual to fab a GPU intended for discrete cards on a LP process? …
I'm not aware of a discrete GPU on a low-power process. As important as power is, performance is still king.
 
> I upgraded from a 670 to a 970. It was an enormous leap in performance …
The 670 at release was equal to the 7970:

http://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_670/28.html

e.g. at 1920x1200 the 7970 is 1% faster.

Now, much less so:

http://www.techpowerup.com/reviews/MSI/GTX_960_Gaming/29.html

e.g. at 1920x1080 the 7970 is 10% faster. Increase the resolution and the difference is about 18%. Taking 4MP gaming as the benchmark (since you're now using DSR), that makes the 970 a 65% upgrade over a 670, but only a 40% upgrade over a 7970.

So the upgrade you experienced was bigger than it would have been coming from a 7970...
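For anyone who wants to check those percentages, here is the arithmetic spelled out, taking the TechPowerUp relative-performance numbers above at face value:

Code:
# Normalize to the GTX 670 at high resolution, per the reviews linked above.
gtx670 = 1.00
hd7970 = 1.18    # ~18% faster than the 670 at higher resolutions
gtx970 = 1.65    # the ~65% upgrade over the 670 cited above

print(f"970 vs 670:  +{(gtx970 / gtx670 - 1) * 100:.0f}%")   # +65%
print(f"970 vs 7970: +{(gtx970 / hd7970 - 1) * 100:.0f}%")   # +40%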
 
I also have a 7950 still going strong in my secondary rig. Great card even today.
 