Nvidia has them beat badly in power consumption right now. Unless something changes in terms of process improvements, I'm starting to wonder if one of them makes the switch. I guess the difference is AMD can supply an x86 APU where Nvidia cannot. AMD needs to make some huge improvements in Navi for power consumption, either through architecture or process, otherwise it starts to look tough to put anything significant in a console with a power budget.
I'm increasingly of a similar mindset, and perhaps one or both platform holders have considered the idea. If they were smart, they'd have gotten some contractual exemptions in writing for the IP issues that bogged down emulation and backwards compatibility in the past.
It's those various forms of lock-in, plus AMD perhaps being more willing to customize hardware, share hardware details, or offer low prices, that could keep it as the vendor.
If it were just in terms of architectural and implementation merit, I would think Nvidia would deserve serious consideration.
AMD has leveraged GCN to the point that Navi reversing things would be a major discontinuity in AMD's architectural evolution. It would need to be so much improved, so different, and so much more complex than the last 3 or so GCN iterations that the change itself would be a major risk factor.
When we talk about power inefficiencies plaguing AMD, aren't we just talking about Vega (which seemed to prioritize higher clocks to achieve improved performance)? Prior to Vega, hadn't AMD's GCN-based GPUs compared favorably with NV in terms of perf/W?
GCN's first gen was the most competitive, although it went back and forth in terms of perf/W and perf/mm2 depending on the product. Tahiti had a seriously oversized GDDR5 interface that Nvidia eventually out-designed. Pitcairn was perhaps the most area- and power-efficient in that generation, and it was dragged along for multiple generations while Nvidia out-designed and out-iterated it.
Hawaii was perhaps the last headlining GCN GPU, although it was plagued by implementation issues and, peak performance aside, was rather bloated in how it arrived at that performance. The contemporaneous Bonaire was okay-ish. GCN's more flexible compute architecture managed to hit a pain point in Nvidia's architecture, which could do a fair number of the things GCN could but performed poorly when running compute and graphics simultaneously.
Past that are scads of rebrands, various APUs, Tonga, Fury, Polaris, and Vega.
Kepler got Nvidia out of the power crunch Fermi created, and Maxwell and later have been lapping AMD. AMD's supposed strong points in physical implementation and power management have seen some baffling missteps and unforced errors. I have historically given AMD the edge on dynamic power management and safeguards, given some of Nvidia's errors that led to burned-out GPUs. Outside of a few bright spots, though, AMD has just gotten worse across most of that problem space.
Volta seems to be introducing some novel improvements to the GPU compute model, including fixing some glaring flaws that have plagued GPGPU since forever. On that alone, I'd like to see a console vendor try it out.
Nvidia is giving more detailed information on its MCM and interconnect proposals, with a time frame that seems suspiciously nearer-term than some of AMD's.
I wonder what AMD's price would be to build a custom chip with an Nvidia GPU, if not having Zen is a deal-breaker. I'm not sure that it necessarily would be in the future.
PS5 will most likely be based on Navi, which from the sounds of things should be a step change in architecture from Vega. So are we really expecting Navi to be plagued with the same issues as Vega?
We still don't really know what Navi is supposed to bring. I suppose the hope is that some of the teething pains we see now could mean more resources are being allocated to the next generations, sort of like how the Bulldozer line started coasting in the years Zen was being developed.
Some of the hoped-for changes like MCMs or interposer-based multi-chip GPUs would seem more probable if AMD demonstrated progress in the various aspects that get expensive fast in terms of power, cost, and area. Infinity Fabric and the physical links AMD has demonstrated so far are not adequate for what fans dream Navi will do, and AMD's future proposals are light on details. Putting 1.5x or 2x the silicon on a package won't help if a huge chunk is taken up by IF controllers and PHY, or if the silicon's perf/W and perf/mm2 don't vastly improve.
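To put a rough number on that last point, here's a back-of-envelope sketch. All figures are purely hypothetical (the 500 mm^2 die size and 20% interconnect overhead are made up for illustration, not from any disclosed design), but they show how quickly interconnect area eats into the headline "2x silicon" claim:

```python
# Hypothetical back-of-envelope: how much of "2x silicon on a package"
# is actually useful compute once interconnect overhead is paid.

def effective_compute_area(total_mm2, interconnect_frac):
    """Area left for shader engines etc. after IF controllers/PHY take their cut."""
    return total_mm2 * (1.0 - interconnect_frac)

# One large monolithic die: no inter-die links needed.
monolithic = effective_compute_area(500, 0.00)

# Two dies of the same size, each spending a (made-up) 20% on inter-die fabric.
mcm = 2 * effective_compute_area(500, 0.20)

print(f"monolithic compute area: {monolithic:.0f} mm^2")   # 500 mm^2
print(f"2-die MCM compute area:  {mcm:.0f} mm^2")          # 800 mm^2
print(f"useful scaling factor:   {mcm / monolithic:.2f}x") # 1.60x, not 2x
```

And that's before counting the power the links themselves burn, which comes out of the same console power budget; if perf/W doesn't improve as well, the extra area can't even be fed.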