Again, I didn't say it would be better or cheaper, just that it would be possible. AMD won the designs because they provided the best possible solution in the eyes of the companies contracting them, not because there was no other way (as in, had AMD gone bankrupt in 2010 we'd have had no PS4 or Xbone). Credit should be given where credit is due IMO.
I have no idea how much a power-optimized quad-core Power7+ at, say, 2GHz would consume, but it could certainly fit in a console if needed. In 2010 IBM was making a 45nm SoC with a 3.2GHz CPU that consumed less than 120W.
I spoke of MCMs simply because they're a form of integration that has been used in consoles many times, IBM designs included.
I did say in the part you originally quoted that nVidia didn't have a competitive 64-bit CPU that they could offer integrated in an SoC with their GPU.
I can't find where it's specified that a quad-core Power7 @ 3.2GHz uses < 120W; do you have a source for that? It's hard to find fine-grained information. I just know that, using IBM's energy estimator, you can't build a minimal system (just a quad-core CPU + 8GB RAM + backplane + 1Gbit Ethernet) on Power7 at under 300W.
Jaguar cores @ ~1.6-1.75 GHz probably consume < 30W for the whole 8-core cluster. I doubt a quad-core Power7+ would be anywhere within spitting distance of that figure when scaled down, given how aggressively it's designed for high clocks and high SMT throughput. If it had made sense from a power standpoint to offer some ~2GHz Power7+ SKUs, IBM probably would have.
8-core Jaguar is about 55.5mm^2 in the XB1 (roughly similar in the PS4), and that includes the 4MB of L2 cache. The Power7+ die is 567mm^2 for 8 cores on IBM's 32nm SOI. Even if you cut the core count in half and took out the accelerators and other unneeded (SMP-related) logic, it'd still easily be over 200mm^2. And then there'd probably be a decent decrease in density from moving everything else over to that process.
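Just to make the scaling explicit, here's a rough back-of-envelope sketch: the die sizes are the ones quoted above, while the 25% "uncore savings" fraction is purely my own placeholder assumption for stripping the accelerators and SMP glue.

    # Back-of-envelope die-area comparison. Die sizes are the figures quoted
    # above; the 25% "uncore savings" fraction is an assumed placeholder.
    jaguar_8c_mm2 = 55.5      # 8 Jaguar cores + 4MB L2 in the XB1
    power7p_8c_mm2 = 567.0    # Power7+ die, 8 cores, IBM 32nm SOI

    quad_p7p_mm2 = power7p_8c_mm2 / 2           # naively halve the core count
    uncore_savings = 0.25                       # assumed: drop accelerators/SMP logic
    quad_p7p_trimmed_mm2 = quad_p7p_mm2 * (1 - uncore_savings)

    print(f"hypothetical quad Power7+: ~{quad_p7p_trimmed_mm2:.0f} mm^2")   # ~213 mm^2
    print(f"8-core Jaguar cluster:     ~{jaguar_8c_mm2} mm^2")              # 55.5 mm^2
    print(f"area ratio:                ~{quad_p7p_trimmed_mm2 / jaguar_8c_mm2:.1f}x")  # ~3.8x

Even with that generous trim you land at roughly four times the area of the entire 8-core Jaguar cluster, before accounting for any density loss from porting the rest of the chip to IBM's process.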
AMD had the best possible solution because they had a good enough CPU to offer at that exact time, but that wasn't really down to a consistent technical advantage over nVidia. It was due to circumstances that just don't apply today, and it doesn't give any cause for concern about Nintendo using nVidia in the Switch.
I don't include MCMs on the basis that they were not on the table for the XB1 or PS4 designs, and the arrangement in the Wii U is really bad: slapping a tiny ~30mm^2 cluster of ancient CPU cores next to a separate main die on a different process. It would have been a big improvement just to integrate that CPU die into the main die, but that must not have been trivial, just as I imagine it wouldn't have been trivial for nVidia and Sony/MS/Nintendo to share a die with IBM. And while the integrated Xbox 360 CPU+GPU die is a counterpoint, that was an optimization of a design that was several years old and didn't use anything resembling recent designs, so they had a lot of time to work out and negotiate the issues surrounding it.