"Easier time running docked" could either mean higher clocks or a dGPU in the dock.
To be honest, the best possible solution I'd see for a tablet+dock would be something like half a Drive PX2: the tablet getting a TX2 and the dock a GP107, connected through PCIe 3.0 x4. This would take the whole set to $350 or more, but they could sell just the tablet for ~$250 and the dock separately for another $100 or more.
Developing for this would be a can of worms IMO, mainly the part where games need to switch from one GPU to the other seamlessly (rough sketch below). I guess the dGPU would have to be completely blocked from doing compute tasks so latency wouldn't bring unexpected bottlenecks.
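To illustrate just the device-selection half of that problem, here's a minimal sketch in Vulkan terms. It assumes an entirely hypothetical setup where the Tegra iGPU and the dock's dGPU both show up as physical devices on the same instance and the engine is simply told the dock state; the resource migration that would make the switch "seamless" is exactly what's not shown.

```cpp
#include <vulkan/vulkan.h>
#include <vector>

// Hypothetical: pick which GPU a game renders on, given the dock state.
// Assumes both GPUs are visible as Vulkan physical devices on one instance
// (pure speculation for this tablet+dock scenario).
VkPhysicalDevice PickRenderDevice(VkInstance instance, bool docked) {
    uint32_t count = 0;
    vkEnumeratePhysicalDevices(instance, &count, nullptr);
    std::vector<VkPhysicalDevice> devices(count);
    vkEnumeratePhysicalDevices(instance, &count, devices.data());

    VkPhysicalDevice integrated = VK_NULL_HANDLE;
    VkPhysicalDevice discrete   = VK_NULL_HANDLE;
    for (VkPhysicalDevice dev : devices) {
        VkPhysicalDeviceProperties props;
        vkGetPhysicalDeviceProperties(dev, &props);
        if (props.deviceType == VK_PHYSICAL_DEVICE_TYPE_INTEGRATED_GPU)
            integrated = dev;
        else if (props.deviceType == VK_PHYSICAL_DEVICE_TYPE_DISCRETE_GPU)
            discrete = dev;
    }

    // Docked: prefer the dGPU in the dock; undocked: fall back to the iGPU.
    // What this skips is the hard part: migrating every texture, buffer and
    // swapchain to the other device without a visible hitch when the state flips.
    return (docked && discrete != VK_NULL_HANDLE) ? discrete : integrated;
}
```

Picking a device is trivial; re-creating swapchains, re-uploading every resource and doing it mid-session without a hitch is the can of worms.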
I did say in the part you originally quoted that nVidia didn't have a competitive 64-bit CPU that they could offer integrated in an SoC with their GPU.
And the CPU core didn't have to come from nVidia at the time; there were (and are) plenty of options. The original Xbox used an Intel CPU with an nVidia GPU, and Nintendo used IBM CPUs with GPUs from ATi and then AMD.
And what if the CPUs weren't 64-bit? Can't both the Cortex-A15 and A17 address over 4GB via LPAE? I thought they could.
I can't find where it's specified that a quad-core Power7 @ 3.2GHz uses < 120W. Do you have a source for that?
I didn't specify it was Power7. I wrote that they were producing a 45nm SoC in 2011 with 3.2GHz CPUs that operated under a 120W power supply.
Jaguar cores @ ~1.6-1.75 GHz probably consume < 30W for the whole 8-core cluster. I doubt a quad Power7+ would be anywhere within spitting distance of that figure when scaled down, given how aggressively it's designed for high clocks and high SMT throughput. If it had made sense from a power standpoint to offer SKUs with several ~2GHz Power7+ cores, they probably would have.
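As a rough back-of-envelope using only the figures above (the < 30W is already an estimate, not a measured number):

\[
\frac{30\ \text{W}}{8\ \text{cores}} \approx 3.75\ \text{W per Jaguar core}, \qquad 4 \times 3.75\ \text{W} \approx 15\ \text{W}
\]

So a quad Power7+ would have to fit in roughly 15W to match the same per-core budget, which seems implausible for a core built around 4GHz-class clocks and 4-way SMT.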
Or maybe they did offer it, and both Sony and Microsoft simply figured the Jaguar cores were the better alternative.
The 8-core Jaguar cluster is about 55.5mm^2 in the XB1 (roughly similar in the PS4), and that includes the 4MB of L2 cache. The Power7+ die is 567mm^2 for 8 cores on IBM's 32nm SOI. Even if you cut the core count in half and stripped out the accelerators and other unneeded blocks (the SMP-related logic), it'd still easily be over 200mm^2. And then there'd probably be a decent drop in density from moving everything else in the SoC to that process.
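Back-of-envelope again from those die sizes (treating the halving and the trimming as the same rough guesses as above):

\[
\frac{567\ \text{mm}^2}{2} \approx 283\ \text{mm}^2, \qquad \frac{200\ \text{mm}^2}{55.5\ \text{mm}^2} \approx 3.6
\]

So even after generous trimming, a quad Power7+ block would be somewhere around 3.5-5x the silicon of the entire 8-core Jaguar cluster with its L2.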
Which would be one of the reasons why both console makers went with AMD.
AMD had the best possible solution because they had a good enough CPU to offer at that exact time, but that wasn't really down to a consistent technical advantage over nVidia.
So having the best possible solution for a SoC within the required timeframe isn't a technical advantage?
It was due to circumstances that just don't apply today, and they aren't a cause for concern about Nintendo using nVidia in the Switch.
Who exactly is concerned that Nintendo is using nvidia in the Switch?
It's certainly not me. I think nvidia taking care of the hardware is the scenario that best avoids a complete disappointment with the console's specs.
I don't include MCMs on the basis that they were not on the table for the XB1 or PS4 designs.
Source?
Where exactly did Mark Cerny or Yoshida or Don Mattrick or any other Sony/Microsoft official claim that only SoCs would be on the table? Especially since no home console had ever used a full SoC up to that point, I find it very hard to believe they would impose such a limitation from the start.