Nintendo has actually been big on backwards compatibility. The Game Boy Advance could play Game Boy games, the DS could play Game Boy Advance games, the 3DS could play DS games, the Wii could play GameCube games, and the Wii U could play Wii games. This was all done in hardware and not through emulation, correct me if I am wrong. Nintendo has more often than not extended backwards compatibility to the prior generation.
So, in summary, they've done both. And both approaches have merits and drawbacks that need to be balanced, depending on market conditions, design goals, available technology and cost.
This is why I can't shake the idea that it's not only possible Nintendo will do this, but actually very plausible. Especially if the newer process allows for 3-4x the number of GPU cores. There is no doubt the newer Ampere will outperform Maxwell flop for flop, but to what extent? Most of the buzz in the gaming world surrounds ray tracing, which would require hardware that they simply won't be squeezing into an SoC that needs to operate on 6 watts in portable mode. I'm willing to bet flawless backwards compatibility will be a requirement from Nintendo, and this seems like a simple way to get it. What is Nintendo's target performance envelope for the Switch 2? It's not going to be eyeing up the PS5/X; there is no mobile hardware that can even get in the same zip code as those consoles. So it's really about offering a meaningful upgrade over the current Switch. PS4-level visuals running at 720p in portable mode on an OLED screen would be a meaningful upgrade. Nintendo already patented some sort of bespoke DLSS in 2020; my assumption is that that is how they plan to improve the docked presentation when played on 4K screens.
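To put rough numbers on that core-count scaling, here's a back-of-the-envelope sketch. The TX1 figures (256 Maxwell cores, ~307/768 MHz portable/docked clocks) are public; the 4x core count and the assumption of similar clocks are just the speculation above, not any leaked spec:

```python
# Theoretical peak FP32 throughput: cores * 2 FLOPs/cycle (FMA) * clock.
# This ignores architectural efficiency gains (e.g. Ampere vs Maxwell
# per-flop performance), so treat it as a floor for the comparison.

def gflops(cores: int, clock_ghz: float) -> float:
    """Peak FP32 GFLOPS for a simple FMA-based estimate."""
    return cores * 2 * clock_ghz

# Original Switch (TX1, 256 Maxwell cores)
switch_portable = gflops(256, 0.3072)   # ~157 GFLOPS
switch_docked   = gflops(256, 0.768)    # ~393 GFLOPS

# Hypothetical successor with 4x the cores at similar clocks
next_portable = gflops(1024, 0.3072)    # ~629 GFLOPS
next_docked   = gflops(1024, 0.768)     # ~1573 GFLOPS

print(f"Switch portable:   {switch_portable:.0f} GFLOPS")
print(f"4x-core portable:  {next_portable:.0f} GFLOPS")
print(f"4x-core docked:    {next_docked:.0f} GFLOPS")
```

Even before any per-flop improvement, 4x the cores at unchanged clocks lands docked output in rough PS4 territory (~1.8 TFLOPS), which is why the "meaningful upgrade, not PS5 rival" framing seems plausible.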
The problem with speculating about Nintendo's next step is that they have so many reasonable options. For instance: backwards compatibility or not? And if so, via some (imperfect) emulation, via shared underlying hardware architecture, or a mix?
What process, what supplier, what memory subsystem solution, what cost constraints, what power draw...
That said, I tend to agree with your assessment. If Nintendo wants good backwards compatibility, and they move to TSMC 5nm (or something close to it), then going with an (improved) Maxwell architecture would probably be a good solution overall, if moving to the latest Nvidia GPU architecture would create problems. Did I sprinkle enough conditionals in there?
Fundamentally, I expect them to be memory subsystem limited to a large degree. Only Apple has a wider-than-64-bit memory subsystem on a mobile device, and only on their iPad Pros, so my guess is that Nintendo will stick to a 64-bit LPDDR5(X?) memory subsystem. Which in turn implies that they will try to/have to alleviate that bottleneck with greatly improved caching. Of course, cache carries its own costs in die area and power draw, so there is a balance to be struck, but even with sharp cost constraints, 5nm still allows for a huge expansion over the TX1. (See the Apple A15 die shot and analysis; note the L2 and LLC capacities and die areas.)
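To illustrate why the 64-bit bus is the constraint, here are peak theoretical bandwidths for a few configurations. The data rates are standard LPDDR grades; pairing any particular one with a Switch successor is pure assumption on my part:

```python
# Peak theoretical bandwidth = bus width in bytes * transfer rate (MT/s).
# Real-world effective bandwidth is lower, but the relative comparison holds.

def bandwidth_gbps(bus_bits: int, mt_per_s: int) -> float:
    """Peak bandwidth in GB/s for a given bus width and data rate."""
    return (bus_bits / 8) * mt_per_s / 1000

configs = {
    "Switch (64-bit LPDDR4-3200)":        (64, 3200),
    "64-bit LPDDR5-6400":                 (64, 6400),
    "64-bit LPDDR5X-8533":                (64, 8533),
    "128-bit LPDDR5-6400 (iPad Pro class)": (128, 6400),
}

for name, (bits, rate) in configs.items():
    print(f"{name}: {bandwidth_gbps(bits, rate):.1f} GB/s")
```

So even a best-case 64-bit LPDDR5X setup gives less than 3x the original Switch's 25.6 GB/s while GPU compute could grow 4x or more, which is exactly the kind of imbalance a large last-level cache is meant to paper over.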
Presumably, this scenario would mean no RT and no DLSS, but I can't see that as significant for their product or target demographic. Why would Nintendo customers care exactly what computational scheme is used to calculate their ambient occlusion? And DLSS is arguably not the best fit for a small mobile SoC working over an already uncomfortably thin memory pipe, never mind the installed base of software and existing projects - remember, this whole thought experiment hinges on backwards compatibility, so the upscaling method should ideally be functional across the board. The saved die area can be used either to simply reduce cost or to beef up other areas of the SoC.
So - under a given set of assumed constraints and parameters, I'd say that this would make sense for Nintendo and its customers, and would preserve investments in software for all involved while providing a reasonable generational leap in performance. But if I changed even one of those conditions - in this case most glaringly the need to stay with a Maxwell(-derived) architecture for backwards compatibility - then the picture changes. Or if they go with, say, Samsung's 8nm process. Or...
It's just another thought experiment.