Switch 2 Speculation

Who said RT won't be used on Series S? It's baked into the hardware, if it's there then someone's going to use it, be it a MS developer or a 3P developer. In fact it's already in use in select games like DMC5.
You picked the wrong game as an example because DMC5 doesn't have RT in Series S (unless they patch it later).
 
Who said RT won't be used on Series S? It's baked into the hardware, if it's there then someone's going to use it, be it a MS developer or a 3P developer. In fact it's already in use in select games like DMC5.

DMC5 doesn't actually support ray tracing on the Series S according to some sources, so the only known game with ray tracing on the system is Watch Dogs Legion. That's also one of the games with a less demanding rendering pipeline, given that the high-end consoles run it at dynamic 4K with a minimum resolution of 1440p.

What games in particular are you referring to? The only developer of late we've heard explicitly touch on the difficulty of porting games to Series S is the Control dev, and that was specifically with their title. Even that developer has said the S is very capable, particularly with next-gen titles, as those can leverage the new RDNA 2 features. If anything, we should see more RT in future Series S titles once 8th-gen cross-gen development has been more or less left behind.

It's not as easy as saying cross-gen games should "just work" on Series S and feature RT because their art/rendering pipelines are simpler; that actually isn't 100% true. A lot of the bigger AAA games from the tail end of the 8th gen have pretty demanding rendering pipelines and complex art styles, whereas some of the titles we've seen so far targeting 9th-gen systems specifically may not be as big-budget (since they're targeting a smaller install base) and may not be as demanding. There's also the issue of certain game engines needing to be rebuilt in areas to accommodate the 9th-gen systems.

Just as an example from the last generation, the PS4 started out rendering CoD: Ghosts at a native 1080p, with the most recent entry dropping to a dynamic 1080p. To give another example, AC4 was rendered at 1080p when it debuted on the PS4, but AC Valhalla runs slightly above ~900p on the very same system. I can provide several more examples of this phenomenon if you're interested. The best time to show off ray tracing on any system would've been now, early in this cycle, when rendering pipelines largely built off of the last-generation systems still have relatively moderate fixed costs ...

Well, I guess we'll have to see. SSR maybe wasn't the best example, but there have been plenty of examples of other techniques that were invented and run in software on older GPUs and have since seen dedicated silicon to handle them in newer designs. ML is one such thing; it wasn't that long ago that ML models and programs targeted CPUs and simply leveraged whatever extended math co-processing features those had. It moved to GPUs a bit later, but even then, ML-specific hardware support for things like FP16, INT8, etc. wouldn't make its way into GPU designs for a good long while.

I don't think SSR will ever fully go away; I actually am not sure if PS5 and Series X have enough capability to run full RT and provide 4K (or near 4K) rendering @ 30 FPS, let alone 60. So if we're going to see a mix of RT and SSR even on those systems as the gen wears on, we'll surely see it on Series S and Switch Pro/Switch 2. And developers will be mindful of that at the onset.

SSR will go away in the long run. This trend will become even more evident whenever the refreshes for the high-end consoles are released. The trend isn't to introduce more hacks into the rendering pipeline that make it harder to work with. The direction is to introduce more general solutions and remove these hacks from the rendering pipeline, because there's a strong desire among developers to make it easier to author more content.

I think we need some more concrete details on Switch Pro/Switch 2 before arriving at some of these conclusions. And we should also remember that AMD's and Nvidia's architectures are in a lot of ways pretty different, despite having more similarities than, say, either has with Intel's Xe or Apple's stuff. We can't just look at the raw TF and go "Series S has over 2x the TF" and assume that's that. We can already look at AMD's and Nvidia's current cards and see that while AMD's either compete evenly or beat Nvidia's (some heavily) in rasterized tasks, that's oft-times before some of Nvidia's advantages like DLSS (via the Tensor cores) and their RT cores enter the picture. Currently AMD has no equivalent for those on RDNA 2; their RT is tied to the CUs, and FidelityFX is not hardware-accelerated in the way Tensor cores are (although in Microsoft's case, their systems support DirectML, which is hardware-accelerated in some manner).

For how resolution drops might impact things, I don't see it as too much of a concern, as DRS can take care of it. Games leveraging whatever DLSS support Nintendo includes (probably DLSS 2.0) can operate with lowered texture quality and internal resolution, which helps with frame times, then use DLSS to scale the image up to the desired target resolution (the output doesn't have to be 4K). That saves on frame budget, and those savings can be used for any varying degree of RT. It'll just take some smart design choices, but for serious devs this shouldn't be an issue after getting acclimated with things.
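That DRS feedback loop can be sketched very simply; the numbers below are illustrative placeholders, not values from any real engine:

```python
# Minimal dynamic-resolution controller sketch (illustrative numbers only).
# If a frame overruns its budget, shrink the internal render scale; if there
# is headroom, grow it back toward native. An upscaler (DLSS or otherwise)
# then reconstructs the output at the fixed target resolution.

TARGET_MS = 16.7        # 60 fps frame budget
MIN_SCALE, MAX_SCALE = 0.5, 1.0

def next_scale(scale: float, frame_ms: float) -> float:
    """Adjust the internal resolution scale based on the last frame time."""
    if frame_ms > TARGET_MS:          # over budget: render fewer pixels
        scale -= 0.05
    elif frame_ms < TARGET_MS * 0.9:  # comfortable headroom: claw back quality
        scale += 0.02
    return max(MIN_SCALE, min(MAX_SCALE, scale))

# Simulate a GPU load spike followed by recovery.
scale = 1.0
for ms in [18.5, 19.0, 17.5, 15.0, 14.0, 14.0]:
    scale = next_scale(scale, ms)
print(round(scale, 2))  # drops during the spike, creeps back up after
```

Real engines use fancier heuristics (predicted GPU time, hysteresis, per-pass scaling), but the shape of the loop is the same.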

I agree with you that we'll need to see more concrete details, but regardless, it's still far too early to attempt a release this soon into the cycle. Developers are currently in the middle of a major refactoring of their rendering pipelines, so chances are very high that any weaker system will get locked out of running these future pipelines, since the main target is probably PS5/XSX. They should probably delay their plans for at least another 2 years, until better hardware designs are available, so that they'll have a bigger window of opportunity to run those pipelines. If they were to pull off a release now, the new silicon would probably be designed on a 7nm process, which offers roughly a little over a 2x performance/watt gain over 16nm, placing a potential new model at ~1-2 TFLOPs depending on the frequency. Even if Nvidia did have relatively better RT performance, the new system would still be 4x slower in the worst case, so it's highly doubtful it'll be running any advanced rendering pipelines with ray tracing in the future, and DLSS isn't going to change this either when there isn't much of an option left to render at a lower resolution.
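The ~1-2 TFLOP ballpark above follows from straightforward arithmetic; the successor configurations below are hypothetical examples, not leaks:

```python
# Rough FP32 throughput arithmetic behind the ~1-2 TFLOP estimate.
# FLOPS = 2 ops/cycle (FMA counts as two) * CUDA cores * clock.

def tflops(cuda_cores: int, clock_ghz: float) -> float:
    return 2 * cuda_cores * clock_ghz * 1e9 / 1e12

# Original Switch (Tegra X1 class, docked): 256 cores @ 0.768 GHz
print(round(tflops(256, 0.768), 2))   # ~0.39 TFLOPs

# Hypothetical 7nm successor configs:
print(round(tflops(768, 1.0), 2))     # ~1.54 TFLOPs
print(round(tflops(512, 1.2), 2))     # ~1.23 TFLOPs
```

Both hypothetical configurations land in the ~1-2 TFLOP range quoted above, which is where the perf/watt argument points.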

A lot of this, again, is predicated on judging Series S somewhat unfairly. The devkits were constantly running behind, the GDK was taking longer to stabilize, MS had to wait on AMD for certain features (some of which still aren't ready), devs needed time (some still need time) to acclimate to GDK over XDK, engines have to be retooled, porting teams may or may not have the required manpower and funds to prioritize optimizations in certain ports, etc. etc.

Features like RT aren't going to be used wholesale even on PS5 and Series X; there will be compromises there. But while you're right about rendering pipelines getting more complex, these systems have the features to accommodate that. I'm not even talking about DLSS here, but other things like Mesh Shaders, VRS (Tier 1 and Tier 2), SFS, etc. Admittedly those are very Series-centric things on the console side, but Nvidia's hardware has the same features, which means the Switch Pro/Switch 2 will also support them.

We have to look at these systems in the context of the whole of their capabilities, not just a single feature or two or a single metric like raw compute. Everything has to orchestrate together in order to enable peak benefits. This is especially true for systems like Series S and Switch Pro/Switch 2, but once everything is working in concert, things like RT will be more commonplace.

I might very well be judging the Series S unfairly, but prospects look even grimmer for systems less powerful than that, since introducing new features alone is a vain attempt at keeping developers on your platform. Ray tracing involves more than just ray traversal or intersection testing; it interacts with compute power as well once we do ray shading, so we can't ignore compute as a factor. There will definitely be compromises, but developers will largely have the option to just drop the resolution on PS5/XSX to use more generic pipelines later on, whereas low-end consoles won't be so fortunate, since most developers already don't believe it's worth deploying the same solution there. I guess time will tell whether they keep avoiding the feature or if they'll actually use it ...
 
I agree with you that we'll need to see more concrete details, but regardless, it's still far too early to attempt a release this soon into the cycle. Developers are currently in the middle of a major refactoring of their rendering pipelines, so chances are very high that any weaker system will get locked out of running these future pipelines, since the main target is probably PS5/XSX. They should probably delay their plans for at least another 2 years, until better hardware designs are available, so that they'll have a bigger window of opportunity to run those pipelines. If they were to pull off a release now, the new silicon would probably be designed on a 7nm process, which offers roughly a little over a 2x performance/watt gain over 16nm, placing a potential new model at ~1-2 TFLOPs depending on the frequency. Even if Nvidia did have relatively better RT performance, the new system would still be 4x slower in the worst case, so it's highly doubtful it'll be running any advanced rendering pipelines with ray tracing in the future, and DLSS isn't going to change this either when there isn't much of an option left to render at a lower resolution.



I might very well be judging the Series S unfairly, but prospects look even grimmer for systems less powerful than that, since introducing new features alone is a vain attempt at keeping developers on your platform. Ray tracing involves more than just ray traversal or intersection testing; it interacts with compute power as well once we do ray shading, so we can't ignore compute as a factor. There will definitely be compromises, but developers will largely have the option to just drop the resolution on PS5/XSX to use more generic pipelines later on, whereas low-end consoles won't be so fortunate, since most developers already don't believe it's worth deploying the same solution there. I guess time will tell whether they keep avoiding the feature or if they'll actually use it ...

Someone else gets it! Or well, gets that ray tracing is (A) more a useful tool for artists and the general pipeline to have stuff "just work," and (B) just too expensive to deploy on min spec. Meaning it's too expensive to be all that useful for (A).

If you can't run it well on min spec, it doesn't generally get used. Thus for last generation the idea of "If it runs well on the Xbox One, it'll run well everywhere else".

That being said, since it's not the most viable on the S, that's also good for ports to a theoretically high-powered Switch 2. Because the Series S exists, games that aren't CPU-bound and run at 60fps on the bigger consoles might be portable to the Switch 2, the same way id games were ported to the Switch. Admittedly that wasn't that many, and there's always the option of games running at 60fps on PS5/XSX just running at 30fps on the Series S, in which case that port window might drop quite a bit.

But to hell with ports. As with pretty much all Nintendo consoles, the best-selling games are going to be Nintendo titles, which makes me interested in what BotW II will look like on this updated console. The first had some great art direction, and serious issues with content limitation and image quality (yay, super fps drops in Korok Forest!). Hopefully a Switch+ exclusive title will clear up some of the content limitations, so the world won't look quite as much like a barren wasteland everywhere, and the Switch 2 can clean up image quality with higher res/sample counts/actually using anisotropic filtering.
 
How a “Switch Pro” leak may point to Nvidia’s megaton mobile-gaming plans
Op-ed: Reading the RTX tea leaves in light of recent Switch-related reports.
- SAM MACHKOVECH 3/26/2021, 5:45 AM
https://arstechnica.com/gaming/2021...point-to-nvidias-megaton-mobile-gaming-plans/

Earlier this week, Bloomberg Japan's report on a rumored Nintendo Switch "Pro" version exploded with a heavy-duty allegation: all those rumors about a "4K" Switch might indeed be true after all.

The latest report on Tuesday teased a vague bump in specs like clock speed and memory, which could make the Switch run better... but jumping all the way to 4K resolution would need a massive bump from the 2016 system's current specs.

What made the report so interesting was that it had a technical answer to that seemingly impossible rendering challenge. Nvidia, Nintendo's exclusive SoC provider for existing Switch models, will remain on board for this refreshed model, Bloomberg said, and that contribution will include the tantalizing, Nvidia-exclusive "upscaling" technology known as Deep Learning Super Sampling (DLSS).

Since that report went live, I've done some thinking, and I can't shake a certain feeling. Nvidia has a much bigger plan for the future of average users' computing than they've publicly let on.

Edit: it's a pretty long Op-ed, given what has been reported about this Nintendo "Switch Pro", thus far.
 
You picked the wrong game as an example because DMC5 doesn't have RT in Series S (unless they patch it later).

Actually I think you're right; the DMC5 example was for Series X but Series S doesn't support RT AFAIK. My mistake!

DMC5 doesn't actually support ray tracing on the Series S according to some sources, so the only known game with ray tracing on the system is Watch Dogs Legion. That's also one of the games with a less demanding rendering pipeline, given that the high-end consoles run it at dynamic 4K with a minimum resolution of 1440p.

Yeah I messed up with mentioning DMC5, that was more a Series X thing. Late-night post, slip-ups with wrong references happen :p

WRT WD Legion, it's actually impressive to some degree that Series S had RT for it, given the propensity for Ubi's games to not be very well optimized at the onset. Valhalla had a lot of the same sub-optimization issues at release, especially on Series S and X. While it may not have a rendering pipeline as complex as, say, the Demon's Souls remake, it's still an open-world game.

The fact a somewhat unoptimized open-world game was able to feature RT on Series S at launch is a strong indication for future performance, particularly for more linear-based, non open-world titles, even as complexity of rendering pipelines increases. Again, other features being leveraged concurrently like Mesh Shaders will help plenty in enabling better use of RT.

Just as an example from the last generation, the PS4 started out rendering CoD: Ghosts at a native 1080p, with the most recent entry dropping to a dynamic 1080p. To give another example, AC4 was rendered at 1080p when it debuted on the PS4, but AC Valhalla runs slightly above ~900p on the very same system.

Dynamic resolution isn't really a bad thing IMO, especially with various reconstruction techniques available, which even some 8th-gen games used either via software solutions or via support at the silicon level for stuff like checkerboarding on the PS4 Pro. We're also dealing with a situation where, given your examples, we don't know the duration of time those games actually spend at lower than native resolution. If you have some data on that, it would be nice to read up on, but it's also completely possible those games could run sub-native for 5% or 10% of their playtime.

Still enough to require dynamic resolution, but if the lower bounds on the sub-native resolutions aren't stark and don't persist for long, it won't take away from the experience. Another thing to keep in mind is that even with those examples for Ghosts and Valhalla, while the resolutions may run sub-native at points, the framerates still hold up the same as earlier entries. It's pretty obvious that in these same games they could have kept 1080p if they'd been willing to lower framerate stability some. They chose not to because the lesser impact was in applying dynamic resolution and keeping framerates high relative to earlier entries, and the resolutions don't drop drastically below native even with dynamic resolution active. And for whatever ranges they do drop, there are reconstruction techniques available that lessen the impact to the player.
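The "5% or 10% of playtime" point is easy to make concrete; given sampled per-frame resolutions (the sample data below is made up for the sake of the example), the sub-native share is just a ratio:

```python
# Illustrative: estimate what fraction of playtime a DRS title actually spends
# below native resolution, from sampled per-frame vertical resolutions.
# The sample data here is invented, not taken from any real pixel count.

NATIVE = 1080
samples = [1080] * 90 + [1008] * 6 + [936] * 4   # 100 sampled frames

sub_native = sum(1 for r in samples if r < NATIVE)
print(f"{sub_native / len(samples):.0%}")   # 10% of frames below native
```

Outlets like Digital Foundry effectively do this by counting pixels in captured frames; the takeaway is that a DRS title can be "sub-native" while spending the vast majority of its time at full resolution.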

SSR will go away in the long run. This trend will be made even more evident whenever the refreshes for the high-end consoles are released. The trend isn't to introduce more hacks into the rendering pipeline to make it harder to work with.

Okay, fair enough, let's say it does. Let's say it's not around in five years to provide a less-intensive backup to RT that can be used in tandem with it. That doesn't mean RT is suddenly off-limits for hardware like the Series S or Switch Pro/Switch 2. For one, RT can be applied at various levels of accuracy/quality. Secondly, as developers get acclimated to the various new techniques they can use in future rendering pipelines, they'll find ways to lower processing strain and free up frame-time budget to squeeze out more RT performance.
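"Various levels of accuracy/quality" mostly means controlling how many rays get traced; a back-of-envelope ray budget (illustrative numbers, not measured hardware figures) shows how much room those knobs give:

```python
# Back-of-envelope ray budget: total rays per second scales with resolution,
# rays per pixel, and framerate, so RT cost has several independent knobs.

def grays_per_sec(width: int, height: int, rays_per_pixel: float, fps: int) -> float:
    """Gigarays per second required for the given settings."""
    return width * height * rays_per_pixel * fps / 1e9

# Full 4K, 2 rays/pixel, 30 fps:
print(round(grays_per_sec(3840, 2160, 2, 30), 2))   # ~0.5 Grays/s
# 1080p internal + upscaling, 1 ray/pixel, 30 fps: 8x cheaper
print(round(grays_per_sec(1920, 1080, 1, 30), 3))
```

Halving resolution in each axis and halving rays per pixel cuts the budget 8x, which is exactly the kind of headroom a Series S or Switch-class device would lean on.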

There was little about PS4 and XBO that necessitated gaining more mastery of various hardware features as the generation went on, so most of the visual improvements came from the general increases in budget for AAA titles. PS5 and the Series systems have a bit more in common with older systems like PS1, Saturn, PS2, PS3, etc. What I mean by that is, PS5 and Series have hardware features that developers will have to come to grips with over the next few years and learn to master, and doing so will help their games improve in performance alongside the usual ramping up of bigger budgets and resources for AAA games.

I agree with you that we'll need to see more concrete details, but regardless, it's still far too early to attempt a release this soon into the cycle. Developers are currently in the middle of a major refactoring of their rendering pipelines, so chances are very high that any weaker system will get locked out of running these future pipelines, since the main target is probably PS5/XSX.

Can't agree with this perspective, because developers will already have to account for PC configurations weaker than PS5 and Series X. In fact, they already are. As long as Series S and Switch Pro/Switch 2 are within the ballpark of some of those configurations, they can afford to hit notably below them in raw TF but offer similar performance once their wider features are leveraged (keeping in mind a lot of the lower PC configs are GPUs that don't even have hardware acceleration for RT or ML).

And in the case of games that still can't run on those systems or can't offer some things like RT, that's likely where streaming can come into play.

They should probably delay their plans for at least another 2 years, until better hardware designs are available, so that they'll have a bigger window of opportunity to run those pipelines. If they were to pull off a release now, the new silicon would probably be designed on a 7nm process, which offers roughly a little over a 2x performance/watt gain over 16nm, placing a potential new model at ~1-2 TFLOPs depending on the frequency.

Honestly, I don't think Nintendo can afford to rest on their laurels and wait on a Switch Pro for another two years. Tech progresses quickly, and Nintendo can't predict when demand for the Switch will decline, perhaps sharply. They don't want another Wii situation on their hands, where after a few years sales basically just died as other markets like mobile gaming ate their lunch.

I still think you're looking at it too much from a purely TFs POV, because GPU architectures are pretty flexible and scalable, and console designs tend to customize many things. In other words you don't need X number more TFs in order to have X number more pixel fillrate, or RT capability. Nintendo's got some smart engineers in their R&D labs and I'm sure they'll do some customizations to whatever base tech they leverage from Nvidia to ensure things fit with the new Switch, and I expect them to do a lot more on that note than they did with the original Switch since they now have a product they know is a successful brand, justifying the additional R&D and costs.

I might very well be judging the Series S unfairly, but prospects look even grimmer for systems less powerful than that, since introducing new features alone is a vain attempt at keeping developers on your platform. Ray tracing involves more than just ray traversal or intersection testing; it interacts with compute power as well once we do ray shading, so we can't ignore compute as a factor. There will definitely be compromises, but developers will largely have the option to just drop the resolution on PS5/XSX to use more generic pipelines later on, whereas low-end consoles won't be so fortunate, since most developers already don't believe it's worth deploying the same solution there. I guess time will tell whether they keep avoiding the feature or if they'll actually use it ...

This is probably an instance where you're more a pessimist on the matter while I'm more an optimist x3. Within a couple of years we should start seeing how this really pans out regards RT on lower-power devices like Series S and Switch Pro/Switch 2, but I'm fairly confident it will be a feature used on them, even if at lowered quality.

We've seen devs pull off seemingly impossible graphical techniques on supposedly weak/weaker hardware in generations past, Series S and Switch Pro/Switch 2 won't be an exception to that.
 
https://arstechnica.com/gaming/2021...point-to-nvidias-megaton-mobile-gaming-plans/

Edit: it's a pretty long Op-ed, given what has been reported about this Nintendo "Switch Pro", thus far.

Looking like a 720p OLED screen that will render natively, then DLSS-upscale to 4K when docked. The question is whether they do 720p with higher framerates to 4K, or 900p to 4K with lower framerates, or let the dev decide.

Anyway if the 720p screen is true then yuck.

My guess is a 6-core CPU in an APU with whatever Nvidia GPU design fits, 8 gigs of RAM, and 128 gigs of internal storage. A 720p screen, the same physical device size to keep Joy-Con compatibility, "support" for faster microSD cards, and perhaps a spec bump on the carts they use to allow them to read faster.

It could be a great upgrade for a lot of people. To me it will come down to cost, and there are most likely other portables next year I would rather have.
 
I'm a bit confused about DLSS being useful for older titles if it comes to the Switch 2. Doesn't DLSS require TAA and a patch?
 
The fact a somewhat unoptimized open-world game was able to feature RT on Series S at launch is a strong indication for future performance, particularly for more linear-based, non open-world titles, even as complexity of rendering pipelines increases. Again, other features being leveraged concurrently like Mesh Shaders will help plenty in enabling better use of RT.

Mesh shaders are completely orthogonal to ray tracing and aren't related in any way so I don't see how one will help the other ...

Still enough to require dynamic resolution, but if the lower bounds on the sub-native resolutions aren't stark and don't persist for long, it won't take away from the experience. Another thing to keep in mind is that even with those examples for Ghosts and Valhalla, while the resolutions may run sub-native at points, the framerates still hold up the same as earlier entries. It's pretty obvious that in these same games they could have kept 1080p if they'd been willing to lower framerate stability some. They chose not to because the lesser impact was in applying dynamic resolution and keeping framerates high relative to earlier entries, and the resolutions don't drop drastically below native even with dynamic resolution active. And for whatever ranges they do drop, there are reconstruction techniques available that lessen the impact to the player.

The examples I gave rarely ever hit their maximum resolution if you look further into them. Other examples include Watch Dogs 2/Watch Dogs Legion, CoD 2/CoD BO2, RotTR/SotTR, etc., just to show that resolution drops happening over time are a very real trend ...

Okay, fair enough, let's say it does. Let's say it's not around in five years to provide a less-intensive backup to RT that can be used in tandem with it. That doesn't mean RT is suddenly off-limits for hardware like the Series S or Switch Pro/Switch 2. For one, RT can be applied at various levels of accuracy/quality. Secondly, as developers get acclimated to the various new techniques they can use in future rendering pipelines, they'll find ways to lower processing strain and free up frame-time budget to squeeze out more RT performance.

I have a counter-example to this optimism. The Switch supported tessellation, and even featured Nvidia's superior implementation for the longest time, but virtually no games on it use it compared to the other consoles!

Can't agree with this perspective, because developers will already have to account for PC configurations weaker than PS5 and Series X. In fact, they already are. As long as Series S and Switch Pro/Switch 2 are within the ballpark of some of those configurations, they can afford to hit notably below them in raw TF but offer similar performance once their wider features are leveraged (keeping in mind a lot of the lower PC configs are GPUs that don't even have hardware acceleration for RT or ML).

And in the case of games that still can't run on those systems or can't offer some things like RT, that's likely where streaming can come into play.

Yes but that doesn't mean they'll all be running on the same art/rendering pipeline as time goes on. Developers are just going to have to maintain 2 different pipelines ...

Honestly, I don't think Nintendo can afford to rest on their laurels and wait on a Switch Pro for another two years. Tech progresses quickly, and Nintendo can't predict when demand for the Switch will decline, perhaps sharply. They don't want another Wii situation on their hands, where after a few years sales basically just died as other markets like mobile gaming ate their lunch.

I still think you're looking at it too much from a purely TFs POV, because GPU architectures are pretty flexible and scalable, and console designs tend to customize many things. In other words you don't need X number more TFs in order to have X number more pixel fillrate, or RT capability. Nintendo's got some smart engineers in their R&D labs and I'm sure they'll do some customizations to whatever base tech they leverage from Nvidia to ensure things fit with the new Switch, and I expect them to do a lot more on that note than they did with the original Switch since they now have a product they know is a successful brand, justifying the additional R&D and costs.

Considering how their most recent platform was initially manufactured with a lead in transistor technology (20nm/16nm) compared to their competitors (28nm), that patience somewhat paid off, seeing as developers were able to port some of their games to the system, so it'd be shortsighted of them to discard such an advantage that helped portability. If they don't want to wait and they have alternative plans, then don't expect other developers to offer much more support than they will for the rest of the lifespans of the previous consoles, since they'll all be in a similar range of capability. The Switch generally runs anywhere between 2-3x slower than the PS4 when we're looking at multiplatform games, so releasing a new system very soon means it'll be generally on par with last-generation consoles ...

Consoles are now practically closer than ever to PC hardware, so I don't see how customizations will be a "one-way" benefit to a specific platform ...
 
That's what I thought. "4k, but only if a game is coded for DLSS or older games are patched, if they are suitable for a patch" isn't a snappy proposition.

Think a lot of people will just care about new games at that point. The newer hardware should still be able to run the older games faster.
 
That's what I thought. "4k, but only if a game is coded for DLSS or older games are patched, if they are suitable for a patch" isn't a snappy proposition.
The reality of the PC market is that not only is DLSS not an option for already-published titles, but the overwhelming majority of new titles don't support it either. It has been confined to a small number of titles from the usual suspects with close ties to Nvidia.
Hence my question in another thread: if you didn't already have Tensor cores around for non-gaming reasons, just taking up die space unless you figured out something to do with them, would you really use that particular setup to do upscaling? It seems to me that you would either do lower-quality upscaling when needed using processing elements that are already present (which improve performance across the board when not used for upscaling), or implement a dedicated hardware block that does nothing but upscaling, limited to a single algorithm and even limited scaling options, and optimise the hell out of the hardware design to reduce die area. And it would work on the final output.
It wouldn't be quite as good as DLSS, but it would still be a damn sight better than bilinear, would be cheaper, and would help the platform much more generally. At the end of the day, small pixel-level differences don't really matter much at the edge of our ability to resolve.
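A dedicated scaler block of that sort is essentially one fixed algorithm cast into silicon. As a minimal sketch, here's plain bilinear in Python (a real block would likely use a sharper filter such as Lanczos, and would work on streamed pixels rather than whole frames):

```python
# Minimal bilinear upscale of a grayscale image (list of rows): the simplest
# instance of the "single fixed algorithm" a dedicated scaler would implement.

def bilinear_upscale(img, out_w, out_h):
    in_h, in_w = len(img), len(img[0])
    out = []
    for oy in range(out_h):
        # Map the output coordinate back into the source grid.
        fy = oy * (in_h - 1) / (out_h - 1) if out_h > 1 else 0
        y0 = int(fy); y1 = min(y0 + 1, in_h - 1); wy = fy - y0
        row = []
        for ox in range(out_w):
            fx = ox * (in_w - 1) / (out_w - 1) if out_w > 1 else 0
            x0 = int(fx); x1 = min(x0 + 1, in_w - 1); wx = fx - x0
            # Blend horizontally on two rows, then blend those vertically.
            top = img[y0][x0] * (1 - wx) + img[y0][x1] * wx
            bot = img[y1][x0] * (1 - wx) + img[y1][x1] * wx
            row.append(top * (1 - wy) + bot * wy)
        out.append(row)
    return out

src = [[0, 100], [100, 200]]
print(bilinear_upscale(src, 3, 3))  # midpoints interpolated between corners
```

The appeal for hardware is exactly that fixedness: a handful of multiply-adds per output pixel, no model weights, no per-game integration.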
 
DLSS makes far more sense on a console than it does in the PC market, where the developer knows it's only a limited subset of consumers that will have the hardware required to utilize it. Especially if Nvidia is offering Nintendo an off-the-shelf processor where those Tensor cores are already implemented. For the market as a whole, DLSS isn't going to be massively successful, but could it be a nice perk from Nvidia for Nintendo's Switch successor? I think it could. There's no way a mobile chip is going to offer true 4K rendering for AAA games anytime soon. It's going to take tricks and workarounds to improve image quality when blowing it up on large 4K TVs.
 
At this point, guys, I'm curious: what do you think is the likelihood of a more powerful Switch coming out in 2022 or sooner (i.e. by the end of this year or next March), powered by a new SoC (vs. the Tegra X1/Mariko) that has DLSS (using Tensor cores), a GPU with more than 256 CUDA cores (at least 384 if not 512; I think that would mean 6 to 8 SMs?), more than 4GB of RAM (6-8 GB), and a better/newer/faster CPU (newer and/or more and/or higher-clocked ARM cores)?
Finally, will this next version of the Switch have a memory bus/LPDDR* combination resulting in bandwidth of between 51-102 GB/s, with ~50 GB/s as the basement expectation?

priced at anywhere between $299 and $399 USD.


50/50 chance?

10%, 25%, 70% ?
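For the bandwidth part of the question, the 51-102 GB/s range maps directly onto bus width and LPDDR data rate; a quick sketch (these configurations are common LPDDR options, not confirmed specs for any device):

```python
# Memory bandwidth arithmetic for the 51-102 GB/s range:
# bandwidth (GB/s) = bus width (bits) / 8 * data rate (MT/s) / 1000.

def bandwidth_gbs(bus_bits: int, data_rate_mts: int) -> float:
    return bus_bits / 8 * data_rate_mts / 1000

print(bandwidth_gbs(64, 3200))    # 25.6  (original Switch: 64-bit LPDDR4-3200)
print(bandwidth_gbs(128, 4266))   # ~68.3 (128-bit LPDDR4X)
print(bandwidth_gbs(128, 6400))   # 102.4 (128-bit LPDDR5)
```

So the 51-102 GB/s window essentially implies a 128-bit bus with LPDDR4X at the low end and LPDDR5 at the top.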
 
With all the talk about a logjam at fabs and continuing PS5/XSX shortages, maybe a new gaming device launch this year isn't in the cards.


It would be complicated indeed.

Or maybe Nvidia/Nintendo have been stockpiling for months to build stock, but I doubt it; it would have leaked already...
 
At this point, guys, I'm curious: what do you think is the likelihood of a more powerful Switch coming out in 2022 or sooner (i.e. by the end of this year or next March), powered by a new SoC (vs. the Tegra X1/Mariko) that has DLSS (using Tensor cores), a GPU with more than 256 CUDA cores (at least 384 if not 512; I think that would mean 6 to 8 SMs?), more than 4GB of RAM (6-8 GB), and a better/newer/faster CPU (newer and/or more and/or higher-clocked ARM cores)?
Finally, will this next version of the Switch have a memory bus/LPDDR* combination resulting in bandwidth of between 51-102 GB/s, with ~50 GB/s as the basement expectation?

priced at anywhere between $299 and $399 USD.


50/50 chance?

10%, 25%, 70% ?


Well over 70% for 2021, if we assume Takashi Mochizuki likes his job.
No one knows what this new SoC is bringing, but since it's Nintendo we're talking about, we should lower our expectations to the minimum we can think of, and then halve those.



With all the talk about a logjam at fabs and continuing PS5/XSX shortages, maybe a new gaming device launch this year isn't in the cards.
AFAIK shortages are only critical for N5 (which is fully reserved by Apple and Qualcomm anyway) and TSMC's N7. The new SoC doesn't have to use 7nm.
For all we know it could be using TSMC's 12FFN, which is already optimized by/for Nvidia SoCs, and I'd bet those production lines aren't limited as Nvidia transitions its GPU chips towards 8nm with Ampere. It could also use Samsung 10nm, for example.

We can safely assume Nintendo won't be ordering chips to go into high performance + subsidized consoles, nor >$1000 smartphones, so they don't need a high-end process either. Even their current TX1 Mariko was a very late shrink to 16nm back in 2019 (we've had 14/16FF GPUs since 2016).
 
They also said low-end commodity chips are hit hard.

So it affects automobile production. Not just high-end EVs with huge fancy displays, but more boring cars that need chips for basic entertainment systems.
 