Nintendo Switch Tech Speculation discussion

That doesn't include the shared cache mind you.
You're right. Your ~4.6mm^2 number should be more accurate as to how much they would actually save by stripping off the cluster.
Regardless, 4.6mm^2 is probably still not enough to create a whole new chip.
But the combined savings from the A53 cluster + 600MPix/s ISP + 4K video codecs + PCI Express + other I/Os could justify that.
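As a rough back-of-the-envelope check, here's a minimal sketch (in Python) adding up the blocks a Switch-specific die could plausibly drop. Only the A53-cluster figure comes from the discussion above; the ISP, video codec, PCIe and misc I/O areas are assumed placeholders for illustration, not measured values.

```python
# Back-of-the-envelope area saving from stripping unused TX1 blocks.
# Only the A53-cluster figure comes from the thread; every other value
# below is an assumed placeholder, not a measured number.
block_area_mm2 = {
    "A53 cluster (incl. L2)": 4.6,   # figure discussed above
    "600MPix/s ISP":          3.0,   # assumption
    "4K video encode/decode": 4.0,   # assumption
    "PCI Express":            2.0,   # assumption
    "misc I/O":               2.0,   # assumption
}

total_saved = sum(block_area_mm2.values())
tx1_die_mm2 = 120.0  # assumed, roughly the commonly cited TX1 die size on 20nm

print(f"Estimated saving: {total_saved:.1f} mm^2 "
      f"(~{100 * total_saved / tx1_die_mm2:.0f}% of a ~{tx1_die_mm2:.0f} mm^2 die)")
```

Under those placeholder numbers you'd be looking at roughly 15mm^2, somewhere over a tenth of the die, which is the sort of saving that could tip the decision one way or the other.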

What nodes are those?
That picture comes from a study made by Linley Group, and it seems they only consider two chips with A53 cores, the Exynos 8890 and the Kirin 955:

[image: CPU core die-area comparison from the Linley Group study]


I think the Exynos is made using Samsung's 14FF and the Kirin uses TSMC 16FF+.
Either FinFET process has a per-transistor area similar to TSMC's 20nm, though.
 
Regardless, 4.6mm^2 is probably still not enough to create a whole new chip.
Indeed. I wonder if they might consider hanging a single A53 off of the fabric there just for the OS if they get around to full heterogeneous operation.

...or boost the L2 cache for the GPU, though I'm not sure how much they'd be able to do there.
If we believe the video capabilities from the EG articles, they've already limited the output to 1080p60/4K30, so they might as well limit the encode/decode block, though I'm not sure that's much of an area saving. They probably don't need the imaging block for a camera either, unless I've missed something.

hm...
 
Wouldn't the licensing cost of the ARM A57 cores be a lot cheaper than the newer A72's?

Cheaper? Maybe.
A lot cheaper? I'd say probably not.

While ARM probably does want some product differentiation so they can ask for more money for the latest and greatest cores, I think this probably happens between their big and LITTLE cores, not between old big and new big.
It might not be in their best interest to have clients presenting new products with old core IPs, since that would make their architecture look bad.
As for Nvidia, they had already paid for the A57 license, and that's probably the main reason they didn't go with the A72 in Parker. I reckon the A72 could also make the Denver cores look useless, and there might have been some pressure to include their in-house IP in the new SoCs.
 
With the release of the "new" Shield TV without any mention of a new SoC, the possibility of a TX1 in the Switch sounds more and more reasonable. However, that is hard to reconcile with reports that Dark Souls 3 and the next Assassin's Creed are coming to Switch. It's hard to see Dark Souls 3 being ported to it easily and running well; otherwise, that API Nvidia developed must have some magic dust in it.
 
The idea that 16nm is more expensive does need to also take into account that Nintendo could use a much smaller battery and have thermal room for clock upgrades later on if they need them.

If you are saving a few dollars per chip to use 28nm over 16nm, are you just going to turn around and spend that money on a bigger battery?

Shield TV 2017's chip size will probably have the answer, since Nvidia could probably piggyback on Nintendo's orders, giving Nvidia larger volumes than any Shield before it.
 
The idea that 16nm is more expensive does need to also take into account that Nintendo could use a much smaller battery and have thermal room for clock upgrades later on if they need them.

If you are saving a few dollars per chip to use 28nm over 16nm, are you just going to turn around and spend that money on a bigger battery?

No, you're just going to downclock the SoC to ridiculously low values so you can have decent battery life with a crappy small battery. Or at least that's what the 2 SM @300MHz would point to.
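For a sense of scale, here's a quick sketch of what 2 Maxwell SMs at ~300MHz works out to in raw throughput. The 128 CUDA cores per SM and 2 FLOPs per core per cycle (FMA) are standard Maxwell figures; the 300MHz clock is just the rumoured handheld value.

```python
# Rough FP32/FP16 throughput for the rumoured handheld GPU clock.
# 2 SMs x 128 CUDA cores/SM is the TX1's Maxwell configuration;
# 2 FLOPs/core/cycle assumes fused multiply-add.
sms = 2
cores_per_sm = 128
flops_per_core_per_cycle = 2      # FMA counts as 2 FLOPs
clock_ghz = 0.3                   # rumoured ~300 MHz

gflops_fp32 = sms * cores_per_sm * flops_per_core_per_cycle * clock_ghz
gflops_fp16 = 2 * gflops_fp32     # TX1 Maxwell supports double-rate packed FP16
print(f"~{gflops_fp32:.0f} GFLOPS FP32, ~{gflops_fp16:.0f} GFLOPS FP16")
```

That's roughly 154 GFLOPS FP32, versus the ~512 GFLOPS the same GPU manages at its full 1GHz, which is why the rumoured clock looks so conservative.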

Besides, Nintendo did use 45nm in 2012 for the Wii U.
45nm had been introduced in 2008.


Shield TV 2017's chip size will probably have the answer, since Nvidia could probably piggyback on Nintendo's orders, giving Nvidia larger volumes than any Shield before it.
It's reportedly the exact same chip on the new Shield TV.
It may even be the same main PCB and the difference from the old model is the case and little else.
 
No, you're just going to downclock the SoC to ridiculously low values so you can have decent battery life with a crappy small battery. Or at least that's what the 2 SM @300MHz would point to.

Besides, Nintendo did use 45nm in 2012 for the Wii U.
45nm had been introduced in 2008.



It's reportedly the exact same chip on the new Shield TV.
It may even be the same main PCB and the difference from the old model is the case and little else.

16nm with those clocks would still mean a much smaller battery; it could be the difference between a 3300mAh and a 2200mAh battery, for instance, and that's the few dollars of difference in battery cost I'm talking about.

Also, 45nm was the best NEC could do with their embedded memory for the Wii U, which is why the GPU was 45nm.
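To put the 3300mAh vs 2200mAh comparison in perspective, here's a minimal runtime sketch; the 3.8V nominal cell voltage and the system power draws are assumptions picked for illustration, not leaked figures.

```python
# Rough handheld runtime for the two battery capacities mentioned above.
# Nominal cell voltage and total system power draw are assumptions.
NOMINAL_V = 3.8            # typical Li-ion nominal voltage (assumed)
DRAWS_W = (4.0, 6.0)       # assumed total system draw: screen + SoC + the rest

for mah in (3300, 2200):
    wh = mah / 1000 * NOMINAL_V
    runtimes = ", ".join(f"{wh / w:.1f}h @ {w:.0f}W" for w in DRAWS_W)
    print(f"{mah}mAh = {wh:.1f}Wh -> {runtimes}")
```

Under these assumptions, that's the difference between roughly 3.1 and 2.1 hours at a 4W draw.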
 
16nm with those clocks would still mean a much smaller battery; it could be the difference between a 3300mAh and a 2200mAh battery, for instance, and that's the few dollars of difference in battery cost I'm talking about.
Yes, but if Nvidia just sold them a warehouse of TX1 chips at a super-discounted price, then Nintendo had no say in the process node.

Also, 45nm was the best NEC could do with their embedded memory for the Wii U, which is why the GPU was 45nm.
The GPU was 40nm. Regardless, NEC's very old process limitations wouldn't have stopped Nintendo from going elsewhere, and choosing not to is still a very telling decision.
 
The volume of the battery will be unaffected by the process node; you may get a longer run time from a given battery volume, but the odds that we're talking about a 50% boost seem slim. Happy to be corrected, but the numbers look more like 20% at the outside for a given node shrink (at least from my experience of Intel 'tocks').
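Taking those 20% and 50% figures as the SoC's power saving from the shrink, here's a quick sketch of how much smaller a battery could get for the same runtime; the SoC's share of total system power is an assumed placeholder, since the display and radios don't shrink with the node.

```python
# How much smaller could the battery get, for the same runtime, given an
# assumed SoC power saving from a node shrink? The SoC's share of total
# system power is an assumption, not a known figure.
BASELINE_MAH = 3300
SOC_SHARE = 0.5               # assumed fraction of system power used by the SoC

for soc_saving in (0.20, 0.50):   # the 20% and 50% figures discussed above
    system_saving = SOC_SHARE * soc_saving
    new_mah = BASELINE_MAH * (1 - system_saving)
    print(f"{soc_saving:.0%} SoC saving -> {system_saving:.0%} system saving "
          f"-> ~{new_mah:.0f}mAh for the same runtime")
```

Even the optimistic 50% case only gets you from 3300mAh down to roughly 2475mAh under these assumptions, nowhere near 2200mAh.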
 
Is it realistic for Nvidia to have millions of TX1 chips sitting in a warehouse? I can't imagine they would put such a large volume of chips into production without contracts to purchase them in large volumes. I'm sure they are sitting on some thanks to their Shield products, which is perhaps a reason the Shield TV is seeing a reboot, but we would have to be talking about 10+ million chips in a warehouse for it to really make sense. Over at GAF someone had lifetime NPD numbers for the Shield TV sitting around 30k, very low. It's not unreasonable to think Nvidia has far greater sales expectations than that.
 
Is it realistic for Nvidia to have millions of TX1 chips sitting in a warehouse? I can't imagine they would put such a large volume of chips into production without contracts to purchase them in large volumes. I'm sure they are sitting on some thanks to their Shield products, which is perhaps a reason the Shield TV is seeing a reboot, but we would have to be talking about 10+ million chips in a warehouse for it to really make sense. Over at GAF someone had lifetime NPD numbers for the Shield TV sitting around 30k, very low. It's not unreasonable to think Nvidia has far greater sales expectations than that.

Did they ever try to pitch the TX1 for the Android Auto market? If so, that's potentially a huge market where I could see Nvidia purchasing a lot of wafers expecting to land a deal, but then something fell through or never launched.
 
Is it realistic for Nvidia to have millions of TX1 chips sitting in a warehouse? I can't imagine they would put such a large volume of chips into production without contracts to purchase them in large volumes. I'm sure they are sitting on some thanks to their Shield products, which is perhaps a reason the Shield TV is seeing a reboot, but we would have to be talking about 10+ million chips in a warehouse for it to really make sense. Over at GAF someone had lifetime NPD numbers for the Shield TV sitting around 30k, very low. It's not unreasonable to think Nvidia has far greater sales expectations than that.

Not Shield TV, but maybe they were counting on the Pixel C being a respectable competitor to the iPad Pro and Surface Pro.
Which it wasn't. Not through any fault of Nvidia's, but because of Google's abysmal support for tablets on Android.
 
I was using "sitting in a warehouse" in a metaphorical sense. NV has sunk a lot of money into their mobility line, so if the Switch chip is a fairly simple modification of the TX1 IP, I'd regard them as using it to try to recoup some or possibly all of the investment in entering the CPU design business. If they can sell a lot of TX1s, they can show shareholders it wasn't a wasted investment, and it means the next design doesn't have to pay for two whole R&D cycles, just its own.
 
One of the reasons why I was so convinced that Nvidia wasn't going to be the SoC supplier for the Switch is that they have never hinted at custom design wins in their shareholder reports.

Slapping a sticker onto the TX1 and selling it to Nintendo for the Switch would explain that. There's nothing custom in the design, just a regular Tegra being sold to a third party.
 
I was using "sitting in a warehouse" in a metaphorical sense. NV has sunk a lot of money into their mobility line, so if the Switch chip is a fairly simple modification of the TX1 IP, I'd regard them as using it to try to recoup some or possibly all of the investment in entering the CPU design business. If they can sell a lot of TX1s, they can show shareholders it wasn't a wasted investment, and it means the next design doesn't have to pay for two whole R&D cycles, just its own.

I think a real "Tegra" win is more relevant than it simply being an X1. The Tegra lineup has seen limited success since its inception. A Tegra in the Nintendo Switch, with the potential to sell tens of millions of units, is a major win regardless of whether it's customized or not.

I have to say, if it does end up being a stock X1, there will be some negative press for Nvidia. They have two statements that would be misleading, "custom" and their comment about the same architecture as their top performing graphics cards.
 
One of the reasons why I was so convinced that Nvidia wasn't going to be the SoC supplier for the Switch is that they have never hinted at custom design wins in their shareholder reports.

Slapping a sticker onto the TX1 and selling it to Nintendo for the Switch would explain that. There's nothing custom in the design, just a regular Tegra being sold to a third party.

If so, that would also save on R&D, shorten development time on hardware that Nintendo desperately needs, and allow developers to hone software for final release from an early point...
 
I have to say, if it does end up being a stock X1, there will be some negative press for Nvidia. They have two statements that would be misleading, "custom" and their comment about the same architecture as their top performing graphics cards.

Not to mention the "500 man-years" statement to create what seems to be a software stack on top of Linux or FreeBSD plus dev tools:
But creating a device so fun required some serious engineering. The development encompassed 500 man-years of effort across every facet of creating a new gaming platform: algorithms, computer architecture, system design, system software, APIs, game engines and peripherals. They all had to be rethought and redesigned for Nintendo to deliver the best experience for gamers, whether they’re in the living room or on the move.



If so, that would also save on R&D, shorten development time on hardware that Nintendo desperately needs, and allow developers to hone software for final release from an early point...
Then why release only in March 2017 and miss the 2016 holiday season?
Apart from Mario Maker, Nintendo hasn't had a major release in over 3 years; the last was Mario 3D World in late 2013.
 