Nintendo Switch Tech Speculation discussion

Running into the final 5 months before games go gold, having development kits with only half the memory bandwidth would not be optimal. MS and Sony were shipping development kits far closer to final unit performance this close to launch. Unless, of course, 25 GB/s is indeed the final figure.

Nintendo have traditionally been waiting on software, rather than hardware, to be ready for launch.

Looking back to last year, it appears that cost per transistor was projected to be cheaper on 20 nm than on 16 nm (http://www.eetimes.com/author.asp?doc_id=1326937). For a company willing to sign a long-term contract, there may be some good deals to be had on an older process such as 20 nm.

For the Wii U, Nintendo went with Renesas 45 nm for the GPU when IBM had a newer 32 nm eDRAM-capable process (which MS utilised for their final 360 shrink), and IBM 45 nm for the CPU when IBM were rolling their own Power processors on 32 nm. Presumably, it was cheaper for Nintendo to shop around for old nodes than to go with an all-IBM SoC.
[Attached image: ZOBach_Intel_Litmus02.jpg (cost-per-transistor comparison chart)]

You don't think that the 16/14 nm process node got cheaper after the Galaxy S6 launched on 14 nm? That report is 16+ months old and will be nearly 2 years old at launch. With 16/14 nm production being far wider now, while the new 10 nm process takes hold for the big manufacturers, I have to assume the deals would all be in favor of the 16/14 nm process over the stopgap 20 nm process. The prices were so close even over a year ago, when Nintendo and Nvidia were really nailing the design down, that I think it would have been the obvious choice for them considering the ability to drop active cooling for X1's target performance, which would drastically decrease the price of the unit (dollars rather than cents).
 
Another thought I just had: we heard that the final dev kits were shipped to 3rd parties only recently, and that until then they had been on software dev kits (SDKs, like you technically mention, though I assume you meant hardware devkits).
Yeah, devkits.
If it was just X1 chips, why would it take so long? Summer is when the Pascal Tegra became available, right? That timeline makes a whole lot more sense, to be perfectly honest.
It's leaving it very late. Final devkits may be the same specs, just finalised components for things like WiFi, and software protocols for final hardware compatibility.

Also, if this was Nvidia working with any other partner, I doubt there would even be a discussion about which chip is being used. It's sort of silly IMO; we can't confirm it, but it is likely Pascal, or at the very least not on 20 nm, which is almost the same thing anyway.
As function says, timelines. PS3 wasn't G80 despite G80 being possible - it got delayed. Who's to say Switch wasn't meant to come out this year but there was a delay?

If they were targeting just X1 specs, then yes, Parker would become the obvious choice because you could drop active cooling. The presence of active cooling likely means that not only did they move to Parker, but that they also increased the clocks beyond X1 specs.
Doesn't active cooling point more to X1 to get more juice out of it?
 
I do not remember any credible source claiming the 25 GB/s bandwidth. Eurogamer only stated Tegra X1, and correct me if I am wrong, but there is nothing stopping them from using a larger bus to the memory for the development kits. There is also nothing stopping them from clocking the X1 higher than 1 GHz. So it is possible that the development kits have been able to simulate a higher performing final chip, seeing as development kits are free to run the cooling fan full tilt and the case is much larger than the end product. With that said, we aren't talking a night and day difference here compared to the stock Tegra X1.
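For anyone following the bandwidth numbers, here is a quick back-of-the-envelope sketch of how bus width and memory data rate turn into peak bandwidth. The 64-bit LPDDR4-3200 entry corresponds to the stock Tegra X1's 25.6 GB/s figure; the wider-bus and faster-memory rows are purely hypothetical configurations, included only to show how the number scales.

```python
# Back-of-the-envelope peak bandwidth: bytes/s = (bus width in bits / 8) * transfers/s.
# The 64-bit LPDDR4-3200 entry matches the stock Tegra X1's 25.6 GB/s peak;
# the other entries are hypothetical, just to show how the figure scales.

def peak_bandwidth_gb_s(bus_width_bits: int, transfer_rate_mt_s: int) -> float:
    """Peak theoretical bandwidth in GB/s for a given bus width and data rate."""
    return (bus_width_bits / 8) * transfer_rate_mt_s * 1e6 / 1e9

configs = [
    ("64-bit LPDDR4-3200 (stock X1)",    64, 3200),
    ("128-bit LPDDR4-3200 (wider bus)",  128, 3200),
    ("64-bit LPDDR4-4266 (faster RAM)",   64, 4266),
]

for name, width, rate in configs:
    print(f"{name}: {peak_bandwidth_gb_s(width, rate):.1f} GB/s")
```

These are peak theoretical figures; sustained bandwidth in real workloads would come in lower.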
 
PS3 wasn't G80 despite G80 being possible
The PS3 was originally supposed to not include an Nvidia GPU, so they had to produce the only thing available at the moment, while the Switch is supposed to have enjoyed a normal development time, plus the will to create a halo product to validate an architecture.
 
It's leaving it very late. Final devkits may be the same specs, just finalised components for things like WiFi, and software protocols for final hardware compatibility.
Or, another interesting thought: maybe the dev kits targeted the on-the-go specs, and only the final devkits had the higher clocks.
As function says, timelines. PS3 wasn't G80 despite G80 being possible - it got delayed. Who's to say Switch wasn't meant to come out this year but there was a delay?
As someone else mentioned, PS3 didn't have a normal development cycle with Nvidia. Switch's delay could just as easily have been caused by mass production of Pascal chips, which certainly wouldn't be the case with the 2-year-old X1.
Doesn't active cooling point more to X1 to get more juice out of it?
No, if you are targeting X1's specs, you would simply produce the chip at 16 nm, which is now cheaper than 20 nm, so that you don't have to have active cooling, even if it is Maxwell. The reason you'd have active cooling is if you were limited and needed the extra cooling for performance. Considering the option of using 16 nm, it simply doesn't make sense to use active cooling with 20 nm, and now that we know how close the prices of the nodes were 16 months ago, I find that theory very unrealistic. It isn't like NEC is fabbing this chip.
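To put the "smaller node means less heat" argument in rough numbers, the usual first-order relation for dynamic switching power is P ≈ α·C·V²·f. The capacitance and voltage values in the sketch below are illustrative assumptions, not real 20 nm or 16 nm figures, and leakage is ignored entirely; the point is only that a modest voltage drop at the same clock cuts power disproportionately.

```python
# First-order dynamic power: P = alpha * C * V^2 * f.
# All numbers below are illustrative placeholders, not real 20 nm / 16 nm data,
# and static/leakage power is ignored. The point is only that a modest voltage
# drop from a node shrink cuts heat output disproportionately at the same clock.

def dynamic_power_w(alpha: float, capacitance_f: float, voltage_v: float, freq_hz: float) -> float:
    """Approximate dynamic switching power in watts."""
    return alpha * capacitance_f * voltage_v ** 2 * freq_hz

CLOCK_HZ = 1.0e9  # same 1 GHz GPU clock on both hypothetical nodes

# Hypothetical: assume the shrink trims switched capacitance and operating voltage.
p_20nm = dynamic_power_w(alpha=0.2, capacitance_f=15e-9, voltage_v=1.00, freq_hz=CLOCK_HZ)
p_16nm = dynamic_power_w(alpha=0.2, capacitance_f=12e-9, voltage_v=0.85, freq_hz=CLOCK_HZ)

print(f"20 nm (illustrative): {p_20nm:.2f} W")
print(f"16 nm (illustrative): {p_16nm:.2f} W ({100 * (1 - p_16nm / p_20nm):.0f}% lower)")
```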
 
Whatever's in the development kits at this point is likely close to the final unit in terms of CPU architecture and arrangement, and memory bandwidth. It's not just the games - Nintendo have an OS and all kinds of background features to get running, stable, and operating within their intended reservations and time slices.

Who's to say Switch wasn't meant to come out this year but there was a delay?

Yeah, Christmas would have seemed like the ideal time for a family- and group-friendly setup like this to land.
 
Do you have a source on this?
What source are you expecting for this? That smaller nodes produce less heat and need less cooling? Or my previous post, with the cost difference between 16 nm and 20 nm 16 months ago being pennies, and 16 nm having a wider production line that has grown since the Galaxy S6 was released? If none of that satisfies you, maybe you could be specific about what you want confirmed, because if it is just that an X1 produced at 16 nm could run at 1 GHz with passive cooling, I think we can just look to the Pixel C running the X1 at 850 MHz passively in a case half the thickness of the NS.
 
Supply may have ramped, but demand is also quite high for the FinFET nodes.

Who knows
When we are talking about a $0.13 USD price difference from when 16 nm had low yields and limited capacity, wouldn't it be safe to assume that the higher yields we enjoy now would have dropped the price below that of the dead-end 20 nm process (which is no longer maturing)?
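As a purely hypothetical illustration of what a normalized gap like that would mean per chip, the sketch below assumes the $0.13 figure is a gap per 100 million gates, assumes a 2-billion-transistor SoC, and uses a rough 4-transistors-per-gate conversion; none of those numbers come from the chart being discussed.

```python
# Purely hypothetical scaling of a normalized cost gap to a whole SoC.
# The unit of the quoted $0.13 figure is an assumption (treated here as a gap
# per 100 million gates), and the transistor count and gates-per-transistor
# conversion are placeholders, not actual Tegra or foundry data.

GAP_PER_100M_GATES_USD = 0.13   # assumed interpretation of the quoted figure
TRANSISTOR_COUNT = 2.0e9        # hypothetical SoC transistor budget
TRANSISTORS_PER_GATE = 4        # rough NAND-gate equivalence, also an assumption

gates = TRANSISTOR_COUNT / TRANSISTORS_PER_GATE
per_chip_gap_usd = (gates / 100e6) * GAP_PER_100M_GATES_USD

print(f"Approximate gate count: {gates / 1e6:.0f} million")
print(f"Per-chip cost gap under these assumptions: ${per_chip_gap_usd:.2f}")
```

Under those assumptions the gap lands in the "cents to a dollar or so" territory the thread is arguing about, which is why the yield and demand questions matter more than the headline figure.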
 
What source are you expecting for this? That smaller nodes produce less heat and need less cooling? Or my previous post, with the cost difference between 16 nm and 20 nm 16 months ago being pennies, and 16 nm having a wider production line that has grown since the Galaxy S6 was released? If none of that satisfies you, maybe you could be specific about what you want confirmed, because if it is just that an X1 produced at 16 nm could run at 1 GHz with passive cooling, I think we can just look to the Pixel C running the X1 at 850 MHz passively in a case half the thickness of the NS.

An industry source showing cost per transistor projections through to 10 nm - something like that. The most recent one I can find is from last year. It's the same one you posted in response to me actually, after I linked to it, and that's a projection through to 10 nm, which no one is using yet.

Cost (to a customer like Nintendo) is also linked to demand, and demand for 14/16 nm is going up. For Nintendo, with the Wii U, it was actually cheaper to use much older nodes that were no longer in so much demand.

I think Nintendo may well use 16 nm (which does not necessarily mean Parker), but none of what you've posted is proof that cost per transistor is now lower on 16 nm than 20 nm.
 
Excuse me if I start to sound like a broken record, but some of you are completely ignoring the fact that times have changed dramatically since the Wii U was created, performance- and market-wise.

I think it's pretty safe to say that planning for the Wii U started in or even before 2010. At that time the most advanced smartphone was probably the iPhone 3GS, still powered by a Samsung SoC. Smartphones were vanity items and kids usually had a feature phone or no phone at all; kids with smartphones were the exception. Smartphone gaming was still in its infancy. Smartphones had about 4% market share, and it was mostly Apple and Samsung. 2010 was also the year the iPad was announced and released. While it was revolutionary, it was not a gaming device.

Concrete planning for the Switch probably started around 2015, maybe a little before. At that time the iPhone 6 was already on the market and it had considerable GPU horsepower. Apple had started to roll its own SoCs, as had other companies. Intel had shipped its first generation of 14 nm CPUs. Some smartphones carried octa-core CPUs. Apple's smartphone market share had fallen, and lots of new players had appeared. Smartphone adoption had soared among adults and kids, and while kids may not run around with the latest hero phones, a good share of them inherit their parents' hero phone when they get a new one. Smartphone gaming is ubiquitous. Tablet adoption had soared, and you could buy a tablet that gave the PS3 and Xbox 360 a run for their money, and not just a tablet from Apple or Samsung.

Does anyone spot the difference here? Anyone? You guys really think that Nintendo was completely ignoring this?
 
When we are talking about a $0.13 USD price difference from when 16 nm had low yields and limited capacity, wouldn't it be safe to assume that the higher yields we enjoy now would have dropped the price below that of the dead-end 20 nm process (which is no longer maturing)?

That's a projection intended to allow comparison between nodes, some of which aren't actually in use yet (like 10 nm). 10 nm isn't a viable process yet.

Not sure those graphs take into account yield, which is chip dependent.

20 nm may still be maturing actually, some nodes continue to improve for years.
 
Well, can't we stick with "at least X1, at best X2" and wait, politely nodding at each other from time to time?

Can I predict a customised offshoot of X1, with Pascal and on 16 nm, please?

(Something that nVidia would have been able to deliver for Xmas this year).
 
The PS3 was originally supposed to not include an Nvidia GPU, so they had to produce the only thing available at the moment, while the Switch is supposed to have enjoyed a normal development time, plus the will to create a halo product to validate an architecture.
Do we know what Nintendo's plans were? What about talk of AMD landing three console wins? Maybe this move is a last-minute change too, hence the delay? Many unknowns! Enough that predicting hardware from this point isn't feasible. I think a minimum spec is fairly straightforward, and then we can guess at possible higher specs with different degrees of probability based on personal opinion. ;)
 