Nintendo Going Forward.

And then you mix in Nintendo and that removes all those items from the table and even the room the table is in.

Of course, we all know the Nintendo 'template', but this is a unique situation in which the standard 'Nintendo will build a weak console with 5 year-old tech' assumption does not hold up to all the factors involved.

It's foolish to think Nintendo is building a hot rod console; of course it's not. But it is highly reasonable that they may capitalize on leading tech that will be a major efficiency breakthrough and mainstream for the next design cycle and beyond, when it benefits them to do so.

NX has to sell the whole console-portable-mobile ecosystem strategy that Nintendo is devising; it won't be another Wii U-level, old-tech rehash dud.

I didn't see anything that made me think NX was 2017. If this were 10 years ago, maybe, but Nintendo have made clear that they are focusing on a smoother transition between gens. Besides Zelda, everything they showed was mostly 1st quarter/early second quarter stuff, unless I'm forgetting something. They don't have enough Wii U software in the pipeline for the machine to last another 2 years.

I saw Zelda U 2016, and Star Fox and Twilight Princess also for next year. More than enough to fill the year. In 2017, Wii U will be $199 and just be clearing out until NX hits in the fall. Besides, NX portable will be Fall '16 or Spring '17. Gots to space (and stretch) these things out...

I also see...precedent... N64 promised 1995...delayed 1996. GameCube promised 2000...delayed 2001. Both 5 year replacement cycles. Wii U...same story. Nintendo just delayed its mobile game announcement, after all, didn't it?

No it doesn't! How could it possibly confirm either of those two things?

lol, you want cold hard facts?! A little reason, a little inference... it's a reasonable assumption. Remember, S-E already said FFVII Remake was only 'first' on PS4.

And because selling FFVII Amiibo was just destined to be, lol.

Nintendo chose 45nm in a timeframe in which 28nm was bleeding edge but available anyway. In a design described as thermally driven...

Right as rain, but that was complicated by Wii backwards compatibility, and what difference would it have made if Wii U was a 20-watt instead of a 30-watt console? The efficiency (and form factor) savings weren't as big as they are going to be with 14/16nm and HBM. Wii U was only ever going to be a placeholder in a long-term strategy anyway.

Amiibo sales are probably compelling enough on their own.

Hmm... I'm going to agree with that.
 
lol, you want cold hard facts?! A little reason, a little inference... it's a reasonable assumption. Remember, S-E already said FFVII Remake was only 'first' on PS4.

And because selling FFVII Amiibo was just destined to be, lol.

You suggest: Cloud in Smash Bros = FFVII remake on NX = NX is PS4 comparable.
Sorry, but I just see Nintendo courting FF fans to sell them their product.
I don't go as far as predicting NX capabilities from that.

And FFVII remake "first on PS4" to me means Xbox and PC to follow, because they are the known "players" I can logically count in.
 
lol, you want cold hard facts?! A little reason, a little inference... it's a reasonable assumption.
As Grall says, not confirmation. It implies, or suggests, or indicates. "Confirmation" requires extremely solid evidence whose interpretation pretty much everyone can accept as supporting the theory.
 
Don't you guys remember how Nintendo confirmed MGS IV for the Wii by announcing Snake as a playable character in Smash Bros. Brawl back in 2006-ish? Oh right, that didn't confirm shit...

This post made me legit spit my coffee out all over my desk.

Thanks milk:LOL::mrgreen:
 
I saw Zelda U 2016, and Star Fox and Twilight Princess also for next year. More than enough to fill the year. In 2017, Wii U will be $199 and just be clearing out until NX hits in the fall. Besides, NX portable will be Fall '16 or Spring '17. Gots to space (and stretch) these things out...

I also see...precedent... N64 promised 1995...delayed 1996. GameCube promised 2000...delayed 2001. Both 5 year replacement cycles. Wii U...same story. Nintendo just delayed its mobile game announcement, after all, didn't it?

Well, that's all certainly true. I would never put it past Nintendo to delay something. Delaying NX and missing holiday 2016 would be disastrous for their bottom line, however. Remember, Iwata promised big profits for the fy. Could they miss their projected date? They sure could, but I believe they are planning a late 2016 launch as of now.
 
Even though Nintendo has some quality releases for Wii U next year, that's all they have for Wii U next year. Three big releases is a very thin lineup when you're not getting any third-party games other than a few Lego games and, of course, some indie games. I would have to think Nintendo has been gearing up for the NX launch late next year. Retro Studios has been missing in action since releasing Donkey Kong Country TF almost two years ago, so I'm certain they have transitioned to NX. The real question is what NX is looking to compete with. Is it looking to compete with the X1/PS4, or will it be the first next-gen console, kind of like how the Dreamcast was released well ahead of its competition? It would then get the superior multiplats for a couple of years, but still be powerful enough to get the multiplats when the next Xbox and PlayStation come out. Or maybe it ends up being more of a 3DS successor that happens to double as a console as well.
 
Wii U in 2017
I agree with you on the timeline. Nintendo's big-gun titles are once again missing Christmas; I could see NX launching early 2017 in Japan and worldwide a quarter or two later.

As for the tech, I would not bet on anything at this point based on their track record. The early showings of their mobile adventure are not reassuring material either. I would not be left in disbelief if they ship some 28nm hardware. As much as I want them to do it right, I would be surprised if they do.
 
I've been thinking about the NX hardware a little bit again, and I'd like to throw some of it out there, if that's cool. In particular, the use of finFETs, and whether that technology will be feasible for Nintendo to use, is something that has interested me.

Looking at the projected price per transistor of finFETs vs 28nm, it's quite clear that 28nm will still offer great savings next year. Simply put, if Nintendo value price and performance, 28nm is the way to go. They can afford more transistors for their money. However, I've come across several quotes from the Nintendo execs which might indicate that price/performance are not their priorities.
Iwata said:
Consumers will purchase high quality products even if they are expensive, or in other words, even if there are slightly reasonable discount offers, consumers will not purchase products unless they truly understand and are satisfied with the quality.
http://www.nintendo.co.jp/ir/en/library/events/150217qa/03.html
Miyamoto said:
When we have meetings to discuss these subjects with many people, however, the opinions and the possible conclusions of the meetings tend to move forward in the same direction. Specifically, they tend to revolve around “Which one of these possible technologies is of the highest performance?” and “Which one will eventually be the most affordable?” When younger people start talking in this fashion, people like me make a point of stressing the importance of the product having one very clear-cut unique point.
Nintendo are clearly not averse to using expensive technology if it is harmonious with that particular console's design philosophy.

Further, if Nintendo were to use 28nm and performance was in Xbone/PS4 ballpark, the console would only probably be slightly smaller than the competition. That's never been Nintendo's style. Even this past year, Miyamoto stressed how they like their consoles to be different and how they take into consideration where the console will be placed in the home. It seems more likely to me that Nintendo is planning another small low-TDP console. More quotes:
Takeda's Eulogy at Iwata's funeral said:
You succeeded in planting the seed in employees' hearts that, in order to solve an issue, there is a fundamental cycle whereby you make a hypothesis, execute the plan, see the result and then make adjustments, and by which you have caringly nurtured these seeds to sprout and mature into plants.
http://www.polygon.com/2015/7/17/8996339/satoru-iwata-eulogy-genyo-takeda-nintendo
Iwata believe in making necessary adjustments, not doing complete 180s.
Takeda said:
Of course, the issue of performance was not secondary. Anyone can realize “low performance with low power.” Others tend to aim for “high performance with high power.” With Wii however, Nintendo alone has pursued “high performance with low power consumption.”
http://iwataasks.nintendo.com/interviews/#/wii/wii_console/0/0
Of course, we can laugh about what he means by "high performance," (I would hate to see his idea of low). This philosophy did not change with Wii U. I imagine that today, "low performance" would be something like an Android TV device.

As console size and "being different" have long been priorities at Nintendo, I believe finFET will be used, not for extra horsepower, but for lower TDP. The implication of using finFET is that Nintendo can afford fewer transistors, so they will need a smaller chip to offer a price advantage. Something like 8 CUs at 1000 MHz could possibly allow them to undercut Xbone on price. This would make it something like 2x the Nvidia Tegra X1, which on finFET should be doable in a very small box.
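For what it's worth, the arithmetic behind those figures is easy to sanity-check (this assumes standard GCN-style CUs with 64 lanes each and 2 FLOPs per lane per clock for a fused multiply-add; the configurations are my speculation, not anything announced):

```python
# Theoretical GPU throughput for a GCN-style design:
# TFLOPS = CUs * lanes per CU * FLOPs per lane per clock * clock (GHz) / 1000
def gcn_tflops(cus, clock_ghz, lanes_per_cu=64, flops_per_lane=2):
    return cus * lanes_per_cu * flops_per_lane * clock_ghz / 1000.0

print(gcn_tflops(8, 1.0))    # hypothetical NX: 8 CUs @ 1 GHz -> 1.024 TFLOPS
print(gcn_tflops(12, 0.853)) # Xbone-like: 12 CUs @ 853 MHz -> ~1.31 TFLOPS
```

So the hypothetical 8 CU part lands just above the 1 TFLOP mark while staying well under the Xbone CU count.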

Another reason I'm thinking finFET is that I believe the NX SoC started as a semicustom "Project Skybridge" part. Skybridge was essentially AMD's way of getting into ARM APUs. Semicustom was specifically mentioned as one of the central markets for this technology, of which Nintendo would be an obvious customer. That was a 20nm project. AMD has since stated that formerly 20nm projects have been transitioned to finFET (they had to take something like a 33 million dollar write-off for it too). Even though Skybridge, with its x86/ARM pin compatibility, has been cancelled, AMD has stated it was because it made more sense to cater to each specific customer's needs.

Another technology that is frequently discussed in relation to NX is HBM. Like finFET, it is new and expensive, but it might be Nintendo's style to use it. Takeda has called small pools of efficient RAM a part of Nintendo's DNA. Yet I would hope that they have heard the developer protests that Xbone's 32 MB is too small for 1080p games (knowing Nintendo, this might be a leap on my part). SRAM in itself is not very efficient compared to Nintendo's beloved eDRAM, so perhaps it is time for a change. Nintendo have shown they are not averse to MCMs, and if they used HBM (1 or 2), they would have several different suppliers capable of integrating the technology.

Obviously, this technology comes at a cost, but it's possible they could realize savings elsewhere. With even one stack of HBM, they could skimp on main RAM, as they are prone to do. So how about this setup:

  • TSMC 16nm/GF 14nm (I anticipate AMD will want to source from both eventually)
  • 8-core A57 @ ~2 GHz (A57 because I'm guessing it started as semicustom Skybridge)
  • 1 TFLOP GCN2 GPU (8 CUs @ 1 GHz)
  • 2-4 GB of HBM2 @ 256 GB/s (A single 2 or 4-high stack. New next year but prices will decrease over time. Analysis shows that yields are quite good)
  • 4-6 GB DDR4 (another technology which will come down in price)

Looking around at what is available today, Nintendo could use four 8Gb or 12Gb DDR4 chips. That should eventually amount to savings over Xbone's 16 DDR3 chips and Sony's 8 GDDR5 chips. They'll never be able to go lower because of the necessary bus width per chip. If Nintendo use four DDR4 chips, even @ 3200 MHz and a narrow 64-bit bus, that's a usable 25.6 GB/s of bandwidth. With the HBM as a framebuffer, that's a similar configuration to many PC setups today which give the current-gen consoles a run for their money.
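A quick check on those bandwidth numbers, since peak bandwidth is just transfer rate times bus width in bytes (the 128-bit and HBM lines here are my own hypotheticals, not anything announced):

```python
# Peak memory bandwidth in GB/s:
# transfer rate (MT/s) * bus width (bytes per transfer) / 1000
def peak_bw_gbs(mt_per_s, bus_bits):
    return mt_per_s * (bus_bits // 8) / 1000.0

print(peak_bw_gbs(3200, 64))    # DDR4-3200 on a 64-bit bus  -> 25.6 GB/s
print(peak_bw_gbs(3200, 128))   # same chips, 128-bit bus    -> 51.2 GB/s
print(peak_bw_gbs(2000, 1024))  # one HBM stack, 1024-bit @ 2 Gbps/pin -> 256.0 GB/s
```

Which is where the 25.6 GB/s DDR4 figure and the 256 GB/s HBM2 figure in the spec list come from.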

That's basically where I'm at.

TL;DR Nintendo could use state-of-the-art technology in the NX console next year, but they could still end up with performance that will disappoint many.
 
I don't think your post is unreasonable. The one portion I disagree with is the use of HBM. I don't think Nintendo will go with split pools of off-chip RAM. I think it will be LPDDR4 and on-chip embedded RAM.

I don't think finFETs are out of the question. Next year, when most of us expect NX, the market will be full of finFET products, from GPUs to virtually all high-tier phones. I don't think it's that unreasonable for Nintendo to provide a console chip that's in the ~125-150mm² range. Yields should be pretty decent in late 2016.
 
Looking at the projected price per transistor of finFETs vs 28nm, it's quite clear that 28nm will still offer great savings next year. Simply put, if Nintendo value price and performance, 28nm is the way to go. They can afford more transistors for their money. However, I've come across several quotes from the Nintendo execs which might indicate that price/performance are not their priorities.
http://www.nintendo.co.jp/ir/en/library/events/150217qa/03.html
They also speak about "Nintendo-like profit", which might relate to the Wii / DS Lite era.
Looking at Apple SoCs, they try really hard to stay around 100mm² using bleeding-edge lithography. They only dare to push further for the X versions of their SoCs. If Nintendo were to use 14/16nm lithography, I would expect them to go with a tiny chip, below 100mm², more an Exynos 7420 than an A9. We will see how things go when the first GPUs on those lithographies are released, but it would not surprise me if manufacturers go with relatively tiny chips for a while.
Back to Nintendo: a tiny chip means constraints on the memory controller, bus size and thus bandwidth, which in turn could force the use of HBM and the associated expenses. Overall I think it is not worth it; there are IPs that lessen the need for high-bandwidth solutions. Delivering a good package does not come down only to process or fancy memory; you need to go with the good IPs (Apple A7 and Kabini are the same size, same process and same era, for example). I would prefer they go with ~130mm² on the highest-performance 28nm process, a 128-bit bus, 3GB of fast DDR4, and a reasonable power budget (35-45 watts), along with the right IPs (aka mobile IPs, which are really modular), over a tiny slab of 14/16nm silicon, the overhead of HBM, etc. Ultimately something has to give.
I think it is interesting that Tomb Raider got released a couple of days ago on both the One and the 360, because looking at the difference in power, it shows that a middle-of-the-road console could definitely run ports of all of today's games (the PS360 can, but the number of tradeoffs is rapidly growing).
 
They also speak about "Nintendo-like profit", which might relate to the Wii / DS Lite era.
Looking at Apple SoCs, they try really hard to stay around 100mm² using bleeding-edge lithography. They only dare to push further for the X versions of their SoCs. If Nintendo were to use 14/16nm lithography, I would expect them to go with a tiny chip, below 100mm², more an Exynos 7420 than an A9. We will see how things go when the first GPUs on those lithographies are released, but it would not surprise me if manufacturers go with relatively tiny chips for a while.
This is a good point that I hadn't really thought about. Still, if we are only talking about the console, I think they will probably stay in line with their Wii U silicon budget. Wuu (ha) had a GPU around 150mm² and a CPU close to 30mm². 180mm² is actually about half the size of the SoC in the current-gen consoles, so that seems like a fair estimate to me.
Back to Nintendo: a tiny chip means constraints on the memory controller, bus size and thus bandwidth, which in turn could force the use of HBM and the associated expenses. Overall I think it is not worth it; there are IPs that lessen the need for high-bandwidth solutions. Delivering a good package does not come down only to process or fancy memory; you need to go with the good IPs (Apple A7 and Kabini are the same size, same process and same era, for example). I would prefer they go with ~130mm² on the highest-performance 28nm process, a 128-bit bus, 3GB of fast DDR4, and a reasonable power budget (35-45 watts), along with the right IPs (aka mobile IPs, which are really modular), over a tiny slab of 14/16nm silicon, the overhead of HBM, etc. Ultimately something has to give.
Not a fan of AMD IP I take it? I have no idea how much space and HBM controller would take up on the die, but true, that's also something that needs to be considered.

Anyway, I don't think 130mm2 on 28nm would get them much, but you are talking mobile architectures, which I'm not as versed in. Personal wishes aside, it seems like AMD is pretty much a lock for the home console, but I could very well see them using PowerVR in the handheld.
I think it is interesting that Tomb Raider got released a couple of days ago on both the One and the 360, because looking at the difference in power, it shows that a middle-of-the-road console could definitely run ports of all of today's games (the PS360 can, but the number of tradeoffs is rapidly growing).
Yes, we've definitely seen an increase in scalability this generation, but the old gen is definitely showing its age. A 1 TFLOP GPU is, I think, the baseline if they are going to support deferred rendering w/ UE4.
I don't think your post is unreasonable. The one portion I disagree with is the use of HBM. I don't think Nintendo will go with split pools of off-chip RAM. I think it will be LPDDR4 and on-chip embedded RAM.

I don't think finFETs are out of the question. Next year, when most of us expect NX, the market will be full of finFET products, from GPUs to virtually all high-tier phones. I don't think it's that unreasonable for Nintendo to provide a console chip that's in the ~125-150mm² range. Yields should be pretty decent in late 2016.
It's true that if Nintendo are making a sub-Xbone level console, HBM might not be worth it. Embedded SRAM would add to their die area, but they wouldn't need HBM w/ its MCM in that scenario. It's hard to say without knowing the cost breakdown for each part. I will say that if they go the embedded route, I would hope they use a 128-bit bus to the main (DDR4) memory, as that pool will then become more vital in rendering.

I think those are the two most likely options, though. Either 1 stack of HBM or 32-64 MB of eSRAM.
 
They could maybe get around the SRAM-amount issue if they made it true cache.

The delta compression in GCN1.2 can help alleviate some of the bandwidth concerns too.
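For the curious, the core delta idea can be sketched in a few lines. This is a toy illustration, not AMD's actual GCN 1.2 algorithm: store an anchor value per tile plus per-pixel differences, so flat regions turn into runs of tiny deltas that pack into far fewer bits than the raw values.

```python
# Toy delta compression over a tile of pixel values: keep the first value
# as an anchor and store only the differences between neighbors.
def delta_encode(tile):
    return [tile[0]] + [b - a for a, b in zip(tile, tile[1:])]

def delta_decode(encoded):
    out = [encoded[0]]
    for d in encoded[1:]:
        out.append(out[-1] + d)  # rebuild by accumulating deltas
    return out

tile = [200, 200, 201, 201, 202, 202]   # a nearly flat run of pixels
enc = delta_encode(tile)
print(enc)  # [200, 0, 1, 0, 1, 0] -- small deltas, cheap to pack
assert delta_decode(enc) == tile
```

The hardware version works on 2D tiles and falls back to uncompressed storage when deltas are large, but the bandwidth saving comes from the same observation.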
 
This is a good point that I hadn't really thought about. Still, if we are only talking about the console, I think they will probably stay in line with their Wii U silicon budget. Wuu (ha) had a GPU around 150mm² and a CPU close to 30mm². 180mm² is actually about half the size of the SoC in the current-gen consoles, so that seems like a fair estimate to me.
The Wii U was the biggest Nintendo system ever as far as silicon area is concerned. As a counterbalance, it is also their only system that went with a pretty outdated process. Now, 130mm² might be greedy using a now widely available process; let's say somewhere in between an A6X and an A5X (or Cape Verde-Bonaire, speaking of AMD GPUs).

Not a fan of AMD IP I take it? I have no idea how much space and HBM controller would take up on the die, but true, that's also something that needs to be considered.
Actually, I have always leaned slightly toward the challenging side, but ultimately I'm rational: AMD IPs are not cutting it at the moment. I absolutely get that they come at a discount compared to Nvidia or Intel ones, but I wonder to what extent they are competitive with mobile IPs within the budget and performance I can see them aiming for.
Anyway, I don't think 130mm2 on 28nm would get them much, but you are talking mobile architectures, which I'm not as versed in. Personal wishes aside, it seems like AMD is pretty much a lock for the home console, but I could very well see them using PowerVR in the handheld.
Yes, we've definitely seen an increase in scaleability this generation, but the old gen is definitely showing its age. A 1 TFLOP GPU is I think baseline if they are going to support deferred rendering w/ UE4.
Unreal Engine 4 runs on mobile; they have a thing for making huge statements, as they benefit from the sustained upgrade cycle in GPUs, etc. But TFLOPS are not even close to being an enabling factor for ports or engine support. Money comes first, then I would put forward CPU and RAM; graphics are hugely scalable. I would again point to the difference in muscle between the XB1 and the 360 in the last AAA Tomb Raider game. DF speaks of slowdown in some huge open areas or hubs in the game. What is the issue here? RAM? CPU power? GPU power?
I would bet that doubling Xenon's L2 and the RAM would have gone a longer way than spending the same dollars on Xenos and its daughter die.

Actually, I think ~130mm² is going to buy them quite a bit. Kabini is 104mm², IIRC; the Apple A7 is mostly the same, though the latter is much better. Give it anywhere close to Kabini's TDP and it will make a showing. Neither of those two SoCs uses an updated version of the IPs it relies on. The bandwidth available through standard RAM made a big jump thanks to DDR4. And neither of those two SoCs was designed as a gaming, budget-constrained chip.
The A7, as a SoC powering mobile smart devices, includes things that may not be needed in an APU. The A7 and Kabini emphasize single-thread performance; other choices can be made in order to make room for more GPU, for example. A cluster of Jaguar/Puma and one of A72, both with 2MB of L2, should be really close in size, but there could be other ways (than SMP) to provide fitting CPU resources, through asymmetric multiprocessing akin to ARM's big.LITTLE approach or the STI Broadband Engine/Cell. Not as easy, but at some point, if you want to be affordable and performant while securing some margins, something has to give.

It's true that if Nintendo are making a sub-Xbone level console, HBM might not be worth it. Embedded SRAM would add to their die area, but they wouldn't need HBM w/ its MCM in that scenario. It's hard to say without knowing the cost breakdown for each part. I will say that if they go the embedded route, I would hope they use a 128-bit bus to the main (DDR4) memory, as that pool will then become more vital in rendering.
I hope they pass on both HBM and scratchpad memory, whatever level of performance they are pursuing. It is a crazy expense when Nvidia has its mid-range GPUs getting the most out of a 128-bit bus and fast GDDR5. Nvidia's low-to-mid-range offering, the GTX 750 Ti, often beats the PS4 and its big, wide memory bus.

I think those are the two most likely options, though. Either 1 stack of HBM or 32-64 MB of eSRAM.
I wish for a 128-bit bus to a reasonable amount of either DDR4 or GDDR5.
Overall, I suspect Nvidia is too expensive, so the only partners up to the challenge Nintendo is facing are either ARM on its own (Cortex CPUs and Mali GPUs) or ARM and PowerVR. I said it already: I would favor an all-ARM design for convenience.
 
That would definitely be an interesting setup. That would entail a decent amount of extra logic, though, wouldn't it? Even if it wasn't cache, I'd imagine they'd need something like the "Move Engines" that Xbone has. Are they basically DMA engines? I'm still a bit shaky as to how DMA actually works...

I'll have to read up on delta compression too.
 
"Move Engines"
That's just a fancy marketing buzzword for a DMA channel, which is old hat in graphics accelerators. :p True, MS bolted some extra functionality onto some of these DMA channels, but it wasn't anything groundbreakingly revolutionary. Case in point: these "move engines" don't allow a sub-PS4 console to perform on PS4's level. :)

DMA channels are basically a set of registers that let you define areas of memory you want to copy from one place to another, or perform some other basic processing on. For example, the Commodore Amiga had a set of three input DMA channels attached to its bit blitter that could read multiple sets of data and perform Boolean logic on them, barrel shift, mask, and maybe some other tricks I can't recall anymore before writing the result out to memory again through a fourth DMA channel. Other graphics accelerator blitters undoubtedly work in much the same way (and have way more functionality than the old Agnus chip blitter; possibly able to do scaling, rotation, even texturing or shading in some cases, or many other things. Full Windows GDI graphics accelerators were probably pretty well-featured back in the day.)
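To make that concrete, here's a toy software model of a blitter-style DMA operation: read from two source regions, combine with a Boolean op, and write the result to a destination region, all without the "CPU" touching the data. (The real Agnus blitter had three input channels and fully programmable minterms, so this is a simplification.)

```python
# Toy blitter-style DMA: copy/combine regions of a flat memory array.
# src_a, src_b, dst are start addresses; op is the Boolean combine function.
def blit(mem, src_a, src_b, dst, length, op=lambda a, b: a | b):
    for i in range(length):
        mem[dst + i] = op(mem[src_a + i], mem[src_b + i]) & 0xFF

mem = [0] * 16
mem[0:4] = [0b1100] * 4   # source A, e.g. sprite data
mem[4:8] = [0b0011] * 4   # source B, e.g. background

blit(mem, src_a=0, src_b=4, dst=8, length=4)  # OR-combine A and B into dst
print(mem[8:12])  # [15, 15, 15, 15] -- i.e. 0b1111, sprite merged over background
```

Swapping the `op` for AND-with-inverted-mask plus OR is essentially how masked sprite blits were done.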
 
Thanks for the explanation, Grall. And this amounts to some CPU cycles being freed up as well, correct?

liolio, I need a bit more time to digest your post before responding. :D
 