Nintendo announce: Nintendo NX

Status
Not open for further replies.
Wiizard
HBM2 isn't much of a worry either since the size of the interposer would be small, you could get away with using only a single stack (256GB/s, max 8GB), and could avoid driving any off-interposer DRAM at all.

hm... actually, I have to wonder if it'd be better use of the interposer area to have two stacks e.g. take the Fiji turtle layout and chop the sorry ninja in half from head to butt so you have an arm and a leg on one side of the chip.

[Image: UJYioIc.jpg]


----

I suppose HBM is designed for 1024-bit width per stack, but they can also downclock significantly according to what the APU demands.

-----

2017 would probably be inexpensive enough for the likes of Nintendo.
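The per-stack bandwidth figures being discussed can be sanity-checked with a quick sketch. The clock values below are my illustrative assumptions for HBM1/HBM2-class parts, not numbers from the thread; the 1024-bit bus width and double-data-rate signaling are per the HBM design mentioned above.

```python
# Sketch of HBM per-stack peak bandwidth: a 1024-bit interface moving
# data on both clock edges (DDR). At an assumed 500 MHz this gives the
# ~128 GB/s per HBM1 stack; an assumed 1000 MHz HBM2-class clock
# doubles that to ~256 GB/s, matching the single-stack figure above.
def hbm_stack_bandwidth_gbs(clock_mhz, bus_bits=1024):
    """Peak bandwidth of one HBM stack in GB/s (DDR: 2 transfers per clock)."""
    transfers_per_s = clock_mhz * 1e6 * 2
    return transfers_per_s * bus_bits / 8 / 1e9

print(hbm_stack_bandwidth_gbs(500))   # assumed HBM1-class clock -> 128.0
print(hbm_stack_bandwidth_gbs(1000))  # assumed HBM2-class clock -> 256.0
print(hbm_stack_bandwidth_gbs(250))   # a downclocked stack, as discussed
```

This also shows why downclocking is cheap to reason about: bandwidth scales linearly with the interface clock.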
 
Wiizard


hm... actually, I have to wonder if it'd be better use of the interposer area to have two stacks e.g. take the Fiji turtle layout and chop the sorry ninja in half from head to butt so you have an arm and a leg on one side of the chip.

[Image: UJYioIc.jpg]


----

I suppose HBM is designed for 1024-bit width per stack, but they can also downclock significantly according to what the APU demands.

-----

2017 would probably be inexpensive enough for the likes of Nintendo.
I suppose you could. The benefit of using a single stack (a quartered turtle :D) is that you only need half the number of interconnects, and half the SoC die area devoted to driving this I/O.
Lower clocks would bring benefits in terms of parts cost and power draw, which applies to single-stack designs as well. But since low interface clocks are already an inherent advantage of HBM over GDDR5, the additional power savings would be relatively modest. But still.

Over the lifetime of a console introduced in 2017, it's not obvious exactly how relative costs will change over time.
 
HBM2 isn't much of a worry either since the size of the interposer would be small, you could get away with using only a single stack (256GB/s, max 8GB), and could avoid driving any off-interposer DRAM at all.
HBM2 only doubles HBM1's capacity per stack. You'd need four stacks to get 8 gigabytes.
 
Gb is gigabit. GB = Gb / 8.
I can assure you I'm aware of that.
SK Hynix has a number of publicly available DRAM product pages (they make DRAM with different interfacing, obviously) that cite a number of products. Their DDR4 might be the highest density; they don't list higher than 4Gb for GDDR5, for instance, although those product sheets have been around for a while. Seriously, 8Gb DRAM dies are not going to be a problem, certainly not in the time frames we are talking about.
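To make the Gb-vs-GB and stack-count arithmetic concrete, here's a sketch. The die densities and stack heights are illustrative assumptions I'm plugging in, not figures quoted from any datasheet; the point is simply that per-stack capacity is dies-per-stack times die density, divided by eight to convert gigabits to gigabytes.

```python
# Back-of-envelope HBM stack capacity. Inputs are assumptions for
# illustration; GB = Gb / 8 as noted above.
def stack_capacity_gb(dies_per_stack, die_density_gbit):
    """Capacity of one HBM stack in gigabytes."""
    return dies_per_stack * die_density_gbit / 8

print(stack_capacity_gb(4, 2))  # 4-high stack of 2Gb dies -> 1.0 GB
print(stack_capacity_gb(8, 8))  # 8-high stack of 8Gb dies -> 8.0 GB
```

Under these assumptions, whether a single stack can reach 8GB comes down to whether 8Gb dies in 8-high stacks are available, which is exactly the disagreement above.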

Of course, the whole idea of NX using HBM hinges both on the NX not being a handheld device, and on AMD being contracted to supply the APU.
It is remarkable that cash strapped AMD is instrumental in bringing HBM to a production stage. They may well be the only ones using "HBM1", and maybe only for a single product. It implies that they have much farther ranging plans for HBM than just high-end discrete GPUs. It is great technology for mobile GPUs and APUs for instance, in these cases with just one or two stacks. This relates to a stationary NX in that given the design costs for upcoming nodes, it just makes financial sense to apply whatever design work you need to do as widely as possible. I'd expect Nintendo to piggy-back on designs AMD is doing under any circumstances, with some relatively small targeted modifications. Likewise, I assume AMD will charge them well for that in order to cofinance work they would have to do anyway. Both companies win. Cost is minimized. Doing a more custom design on a more optimized 28nm process is possible of course, but it's doubtful that it would be cheaper in the long run, and doesn't yield much in terms of synergy for where AMD is heading in that time frame and forward.
Anything is possible of course, but for a product that is designed by AMD for shipment to consumers in 2017, 14/16nm FF + Single stack of HBM is at least a decent candidate.
 
That demo looks awful for a Mario game.

I like that Nintendo did not use the Twilight Princess style for Zelda U and made something new and good. The Skyward Sword art style was good too.
 
It's not about art style. I'm pretty sure those are sample assets that come with Unreal.

It's more about the lighting and rendering quality. Imagine that but with Nintendo's art style.
 
Mario does not need chromatic aberration.
The only thing Nintendo titles need is 1080p with good AA and AF.

Nintendo cannot produce Uncharted-level assets anyway. The Zelda U trees looked awful and copy-pasted.
 
Mario does not need chromatic aberration.
The only thing Nintendo titles need is 1080p with good AA and AF.

Nintendo cannot produce Uncharted-level assets anyway. The Zelda U trees looked awful and copy-pasted.

Who mentioned chromatic aberration or Uncharted? As McHuj said, this is about the quality of the graphics, not the art style. This level of graphics plus Nintendo's art style would be like playing a Pixar movie starring Mario. That alone would be a major system seller and would probably propel Nintendo back into competitiveness.
 
Who mentioned chromatic aberration or Uncharted? As McHuj said, this is about the quality of the graphics, not the art style. This level of graphics plus Nintendo's art style would be like playing a Pixar movie starring Mario. That alone would be a major system seller and would probably propel Nintendo back into competitiveness.
I'm just saying that demo video is awful. It has nothing to do with Mario. You could just show Unreal samples and say "what if the NX could produce these?"

It's better to look at the new Ratchet to imagine a new Mario.
 
Wiizard


hm... actually, I have to wonder if it'd be better use of the interposer area to have two stacks e.g. take the Fiji turtle layout and chop the sorry ninja in half from head to butt so you have an arm and a leg on one side of the chip.

[Image: UJYioIc.jpg]


----

I suppose HBM is designed for 1024-bit width per stack, but they can also downclock significantly according to what the APU demands.

-----

2017 would probably be inexpensive enough for the likes of Nintendo.

Can they put more than four stacked chips around the GPU? And what about using all four sides instead of only two?
 