Nintendo announce: Nintendo NX

The sad thing is we act like the PS4 is some high bar to clear. These consoles have us tricked. A 2.6-teraflop GPU is definitely low end on PC; 4 teraflops barely gets you midrange status anymore. The R9 290 is supposed to have a $250 MSRP (although street prices seem a bit higher) and is about 5 teraflops (40 CUs @ ~1 GHz).
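A quick back-of-envelope check of that figure, using the usual GCN throughput arithmetic (64 shaders per CU, 2 FLOPs per shader per clock); the CU counts and clocks below are the rough numbers quoted above, not exact retail specs:

```python
# Rough GCN compute throughput: CUs * 64 shaders * 2 FLOPs per clock * clock (GHz)
def gcn_tflops(cus, clock_ghz):
    return cus * 64 * 2 * clock_ghz / 1000.0

print(gcn_tflops(40, 1.0))  # R9 290-class part: ~5.1 TFLOPS
print(gcn_tflops(18, 0.8))  # PS4 GPU (18 CUs @ 800 MHz): ~1.84 TFLOPS
```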

I know, huh, but relative power is what Nintendo will probably go for. The days of SNES and N64 levels of power are likely gone. Two years after the XB1 and PS4 launched, the NeXus had better be at least on that level, unless Ninty wants a Wii U repeat.
 
I'd read on another forum that GF was dropping PPC and that Nintendo might have to find a new manufacturer if they want to update the current design (add more cores, etc.). I wasn't sure how true this is.
IBM would be contracted for any further refinements to the design. Global Foundries would just be the manufacturing partner.
 
The sad thing is we act like the PS4 is some high bar to clear. These consoles have us tricked. A 2.6-teraflop GPU is definitely low end on PC; 4 teraflops barely gets you midrange status anymore. The R9 290 is supposed to have a $250 MSRP (although street prices seem a bit higher) and is about 5 teraflops (40 CUs @ ~1 GHz).
I disagree; low-end is lower than what the PS4 or the XB1 push on a Full HD screen. There are people trying to play with APUs and/or sub-$100 GPUs, and more often than not the target display is not Full HD; that is low-end. In the PC realm there is a high end, and lots of cards that are not that expensive do great at driving a Full HD display or more. There is something above that, but I would not call it high-end so much as luxury, extreme geekiness with an overspending tendency, or wealth way above average.
The truth is the industry (this applies to TVs too) is producing too many great products, and companies are searching hard for incentives to get their customers to upgrade their equipment, be it displays or GPUs (or TVs). Increasing screen resolution (and so overall hardware requirements) seems to be the universal answer, and for a reason: it is the easiest path. We are moving to 4K, whereas it seems to me that the path forward, looking at the performance of our optical system (eyes and brain), would have been to further increase the screen aspect ratio and go with wider screens. 21:9 was tried recently and it looked neat.
The sad matter of fact is that for such formats to gain traction, TV channels would have to renew a lot of their material (people dislike black bars), video cameras, etc. => 4K is easy, and ultimately the crappy broadcast quality hardly changes (more often than not it is pretty bad).
For games it is sort of the same: the assets already cost a fortune, and displaying them at a slightly higher resolution is an easy solution, even if in real-world usage such an increase really hits diminishing returns. For display and GPU manufacturers it is easier (and more profitable) to sell more than one display, plus an ad-hoc number of GPUs to drive those screens, to customers who treat consumer electronics as luxury items. Widening the screen would have a stronger effect on the whole user base and create more incentive to upgrade. Thing is, you need volume to get adoption, so you need TVs; with TVs come TV channels, optical media players, etc.

Sorry for the OT, but it seems to me that both the PS4 and the XB1 are great improvements. The amount of RAM those devices got is impressive, especially since prices went up and the amount of RAM went down in low-end desktops and laptops (non-gaming), and performance is great compared to the power dissipated (it was at release, anyway). I do not get why some constantly downplay what they achieve; it is not like better is so easily doable at a reasonable price or TDP.
 
Yep. A 750 Ti is comparable to consoles, and definitely not 'low end' (priced about $150 instead of <$40 cards). I don't think 2.6 TF is possible for less than $200 retail. Factoring in form factor etc., I think Rangers is being very unrealistic.
 
I disagree; low-end is lower than what the PS4 or the XB1 push on a Full HD screen.

There is no fixed definition of "low end" and "high end", so it's a bit of a pointless debate. If we talk in terms of the latest generation of discrete GPUs only, then certainly the low end is at least in the same ballpark as the consoles, but when you add APUs and older generations of GPUs into the mix, the consoles quickly exceed the low end. It all depends on how you define "low end", and that comes from what market you're referring to. Given the NX would likely use an APU, you could argue that not even "bleeding edge" in the PC APU space can match the XBO at this point. Although if they were to use a 14nm custom APU from AMD with HBM2 support, they could no doubt get something that lays some serious smack down on the PS4 while also drawing less power and costing a similar amount.
 
There is no fixed definition of "low end" and "high end", so it's a bit of a pointless debate. If we talk in terms of the latest generation of discrete GPUs only, then certainly the low end is at least in the same ballpark as the consoles, but when you add APUs and older generations of GPUs into the mix, the consoles quickly exceed the low end. It all depends on how you define "low end", and that comes from what market you're referring to.
Sure, there is no fixed definition, but you have price ranges, and when it comes to consoles we should consider the mobile versions of GPUs and CPUs alike. My Alienware Alpha uses mobile/low-power/laptop parts; you definitely do not get the same performance per mm2 (hence per dollar) once power consumption is thrown into the mix.
Given the NX would likely use an APU, you could argue that not even "bleeding edge" in the PC APU space can match the XBO at this point. Although if they were to use a 14nm custom APU from AMD with HBM2 support, they could no doubt get something that lays some serious smack down on the PS4 while also drawing less power and costing a similar amount.
Well, HBM will provide a hell of a lot of bandwidth, way more than you need or than the GPU will be able to consume, so I would not bet on a beastly improvement. It will save power, though. As an aside, I dislike the HBM philosophy: power is the leading concern nowadays, and to save power your primary concern is to move as little data as possible off chip. The best way to do that is cache, first external then internal. GPUs and APUs need a fast L3 or L4; Intel's choice with Crystal Well is sound. There are a lot of things to do before moving to HBM: better compression techniques (on chip and off chip), better texture compression (patented, though...), tiling architectures. Then you want to save power in the memory itself (DDR4 should help).
I've read a little about Mali/Midgard, and while I don't get all of it, it left me with the sentiment that ARM's choices are the right ones and will pay off more and more. My personal bet is that they will be the leader in GPU performance in the years to come.
HBM significantly lowers the power used by memory, but that seems to be a side effect more than the desired feature of the tech; my understanding is that the technology was developed following ideas and paradigms that date from before our collective collision with the "power wall".
 
So if we are to assume that the price Nintendo decides to shoot for is sub-$249, and potentially as low as $149, what type of hardware components would we be looking at? We pretty much know Nintendo will be going with an APU from AMD, but does AMD have anything low-cost in this category?
It seems that the NX will probably lose backwards compatibility with the Wii U (whether it ends up x86 or ARM-based)?
 
It seems that the NX will probably lose backwards compatibility with the Wii U (whether it ends up x86 or ARM-based)?
Yep, I missed that, but somebody re-posted their latest statement (in the next-gen thread), and for what it's worth they said they are starting from zero.
 
But they cannot do this. They cannot make a 2x PS4 at $299 in 2016-17.
Why?
Assuming a GCN similar to today's, AMD should definitely be able to cram a Tahiti amount of shader cores plus their little x86 cores into under 200mm2 on Samsung's 14nm process, either LPP or the upcoming, more cost-optimized version. Let's assume $7000/wafer (it will likely have dropped by then, both due to maturity and because their biggest early process adopter(s) will be looking to move on to 10nm). At 200mm2 of die area that's roughly 300 dies per wafer. Assume worse-than-real-life yields, with only 2/3 of the dies OK => ~$35/die; add costs for test and packaging et cetera, and let's ballpark the cost per useful APU at $50 (this excludes development and licensing costs). The next major item will be memory. By then, 8Gb GDDR5 will have been in production for two or three years at Micron, Samsung and Hynix; it would be surprising if that cost more than (ballpark again, of course) $50 (maybe a single stack of HBM could be competitive overall). Ditch the optical drive (distribute on Flash), ditch the hard drive (use an inexpensive Flash drive, expand via USB3 if the customer needs it) and add $40 or less for 128GB or possibly 256GB; again, distribution on Flash reduces the need for built-in storage, and this sucker needs to be cheap! Add networking and other small bits and bobs, power supply, cooling, controller and packaging, and we should be at roughly $200 in total. Add retailer margins, and whatever I may have forgotten, and sell to the consumer for $249. Tadaa! Side benefit: no moving parts except for the fan.
So, yes, I definitely believe Nintendo could launch a 4TF console with a bill of materials well below $300. Development costs have to be accounted for somewhere, and won't be trivial, but that's normal.
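For what it's worth, a minimal sketch of that back-of-envelope math; the wafer price, die size, yield and per-component costs are the assumptions stated above, not confirmed figures:

```python
# Rough BOM estimate following the assumptions in the post above.
wafer_cost = 7000.0       # assumed $/wafer on Samsung 14nm
dies_per_wafer = 300      # ~200mm2 dies
yield_rate = 2 / 3        # pessimistic yield assumption
die_cost = wafer_cost / (dies_per_wafer * yield_rate)   # ~$35
apu_cost = 50.0           # die + test + packaging, ballpark

memory_cost = 50.0        # 8GB of GDDR5, ballpark
flash_cost = 40.0         # 128-256GB of Flash instead of HDD/optical
other_cost = 60.0         # networking, PSU, cooling, controller, packaging (remainder to ~$200)

bom = apu_cost + memory_cost + flash_cost + other_cost
print(f"die cost ~${die_cost:.0f}, estimated BOM ~${bom:.0f}")   # ~$35, ~$200
```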

I don't believe they will do this, much to the chagrin of most of us here. But that's not because they couldn't; the core of this device simply requires them to talk to their old partner AMD, say "We want these specs, at this point in time", and then haggle for a bit.
 
My Alienware Alpha uses mobile/low-power/laptop parts; you definitely do not get the same performance per watt once power consumption is thrown into the mix.

Do we know what the TDP of the consoles is? I'd assume either console would have a very hard time competing in overall power draw with the likes of a 750 Ti paired with a low-power Intel CPU such as the 4590T (total TDP 95W at roughly equal performance), obviously if coupled with other low-power components as well. And if you ignore total power draw and just look at performance/watt, then something like a 5775C + 980 (total TDP ~225W at ~3x the performance) comes out even further ahead.
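For reference, the rough sums behind those totals, using the commonly cited vendor TDPs (actual draw under gaming load will differ):

```python
# Nominal vendor TDPs in watts; real-world power draw under load differs.
gtx_750_ti, i5_4590t = 60, 35
gtx_980, i7_5775c = 165, 65

print(gtx_750_ti + i5_4590t)   # ~95W, the first pairing mentioned above
print(gtx_980 + i7_5775c)      # ~230W, close to the ~225W quoted for the second pairing
```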

Well, HBM will provide a hell of a lot of bandwidth, way more than you need or than the GPU will be able to consume, so I would not bet on a beastly improvement.

If it were using 14nm FF then they could probably go with more CUs plus a higher clock speed and still end up with a smaller die and less power draw. 22 CUs at 1GHz on GCN 1.2, coupled to a 2-stack HBM2 setup, would offer up to 50% more core compute/texture performance than the PS4 and over 4x the effective bandwidth (a single stack would obviously be better here but would limit memory capacity), as well as the other minor improvements GCN 1.2 brings to the table. Such an APU would surely be within the same cost and power envelope 9 months or so from now as the PS4 APU was when it launched. Obviously, if they include 4 Zen cores in there instead of the 8 Jaguars, the overall size and power draw would go up (but not necessarily higher than the PS4's), but CPU performance would likely be much higher, perhaps by as much as double.
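A rough check on that comparison with the same CUs-times-clock arithmetic as earlier (the 22 CU / 1 GHz part is hypothetical; the raw bandwidth ratio comes out below 4x, so the "effective" figure presumably also counts GCN 1.2's delta colour compression):

```python
def gcn_tflops(cus, clock_ghz):
    # CUs * 64 shaders * 2 FLOPs per clock
    return cus * 64 * 2 * clock_ghz / 1000.0

ps4_tf = gcn_tflops(18, 0.8)   # ~1.84 TFLOPS
nx_tf = gcn_tflops(22, 1.0)    # ~2.82 TFLOPS for the hypothetical 22 CU part
print(nx_tf / ps4_tf)          # ~1.53x, i.e. the "up to 50% more" compute

ps4_bw = 176                   # GB/s, PS4's GDDR5
hbm2_bw = 2 * 256              # GB/s, two HBM2 stacks at full speed
print(hbm2_bw / ps4_bw)        # ~2.9x raw bandwidth before any compression gains
```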
 
Nintendo should hire MS to write them a Wii U emulator for their next system. They can't be lumbered with a 20-year-old CPU design.
Microsoft didn't create an Xbox 360 emulator. They get access to, or work with, the publishers to recompile the games' executable code from source.
 
If it were using 14nm FF then they could probably go with more CUs plus a higher clock speed and still end up with a smaller die and less power draw. 22 CUs at 1GHz on GCN 1.2, coupled to a 2-stack HBM2 setup, would offer up to 50% more core compute/texture performance than the PS4 and over 4x the effective bandwidth (a single stack would obviously be better here but would limit memory capacity), as well as the other minor improvements GCN 1.2 brings to the table. Such an APU would surely be within the same cost and power envelope 9 months or so from now as the PS4 APU was when it launched. Obviously, if they include 4 Zen cores in there instead of the 8 Jaguars, the overall size and power draw would go up (but not necessarily higher than the PS4's), but CPU performance would likely be much higher, perhaps by as much as double.
Either AMD makes significant strides with a hypothetical GCN 2, or I see no easy fix for some of the scaling and power-efficiency issues GCN has.
I agree that you can make a better APU @14nm, but I suspect using HBM (1 or 2) will add a significant premium to the device, in turn affecting how much hardware you can include in said APU. Whatever you do, you are likely to pull ahead using a 14/16nm FinFET process; 20nm was sucky, but 14/16nm is two nodes ahead of 28nm, which means a lot more transistors per mm2, lower power, etc.
As for cost, I don't know if you can hope for the savings new lithography used to bring to the table; the savings might be smaller relative to the area saved. Anyway, I never said that you can't do better for the same price. What I said is that HBM seems to be the right answer to the wrong problem, and that was not related specifically to AMD, no matter the predominant place HBM has in its designs.
\\EDIT: for my previous post I meant perf per mm2 (and dollars), which makes more sense than watts in this context; you pay more for less, pretty much. I edited it just so you know.
 
Why?
Assuming a GCN similar to today's, AMD should definitely be able to cram a Tahiti amount of shader cores plus their little x86 cores into under 200mm2 on Samsung's 14nm process, either LPP or the upcoming, more cost-optimized version. Let's assume $7000/wafer (it will likely have dropped by then, both due to maturity and because their biggest early process adopter(s) will be looking to move on to 10nm). At 200mm2 of die area that's roughly 300 dies per wafer. Assume worse-than-real-life yields, with only 2/3 of the dies OK => ~$35/die; add costs for test and packaging et cetera, and let's ballpark the cost per useful APU at $50 (this excludes development and licensing costs). The next major item will be memory. By then, 8Gb GDDR5 will have been in production for two or three years at Micron, Samsung and Hynix; it would be surprising if that cost more than (ballpark again, of course) $50 (maybe a single stack of HBM could be competitive overall). Ditch the optical drive (distribute on Flash), ditch the hard drive (use an inexpensive Flash drive, expand via USB3 if the customer needs it) and add $40 or less for 128GB or possibly 256GB; again, distribution on Flash reduces the need for built-in storage, and this sucker needs to be cheap! Add networking and other small bits and bobs, power supply, cooling, controller and packaging, and we should be at roughly $200 in total. Add retailer margins, and whatever I may have forgotten, and sell to the consumer for $249. Tadaa! Side benefit: no moving parts except for the fan.
So, yes, I definitely believe Nintendo could launch a 4TF console with a bill of materials well below $300. Development costs have to be accounted for somewhere, and won't be trivial, but that's normal.

I don't believe they will do this, much to the chagrin of most of us here. But that's not because they couldn't; the core of this device simply requires them to talk to their old partner AMD, say "We want these specs, at this point in time", and then haggle for a bit.
Very flawed calculations.

Flash is very expensive (relative to both BD and HDD). Sony already uses 8Gbit chips and will likely do a 14nm revision next year. And how could Nintendo make a cheaper and more powerful console than Sony? The CUH-1200 is very cost-effective.
 
If they really want to sell a very cheap home console, the best way is to use the same SoC as their next handheld console.
I do not completely disagree, as I think at some point I posted something in that vein in one Nintendo thread. If Nintendo invests in custom silicon, they had better make the most of it. As it is, there is still a pretty massive gap between operating frequencies in the mobile and desktop/console worlds. Bandwidth requirements differ too, but one may play with bus width, memory type and speed to even things out. Now, it may be wasteful to disable possibly a lot of a functional chip for the handheld, so Nintendo could stick to using the same IP blocks instead.

To me, Nintendo's priority should be to be cheap (this applies to both the home console and the handheld), good for the price, and straightforward for developers. I do not think that Nintendo should try to compete with PCs or the PS4 and XB1, or even try to provide the lowest end of what one could call today's PC gaming experience (newer consoles are glorified PCs). They have their IPs; they need to be attractive to the big mobile/indie developers. They need to try different business practices concurrently, etc.
If the system sells, it will catch the attention of the traditional PC/console publishers. What they need, though, is a sane amount of RAM, decent CPU performance, and good tools and documentation.

With regard to media and storage (cf. Fehu's post), I'm left wondering: for a handheld, keeping game card + SD card should not be an issue. The home console is more bothersome, as we are talking about bigger games (even if Nintendo were to set a reasonable limit). I'm not sure this is the place to discuss that.
 
I glanced at the Nvidia Shield TV today on Amazon. It sells for $199 and includes a 20nm Tegra X1 (512-gigaflop Maxwell GPU); the CPU is the typical 4x A57 and 4x A53 split; it comes with 3GB of LPDDR4, and the controller is included. I'm assuming this thing sells for a profit, as I doubt Nvidia gets much from software sales.
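For reference, the 512-gigaflop figure matches the commonly cited Tegra X1 configuration of 256 Maxwell CUDA cores at roughly 1 GHz (FP32; the FP16 rate is about double):

```python
# Tegra X1 GPU throughput, assuming the commonly cited 256 Maxwell cores at ~1 GHz.
cuda_cores = 256
clock_ghz = 1.0
fp32_gflops = cuda_cores * 2 * clock_ghz   # 2 FLOPs (FMA) per core per clock
print(fp32_gflops)                          # ~512 GFLOPS FP32; FP16 is roughly double
```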

I guess Nintendo would have to try hard to come in at a higher price and with worse performance.
 