Nintendo Switch Tech Speculation discussion

256 CUDA cores = 512 flop/cycle (multiply-adds, fp32). At 1 GHz this is 0.5 TFLOP/s. Main memory is 4 GB and bandwidth is 25.6 GB/s.

Xbox One GPU is 3x wider. 768 SIMD lanes = 1536 flop/cycle (multiply-adds, fp32). At 853 MHz this is 1.3 TFLOP/s. Main memory is 8 GB and bandwidth is 68 GB/s. Plus ESRAM of course.

Xbox One raw performance and bandwidth are both roughly 2.5x. Even if we assume that Nvidia's GPU is more efficient, we are looking at roughly 2x difference.
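Quick sanity check on those ratios, using only the figures quoted above:

Code:
# Rumoured Switch GPU: 256 CUDA cores at 1 GHz, an FMA counting as 2 flops
switch_flops = 256 * 2 * 1.0e9   # = 512 GFLOP/s fp32
switch_bw    = 25.6e9            # 25.6 GB/s main memory

# Xbox One GPU: 768 SIMD lanes at 853 MHz, main memory only (no ESRAM)
xb1_flops = 768 * 2 * 853e6      # ~1.31 TFLOP/s fp32
xb1_bw    = 68e9                 # 68 GB/s DDR3

print(f"ALU ratio: {xb1_flops / switch_flops:.2f}x")  # ~2.56x
print(f"BW ratio:  {xb1_bw / switch_bw:.2f}x")        # ~2.66x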

This comparison completely ignores the fast ESRAM memory on Xbox One, which will further increase its bandwidth advantage. And Xbox One's main memory BW is already pretty low compared to the 176 GB/s of PS4. Double-rate 16 bit math will of course help the Nvidia GPU a bit, but it only increases ALU performance (in limited cases) and only helps in cases where you are not texture sampling bound (a 2:1 ALU to TEX rate makes you easily TEX bound) or memory bandwidth bound (at 25.6 GB/s you often are BW bound).
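To put a number on the fp16 point: what matters is how many flops the GPU can do per byte of memory traffic, and double-rate fp16 doubles that requirement while the 25.6 GB/s stays fixed. A rough sketch with the rumoured figures:

Code:
switch_bw = 25.6e9         # bytes/s of main memory bandwidth
fp32_rate = 512e9          # flop/s at fp32 (from the rumoured specs)
fp16_rate = 2 * fp32_rate  # double-rate fp16

# Arithmetic intensity needed to keep the ALUs busy instead of waiting on memory
print(f"fp32: {fp32_rate / switch_bw:.0f} flops per byte")  # 20
print(f"fp16: {fp16_rate / switch_bw:.0f} flops per byte")  # 40
# Shaders doing less work per byte fetched than this are bandwidth bound,
# so doubling the ALU rate via fp16 only raises the bar.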

If these specs are true, this console is not fast enough to run Xbox One 900p ports at a reduced 720p resolution; quality also needs to be scaled down slightly. But these are rumoured specs; hopefully the real ones are a bit higher. I would like to see a new Pascal-based GPU.
 
Are there decent specs likely in a handheld? I'm doubting that. This'd have to be the most powerful handheld ever made, with the best silicon process and best battery tech to have any battery life, and that's completely unlike Nintendo. Most of the content shown is Wii U level stuff. I expect nVidia Shield class/PS360/Wii U level games, just on the go and at 1080p for TV.
 
Anyone know how credible Zlatan is over at Anandtech? He seems to imply that memory bandwidth is the biggest bottleneck relative to the Xbox One chip. He mentioned that overall the chip would be about 3x slower than the Xbox One, but also said the Tegra X1 is about 10x slower than the Xbox One. If he is right, that is a pretty big step up from the Tegra X1, and a very large step up from the Wii U. The Splatoon footage looked 1080p with far less aliasing than on Wii U; same goes for Mario Kart. I think we are seeing a respectable upgrade over Wii U, but still well short of PS4/X1. It is a mobile unit after all; expecting X1/PS4 levels of performance from a mobile device was never realistic.
 
Any chance they'll run the thing at higher clocks when docked? I'm guessing no due to thermal constraints, but...
 
Regarding the bandwidth... wouldn't it be using a dual-channel LPDDR4 setup very similar to Parker's?
Cortex A57 is the last thing I'd expect to see in a late 2016 handheld SoC. That CPU core did not fare well on power consumption at all. I think a ~2GHz Cortex A53 would make more sense, or e.g. a 1.2GHz Cortex A72.

The A57 is not as bad as you make it out to be. People are just hung up over Qualcomm's implementation of it in the S810.

The A57 implementation in the E7420 was very respectable from both a performance and power perspective.
I figured it'd just be Vulkan + extensions. Nintendo says they are trying to consider the future at the same time, so it'd be in their interest to be somewhat cross-platform compatible.
Yep... they'd do well to align with the direction Android is going in, IMHO. Two years down the line we'll see similar performance in phones.
 
Wouldn't 4 GB of memory be too little for ports?

There are already a bunch of phones with 6 GB of RAM; it wouldn't be that strange to go for 6 or 8 GB in the Switch.
 
Zlatan on the Anandtech forums is saying it's Pascal-based.
I'm really hoping it's Pascal, because that would most probably mean a FinFET process, though that guy loses a bit of credibility when he claims the Tegra X1 is 8-10x slower than an Xbone. It's probably closer to 4x slower, at least in the form of the Shield TV.


The A57 is not as bad as you make it out to be. People are just hung up over Qualcomm's implementation of it in the S810.
The A57 implementation in the E7420 was very respectable from both a performance and power perspective.
You're right, though the Cortex A72 is much better and it isn't that new either.
Plus, I still think a LITTLE core would make a lot more sense for a console that requires continuous operation than one of the big cores, which are designed to sprint for short periods of time.
For example, how much power would two Cortex A53 quad-core clusters at 1.5GHz sip, using FinFET?
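Very rough feel for it below; the per-core figure is purely my assumption for illustration, not a measured number:

Code:
A53_CORE_WATTS = 0.125  # ASSUMPTION: ~125 mW per A53 core at 1.5GHz on FinFET
cores = 2 * 4           # two quad-core A53 clusters

print(f"CPU power estimate: {A53_CORE_WATTS * cores:.1f} W")  # ~1 W total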


I think the fact that Bethesda is willing to port Skyrim to NX speaks volumes.
Skyrim is originally a PS360 game, so it could probably have run on the Wii U if there was demand for it.
Although Skyrim is identifiable by many more people because it reached cultural phenomenon status, I'd be much more reassured if they had shown Fallout 4 instead.

In the end, you didn't see a single multiplatform port from the current generation in that video.

Any chance they'll run the thing at higher clocks when docked? I'm guessing no due to thermal constraints, but...
If there's a fan (and there seems to be), they could technically turn the fan on only for docked operation, allowing for substantially higher clocks.

Is Nintendo in the position of asking the devs for yet another performance target, on top of their already-a-bit-late-to-the-party console?
I don't think so...


Are there decent specs likely in a handheld?
We haven't seen any gaming-focused FinFET SoC. All the other FinFET SoC makers are spending much of their die area on baseband processors, camera ISPs and huge CPU cores that need to run single-threaded JavaScript very fast.
If we go by TK1 and TX1, then imagining a 16FF+ SoC with a GPU that gets close to 1 TFLOP FP32 isn't that far-fetched. Of course, it would need four channels of the latest LPDDR4X (68 GB/s, I think?) to keep it from being bandwidth-starved.

It wouldn't be cheap (so it's probably not on the Switch), but I do think a tablet SoC that could bite the heels of Durango would be technically feasible nowadays.
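For what it's worth, that 68 GB/s figure checks out if you assume four 32-bit LPDDR4X channels at the 4266 MT/s top speed grade (channel width and speed grade are my assumptions):

Code:
channels   = 4
width_bits = 32      # ASSUMPTION: 32-bit channels
rate       = 4266e6  # ASSUMPTION: LPDDR4X-4266, transfers per second

bw = channels * (width_bits / 8) * rate
print(f"{bw / 1e9:.1f} GB/s")  # ~68.3 GB/s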
 
I'm no programmer and I guess I'm making a fool of myself now, but why not offer Vulkan and OpenGL on the Switch as well? I imagine Nintendo's own API can yield more performance, but if Vulkan/OpenGL were "good enough" for some games, I'd imagine Switch ports could get a piggyback ride alongside the PC versions.
 
"Or 30 fps at 3840×2160 pixels"

So this thing will likely support 4K video output?
That's just the spec for HDMI 1.4.
I doubt it'll ever output at 4K, since Netflix mandates HDMI 2.0 with HDCP 2.2 to run their 4K content.
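For context, 4K30 fits under HDMI 1.4's 340 MHz TMDS clock ceiling while 4K60 doesn't, which is why 4K60 needs HDMI 2.0. A quick check using the standard CTA-861 timing for 3840×2160 (total raster including blanking is 4400×2250):

Code:
# CTA-861 total raster for 3840x2160, including blanking intervals
h_total, v_total = 4400, 2250

for fps in (30, 60):
    clock_mhz = h_total * v_total * fps / 1e6
    print(f"4K{fps}: {clock_mhz:.0f} MHz pixel clock")  # 297 MHz / 594 MHz
# HDMI 1.4 tops out at 340 MHz, so 4K30 fits but 4K60 needs HDMI 2.0.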
 
Memory bandwidth is incredibly low and will be the system's massive bottleneck considering the CUDA core count, and with that small 2MB CPU cache, memory contention will be a real additional issue in games with a lot of CPU memory traffic. It compromises so much performance to achieve portability and form factor. I wonder if they could have thrown in some GDDR5 and downclocked it extremely aggressively to achieve that same bandwidth in "portable mode", then run it at full speed while docked: passively cooled when mobile, with the bigger dock chassis housing a fan that draws air through some rear vents when docked.
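A sketch of what that GDDR5 idea would look like; the 64-bit bus width is my assumption, and GDDR5 moves 4 bits per pin per command-clock cycle:

Code:
bus_bytes   = 64 // 8   # ASSUMPTION: 64-bit GDDR5 bus -> 8 bytes per transfer
portable_bw = 25.6e9    # match the rumoured portable bandwidth

eff_rate  = portable_bw / bus_bytes  # effective transfer rate needed
cmd_clock = eff_rate / 4             # GDDR5 is quad data rate vs the command clock
print(f"portable: {eff_rate/1e9:.1f} GT/s ({cmd_clock/1e6:.0f} MHz command clock)")

docked_rate = 7e9                    # a typical 7 GT/s GDDR5 speed grade
print(f"docked:   {bus_bytes * docked_rate / 1e9:.0f} GB/s")  # 56 GB/s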
 
What if it's a 64 ALU GPU? Pascal can be that small. ;) This is Nintendo we are talking about.
 
Are there decent specs likely in a handheld? I'm doubting that. This'd have to be the most powerful handheld ever made, with the best silicon process and best battery tech to have any battery life, and that's completely unlike Nintendo. Most of the content shown is Wii U level stuff. I expect nVidia Shield class/PS360/Wii U level games, just on the go and at 1080p for TV.

I would say from what we know so far that nVidia's role exceeds the norm for a Nintendo hardware partner. With Wii U they licensed the GPU from AMD but put it in a rather esoteric chip made by a Japanese manufacturer, and tied to a separate IBM CPU chip on an MCM. This time around nVidia is doing the complete SoC and has put a lot of investment into the software stack, and is probably to thank for the level of third party support they're garnering.

If they're deferring to nVidia this much it's possible that they're also allowing nVidia to be more aggressive than Nintendo has been in their technology choices.

I don't expect some big boost in capability in the dock though. Nintendo's revealed the dock is for TV out and charging and is not "the system." I doubt they'd even bother with including active cooling in it and if they did I doubt it'd make that huge of a difference.
 
I would say from what we know so far that nVidia's role exceeds the norm for a Nintendo hardware partner. With Wii U they licensed the GPU from AMD but put it in a rather esoteric chip made by a Japanese manufacturer, and tied to a separate IBM CPU chip on an MCM. This time around nVidia is doing the complete SoC and has put a lot of investment into the software stack, and is probably to thank for the level of third party support they're garnering.

If they're deferring to nVidia this much it's possible that they're also allowing nVidia to be more aggressive than Nintendo has been in their technology choices.

I don't expect some big boost in capability in the dock though. Nintendo's revealed the dock is for TV out and charging and is not "the system." I doubt they'd even bother with including active cooling in it and if they did I doubt it'd make that huge of a difference.
Well, having Nvidia on board has failed at least twice in console history. This is one reason why MS or Sony won't use Nvidia technology in future consoles. Now Nvidia has caught Nintendo, and I hope they're doing it right this time.

I really hope the Switch is capable of delivering an image upscaled to 4K. I really don't want extra input lag because a new TV's scaler kicks in. This is a must-have for a console coming out this year and beyond.
 
That seems to be the Skyrim remaster, which is going to appear on PS4 and XB1 soon. And it looks fine to me.

[Image: nintendo-switch-skyrim.jpg]

Don't look too closely at the rendering quality shown in this video. There is a healthy chance that they've shown a lot of PR bullshots. In any case, if PS4/Xbone can deliver Skyrim remastered at 1080p60, the few-watt Switch will likely stick to 720p30.
 
We haven't seen any gaming-focused FinFET SoC. All the other FinFET SoC makers are spending much of their die area on baseband processors, camera ISPs and huge CPU cores that need to run single-threaded JavaScript very fast.
If we go by TK1 and TX1, then imagining a 16FF+ SoC with a GPU that gets close to 1 TFLOP FP32 isn't that far-fetched. Of course, it would need four channels of the latest LPDDR4X (68 GB/s, I think?) to keep it from being bandwidth-starved.

It wouldn't be cheap (so it's probably not on the Switch), but I do think a tablet SoC that could bite the heels of Durango would be technically feasible nowadays.
At what battery life? Because that's the other key limiting factor beyond maximum power you can squeeze into a handheld. Shield TV can draw nearly 20 watts gaming.
 
At what battery life? Because that's the other key limiting factor beyond maximum power you can squeeze into a handheld. Shield TV can draw nearly 20 watts gaming.

Those 20W were measured at the wall, and with a 2W USB3 drive on top of the box's own eMMC. So assuming 80% PSU efficiency, those 19.4W turn into 15.5W of actual consumption; minus the 2W of the external drive, it's 13.5W.
Moreover, since it's always plugged in, the big A57 cores are constantly clocked at their maximum 2GHz.
Swapping those big cores for A53s at e.g. 1.6GHz would save a handful of watts, so now we'd be at 10W? And then the FinFET transition should allow 2x the performance at the same consumption.
That tablet is quite thick, so I think a 45Wh battery inside it wouldn't be an unreasonable fit. A SoC consuming between 8 and 10W plus the screen (2-3W?) on a 45Wh battery would result in 3.5 to 4.5 hours of autonomy, which is actually the battery life the first Vita and the 3DS had.
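Running those numbers (the SoC and screen draws are the guesses above):

Code:
battery_wh = 45
for soc_w, screen_w in [(8, 2), (10, 3)]:  # best and worst guesses from above
    hours = battery_wh / (soc_w + screen_w)
    print(f"{soc_w}W SoC + {screen_w}W screen -> {hours:.1f} h")
# -> 4.5 h best case, ~3.5 h worst case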

Again, this would be expensive.



I'll say it, they should have gone PowerVR :p

Nah, they should've gone Qualcomm. The Adreno GPUs are definitely kicking ass in terms of performance/area at a very competitive performance/watt.
 