Nintendo Switch Event 2017-01-12 and Switch Launch discussion

Status
Not open for further replies.
1 - The Snapdragon 820 shouldn't even be compared to the TX1 because it's a SoC for smartphones with a good deal of its power and heat budget dedicated to WiFi, Bluetooth and increasingly large LTE-Advanced baseband processors.

2 - The S820 in 5" smartphones is only "handily outperformed" by the Shield TV, which has a heatsink with a heatpipe and is plugged into the wall, or is merely on par with the 4-times-larger Pixel C.

3 - Even considering 1), the Snapdragon 820 would handily outperform a TX1 if both were put inside the Switch's power and heat envelope, at least in handheld mode.

4 - Again considering 1), and the volume that a handheld from Nintendo guarantees, why should they go with an off-the-shelf SoC that was not made with gaming as a priority?

5 - When it released, the Vita (sold for $250) had performance comparable to the A5X, which released months later. The fastest commercially available SoC at the time was the MSM8960, with an Adreno 225. Care to see how that compares to the A5X?



I'm calling it old and slow if it's just a downclocked TX1. That's the whole basis for the criticism.
Perhaps it's not. I sure wish it was something far more gaming-focused (and consequently more powerful).
I sincerely doubt that the 820 would be faster than the TX1 under similar circumstances. That isn't supported by either GFXBench or Futuremark's benchmarks.
And that is without even getting into the benefits of having nVidia supplying API and tools.

Unfair to single you out Totten since you're by far not the worst offender, but I generally do feel that it is time to stop pissing on the Switch for not being a stationary design. It is what it is. The exact innards will be interesting to know in greater depth eventually.
 
Yeah, while I am quite critical of Nvidia mobile SoC design in general (too hot, no integrated radios, etc.), for the purpose of a portable gaming handheld the TX1 is probably the best deal available to Nintendo. I think a lot of the disappointment I've seen stems from a "console first" attitude, which reflects how NoA in particular has talked about the device. If you look at it from a "portable first" standpoint, it is the most powerful handheld to date and a substantial upgrade over both the Vita and the New 3DS.

I still have huge issues with battery life, peripheral costs and the frankly bizarre app integration stuff, but that could just be terrible Nintendo comms striking again. If it turns out that the app is a companion to console functionality, fine; if it straight up replaces that functionality on the console, then all bets are off.
 
I sincerely doubt that the 820 would be faster than the TX1 under similar circumstances. That isn't supported by either GFXBench or Futuremark's benchmarks.
And that is without even getting into the benefits of having nVidia supplying API and tools.

Unfair to single you out Totten since you're by far not the worst offender, but I generally do feel that it is time to stop pissing on the Switch for not being a stationary design. It is what it is. The exact innards will be interesting to know in greater depth eventually.
The X1 is weaker than the latest next-gen mobile tablet/smartphone designs when it comes to multi-threaded core benchmarks.
Certain GFX benchmarks, if you compensate for the full power of the Shield TV compared to tablets/upper smartphones, also show them to be pretty competitive against the TX1, and this is only going to get worse as manufacturers get to grips with the latest Adreno and Mali, which are finally being better optimised for gaming and engines.
The Samsung Galaxy Tab S3 (GFXBench shows it with the Adreno 530) is trading blows with the Google Pixel C (with the X1) in the GFXBench Manhattan/T-Rex tests.
Considering the clocks of the Switch when mobile, I am not sure it is unfair to raise the performance of next-gen tablets/smartphones, even though I agree one advantage for the Switch is the Nvidia-related tools/API, and those benchmarks are not actual games.

Cheers
 
If Nintendo have secured Nvidia assistance in ongoing driver and API work, and in 16 and 10 nm shrinks for the device over time, I think they may have a pretty solid deal regardless of what the latest and greatest smartphone chips can do (within a phone power envelope).

Nintendo could conceivably add dedicated stationary and mobile models to the Switch line, bringing purchase price down for customers that know exactly what they want.
 
...this is only going to get worse as manufacturers get to grips with the latest Adreno and Mali that are finally being better optimised for gaming and engines.
Nintendo are hardly going to be able to implement a design now that'll hold the performance crown for the next three years, are they?! Also, as mentioned elsewhere, the Switch needs to maintain peak performance constantly, unlike the short sprints of mobile benchmarks, which run before devices throttle back to what you actually get when playing games for any length of time. Every comparison should be based on an hour's play, not a benchmarking tool, to get a real-world average performance instead of peak figures.
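The "hour of play, not a benchmark sprint" point boils down to taking a time-weighted average rather than the peak. A toy sketch in Python (the throttle curve below is invented purely for illustration, not measured from any device):

```python
# Toy illustration of peak vs sustained performance.
# The throttle curve here is invented, not measured data.

def sustained_fps(samples):
    """Time-weighted average FPS over (duration_minutes, fps) segments."""
    total_time = sum(t for t, _ in samples)
    return sum(t * fps for t, fps in samples) / total_time

# Hypothetical hour: a few minutes at full clocks, then thermal throttling.
hour = [(5, 30.0), (15, 26.0), (40, 22.0)]

peak = max(fps for _, fps in hour)  # what a short benchmark run reports
avg = sustained_fps(hour)           # what an hour of actual play sees
```

A short benchmark captures `peak`; the sustained average is noticeably lower, which is exactly why cold-run scores flatter phone SoCs.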
 
Also, as mentioned elsewhere, the Switch needs to maintain peak performance constantly, unlike the short sprints of mobile benchmarks, which run before devices throttle back to what you actually get when playing games for any length of time. Every comparison should be based on an hour's play, not a benchmarking tool, to get a real-world average performance instead of peak figures.

We do have those already, though. After long-term usage, the Snapdragon 820 seems to cut its GPU frequency by close to 30%:

[attached: GFXBench long-term performance charts for Snapdragon 820 devices]


Looking at the GFXBench results, the Snapdragon 820 at 70% of the performance would provide a whole lot better results than the TX1 at 30% of the performance.

On Manhattan 3.1.1 Offscreen at 1440p (the most demanding "offscreen" test in the suite, and probably the one that gains the most from the TX1's 16 ROPs), the 1GHz Shield TV gets 25.4 FPS, whereas Snapdragon 820 models tested cold at 100% GPU frequency get around 16.5 FPS.

That said, a TX1 with the GPU at ~300MHz would do ~7.6 FPS, whereas the Snapdragon 820's GPU at 70% would do ~11.55 FPS.
That leaves the TX1 roughly 35% behind, and again: the S820 is not a gaming-oriented chip.


Moving to the Manhattan 3.1 1080p test, the Shield TV gets 44.6 FPS and S820 devices get 32.5 FPS.
44.6*0.3 = 13.4 FPS
32.5*0.7 = 22.75 FPS

In the less demanding 1080p test, the TX1's deficit would grow to about 40%.
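To make the arithmetic explicit, here's the same estimate as a few lines of Python. All the FPS figures are the GFXBench numbers quoted above; the linear-scaling assumption is mine and is optimistic, since bandwidth and other limits make real scaling sub-linear:

```python
# Back-of-the-envelope clock scaling using the GFXBench figures above.
# Assumes FPS scales linearly with GPU clock -- optimistic, since
# bandwidth and other limits make real scaling sub-linear.

def scaled_fps(fps_at_full_clock, clock_fraction):
    """Estimate FPS at a reduced GPU clock, assuming linear scaling."""
    return fps_at_full_clock * clock_fraction

# Manhattan 3.1.1 offscreen 1440p: Shield TV (TX1 @ 1GHz) vs cold S820
tx1_1440 = scaled_fps(25.4, 0.3)    # TX1 at the ~300MHz handheld clock
s820_1440 = scaled_fps(16.5, 0.7)   # S820 after its ~30% thermal throttle

# Manhattan 3.1 offscreen 1080p
tx1_1080 = scaled_fps(44.6, 0.3)
s820_1080 = scaled_fps(32.5, 0.7)

# TX1 shortfall relative to the throttled S820 estimate (~34% and ~41%)
deficit_1440 = 1 - tx1_1440 / s820_1440
deficit_1080 = 1 - tx1_1080 / s820_1080
```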



Regardless, I'm convinced this isn't because nvidia is incompetent at making mobile GPUs or Qualcomm is much better. Qualcomm is simply using 14FF, which is a lot more power-efficient than the TX1's 20nm.
The TX1 is just old. That's all.

I'm pretty sure nvidia could have designed a gaming-specific 16FF SoC at close to 100mm^2 that would wipe the floor with the S820 in 3D rendering at the same power/heat envelope.
That Nintendo and nvidia apparently chose not to do so (at least according to Eurogamer, which is somewhat corroborated by the games' visuals) is what irks me the most.


Nintendo are hardly going to be able to implement a design now that'll hold the performance crown for the next three years, are they?!
It wouldn't have hurt to implement a design that would hold the performance crown over last year's smartphones either...
 
I think the issue at stake here is how and where Nintendo has chosen to invest in the Switch. From what I can see, they appear to have taken the most powerful solution already on the market that requires minimal hardware engineering, focusing their investment instead on the APIs, OS and other assorted tools necessary to develop for the Switch. We know Nvidia has needed a big win in the mobile space for a while now, so they may simply have been more willing to assist Nintendo with these tasks than, say, a Qualcomm or Samsung, who are already pretty busy.

To my mind, if the eventual die shots confirm minimal alterations to the TX1 in the Switch, then the key question for Nintendo was who would help them the most in the other areas. I'm trying to avoid inflammatory words like "cheap", because despite tossing them around before, I do see this as a sound business choice. Now if the teardown reveals an extensive redesign of the TX1, then all bets are off, as it would be perfectly reasonable to ask "why change all of this but not that?"
 
On Manhattan 3.1.1 Offscreen at 1440p (the most demanding "offscreen" test in the suite, and probably the one that gains the most from the TX1's 16 ROPs), the 1GHz Shield TV gets 25.4 FPS, whereas Snapdragon 820 models tested cold at 100% GPU frequency get around 16.5 FPS.

That said, a TX1 with the GPU at ~300MHz would do ~7.6 FPS, whereas the Snapdragon 820's GPU at 70% would do ~11.55 FPS.
That leaves the TX1 roughly 35% behind, and again: the S820 is not a gaming-oriented chip.


Moving to the Manhattan 3.1 1080p test, the Shield TV gets 44.6 FPS and S820 devices get 32.5 FPS.
44.6*0.3 = 13.4 FPS
32.5*0.7 = 22.75 FPS
What does the S820 get in Vulkan-optimised games? Do the higher CPU and GPU utilisations see it throttle more?

It wouldn't have hurt to implement a design that would hold the performance crown over last year's smartphones either...
Don't understand the point of repeating your position on this, especially when it's one most of us agree on. But even then, it'd have cost more, so it would have 'hurt'. Nintendo have made a business decision; we'll have to wait and see if it was the right one. The reality is that the differences on screen from buying a TX1 versus a bespoke solution probably wouldn't justify the $n00 million investment. Plus, as Lalaland says, there's nVidia's toolchain, which is a valuable part of the package.
 
What does the S820 get in Vulkan-optimised games? Do the higher CPU and GPU utilisations see it throttle more?
Are there any Vulkan games on Android? Honest question, because I really don't know of any. Talos Principle supports Vulkan in the PC version, so it might use Vulkan in the Android version too... though that "Android 4.4" requirement leads me to believe it doesn't (Vulkan is officially supported only in Android 7.0+).

Anyway, according to what's being discussed here, using Vulkan would allow the CPU to do the same job at lower frequencies, hence a lower power/heat budget for the CPU cores that could instead be allocated to higher GPU clocks.

Of course, this would apply to both Snapdragon 820 and the TX1.
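The shared-budget argument can be put as a toy model (every wattage number below is invented purely for illustration, not a measurement of either SoC):

```python
# Toy shared-power-budget model: in a handheld SoC the CPU and GPU share
# one thermal envelope, so CPU-side savings can fund higher GPU clocks.
# All numbers are invented for illustration.

TDP = 8.0  # watts -- hypothetical handheld envelope

def gpu_budget(cpu_watts):
    """Power left over for the GPU once the CPU takes its share."""
    return TDP - cpu_watts

# A lower-overhead API (e.g. Vulkan) lets the CPU do the same work at
# lower clocks, shrinking its share of the envelope.
gles_cpu_watts = 3.0
vulkan_cpu_watts = 2.0
extra_gpu_watts = gpu_budget(vulkan_cpu_watts) - gpu_budget(gles_cpu_watts)
```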


Don't understand the point of repeating your position on this, especially when it's one most of us agree on.
I only meant that sentence as a continuation of the same argument about the TX1 not being more powerful than the S820 at a similar power/heat envelope. Sorry if it somehow seemed personal.
 
Not a surprise to anyone, but Nintendo branded items are more expensive...

http://www.forbes.com/sites/insertc...s-cost-3x-more-than-normal-ones/#449aefd85bc0

And now, Nintendo has partnered with HORI for “official” Switch-branded 16 GB and 32 GB SD cards. While it may be logical that Nintendo would sell official cards (these are for the Japanese launch, but there may end up being Western equivalents) the problem is in the price.

The 32 GB card is ¥7,900, or about $70 USD. But on Amazon, the “high end” 32 GB SD card from Sandisk is ¥2,690, or about $23 USD. Past that, if you want a cheaper version with the same amount of storage (but with slower read/write speed perhaps), you can find one for as little as ¥1,180/$10.


All of this is to say that it is probably not the smart play to purchase the officially branded SD cards because of this rather severe price mark-up. Paying three times as much as necessary just to get the official Nintendo seal of approval isn’t worth it, and I’m not sure why the price point is what it is here. Keep in mind these are not proprietary memory cards like what we’ve seen with Sony handhelds. Any SD card will work in a Switch.
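For what it's worth, the article's markup works out like this. The yen prices are from the quote above; the exchange rate is the roughly 113 JPY/USD implied by the article's own conversions:

```python
# Price markup on the Hori-branded 32 GB card vs standard cards.
# Yen prices are from the quoted Forbes article; the exchange rate
# (~113 JPY/USD, implied by the article's conversions) is approximate.

hori_jpy = 7_900     # official Hori/Nintendo 32 GB card
sandisk_jpy = 2_690  # "high end" SanDisk 32 GB card on Amazon
budget_jpy = 1_180   # cheapest 32 GB card mentioned

jpy_per_usd = 113    # rough early-2017 rate implied by the article

markup_vs_sandisk = hori_jpy / sandisk_jpy  # ~2.9x, i.e. "three times"
markup_vs_budget = hori_jpy / budget_jpy    # ~6.7x vs the cheapest card
```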
 
Not a surprise to anyone, but Nintendo branded items are more expensive...

http://www.forbes.com/sites/insertc...s-cost-3x-more-than-normal-ones/#449aefd85bc0

And now, Nintendo has partnered with HORI for “official” Switch-branded 16 GB and 32 GB SD cards. While it may be logical that Nintendo would sell official cards (these are for the Japanese launch, but there may end up being Western equivalents) the problem is in the price.

The 32 GB card is ¥7,900, or about $70 USD. But on Amazon, the “high end” 32 GB SD card from Sandisk is ¥2,690, or about $23 USD. Past that, if you want a cheaper version with the same amount of storage (but with slower read/write speed perhaps), you can find one for as little as ¥1,180/$10.


All of this is to say that it is probably not the smart play to purchase the officially branded SD cards because of this rather severe price mark-up. Paying three times as much as necessary just to get the official Nintendo seal of approval isn’t worth it, and I’m not sure why the price point is what it is here. Keep in mind these are not proprietary memory cards like what we’ve seen with Sony handhelds. Any SD card will work in a Switch.


You can get a 200 GB microSD card from Amazon US for $70.
 
What about the read/write speed of those 200 GB microSD cards for $70? A lot of SD cards are slow as hell...
 
What about the read/write speed of those 200 GB microSD cards for $70? A lot of SD cards are slow as hell...

I have one of those (SanDisk Ultra 200GB Class 10) in my Surface Pro 4. Here are my CrystalDiskMark results:

[attached: CrystalDiskMark results screenshot]


It's formatted in exFAT.
They're pretty decent, with a sequential read quite close to the advertised 90MB/s max.
I have my Steam installation in that card and it behaves quite well with the games I tried so far.
Game installs take a bit long, of course, but that's a one-time concern.
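If anyone wants to sanity-check a card's sequential read speed without CrystalDiskMark, a crude timing loop gets you in the ballpark. The test file path is a placeholder, and note that re-reading a file you just wrote may hit the OS cache and report memory speed instead of card speed:

```python
# Crude sequential-read benchmark for a large file on a mounted SD card.
# Beware: the OS cache can inflate results on re-reads; drop caches or
# use a freshly mounted card for honest numbers.
import os
import time

CHUNK = 4 * 1024 * 1024  # 4 MiB reads, large enough to stream

def sequential_read_mb_s(path):
    """Read the whole file sequentially and return throughput in MB/s."""
    size = os.path.getsize(path)
    start = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(CHUNK):
            pass
    elapsed = time.perf_counter() - start
    return size / elapsed / (1024 * 1024)

# Usage (placeholder path -- point it at a big file on the card):
# print(sequential_read_mb_s("/mnt/sdcard/testfile.bin"))
```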
 
I don't. I just wonder, because SOME SD cards are terrible even with good specs on paper.

A great thing about Amazon is that a lot of users post their benchmarks in the reviews. There are faster 64 GB cards that cost $30; they're roughly twice as fast at reads, around 150 MB/s, so if speed matters more to you than storage you can go that way too. I wonder about the speed of the Hori Nintendo ones.

https://www.amazon.com/Lexar-Profes...qid=1485807407&sr=1-21&keywords=micro+sd+card

The Lexar Professional 64 GB UHS-II is $33, and the 128 GB is $80. If Nintendo supports UHS-II, those would be the fastest.
 
After long-term usage, the Snapdragon 820 seems to cut its GPU frequency by close to 30%:
What about real workloads with CPU usage? Isn't that the case for a handheld?
Unfortunately, GFXBench doesn't put any pressure on the CPU; it's a GPU benchmark, after all. Would the S820's GPU perform at the same frequency with 4 CPU cores at work? I doubt it.

the Snapdragon 820 at 70% of the performance would provide a whole lot better results than the TX1 at 30% of the performance
It doesn't work like that. Performance doesn't scale linearly with frequency, especially in bandwidth-bound workloads such as deferred shading @ 1440p, i.e. Manhattan 3.1.

The TX1 is just old. That's all.
It's also cheap, has the most reach feature set with DX12 FL11.1+ features and many features on top of that, has the best image quality in GFXBench, and has the potential for a 2x perf improvement with a 16nm shrink. Does the S820 have any of this? Why didn't you mention it? :rolleyes:
 
What about real workloads with CPU usage? Isn't that the case for a handheld?
Unfortunately, GFXBench doesn't put any pressure on the CPU; it's a GPU benchmark, after all. Would the S820's GPU perform at the same frequency with 4 CPU cores at work? I doubt it.

The CPU cores aren't idling during the benchmark. Truth be told, most tests done on desktops show very similar total power consumption between e.g. 3DMark and a game.

It doesn't work like that. Performance doesn't scale linearly with frequency, especially in bandwidth-bound workloads such as deferred shading @ 1440p, i.e. Manhattan 3.1.
Both SoCs use LPDDR4 3200MT/s.
Regardless, for the very few who didn't immediately reach that conclusion: those were very rough estimates.



It's also cheap
If it were cheap, we'd see it in Chinese Android boxes or Chinese Android handheld consoles, like there are with the TK1.
The TX1 is only available in a flagship tablet and an Android box that costs 2-4x more than the typical Amlogic/Rockchip/Mediatek box.
It's not a cheap SoC.


has the most reach feature set with DX12 FL11.1+ features
Assuming you mean "rich", it's irrelevant. The Switch won't use DirectX.


and many features on top of that
Of which you have no idea how they compare with the Adreno 500.

and has the potential for 2x perf improvements with 16nm
If it's a TX1 then there's no magical 16nm.
 
The CPU cores aren't idling during the benchmark
And what are they doing, then? Calculating some benchmark logic, or what?
The only purpose of the CPU in mobile benchmarks is to submit a modest number of draw calls from a single core due to GL ES; that's it.
I've tested CPU load in quite a few apps, including GPU benchmarks: 3DMark uses 50% of a single core on average on a Tegra K1, and the same is true for GFXBench.

Truth be told, most tests done in desktops will show very similar total power consumption between e.g. 3dmark and a game.
Desktop benchmarks are different: there is no power-budget sharing between discrete GPUs and CPUs on PC, so it's perfectly possible to put some physics on the CPU even in graphics tests without turning the test into trash.

Regardless, for the very few who didn't immediately reach that conclusion: those were very rough estimates.
They are not only rough, they don't make any sense at all.

If it was cheap we'd see it in chinese Android boxes, or chinese Android handheld consoles like there are with the TK1.
It should be cheap to produce, but the selling price is another matter. Why would Nvidia sell their chips for the same price as Mediatek?
Chinese companies are simply buying off-the-shelf IP, integrating it into a SoC and that's it; they can sell at a minimum price. That's not the case with either Nvidia or Qualcomm, but Nvidia can sell the TX1 at a discount to Nintendo, that's for sure.

The Switch won't use DirectX.
Thanks, Cap, your boat is waiting for you.
Of course it won't; the Switch will use an NV-made API, and NV implements its features right from the start. For example, the famous shader warp-level intrinsics (shuffles, ballot and others) have been available for years in nvapi for DX11 usage.
Why do you think they would leave out those shuffles, ROVs (useful for OIT), conservative rasterization (useful for voxelization/clustered shading/cone-traced shadows), multi-projection (useful for free dynamic cubemaps/voxelization/multi-resolution shading), programmable sample positions (useful for AA and checkerboard rendering), device-generated commands (useful for CPU offload) and many other top-notch features in NVN?

Of which you have no idea how they compare with the Adreno 500.
I have a clear idea that the Adreno 500 doesn't support the features listed above.

If it's a TX1 then there's no magical 16nm.
Don't get it. If it's a TX1, then they can simply port it to 16nm any time they want.
 