I just have to ask... why? Why would Nintendo think this is a good idea? Or, to elaborate: in what way would a devkit that trades realistic performance for realistic battery capacity and form factor benefit developers? We know that Nintendo can put together a Tegra X1 development platform that clocks a lot higher than 1.02GHz on the CPU side or 307.2MHz (or even 768MHz) on the GPU side. We know that Shield TV can do a lot better than that; it just uses a lot more power. But why would developers care? They wouldn't want to use a battery-powered dev kit to begin with, nor would they care if it's bulkier or heavier than the real thing. What they would care about is making their games perform as well as they possibly can on the final hardware, and that means dev kits clocked as closely to the final thing as possible.
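Just to put the gap in perspective, a quick back-of-the-envelope in Python (the ~2GHz CPU / ~1GHz GPU Shield TV reference clocks are the commonly cited figures, not something from this leak):

```python
# Quick sanity check on how far below Shield TV the leaked devkit
# clocks sit. Shield TV reference numbers are the commonly cited
# ~2GHz CPU / ~1GHz GPU; the devkit figures are from the leak above.
shield_tv = {"CPU": 2000.0, "GPU": 1000.0}                       # MHz
devkit = {"CPU": 1020.0, "GPU undocked": 307.2, "GPU docked": 768.0}

for part, mhz in devkit.items():
    ref = shield_tv["CPU"] if part == "CPU" else shield_tv["GPU"]
    print(f"{part:>12}: {mhz:7.1f} MHz -> {100 * mhz / ref:5.1f}% of Shield TV")
```

The undocked GPU clock in particular is sitting at less than a third of what the same silicon is known to manage in a set-top box.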
Now that I've raised the point, what do you think is happening here? A fake, because it's easy to troll? Hardware that's all GPU and no bandwidth? That the whole expected memory subsystem is wrong and there's way more bandwidth than that?
Nintendo has done stupider things in the past (e.g. everything about Wii U).
Possible explanation for having a severely underpowered devkit? Nintendo wanted devs to be able to experiment with the console in its various modes (docked, undocked with the coupled controllers in hand, undocked on a table with decoupled controllers) while getting a feel for battery life, the size of UI elements when two players are looking at the 6" screen from a distance, ideas for gameplay, etc. And this would come before sheer power, at least for the first generation of games.
So yes, this would be a devkit showing the console's final physical form, for developers to think about gameplay ideas and functionality before worrying about the best graphics they could squeeze out of the system.
Does this sound believable coming from Nintendo?
The claim that it was run at 480x320 also seems very weird; why would anyone use that resolution on this device?
To avoid fillrate + bandwidth constraints while trying to measure compute performance?
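A deliberately crude way to see why a render target that small takes fillrate and bandwidth out of the picture (assuming 4 bytes of colour plus 4 bytes of depth per pixel and exactly one write per pixel, no overdraw or compression):

```python
# Crude per-frame framebuffer traffic at a few resolutions, assuming
# 4 bytes colour + 4 bytes depth per pixel and one write per pixel
# (no overdraw, no compression) -- just to show how small 480x320 is
# next to the resolutions the device would actually target.
def frame_mib(width, height, bytes_per_pixel=8):
    return width * height * bytes_per_pixel / (1024 ** 2)

for name, (w, h) in {"480x320": (480, 320),
                     "720p": (1280, 720),
                     "1080p": (1920, 1080)}.items():
    print(f"{name:>8}: {w * h / 1e6:4.2f} MPix, ~{frame_mib(w, h):5.1f} MiB/frame")
```

At roughly 1 MiB of framebuffer traffic per frame versus ~7 MiB at 720p, pretty much anything in this class ends up compute-bound at 480x320.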
It's a text printout - the easiest thing to fake!
Yes, but many things make sense, like clock speeds if it's a 16FF chip compared to the 20nm chip in the Shield TV.
I'll answer your question below so please answer me this one:
- Given TX1's specs and clocks at 20nm, you don't think nvidia could develop a gaming-specific 16FF SoC with a GPU carrying 4 SMs that clock at 400MHz undocked and 1GHz docked with active cooling? Same thing with some Cortex A72 (or even Denver?) cores that clock between 1.8GHz and 2.1GHz in undocked/docked modes respectively?
In "undocked mode" with low power and no fan, the Pixel C gets by with a 20nm SoC at 4*A57 @ 1.9GHz + 2 SM @ 850MHz for a while before throttling, but a hypothetical 16FF SoC with 4*A72 @ 1.8GHz + 4 SM @ 400MHz sounds unrealistic?
In "docked mode" with power coming directly from the wall and active cooling, the Shield TV does 2GHz CPU + 1GHz GPU for a while before throttling, but the same hypothetical 16FF SoC with 4*A72 @ 2.1GHz + 4 SM @ 1GHz sounds unrealistic?
Regarding bandwidth, I don't think Wide I/O 2 or even quad-channel LPDDR4 should be ruled out for a custom gaming SoC.
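To put rough numbers on those options (assuming LPDDR4 at 3200 MT/s; Wide I/O 2 is usually quoted at up to ~68 GB/s per stack, so it would sit above either of these):

```python
# Rough peak bandwidth for the LPDDR4 options mentioned, taking
# 3200 MT/s parts: peak GB/s = bus width in bytes * transfer rate in GT/s.
def lpddr4_bw(bus_bits, mt_per_s=3200):
    return (bus_bits / 8) * (mt_per_s / 1000.0)   # GB/s

print(f" 64-bit LPDDR4-3200 (TX1-style):      {lpddr4_bw(64):.1f} GB/s")
print(f"128-bit LPDDR4-3200 ('quad-channel'): {lpddr4_bw(128):.1f} GB/s")
```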
Sure, the odds right now are tipping in favor of the production units shipping with just a boring old TX1, but there's conflicting info about it.
For example, the specs described in this rumor/leak would be a perfect match for Unreal Engine 4's performance profiles.
And another thing that doesn't make sense is how nvidia is about to launch a Shield Console 2 (the handheld) with a Tegra X1 and 4GB LPDDR4.
They filed for FCC approval in July 2016, so well after the Switch contract was made.
Is Nintendo just going to let them do that and risk a direct handheld competitor from nvidia with the exact same processing hardware, possibly at a lower price point and with a better portfolio at launch?