The mark of a good troll is picking believable numbers.
Well, then at least if it turns out to be fake, it's a good troll and not a bad one. I'm fine with that.
Believable specs can at least be discussed. Unbelievable specs wouldn't find their way here. Not by my hand, at least.
Not seeing your answer.
Sorry, poor last-minute editing. I answered it above, not below, lol.
Unless it's a question I missed?
I'd argue that goes the other way. Why would nVidia give Nintendo the flagship part they want to use to sell their own hardware? The Switch will be in direct competition with the Shield. nVidia would be better off giving Nintendo the TX1 and using the TX2 themselves. Yet nVidia aren't using the TX2 in their own handheld. Why?
Because a Nintendo-branded handheld is bound to sell a hundred times more than an Android/nVidia-branded one, resulting in completely different levels of exposure both to developers and to consumer mindshare.
Willingness to part with their latest and greatest is part of the reason why AMD has been so successful with their semi-custom parts in the console world. nVidia would do well to follow suit here.
You don't need to have the thing that's been leaked to allow developers any of this. You can throw a screen on anything. You can attach controllers to anything, especially with a concept that even includes detachable controllers. You can attach a battery to anything.
You can do all that, but it's not the same.
If you really need all of this in one package, you make something that's a lot bulkier than the final thing will be. The developers won't get quite the same experience as the end user, but that hardly makes much of a difference. Being able to determine what kinds of assets, shaders, resolution, logic, etc. they can actually utilize, and what frame rate they can target, are much, much more important considerations.
(...)
Focusing on delivering the mechanical design/ergonomics/fit first and the function second is incredibly backwards for a gaming device. Wasting huge amounts of time and engineering resources on this is insane. No, I don't think "Nintendo is insane" is enough of an explanation; usually there's some clear rationale behind their questionable compromises.
This is you thinking like anything but an imaginative developer wanting to make something really different for an innovative platform.
Making the thing a lot bulkier and heavier, with less battery life, could hinder whatever gameplay innovations Nintendo wants developers to come up with. You may only come up with something because you're holding a device with the right weight and volume.
Regardless, what would you put in this bulkier and heavier devkit? An x86 CPU with a GM208? How long would that last with a 4,000 mAh (or even 10,000 mAh) battery (rough numbers below), and how much would it cost to assemble?
Maybe the TX1 was indeed the best choice after all?
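For a sense of scale, here's a back-of-the-envelope runtime calculation in Python; the wattage figures are pure guesses on my part, not measurements of any real devkit:

def runtime_hours(battery_mah, battery_volts, system_watts):
    # convert capacity to watt-hours, then divide by the system's draw
    watt_hours = battery_mah / 1000.0 * battery_volts
    return watt_hours / system_watts

# guessed draws: ~45 W for an x86 CPU + discrete GPU devkit, ~10 W for a TX1-class SoC
for mah in (4000, 10000):
    print(mah, "mAh:",
          round(runtime_hours(mah, 3.7, 45), 2), "h at 45 W,",
          round(runtime_hours(mah, 3.7, 10), 2), "h at 10 W")

Even the 10,000 mAh pack gives well under an hour at that kind of x86 + discrete GPU draw, which is why the form-factor question and the silicon question aren't really separable.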
For a test like this, the bandwidth needed should be more or less proportional to the compute, at least until you start hitting resolution thresholds where the caches make a huge difference. Generally speaking, going with half the resolution but double the framerate would result in about the same bandwidth usage.
IMO, for measuring GPGPU compute only, pixel count should be reduced as much as possible in order to minimize pixel shader cycles and framebuffer bandwidth; otherwise you'll just get further away from the theoretical values (quick arithmetic below).
But I digress: until anyone can explain why the ratio of two figures that should scale together (frame count and FLOP count) isn't itself a constant in the test's output, I see no real point arguing about the other merits of the claims.
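To spell out what I mean, a rough Python sketch with made-up numbers (nothing here is taken from the leaked test):

BYTES_PER_PIXEL = 4  # assuming plain 32-bit framebuffer writes, no compression or overdraw

def framebuffer_gbps(width, height, fps):
    # framebuffer write traffic: pixels per second times bytes per pixel
    return width * height * fps * BYTES_PER_PIXEL / 1e9

# half the resolution at double the framerate lands on roughly the same bandwidth
print(framebuffer_gbps(1280, 720, 60))   # ~0.22 GB/s
print(framebuffer_gbps(1280, 360, 120))  # ~0.22 GB/s

# and if the per-frame workload were truly fixed, reported GFLOPs divided by FPS
# (i.e. FLOPs per frame) would come out identical on every run
FLOPS_PER_FRAME = 1.5e9  # hypothetical constant workload
for fps in (510, 530, 550):
    reported_gflops = fps * FLOPS_PER_FRAME / 1e9
    print(fps, reported_gflops / fps)  # 1.5 every time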
Why should the FPS-to-GFLOPs ratio be constant?
You can count how many compute operations are done when generating fractals, but random fractal generation plus viewpoint panning results in a different amount of work, and therefore a different time, per frame (illustrated below).
I just ran that Julia benchmark on my PC about three times and my scores ranged between 510 and 550 FPS.
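For what it's worth, the per-frame cost of an escape-time Julia render really does depend on the seed constant and the region on screen, which would explain that FPS spread. A toy CPU version in Python (arbitrary parameters, not the benchmark's actual shader) shows the effect:

def julia_flops(c, zoom, width=160, height=90, max_iter=256):
    # very rough count of the floating-point work for one frame
    flops = 0
    for py in range(height):
        for px in range(width):
            # map the pixel into the complex plane around the origin
            z = complex((px / width - 0.5) * 3.0 / zoom,
                        (py / height - 0.5) * 2.0 / zoom)
            for _ in range(max_iter):
                if abs(z) > 2.0:
                    break
                z = z * z + c
                flops += 8  # ballpark per iteration: complex multiply, add, magnitude test
    return flops

# different seeds and zoom levels give noticeably different per-frame FLOP counts
print(julia_flops(complex(-0.8, 0.156), zoom=1.0))
print(julia_flops(complex(0.285, 0.01), zoom=2.5))

So unless the benchmark locks the seed and the camera path, the FLOPs per frame (and with it the FPS) is going to wander between runs.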
Did I miss something? Why are people assuming that the hypothetical 800 GFLOPs SoC would be limited to 25 GB/s of bandwidth? The real thing in the Switch is either a TX1 or that other something. If it's the other thing, unless I missed some crucial info, there's absolutely zero evidence pointing to 25 GB/s for that part...
Exactly.
I also don't understand why bandwidth is being questioned so much. If it's a different chip, it could (most probably would..?) have a completely different memory subsystem.
The only thing I'm fighting here is all the rigidity about what could or could not be. At this point, I care very little about what it turns out to be in the end.