If you really have no idea what a CPU does during 3D rendering, then why are you questioning whether it's doing any work or not?
I'm asking exactly because I do know what the CPU is doing in these benchmarks. In GFXBench there is no sound, no game logic, no runtime physics; there is, however, scene setup, draw calls and driver work, all on a single CPU thread at 50% utilization.
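To make "scene setup, draw calls and driver" concrete, here is a rough schematic in plain C++ of the per-frame CPU work a GPU benchmark still has to do. All names and types here are made up for illustration, not GFXBench's actual code; the point is that this work is naturally serial on one submission thread.

```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

// Hypothetical types, only to sketch the shape of the work.
struct DrawItem { int materialId; int meshId; float depth; };

// "Scene setup": per-frame visibility culling and sorting, done on the CPU.
std::vector<DrawItem> buildDrawList(const std::vector<DrawItem>& scene) {
    std::vector<DrawItem> visible;
    for (const DrawItem& d : scene)
        if (d.depth > 0.0f)            // stand-in for a frustum/occlusion test
            visible.push_back(d);
    // Sort by material to minimize state changes; also CPU work.
    std::sort(visible.begin(), visible.end(),
              [](const DrawItem& a, const DrawItem& b) { return a.materialId < b.materialId; });
    return visible;
}

// "Draw calls and driver": every item becomes an API call that the driver
// validates and translates into GPU commands on this same thread.
void submitFrame(const std::vector<DrawItem>& drawList) {
    int lastMaterial = -1;
    for (const DrawItem& d : drawList) {
        if (d.materialId != lastMaterial) {
            // bindProgram / bindTextures would go here
            lastMaterial = d.materialId;
        }
        // glDrawElements(...) or equivalent would go here; its CPU cost is
        // mostly inside the driver, serial on the submitting thread.
    }
}

int main() {
    std::vector<DrawItem> scene(3000, DrawItem{1, 1, 1.0f});
    for (int frame = 0; frame < 10; ++frame)
        submitFrame(buildDrawList(scene));
    std::printf("Submitted 10 frames of CPU-side work on one thread.\n");
    return 0;
}
```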
It's unfortunate that you are incapable of making sense of it, yes.
Yes, I'm incapable of making sense out of some random napkin notes, which don't prove anything anyway.
You definitely don't get why Nintendo is unlikely to just launch a new Switch with "2x performance" any time they want.
It obviously doesn't make sense to launch a new Switch with 2x perf right now, because 16nm is currently the most expensive process; they will wait until others move to 10nm and 16nm prices come down.
The only time where it would be important to do something like that would be at release.
Why so? The TX1 in Switch has good perf right now: it's faster than the S820 in console mode, and we don't know whether the S820 would have been any faster in Switch's handheld mode. We also know that Maxwell does well against GCN and has no major perf holes, so it's possible to port many things to the TX1, while we don't know whether the S820 is capable of handling modern games without major renderer rewrites. Personally, I've already tried porting some UE4 demos to the Shield Tablet at close to Switch handheld-mode perf, and it was super easy.
This demo is mostly bound by single-threaded CPU perf because of its 2000-3000 draw calls per frame. It runs at 20 FPS on average at 720p with 4x AF, TAA, HDR, bloom, dynamic shadows with nice shadow filtering, screen space reflections, relatively complex materials, and depth-buffer physics for particles. The demo uses the general ES3.1 path without any NV-specific extensions, so somebody could try launching it on something with an S820 to see whether it's any faster.
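For a rough sense of why the draw-call count alone can explain the ~20 FPS, here is a back-of-the-envelope sketch. The draw-call count is from the demo; the per-draw CPU cost is my assumption about GL ES driver plus engine overhead on a mobile core, not a measured value.

```cpp
#include <cstdio>

int main() {
    // Assumed numbers for illustration only.
    const double drawsPerFrame  = 2500.0;  // demo issues 2000-3000 draws per frame
    const double usPerDrawOnCpu = 20.0;    // assumed CPU cost per draw (driver + engine)

    const double cpuMsPerFrame = drawsPerFrame * usPerDrawOnCpu / 1000.0;
    const double fpsCap        = 1000.0 / cpuMsPerFrame;

    // With these assumptions the single submission thread caps the frame rate
    // regardless of how fast the GPU is: 2500 * 20 us = 50 ms -> ~20 FPS.
    std::printf("CPU submission time: %.1f ms/frame -> max %.1f FPS\n",
                cpuMsPerFrame, fpsCap);
    return 0;
}
```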
300-768MHz on the GPU is clearly pointing to a 20nm chip from the start.
Switch will be based on a 20nm chip for obvious reasons; nobody is arguing with that. What I'm talking about is the possibility of a 3DS-like mid-life console update with an updated, cost-reduced 16nm SoC.
It took Sony and Microsoft 3 to 4 years until they thought it would be appropriate to launch a mid-gen upgrade with better performance.
Because those mid-gen consoles are based on new hardware, not just a cost-reduced shrink of the same chip.
(the GTX1080 has worse results)
There are always some questionable entries in every benchmark's DB; why do you even compare against the most suspicious one?
Meaning all other 500 series GPUs probably do support DX12 as well
DX12 support really doesn't mean anything by itself; feature levels are what matter, because a GPU can support "DX12" even without such basic FL12 stuff as bindless textures.
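To illustrate the distinction, here is a minimal D3D12 sketch (Windows-only, assumes the Windows 10 SDK): creating a device at the 11_0 minimum is already enough to count as "supports DX12", while the actual maximum feature level and the resource binding tier, which governs bindless-style texture access, are separate queries. FL12_0 requires at least resource binding tier 2.

```cpp
#include <windows.h>
#include <d3d12.h>
#include <cstdio>
#pragma comment(lib, "d3d12.lib")

int main() {
    // Creating a device at 11_0 is all that "supports DX12" requires.
    ID3D12Device* device = nullptr;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                 __uuidof(ID3D12Device), (void**)&device))) {
        std::printf("No DX12 device at all.\n");
        return 1;
    }

    // The real feature level is a separate query against the created device.
    D3D_FEATURE_LEVEL requested[] = { D3D_FEATURE_LEVEL_11_0, D3D_FEATURE_LEVEL_11_1,
                                      D3D_FEATURE_LEVEL_12_0, D3D_FEATURE_LEVEL_12_1 };
    D3D12_FEATURE_DATA_FEATURE_LEVELS fl = {};
    fl.NumFeatureLevels = 4;
    fl.pFeatureLevelsRequested = requested;
    device->CheckFeatureSupport(D3D12_FEATURE_FEATURE_LEVELS, &fl, sizeof(fl));

    // Resource binding tier is what allows bindless-style descriptor access.
    D3D12_FEATURE_DATA_D3D12_OPTIONS opts = {};
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS, &opts, sizeof(opts));

    std::printf("Max feature level: 0x%X, resource binding tier: %d\n",
                fl.MaxSupportedFeatureLevel, (int)opts.ResourceBindingTier);
    device->Release();
    return 0;
}
```

So a "DX12" checkbox in a GPU database can hide anything from FL11_0 with tier 1 binding up to full FL12_1, which is why the checkbox alone proves nothing.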