It probably can't be overstated for VR titles that need to hit even higher framerates. Not every game can, nor should, be a corridor à la Wolfenstein or Doom. 60 fps AssCreed, Fallout 5, Elder Scrolls 5, PUBG, Hitman, etc.
Well, that GPU is still below what a PS4 Pro can do, never mind the Xbox One X. It would be silly to release a console with a huge jump in CPU power and a decrease (or only a slight increase) in GPU power, unless there were some groundbreaking use for CPUs in gaming.
Do you want console A with an 8-core Ryzen and 10 TFLOPs, or console B with a souped-up son of the 8-core Jaguar and 14 TFLOPs? I guarantee that in private everyone will pick the latter (a few might claim otherwise in public forums, taking the politically correct tack about dynamism and AI and 60 FPS and all that nonsense).
- Devs have been offloading stuff to the GPU for ages, but they still can't get to 60 fps in many games. I doubt this will suddenly change; this isn't some magic bullet.
Also, it doesn't always take 2x CPU performance to get from 30 to 60 fps. Some calculations (like the world simulation etc.) are independent of the game's framerate, or at least should be. Even physics simulations don't need to update every frame, and the AI doesn't need to re-think every frame (sometimes these already update more frequently than the framerate); if an agent wants to reach a certain point, only its model/position has to be updated each frame. 60 fps also requires 2x GPU performance; a faster CPU alone is not enough.
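As a rough illustration of that decoupling, here's a minimal fixed-timestep loop sketch: the simulation (physics, AI) ticks at its own fixed rate while rendering runs every frame and just interpolates positions. All names and rates below are made up for illustration, not taken from any particular engine.

```cpp
#include <chrono>
#include <cstdio>

// Minimal fixed-timestep sketch: world simulation ticks at a fixed rate,
// rendering runs once per displayed frame. Rates/names are illustrative only.
using Clock = std::chrono::steady_clock;

constexpr double kSimStep = 1.0 / 30.0;  // simulation tick: 30 Hz, independent of render fps

void updateSimulation(double dt) {
    // Physics integration, AI decisions, world simulation would go here.
    (void)dt;
}

void render(double alpha) {
    // Draw the scene, interpolating object positions between the previous and
    // current simulation states by 'alpha' so motion still looks smooth at 60+ fps.
    std::printf("render, interpolation alpha = %.2f\n", alpha);
}

int main() {
    double accumulator = 0.0;
    auto previous = Clock::now();

    for (int frame = 0; frame < 600; ++frame) {  // stand-in for the real game loop
        auto now = Clock::now();
        accumulator += std::chrono::duration<double>(now - previous).count();
        previous = now;

        // Run however many fixed simulation steps the elapsed time calls for;
        // at 60 fps rendering that's roughly every other frame.
        while (accumulator >= kSimStep) {
            updateSimulation(kSimStep);
            accumulator -= kSimStep;
        }

        render(accumulator / kSimStep);  // render every frame regardless of sim rate
    }
    return 0;
}
```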
- Devs have been offloading stuff to the GPU for ages, but they still can't get to 60 fps in many games. I doubt this will suddenly change; this isn't some magic bullet.
- Sony needs to get to 60+ fps for as many games as possible so they can have more VR support. The obvious choice is to put a much bigger focus on the CPU this time.
That is a choice: they can scale things back until they have a rock-solid 60 fps. If they don't, they are choosing not to hit 60 fps. As Insomniac wrote a few years back, it's basically 60 fps vs. prettier pixels.
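One common way to "scale things back" at runtime is dynamic resolution: drop the render resolution whenever GPU frame time creeps past the 60 fps budget. A minimal sketch of that idea follows; the budget, step sizes and clamps are made-up placeholder values, not from Insomniac or any specific engine.

```cpp
// Rough sketch of dynamic resolution scaling as one way to hold 60 fps.
// All thresholds below are illustrative placeholders.
#include <algorithm>

constexpr double kFrameBudgetMs = 1000.0 / 60.0;  // ~16.7 ms per frame at 60 fps

// Returns the render-resolution scale to use next frame, given how long the
// GPU took on the previous one. Call once per frame with a measured GPU time.
double nextResolutionScale(double currentScale, double lastGpuFrameMs) {
    if (lastGpuFrameMs > kFrameBudgetMs * 0.95) {
        currentScale -= 0.05;   // over (or near) budget: render fewer pixels
    } else if (lastGpuFrameMs < kFrameBudgetMs * 0.80) {
        currentScale += 0.02;   // comfortably under budget: claw resolution back
    }
    // Arbitrary clamps: never drop below 70% of native, never exceed native.
    return std::clamp(currentScale, 0.70, 1.0);
}
```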
You guarantee it? Okay.
My PC has a Devil's Canyon CPU at 4.5 GHz (at least twice as fast as the X1X CPU) paired with 32GB of overclocked RAM and an SSD... and a barely-faster-than-PS4-level GTX 680 (albeit at marginally overclocked 770 speeds). So you're wrong.
Okay, now that you know you're wrong, how are you going to make good on this guarantee of yours? What's it worth? What do I get?
You didn't prove anything. @Rangers did write that "some might claim otherwise in public forums", which is exactly what you did.
Besides, all you have is an overclocked Haswell from mid-2014 and the top-end Nvidia GPU from mid-2012.
For all we know, you could have purchased the GTX 680 for $500 at release and the CPU last week on eBay for $50 to replace the Core i3 you had before.
Nor did you say what you use your PC for. I'd guess it's probably not just gaming, otherwise you wouldn't have 32GB of RAM. So your choice of components for a PC isn't really proof of what you would choose for a gaming console.
I do agree with @Rangers. Everyone but a tiny niche would choose a 14 TFLOPs console with higher-clocked 8-core Jaguars over a 10 TFLOPs console with an 8-core Ryzen.
40% extra GPU power would make for prettier screenshots and videos, and thus the games would sell better.
The CPU has been proportionally losing power and die area to the GPU in gaming consoles, and pretty much every development in HPC we've seen over the last couple of years points to that trend continuing.
No one has been coming up with amazing new and innovative ideas about how to use CPUs instead of GPUs for task X or Y to make them more efficient.
How are you arriving at this trade-off?
Somehow an extra ~30mm^2 CCX @ 7nm is equivalent to +40% shader/tex, plus an appropriate increase in bandwidth and its associated cost?
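To put rough numbers on that question, here's the kind of back-of-envelope arithmetic involved. Only the ~30mm^2 CCX figure comes from the post above; the shader-array area is a hypothetical placeholder, not a real die measurement.

```cpp
// Back-of-envelope sketch of the area trade-off being questioned above.
// The ~30 mm^2 CCX figure is from the post; the GPU shader-array area is a
// HYPOTHETICAL placeholder, not a measured value.
#include <cstdio>

int main() {
    const double ccxAreaMm2       = 30.0;   // extra Zen CCX @ 7nm (from the post)
    const double gpuShaderAreaMm2 = 150.0;  // hypothetical shader/tex array area
    const double extraGpuFraction = 0.40;   // +40% needed to go from 10 to 14 TFLOPs

    const double extraGpuAreaMm2 = gpuShaderAreaMm2 * extraGpuFraction;

    std::printf("Extra CCX:       ~%.0f mm^2\n", ccxAreaMm2);
    std::printf("+40%% shader/tex: ~%.0f mm^2 (before any extra memory controllers)\n",
                extraGpuAreaMm2);
    // With these placeholder numbers the GPU bump costs roughly twice the die
    // area of the extra CCX, and that's before the wider memory bus it needs.
    return 0;
}
```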