Could next gen consoles focus mainly on CPU?

The majority of CPU cycles are still being spent in render code. If they could offload that to the GPU there would be plenty of CPU left for other things.
 
I would like an entirely new and shiny CPU to be dedicated only to gaming. Sony should take the quad-core Jaguar from the PS4 and reuse it for the main OS work. But that's more cost, and even adding 50 cents more "than needed" is too much in the long run.
 
+ 60 fps AssCreed, Fallout 5, Elder Scrolls 5, PUBG, Hitman etc etc
Its importance probably can't be overstated for VR titles that need to hit even higher framerates. Not every game can or should be a corridor a la Wolfenstein or Doom.
 
Well, that GPU is still below what a PS4 Pro can do, never mind the Xbox One X. It would be silly to release a console with a huge jump in CPU power and only a decrease or slight increase in GPU power, unless there were some groundbreaking use for CPUs in gaming.


This pretty much covers it.

Don't we in fact always hear how GPU compute is becoming more and more important, taking over more CPU duties, and CPUs less so?

Basically you'll likely get a big honking GPU with some piddly little CPUs hung off it next gen, just like this gen, whether those CPUs are AMD or ARM. Although I guess we could see something like Ryzen, I might even consider that a waste of resources. If you have X die area, it's better to spend more of it on GPU than CPU.

Do you want console A with an 8-core Ryzen and 10 Tflops, or console B with a souped-up, son-of-Jaguar 8-core and 14 Tflops? I guarantee that in private everyone will pick the latter (a few might claim otherwise in public forums, taking the politically correct tack about dynamism and AI and 60 fps and all that nonsense).

The headline spec of the PS5 that people are expecting to get announced within 3 years to counter XB1X is gonna be GPU teraflops. If those come in even remotely below expectations, the wailing will be loud and inconsolable.
 
Do you want console A with an 8-core Ryzen and 10 Tflops, or console B with a souped-up, son-of-Jaguar 8-core and 14 Tflops? I guarantee that in private everyone will pick the latter (a few might claim otherwise in public forums, taking the politically correct tack about dynamism and AI and 60 fps and all that nonsense).

You guarantee it? Okay.

My PC has a Devil's Canyon CPU at 4.5 GHz (at least twice as fast as the X1X CPU) paired with 32GB of overclocked RAM and an SSD .... and a barely-faster-than-PS4-level GTX 680 (albeit at marginally overclocked 770 speeds). So you're wrong.

Okay, now you know you're wrong, how are you going to make good on this guarantee of yours? What's this guarantee of yours worth? What do I get?
 
It all depends on the details... especially power consumption and any downscaling/downclocking required to hit the thermal and power footprints.

Are those FP32 or FP16 FLOPS?
At what speed could an 8-core Ryzen run?
At what speed could an 8-core Jaguar+++ run?
 
Over-budgeting on CPU is going to be detrimental to the next console cycle. If you need more GPU power, the CPU taking up that silicon will not help. Whereas the opposite can happen: we can continually offload more CPU tasks to the GPU. Our technologies for games and other industries, and the direction they are heading in, have largely been about moving those operations over to the GPU.

Async compute, GPGPU, hUMA, ExecuteIndirect, RPM, command processor customizations, the move to have DX12 built directly into the command processor...

The trend does not point towards consoles needing a larger CPU; it feels like it's pointing to the opposite: a smaller CPU and an even larger GPU. Developers just need the technology and the time to switch over to this new mode.
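
To make the idea concrete, here is a minimal, purely illustrative CPU-side mock-up of GPU-driven submission: the CPU uploads per-object data once, and a culling pass (here an ordinary C++ loop standing in for what would be a compute shader feeding ExecuteIndirect) builds the draw list. All type and function names are hypothetical, not any console SDK's API; the argument struct only roughly mirrors an indexed indirect-draw record.

```cpp
// Illustrative CPU mock-up of GPU-driven draw submission (not real SDK code).
#include <cstdint>
#include <cstdio>
#include <vector>

struct ObjectData {            // per-object data the CPU uploads once
    float centerX, centerY, centerZ;
    float radius;
    uint32_t indexCount;
    uint32_t firstIndex;
};

struct DrawArgs {              // roughly the layout of an indexed indirect-draw record
    uint32_t indexCount;
    uint32_t instanceCount;
    uint32_t firstIndex;
    int32_t  baseVertex;
    uint32_t firstInstance;
};

// Stand-in for the culling compute pass: visible objects get a draw record,
// and the CPU never touches the per-object list again after the initial upload.
std::vector<DrawArgs> cullAndBuildDraws(const std::vector<ObjectData>& objects,
                                        float cullPlaneZ) {
    std::vector<DrawArgs> args;
    for (const ObjectData& o : objects) {
        if (o.centerZ - o.radius < cullPlaneZ)      // trivial "frustum" test
            args.push_back({o.indexCount, 1, o.firstIndex, 0, 0});
    }
    return args;               // on the GPU this buffer would feed ExecuteIndirect
}

int main() {
    std::vector<ObjectData> scene = {
        {0.0f, 0.0f,  5.0f, 1.0f, 300, 0},
        {0.0f, 0.0f, 50.0f, 1.0f, 120, 300},        // beyond the cull plane, dropped
    };
    auto draws = cullAndBuildDraws(scene, 20.0f);
    std::printf("%zu of %zu objects submitted\n", draws.size(), scene.size());
}
```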
 
  • Devs have been offloading stuff to the GPU for ages, but they still can't get to 60fps in many games. I doubt this will suddenly change, this isn't some magic bullet.
  • I don't think it makes sense to again choose a weak CPU and a very strong GPU in a world where there are mid-gen consoles. They can easily release a PS5 Pro with a much more powerful GPU, but they can't really do the same with a more powerful CPU, because this would be a lot more work for game devs when it comes to Pro patches.
  • AMD CPUs are currently extremely strong, while their GPU efforts are disappointing. They need to leverage this. They can't just ignore the best chips AMD has to offer in years.
  • Sony needs to get to 60+fps for as many games as possible so they can have more VR support. The obvious choice is to have a much bigger focus on CPU this time.
 
  • Devs have been offloading stuff to the GPU for ages, but they still can't get to 60fps in many games. I doubt this will suddenly change, this isn't some magic bullet.

This is a choice: they can scale things back until they have a rock solid 60 fps. If they do not, they are choosing not to get 60 fps. As Insomniac wrote a few years back, it's basically 60 fps vs prettier pixels.
 
It would be exciting to think of software scaling up and not down. Hitman and Witcher 3 both drop under 40 fps in busy areas [edit: on the super clocked, 'enhanced' X1 Jag cores]. I want to see larger crowds with enhanced behaviors, not the same old shit.

PUBG can't hold 30 even on X1X, never mind 60. And PUBG has been a staggeringly big hit. Even the beta has shifted well on X1. There's a thirst for games that move outside increasingly hi-res corridors with the same 10 enemies...
 
Simple question: why?
The CPU has become less and less relevant over the last 10 years.
Yes, they will get a big bump in CPU power, but the focus is still the GPU. You can sell screenshots etc., but it is really hard to sell e.g. 60 fps or something like that in a magazine. Similarly, selling HDR is really hard if you can't show it to somebody directly.

The really good question would be: will we really see a big difference like in other console generations?
PS1 -> PS2: big jump
PS2 -> PS3: big jump
PS3/xb360 -> PS4/xb1: still big, but not visually that big (only in some cases), and PS3/xb360 games did already (and still do) look quite good
PS4/xb1 -> PS4 Pro/xb1x: not that big
PS4 Pro/xb1x -> XXX: well, this gets interesting. Seems like console prices will rise or we get "minor" iterations.

60 fps also requires 2x GPU performance. A faster CPU alone is not enough.
Also, you don't always need 2x CPU performance to get from 30 to 60 fps. There can be calculations (like the world simulation etc.) that are independent of the game's framerate (or at least should be). Even physics simulations don't need to update every frame, and the AI doesn't need to think again every frame (sometimes they already update even more frequently) to reach a certain point; only the model/position must be updated each frame (sketched below).

The 2x GPU part is much harder to reach, because it also means ~2x the memory bandwidth, etc.
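
A minimal sketch of that decoupling, assuming a fixed 30 Hz simulation tick while rendering runs as fast as it can; the world update and render calls are hypothetical stand-ins (reduced to comments here), not any real engine's API.

```cpp
// Fixed-timestep simulation decoupled from the render loop (illustrative only).
#include <chrono>
#include <cstdio>
#include <thread>

int main() {
    using Clock = std::chrono::steady_clock;
    const double simStep = 1.0 / 30.0;   // world/physics/AI tick fixed at 30 Hz
    double accumulator = 0.0;
    double worldTime = 0.0;
    auto previous = Clock::now();

    for (int frame = 0; frame < 300; ++frame) {      // stand-in for the render loop
        auto now = Clock::now();
        accumulator += std::chrono::duration<double>(now - previous).count();
        previous = now;

        while (accumulator >= simStep) {  // catch the simulation up in fixed steps
            worldTime += simStep;         // updateWorld(simStep) would go here
            accumulator -= simStep;
        }
        // render() would go here, interpolating by (accumulator / simStep),
        // at whatever rate the GPU can sustain (e.g. 60+ fps).
        std::this_thread::sleep_for(std::chrono::milliseconds(16));
    }
    std::printf("simulated %.2f s of world time\n", worldTime);
}
```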
 
It is easier to lower geometry detail and dynamically scale spatial resolution than to cut the complexity of all your gameplay scripts by a factor of two.

Also, swapping 30mm² of CPU silicon for 30mm² of GPU silicon is not equal cost-wise. The GPU part demands much more bandwidth, and the memory subsystem is where the bulk of the cost in a console lies (the DRAM itself, but also all the buffers, caches and I/Os on the main SOC that deal with memory). A CCX (four cores) requires about 20GB/s to function optimally; the same area of CUs would need around 100GB/s, adding more cost to your memory subsystem.
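
A quick back-of-envelope check, taking those figures as given (the 30mm², 20GB/s and 100GB/s values are the ones quoted above, not measurements):

```cpp
// Back-of-envelope: bandwidth cost of swapping CPU area for GPU area (illustrative).
#include <cstdio>

int main() {
    const double area_mm2     = 30.0;   // die area being swapped (figure quoted above)
    const double ccxBandwidth = 20.0;   // GB/s to keep a 4-core CCX fed (quoted above)
    const double cuBandwidth  = 100.0;  // GB/s for the same area of CUs (quoted above)

    std::printf("Swapping %.0f mm^2 of CPU for CUs adds roughly %.0f GB/s of demand,\n"
                "about %.0fx the bandwidth per mm^2, which the memory subsystem must absorb.\n",
                area_mm2, cuBandwidth - ccxBandwidth, cuBandwidth / ccxBandwidth);
}
```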

Cheers
 
  • Devs have been offloading stuff to the GPU for ages, but they still can't get to 60fps in many games. I doubt this will suddenly change, this isn't some magic bullet.
  • Sony needs to get to 60+fps for as many games as possible so they can have more VR support. The obvious choice is to have a much bigger focus on CPU this time.

No matter how much grunt the next system's gonna pack, everyone will still try and make the prettiest game humanly possible. Seems to be working out for them too, so that's probably not going to change any time soon. The 30 fps game is not gonna go away, especially as the jumps in visual fidelity are gonna get disproportionately smaller with every new hardware generation.

And I doubt VR will have much of an impact on that either. Most of the really big hitters (i.e. the cinematic, behind-the-shoulder blockbuster games) are poor fits for VR anyways.
 
This is a choice: they can scale things back until they have a rock solid 60 fps. If they do not, they are choosing not to get 60 fps. As Insomniac wrote a few years back, it's basically 60 fps vs prettier pixels.

It's only a choice if you consider a marriage with no way out a choice. With enough CPU all you need is a dynamic scaling option and the end-user has a real choice.
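
A minimal sketch of what such a dynamic scaling option could look like, assuming a simple frame-time-driven resolution scale; the function name, thresholds and bounds are all illustrative, not any particular engine's API.

```cpp
// Toy dynamic-resolution controller driven by the previous frame's cost (illustrative).
#include <algorithm>
#include <cstdio>

float adjustScale(float currentScale, float frameMs, float budgetMs = 16.6f) {
    if (frameMs > budgetMs * 1.05f)        // over budget: shrink internal resolution
        currentScale *= 0.95f;
    else if (frameMs < budgetMs * 0.90f)   // comfortably under: grow back toward native
        currentScale *= 1.02f;
    return std::clamp(currentScale, 0.6f, 1.0f);
}

int main() {
    float scale = 1.0f;
    const float frameTimes[] = {18.0f, 19.5f, 17.0f, 15.0f, 14.0f, 14.5f};
    for (float ms : frameTimes) {
        scale = adjustScale(scale, ms);
        std::printf("frame %.1f ms -> resolution scale %.2f\n", ms, scale);
    }
}
```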
 
You guarantee it? Okay.

My PC has a Devil's Canyon CPU at 4.5 GHz (at least twice as fast as the X1X CPU) paired with 32GB of overclocked RAM and an SSD .... and a barely-faster-than-PS4-level GTX 680 (albeit at marginally overclocked 770 speeds). So you're wrong.

Okay, now you know you're wrong, how are you going to make good on this guarantee of yours? What's this guarantee of yours worth? What do I get?

You didn't prove anything. @Rangers did write that some might claim otherwise in public forums, which is exactly what you did.

Besides, all you have is an overclocked Haswell from mid-2014 and the top-end nvidia GPU from mid-2012.
For all we know, you could have purchased the GTX 680 for $500 at release time and the CPU last week on ebay for $50 to replace the Core i3 you had before that.
And neither did you say what you use your PC for. I'd say probably not just gaming, otherwise you wouldn't have 32GB RAM. So your choice of components for a PC isn't really proof of what you would choose for a gaming console.


I do agree with @Rangers. Everyone but a tiny niche would choose a 14 TFLOPs console with higher-clocked 8-core Jaguars over a 10 TFLOPs one with an 8-core Ryzen.
40% extra GPU power would make for prettier screenshots and videos, and thus the games would sell better.
The CPU is proportionally losing power and area to the GPU in gaming consoles, and pretty much every evolution in HPC we've seen during the last couple of years points to that trend continuing.
No one has been coming up with amazing new and innovative ideas about how to use CPUs instead of GPUs for task X or Y to make them more efficient.
 
I'm thinking a 4-core/8-thread Ryzen will be enough for next gen. It'll give developers much better single-threaded performance through both a stronger core and a higher clock than Jaguar. The rest should probably be spent on the GPU, as customers are going to find it hard to distinguish between a mid-gen refresh and next gen as it is.
 
Everyone but a tiny niche would choose a 14 TFLOPs console with higher-clocked 8-core Jaguars over a 10 TFLOPs one with an 8-core Ryzen.

How are you arriving at this trade-off :?:

Somehow an extra ~30mm^2 CCX @ 7nm is equivalent to +40% shader/tex, an appropriate increase in bandwidth & associated cost?
 
You didn't prove anything. @Rangers did write that some might claim otherwise in public forums, which is exactly what you did.

Ah, so I'm lying.

Besides, all you have is an overclocked Haswell from mid-2014 and the top-end nvidia GPU from mid-2012.
For all we know, you could have purchased the GTX 680 for $500 at release time and the CPU last week on ebay for $50 to replace the Core i3 you had before that.
And neither did you say what you use your PC for. I'd say probably not just gaming, otherwise you wouldn't have 32GB RAM. So your choice of components for a PC isn't really proof of what you would choose for a gaming console.

I purchased the Haswell brand new, along with a brand new overclocking motherboard, when it was the top end quad core i7. I also bought my DDR3 new. First 16 GB, then another 16 GB so I could be sure of matching kits. I bought the GTX 680 second hand off ebay a year later.

My previous system was a 2500K @ 4.5 that died. It was paired with a 560 Ti.

I use it for not playing games at 20 fps.

And B3D. :s

I do agree with @Rangers. Everyone but a tiny niche would choose a 14 TFLOPs console with higher-clocked 8-core Jaguars over a 10 TFLOPs one with an 8-core Ryzen.

The PS3 ended up doing alright against the 360. So did the Wii, actually ...

On PC you can see that this is absolutely not the case. That kind of uniform imbalance doesn't exist where customers have more choice, and a better understanding of the impact on their gaming.

40% extra GPU power would make for prettier screenshots and videos, and thus the games would sell better.
The CPU is proportionally losing power and area to the GPU in gaming consoles, and pretty much every evolution in HPC we've seen during the last couple of years points to that trend continuing.
No one has been coming up with amazing new and innovative ideas about how to use CPUs instead of GPUs for task X or Y to make them more efficient.

You don't need to step away from increasingly large GPUs, and you don't need to come up with "amazing new ideas" for using a CPU. Demanding AI (lots of good FSMs), simulation, and solid frame rates are pretty solid reasons as it is.
 
How are you arriving at this trade-off :?:

Somehow an extra ~30mm^2 CCX @ 7nm is equivalent to +40% shader/tex, an appropriate increase in bandwidth & associated cost?

I think that's a trade-off Rangers pulled out of thin air in an attempt to show how disastrous a faster CPU would be.

There's an assumption that if you never make faster CPUs, there will never be games that people want to play that require faster CPUs.

All you need is a couple of sufficiently compelling games that excel on, or can only run on, faster CPUs, and a paper FLOPs difference that people struggle to see on their screen will begin to lose its lustre.
 