Could next gen consoles focus mainly on CPU?

A cheaper alternative to Ryzen may be reworking the Jaguar cores. IIRC AMD added additional pipeline stages when they moved from Bobcat to Jaguar in order to facilitate higher frequencies (originally 2 GHz on the desktop, now 2.3 GHz in the X1X). Adding even more stages while moving to a 3-wide design may be all that's required to feed beefier GPUs. Imagine a 3 GHz Jaguar with 10-15% higher IPC. That's roughly 50% more CPU.
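A quick back-of-envelope sketch of that claim, taking the 2.3 GHz X1X clock and the 10-15% IPC figure above as given (the 3.0 GHz target is the hypothetical from the post, not a real part):

```python
# Back-of-envelope for the reworked-Jaguar claim: 3.0 GHz with 10-15% higher
# IPC, compared against the 2.3 GHz Jaguar in the Xbox One X.
baseline_clock_ghz = 2.3   # Jaguar in the X1X
reworked_clock_ghz = 3.0   # hypothetical reworked Jaguar
for ipc_gain in (1.10, 1.15):
    speedup = (reworked_clock_ghz / baseline_clock_ghz) * ipc_gain
    print(f"IPC +{(ipc_gain - 1) * 100:.0f}% -> {speedup:.2f}x per-core throughput "
          f"({(speedup - 1) * 100:.0f}% more than the X1X cores)")
# Prints ~1.43x and ~1.50x; the gain over the 1.6-1.75 GHz base consoles
# would be considerably larger.
```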
 

I'd really rather not imagine that. There's no point.

Moving from one node to another isn't that difficult (companies basically offshore that type of work nowadays); adding a new decoder and additional pipeline stages is basically redoing the CPU design. Why waste the time and resources when there is a good, scalable design available?
 

I’d much prefer Ryzen as well, but to play devil’s advocate, perhaps Sony/MS believe that a beefier GPU is an easier sell. The Pro and X were marketed using GPU TFLOPS and 4K/HDR. Prettier graphics may simply resonate more with the consumer than advanced AI or NPC counts. It’d probably demo better at a show like E3, at least.

Improved Jaguars could offer acceptable performance while using less area and less of the power budget that must be shared with the GPU, all while possibly offering the benefit of easier back compat. If you look at the X and Pro there’s a clear trend: a massively large GPU with just enough CPU to feed it. Do everything possible to limit CPU overhead and enable the offloading of tasks to the GPU.

I think it’s working quite well for this generation of consoles tbh. Sony and MS may feel the same.
 
What’s the next “best in class” CPU? Imagine whatever that will be, take whatever scraps get thrown on the trash heap because of redundancy failures, and that’s what a console will get.
 
Yeah sniffy... I think that too... With memory shared with the GPU, an overly strong CPU may not be fully utilized because it "steals" bandwidth from the GPU... but perhaps the unified memory approach may change.
 
The amount of bandwidth needed by the CPU is minuscule compared to the GPU.

A Zen core has roughly three times the single-thread performance of a Jaguar core. Comparing the die area of 8 Jaguar cores to 8 Zen cores is thus misleading; the same goes for power. You'd have to have 16 or 24 Jaguar cores to get the same throughput as eight Zen cores.
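To make the 16-or-24 figure concrete, a minimal sketch assuming each Zen core is worth somewhere between 2x and 3x a Jaguar core (an assumed bracket around the "roughly three times" figure, not a benchmark):

```python
# Rough core-count equivalence: how many Jaguar cores it would take to match
# eight Zen cores, assuming each Zen core is worth 2x-3x a Jaguar core.
import math

ZEN_CORES = 8
for zen_vs_jaguar in (2.0, 3.0):
    jaguars_needed = math.ceil(ZEN_CORES * zen_vs_jaguar)
    print(f"At {zen_vs_jaguar:.0f}x per core, ~{jaguars_needed} Jaguar cores "
          f"match {ZEN_CORES} Zen cores in throughput")
# -> 16 and 24; and even then, each thread still runs a half to a third as fast.
```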

Even if your sea of Jag cores matches the throughput of Zen cores, you still have to contend with the much lower single-thread performance. That might not matter for titles built specifically for the platform, but it will matter for cross-platform titles which use standard game engines. Your Jag-based console is just one PlayerUnknown's Battlegrounds-like sleeper hit away from being perceived as "last gen".

Cheers
 
I guess the point isn't CPU usage, but CPU impact. PS4's bandwidth can get trounced by CPU access. When considering super-high-bandwidth solutions, is there any option that's particularly strong or weak when it comes to access contention? Could HBM see massive latency or bandwidth penalties if the GPU and CPU were both making demands, for example?
 
Yeah sniffy... I think that too... With memory shared with the GPU, an overly strong CPU may not be fully utilized because it "steals" bandwidth from the GPU... but perhaps the unified memory approach may change.
I don't think so. Unified memory is the most "effective" for the long run, simply because it's up to the developer how to use the resources. Actually I'm a fan of separate pools, but then your chip must have a second memory interface, which also increases the cost of the chip.
Well, I think I would prefer larger caches and one pool of memory, which should also be cheaper. Only if we had a situation like the Xbox 360, where one memory is so much faster than the main memory, could I imagine two memory pools being a good solution.
HBM, on the other hand... I still haven't seen something that really gets better because of HBM. The current HBM cards (Fiji, and now Vega) have problems other than memory bandwidth, and GDDR5X can also be a problem for the CPU, as we see with the current 1080 Ti cards and their compute results.
I really hope GDDR6 memory is available soon and can be used for mass production with the next-gen consoles. And I really hope it's not any worse for the CPU than GDDR5.

What's your definition of trounced? The maximum impact I've seen is ~20 GB/s of the 140 GB/s total bandwidth, or ~15%.

Cheers
The memory contention problem on the PS4 was worse than those 15%. If the CPU needed 10 GB/s, the bandwidth was reduced by ~30 GB/s according to Sony's own presentation. So a beefier CPU would really need much more memory bandwidth to compensate.
 
What's your definition of trounced? The maximum impact I've seen is ~20 GB/s of the 140 GB/s total bandwidth, or ~15%.
[Attached slide: PS4 GPU bandwidth is ~140 GB/s, not 176 GB/s]

Down from ~140 for GPU to ~100 once the CPU has taken 10 GB/s. That's about a 30% drop.
 
If you put 16 improved Jaguars (maybe at 3.0 GHz) in a console, then in the PC realm you'd have to buy a 16-thread CPU like Ryzen to run the SAME game... That means more sales of new CPUs for AMD and Intel...
 
I see a drop from ~135 to ~110 GB/s, ~19%.
That 110 GB/s includes the CPU's BW. It's 100 GB/s for the GPU in this rough graph. Furthermore, the RAM is rated at 176 GB/s, no? So from the theoretical peak BW available to the GPU, we see a 55% reduction to what's actually available to the GPU when the CPU is busy. I can't recall the specifics of that 140 GB/s figure.
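For reference, a small script that recomputes the percentages quoted in this exchange from the GB/s figures in the posts and the slide; nothing here is measured, it's just the arithmetic behind the ~15%, ~19%, ~30% and peak-vs-actual numbers:

```python
# Recompute the contention percentages from the GB/s figures quoted above.
def reduction_pct(before_gbps: float, after_gbps: float) -> float:
    """Percentage drop from 'before' to 'after'."""
    return (1 - after_gbps / before_gbps) * 100

print(f"20 of 140 GB/s taken by the CPU:  {20 / 140 * 100:.1f}%  (the ~15% figure)")
print(f"GPU drops from ~135 to ~110 GB/s: {reduction_pct(135, 110):.1f}%  (the ~19% figure)")
print(f"GPU drops from ~140 to ~100 GB/s: {reduction_pct(140, 100):.1f}%  (the ~30% figure)")
print(f"Rated 176 GB/s down to ~100 GB/s: {reduction_pct(176, 100):.1f}% below the rated peak")
# The last line compares a real-world figure against the theoretical
# interface rating rather than against another measured number.
```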
 
Which is really a LOT... It would be interesting to know whether either GDDR6 or HBM2 can improve that... As far as I know, even in 2013 it was possible for Sony and MS to use much stronger CPUs... but they chose Jaguars... Why?
 
AMD didn't have an APU with anything better than Jaguar. A stronger CPU would have meant a discrete CPU and GPU.
 
I lack a lot of technical insight, but with Infinity Fabric moving to PCIe 4 (or even 5, depending on timing) and EMIB-like interconnects, can we expect more of the budget to be dedicated to a separate CPU?
Cheaper than an SoC, more cache, more cores, less BW contention.
 
That 110 GB/s includes the CPU's BW. It's 100 GB/s for the GPU in this rough graph. Furthermore, the RAM is rated at 176 GB/s, no? So from the theoretical peak BW available to the GPU, we see a 55% reduction to what's actually available to the GPU when the CPU is busy. I can't recall the specifics of that 140 GB/s figure.
Your 55% is mixing apples and oranges.

You can't get an interface's spec bandwidth in real-world usage, unless all you do is read the entire chip sequentially.

You would get only a few percent of peak performance with random 1-byte reads, because the max clock without dead cycles depends on the prefetch size. And there are also penalties every time it switches banks or changes read/write direction.

You'd get the same thing on any memory. GDDR6 manages to increase the prefetch while keeping the data granularity the same (256 bits, just like GDDR5), so there's that.
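A toy model of that granularity effect, assuming the 256-bit (32-byte) access granularity mentioned above and ignoring bank-switch and read/write turnaround penalties entirely, so it's a best case rather than a real-world figure:

```python
# Toy model: a GDDR5/GDDR6-style interface moves a full 256-bit (32-byte)
# burst per access, whether or not the requester wants all of those bytes.
ACCESS_GRANULARITY_BYTES = 256 // 8   # 256-bit access granularity, as noted above

def best_case_efficiency(useful_bytes: int) -> float:
    """Fraction of rated bandwidth carrying useful data for one random access."""
    return min(useful_bytes, ACCESS_GRANULARITY_BYTES) / ACCESS_GRANULARITY_BYTES

for size in (1, 4, 16, 32):
    print(f"{size:>2}-byte random reads: at best {best_case_efficiency(size) * 100:.0f}% of peak")
# 1-byte random reads already land at ~3% of the rated figure, before any
# bank-switch or read/write turnaround penalties make it worse.
```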
 