Maybe a heterogeneous approach will be used... 8 Jaguar cores plus maybe 8 Ryzen cores.
Why? Why not just Ryzen cores if they're putting them in?
A cheaper alternative to Ryzen may be reworking the Jaguar cores. IIRC AMD added additional pipeline stages when it moved from Bobcat to Jaguar in order to facilitate higher frequencies (originally 2GHz on the desktop, now 2.3GHz in the X1X). Adding even more stages while moving to a 3-wide design may be all that's required to feed beefier GPUs. Imagine a 3GHz Jaguar with 10-15% higher IPC: that's roughly 50% more CPU.
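To sanity-check that, a quick back-of-the-envelope calculation (the 2.3GHz baseline is the X1X clock mentioned above; the 3GHz target and the 10-15% IPC uplift are the speculative figures from the post, not anything confirmed):

# Hypothetical reworked-Jaguar scaling: per-core throughput ~ clock * IPC.
baseline_clock_ghz = 2.3   # X1X Jaguar clock (from the post above)
target_clock_ghz = 3.0     # speculative clock for a deeper-pipelined Jaguar

for ipc_gain in (0.10, 0.15):
    speedup = (target_clock_ghz / baseline_clock_ghz) * (1.0 + ipc_gain)
    print(f"+{ipc_gain:.0%} IPC -> ~{speedup - 1.0:.0%} more per-core throughput")

# Prints roughly +43% and +50%, so the "roughly 50% more CPU" figure only holds
# at the optimistic end of that IPC range.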
I'd really rather not imagine that. There's no point.
Moving from one node to another is not that difficult (companies basically offshore that type of work nowadays); adding a new decoder and additional pipeline stages is basically redoing the CPU design. Why waste the time and resources when there is a good, scalable design available?
I guess the point isn't CPU usage, but CPU impact. The PS4's bandwidth can get trounced by CPU access.
Yea sniffy... I think that too... with memory shared with the GPU, a too-strong CPU may not be fully utilised because it "steals" bandwidth from the GPU... but perhaps the unified memory approach may change.
I don't think so. Unified memory is the most "effective" in the long run, simply because it's up to the developer how to use the resources. Actually I'm a fan of separate pools, but then your chip must have a second memory interface, which also increases the cost of the chip.
What's your definition of trounced? The maximum impact I've seen is ~20GB/s of the 140GB/s total bandwidth, or ~15%.
The memory contention problem of the PS4 was worse than those 15%. If the CPU needed 10GB/s, the bandwidth was reduced by ~30GB/s according to Sony's own presentation. So a beefier CPU really needs much more memory bandwidth to compensate.
Cheers
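Turning the figure cited above into a rough model: if 10GB/s of CPU traffic costs ~30GB/s of bandwidth, each GB/s the CPU pulls removes roughly 3GB/s from the usable pool, and the CPU's own share comes out of what remains. A sketch, assuming a ~140GB/s practical peak and a linear 3:1 contention cost (both inferred from this thread, not official numbers):

# Rough GDDR5 contention model for the PS4 figures discussed in this thread.
# Assumptions (taken from the posts, not from an official spec sheet):
#  - ~140 GB/s practical peak with the GPU alone on the bus,
#  - every 1 GB/s of CPU traffic knocks ~3 GB/s off the total usable bandwidth.
PRACTICAL_PEAK_GBPS = 140.0
TOTAL_LOSS_PER_CPU_GBPS = 3.0  # slope implied by "10 GB/s costs ~30 GB/s" (assumed linear)

def usable_bandwidth(cpu_gbps: float) -> tuple[float, float]:
    """Return (total usable, GPU-visible) bandwidth for a given CPU demand in GB/s."""
    total = max(PRACTICAL_PEAK_GBPS - TOTAL_LOSS_PER_CPU_GBPS * cpu_gbps, 0.0)
    gpu = max(total - cpu_gbps, 0.0)  # the CPU's own share also comes out of the total
    return total, gpu

for cpu in (0, 5, 10, 20):
    total, gpu = usable_bandwidth(cpu)
    drop = 1.0 - gpu / PRACTICAL_PEAK_GBPS
    print(f"CPU {cpu:>2} GB/s -> total ~{total:.0f} GB/s, GPU ~{gpu:.0f} GB/s ({drop:.0%} below peak)")

# At 10 GB/s of CPU traffic this lands on ~110 GB/s total and ~100 GB/s for the GPU,
# i.e. roughly the 30% drop discussed in the posts that follow.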
What's your definition of trounced? The maximum impact I've seen is ~20GB/s of the 140GB/s total bandwidth, or ~15%.
Down from ~140 for GPU to ~100 once the CPU has taken 10 GB/s. That's about a 30% drop.
I see from ~135 to 110GB/s, ~19%.
That 110 GB/s includes the CPU's BW. It's 100 GB/s for the GPU in this rough graph. Furthermore, the RAM is rated 176 GB/s, no? So from the theoretical peak BW available to the GPU, we see a 55% reduction to what's actually available to the GPU when the CPU is busy. I can't recall the specifics of that 140 GB/s figure.
Your 55% is mixing apples and oranges.
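For what it's worth, the competing percentages mostly come from dividing by different baselines. A quick comparison using the rough graph readings quoted above (and note that, with these numbers, even the drop measured against the 176GB/s rated figure works out closer to ~43% than 55%):

# Same drop, three different baselines (all figures are rough readings from this thread).
THEORETICAL_PEAK = 176.0  # GDDR5 rated bandwidth, GB/s
PRACTICAL_PEAK = 140.0    # rough GPU-only achievable figure cited above
GRAPH_PEAK = 135.0        # alternative reading of the same graph
TOTAL_WITH_CPU = 110.0    # total usable once the CPU pulls ~10 GB/s
GPU_WITH_CPU = 100.0      # what's left for the GPU after the CPU's share

def drop(baseline: float, value: float) -> str:
    return f"{(baseline - value) / baseline:.0%}"

print("~135 -> ~110 (total vs. graph peak):", drop(GRAPH_PEAK, TOTAL_WITH_CPU))    # ~19%
print("~140 -> ~100 (GPU vs. practical):   ", drop(PRACTICAL_PEAK, GPU_WITH_CPU))  # ~29%
print("~176 -> ~100 (GPU vs. rated):       ", drop(THEORETICAL_PEAK, GPU_WITH_CPU))# ~43%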