Dominik D
This has been answered a bunch of times already, but I'm waiting for a code pull from P4, so whatever.

> The PS4 obviously has the 8-core AMD 1.6 GHz Jaguar. We can overclock to 2.75 GHz. The CPU is basically 2 mobile CPUs on one module. Tablet processors. They're pretty weak if you compare it to the PS3 CPU.

> PS3 CPU can do 230 GFLOPS with the CPU...

Theoretical peak. This assumes you never branch in your code and just crunch numbers, which is not the case if you run game logic. It may be closer to the truth if you're doing physics, but rarely otherwise.
So half the time (or more) you're stalling, waiting while the CPU recovers from a branch misprediction and throws away work it guessed it would need. Or you tweak your code like hell and make it as branchless as possible, investing hundreds of man-hours to get anything out of that theoretical peak.
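To make "branchless" concrete, here's a tiny sketch (my own toy example, not code from any shipped game): the same clamp written with a data-dependent branch the predictor can keep missing, and a version the compiler can usually turn into plain min/max selects, which is the kind of rewrite I mean.

#include <cstddef>

// Branchy version: the data-dependent 'if' is a guess for the branch predictor;
// every miss costs a pipeline flush (especially painful on in-order cores like Cell's).
void clamp_branchy(float* v, std::size_t n, float lo, float hi) {
    for (std::size_t i = 0; i < n; ++i) {
        if (v[i] < lo)      v[i] = lo;
        else if (v[i] > hi) v[i] = hi;
    }
}

// Branchless version: pure selects with no jump that depends on the data, so the
// compiler can usually emit min/max (or vectorize) and nothing stalls on a bad guess.
void clamp_branchless(float* v, std::size_t n, float lo, float hi) {
    for (std::size_t i = 0; i < n; ++i) {
        float x = v[i];
        x = (x < lo) ? lo : x;
        x = (x > hi) ? hi : x;
        v[i] = x;
    }
}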
Because, you see, CPUs do more than just calculate. They access data, perform IO from disk, decide and branch, synchronize between cores. So...
...even if you have a third of the peak performance of CELL, simply by having a CPU that's smarter (brainier, as Intel would put it) you get a lot of that back. Almost for free, without countless hours of optimizations (which you still could and would do).

> ...while PS4/XONE have like 100 GFLOPS. The X360 had 77 GFLOPS.
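For what it's worth, those headline figures are just cores × clock × FLOPs-per-cycle. A rough back-of-envelope (single precision, paper numbers only, and counting every SPE on Cell even though games never got all of them) looks like this:

#include <cstdio>

// Rough theoretical single-precision peak: cores * GHz * FLOPs issued per cycle.
// Marketing-style numbers, not measured results.
static double peak_gflops(int cores, double ghz, int flops_per_cycle) {
    return cores * ghz * flops_per_cycle;
}

int main() {
    // Cell: PPE + 8 SPEs at 3.2 GHz, 8 SP FLOPs/cycle each -> ~230 GFLOPS on paper
    // (a PS3 game actually sees 6 SPEs, so the usable figure is lower).
    std::printf("Cell   ~%.0f GFLOPS\n", peak_gflops(9, 3.2, 8));
    // Xenon (X360): 3 cores at 3.2 GHz, 8 SP FLOPs/cycle -> ~77 GFLOPS.
    std::printf("Xenon  ~%.0f GFLOPS\n", peak_gflops(3, 3.2, 8));
    // Jaguar (PS4/XONE): 8 cores at 1.6 GHz, 8 SP FLOPs/cycle -> ~102 GFLOPS.
    std::printf("Jaguar ~%.0f GFLOPS\n", peak_gflops(8, 1.6, 8));
    return 0;
}

Same formula everywhere; the difference is how much of that paper number normal, branchy code ever sees.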
But all that's irrelevant (mostly). PS4 and Xbone should handle a diverse workload on the GPU pretty well (that means rendering + compute). FLOPS on the CPU would be less relevant, and the ability to run branchy, branchy code would become more important. Tool maturity would also help (as opposed to immaturity hindering things in the CELL case, early on at least). The real gains this time around could come from smarter graphics driver stacks, which PS4 probably already has in some form, and which Xbone has/will get with a smarter scheduler than the Windows 8.1 one, which I assume is what it has (or had at launch).
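Roughly what I mean by that split, as a made-up sketch (gpu::dispatch_compute and the job names are placeholders I invented, not any console's actual API): the branchy per-entity logic stays on the CPU cores, and the wide, branch-free math gets submitted to the GPU's compute queue alongside rendering.

#include <cstddef>
#include <cstdio>
#include <future>
#include <vector>

// Everything below is a hypothetical sketch, not a real console SDK.
namespace gpu {
    // Stand-in for "submit a compute job to the GPU queue".
    void dispatch_compute(const char* kernel, std::size_t items) {
        std::printf("GPU compute: %s over %zu items\n", kernel, items);
    }
}

struct Entity { int state = 0; };

// Branchy, pointer-chasing gameplay code: stays on the CPU cores.
void update_ai(Entity& e)      { if (e.state > 0) --e.state; else e.state = 3; }
void update_scripts(Entity& e) { e.state ^= 1; }

void simulate_frame(std::vector<Entity>& entities) {
    // Branchy per-entity logic: where an out-of-order core with a good predictor earns its keep.
    auto gameplay = std::async(std::launch::async, [&] {
        for (auto& e : entities) { update_ai(e); update_scripts(e); }
    });

    // Wide, branch-free number crunching (particles, cloth, skinning) goes to the
    // GPU compute queue, where the FLOPS actually live on these machines.
    gpu::dispatch_compute("particle_step", 1000000);
    gpu::dispatch_compute("cloth_solve", 50000);

    gameplay.wait();  // join CPU work before rendering kicks off for the frame
}

int main() {
    std::vector<Entity> entities(1024);
    simulate_frame(entities);
}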
tl;dr
If you don't understand how the numbers translate into actual use, don't quote them.