Agreed. Though the bar graph only goes up to 10GB/s of CPU load, we don't know what the behaviour will be if the CPU load climbs above that, e.g. to 30GB/s. What I don't know is: what is reasonable bandwidth usage for a CPU-bound game? And could we reverse engineer that number if we take 900p as the resolution and work backwards to how much bandwidth that would require on the GPU side (from the ROPs, I assume), and then ballpark the theoretical load caused by the CPU? If the number is too large, we can rule this out as a possibility; but if it seems within reason, maybe we can leave it as plausible?

Well, let's not get carried away. Sony's slide from last August's GDC confirms a disproportionate amount of GPU bandwidth is lost as CPU bandwidth increases, but we're looking at a drop from 135GB/s to 100GB/s. I assume the metrics are accurate, otherwise why publish them.
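To make the "work backwards from 900p" idea concrete, here's a minimal sketch using only the two bandwidth figures quoted from the slide. The assumption that achievable pixel throughput scales roughly linearly with the GPU bandwidth left over is mine, purely for a ballpark comparison, not how any real renderer behaves.

```python
# Quick sanity check of the "work backwards from 900p" idea, using only the
# numbers quoted from the Sony GDC slide (135 GB/s -> 100 GB/s of GPU
# bandwidth as CPU traffic rises). Linear scaling of pixel throughput with
# bandwidth is an assumption for illustration only.

FULL_GPU_BW = 135.0    # GB/s with little CPU traffic (from the slide)
LOADED_GPU_BW = 100.0  # GB/s under heavy CPU traffic (from the slide)

pixels_1080p = 1920 * 1080
pixels_900p = 1600 * 900

bandwidth_ratio = LOADED_GPU_BW / FULL_GPU_BW    # ~0.74
resolution_ratio = pixels_900p / pixels_1080p    # ~0.69

print(f"GPU bandwidth ratio under CPU load: {bandwidth_ratio:.2f}")
print(f"900p / 1080p pixel ratio:           {resolution_ratio:.2f}")
# If the two ratios land in the same ballpark, the "bandwidth contention
# explains the resolution drop" theory at least isn't obviously absurd.
```

That's obviously crude, but it's the kind of back-of-envelope check being suggested: see whether the bandwidth loss on the slide is even the right order of magnitude to account for a 1080p-to-900p drop.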
150MHz doesn't sound like a lot, but it's 150MHz across six cores (the ones available to games), or 900MHz in total, which, assuming the two CPUs have roughly equivalent IPC, works out to about half an extra core's worth of throughput compared to Sony's part.
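For what it's worth, the arithmetic behind the "half a core" figure, as a quick sketch; the 1.6GHz baseline clock is an assumed value, since (as noted below) Sony hasn't confirmed its number.

```python
# The arithmetic behind the "roughly half an extra core" claim, assuming
# equal IPC on both CPUs. The 1.6 GHz baseline per-core clock is an
# assumption, not a confirmed figure.

clock_advantage_hz = 150e6   # per-core advantage quoted above
game_cores = 6               # cores available to games
baseline_clock_hz = 1.6e9    # assumed baseline per-core clock

extra_cycles_per_second = clock_advantage_hz * game_cores        # 900 MHz total
equivalent_cores = extra_cycles_per_second / baseline_clock_hz   # ~0.56

print(f"Total extra cycles/s: {extra_cycles_per_second / 1e6:.0f} MHz")
print(f"Equivalent extra cores at {baseline_clock_hz / 1e9:.2f} GHz: {equivalent_cores:.2f}")
```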
The math is actually a little blurrier than that. We are assuming the Xbox is 150MHz faster than the PS4 when Sony hasn't released its clock speed either. But even granting the 150MHz advantage, it's only the slowest (longest-running) core's job that determines the overall frame time on X1, so the effective gain is likely at most that single core's 150MHz, unless all six cores are equally loaded and finish at the same time. Cores working on lighter jobs will just complete sooner and go idle, I assume.
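A toy model of that critical-path point, with made-up per-core cycle counts and assumed 1.60GHz vs 1.75GHz clocks, just to illustrate the shape of it:

```python
# Toy model: the frame's CPU time is set by the slowest core's job, so a
# uniform clock bump only buys you the time it shaves off that critical core.
# The per-core workloads below are made-up cycle counts for illustration,
# and both clock speeds are assumptions.

def frame_time_ms(workloads_cycles, clock_hz):
    # Each core runs its own job; the frame isn't done until the longest job is.
    return max(cycles / clock_hz for cycles in workloads_cycles) * 1e3

workloads = [20e6, 12e6, 9e6, 26e6, 15e6, 7e6]  # assumed per-core cycle counts

slow = frame_time_ms(workloads, 1.60e9)   # assumed PS4-like clock
fast = frame_time_ms(workloads, 1.75e9)   # assumed clock with +150 MHz

print(f"Frame CPU time at 1.60 GHz: {slow:.2f} ms")
print(f"Frame CPU time at 1.75 GHz: {fast:.2f} ms")
# Cores with lighter jobs just finish sooner and sit idle; the overall win is
# bounded by how much faster the busiest core gets through its work.
```

In other words, the aggregate 900MHz never shows up as such; the benefit is just the per-core speedup applied to whichever job happens to be the bottleneck that frame.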