I think this simply boils down to the resource overhead associated with core functions of the Xbox One that aren't getting much attention. Unless, of course, the PS4's CPU truly is clocked higher than we all thought, but I don't think things are that simple. I see little reason why Sony wouldn't advertise such a thing, and they've technically already told us the CPU speed in their developer documentation, which lists it at 1.6GHz.
The Xbox One runs the equivalent of up to three operating systems, one of which facilitates interoperability between the game-focused OS and the app-focused OS. The hypervisor plays a more important role than just that, however, or there'd be no need to bother with a hypervisor in the first place. It's an important part of what keeps the two sides (the game side and the system OS side) separated and prevents them from interfering with one another in the way Microsoft wanted to avoid, while still allowing the system to operate and multitask as it does. Creating that separation is at the very core of their design philosophy for the system.
Andrew Goossen: I'll jump in on that one. Like Nick said there's a bunch of engineering that had to be done around the hardware but the software has also been a key aspect in the virtualisation. We had a number of requirements on the software side which go back to the hardware. To answer your question Richard, from the very beginning the virtualisation concept drove an awful lot of our design. We knew from the very beginning that we did want to have this notion of this rich environment that could be running concurrently with the title. It was very important for us based on what we learned with the Xbox 360 that we go and construct this system that would disturb the title - the game - in the least bit possible and so to give as varnished an experience on the game side as possible but also to innovate on either side of that virtual machine boundary.
We can do things like update the operating system on the system side of things while retaining very good compatibility with the portion running on the titles, so we're not breaking back-compat with titles because titles have their own entire operating system that ships with the game. Conversely it also allows us to innovate to a great extent on the title side as well. With the architecture, from SDK to SDK release as an example we can completely rewrite our operating system memory manager for both the CPU and the GPU, which is not something you can do without virtualisation. It drove a number of key areas... Nick talked about the page tables. Some of the new things we have done - the GPU does have two layers of page tables for virtualisation. I think this is actually the first big consumer application of a GPU that's running virtualised. We wanted virtualisation to have that isolation, that performance. But we could not go and impact performance on the title.
We constructed virtualisation in such a way that it doesn't have any overhead cost for graphics other than for interrupts. We've contrived to do everything we can to avoid interrupts... We only do two per frame. We had to make significant changes in the hardware and the software to accomplish this. We have hardware overlays where we give two layers to the title and one layer to the system and the title can render completely asynchronously and have them presented completely asynchronously to what's going on system-side.
System-side it's all integrated with the Windows desktop manager, but the title can keep updating even if there's a glitch - like the scheduler on the Windows system side going slower... we did an awful lot of work on the virtualisation aspect to drive that, and you'll also find that running multiple systems drove a lot of our other systems. We knew we wanted to be 8GB and that drove a lot of the design around our memory system as well.
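To make the "two layers of page tables" point more concrete, here's a minimal sketch of how address translation composes under virtualisation in general. This is not Xbox One internals - the page tables, page numbers, and function below are all hypothetical - it just illustrates the two-step lookup that nested paging hardware performs: the title's own page table maps guest-virtual to guest-physical, and the hypervisor's nested table maps guest-physical to host-physical.

```python
# Illustrative sketch only (assumed values, not real console data):
# two-level address translation as done by nested paging hardware.
PAGE_SIZE = 4096

# Hypothetical page tables keyed by page number.
guest_page_table = {0x10: 0x80, 0x11: 0x81}     # guest-virtual -> guest-physical
nested_page_table = {0x80: 0x200, 0x81: 0x201}  # guest-physical -> host-physical

def translate(guest_virtual: int) -> int:
    """Compose the title's page table with the hypervisor's nested table."""
    page, offset = divmod(guest_virtual, PAGE_SIZE)
    guest_physical_page = guest_page_table[page]                  # layer 1
    host_physical_page = nested_page_table[guest_physical_page]   # layer 2
    return host_physical_page * PAGE_SIZE + offset

addr = 0x10 * PAGE_SIZE + 0x123
print(hex(translate(addr)))  # -> 0x200123
```

The point Goossen is making is that doing this second layer in hardware is what lets the GPU run virtualised without a per-access software cost.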
As such, with both CPUs being more or less identical, I imagine it would take quite a bit more than a 150MHz overclock for the Xbox One CPU to overcome its higher baseline resource overhead while also completely outperforming the PS4 CPU in game-specific tasks. We can't forget that the PS4's CPU is every bit as capable as the Xbox One's, with or without an overclock. So, regardless of whether the PS4 CPU runs at 1.6GHz (and I suspect it absolutely does), it simply doesn't carry the same weight as the Xbox One's.

That's how I suspect a 1.6GHz CPU is outperforming a 1.75GHz CPU: the PS4 hardware doesn't have to deal with the overhead of running a hypervisor. So when Microsoft says two cores are reserved for the OS and system functions, that may not be the whole truth of it, even though in traditional terms it is. From my understanding of how the Xbox One is structured, the hypervisor shouldn't be abstracting only the two CPU cores reserved for the OS and system functions; if that were the case, having a hypervisor in the first place would be largely pointless. In other words, all eight CPU cores answer to that hypervisor one way or another, and perhaps this test gives us our first real insight into the cost overhead of that arrangement. Under a more traditional console resource reservation without hypervisor abstraction, there would be absolutely zero reason for any core reserved for game-specific tasks and clocked at 1.75GHz to be inexplicably outperformed by an identical 1.6GHz core. That leaves only two explanations: either the PS4 CPU isn't 1.6GHz at all, or we're getting an idea of the costs associated with the Xbox One operating through a hypervisor abstraction.
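The arithmetic behind that claim is worth spelling out. Using the two advertised clocks (1.75GHz and 1.6GHz are from the discussion above; the "overhead" framing is my own back-of-envelope model, not a measurement), we can compute the Xbox One's raw clock advantage and the per-core overhead that would be needed to erase it:

```python
# Back-of-envelope check with assumed figures, not measurements.
xbox_clock = 1.75  # GHz, Xbox One CPU
ps4_clock = 1.60   # GHz, PS4 CPU per its developer documentation

# Raw clock advantage of the Xbox One core.
clock_advantage = xbox_clock / ps4_clock - 1.0
print(f"Clock advantage: {clock_advantage:.1%}")   # ~9.4%

# Break-even point: effective throughput matches when
# xbox_clock * (1 - overhead) == ps4_clock.
break_even_overhead = 1.0 - ps4_clock / xbox_clock
print(f"Break-even overhead: {break_even_overhead:.1%}")  # ~8.6%
```

In other words, if hypervisor abstraction (or anything else) costs the game-facing cores more than roughly 8.6% of their effective throughput, a 1.6GHz core with no such cost comes out ahead despite the clock deficit.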
Sure, the testing methodology behind their figures matters, but I suspect the test was run properly and the results are simply what they are. I know they said they went out of their way to avoid any overhead for graphics, but things won't always go the way they planned. It also helps that the CPU isn't burdened with all the responsibility on its own - it has help from other hardware they put in the system - and they can always make further optimizations down the line.