Impact of the hypervisor on future console HW and game development

AleNo

Newcomer
I was reading the pros/cons of ESRAM thread and found the discussion regarding future hardware revisions of the XBox One interesting, particularly around future memory changes and cost reductions.

The comment about the XBox Game OS and hypervisor got me thinking (http://forum.beyond3d.com/showpost.php?p=1851448&postcount=521). If we assume para-virtualisation is being used (derived from Hyper-V), this will (as far as I'm aware) abstract away all access to specific memory timings and ultimately 'simply' provide access to the memory address space and the features exposed by the graphics driver.

Does this abstraction give MS more flexibility in terms of future hardware revisions, and ultimately the ability to provide forward compatibility in next-next-gen hardware?

As I'm not a GFX programmer, I'm interested to understand the likely graphics performance 'costs' of having the hypervisor.

Also, over a 5-10 year horizon, does this give MS the ability to release an XBox 1.5/2.0 with additional processing capability and 100% compatibility with the XBox One? If that is the case, then I can see a situation where MS has broken the traditional console cycle and can introduce new hardware (potentially through second/third parties as well) that adds new pre-defined performance levels while retaining compatibility.

What are your thoughts?
 

I'm guessing not: the level of virtualisation going on with the Game OS must by design expose elements of the underlying architecture that could compromise forward compatibility. The most obvious of these is the ESRAM itself; there is a wealth of ESRAM- and Move Engine-specific commands, as the ESRAM is not a cache and must be manually filled and emptied. Forward compatibility would only hold if a future RAM tech were able to offer access speeds and latencies identical to or faster than the ESRAM pool.
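
To make the "manually filled and emptied" point concrete, here is a minimal C++ sketch of what that explicit management looks like in spirit. Everything in it is invented for illustration (the buffer sizes, the move_engine_copy name); it is not the real XDK interface, just the shape of the work a transparent cache would otherwise do for you.

```cpp
#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <cstring>
#include <vector>

// Pretend backing stores for the two pools (sizes chosen to keep the toy small).
static std::vector<std::uint8_t> g_dram(64u * 1024 * 1024);
static std::vector<std::uint8_t> g_esram(32u * 1024 * 1024);

// Stand-in for a Move Engine / DMA copy command: explicit and asynchronous
// on the real hardware, a plain synchronous memcpy in this sketch.
void move_engine_copy(std::uint8_t* dst, const std::uint8_t* src, std::size_t bytes) {
    std::memcpy(dst, src, bytes);
}

int main() {
    const std::size_t rt_bytes = 8u * 1024 * 1024;                 // e.g. a render target
    move_engine_copy(g_esram.data(), g_dram.data(), rt_bytes);     // fill ESRAM explicitly
    // ... GPU would render using the ESRAM copy here ...
    move_engine_copy(g_dram.data(), g_esram.data(), rt_bytes);     // resolve back to DRAM
    std::puts("explicit fill and drain done; a cache would have done this implicitly");
    return 0;
}
```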

Then again, if the next Xbox has HBM or any of the other fast RAM techs being worked on now, perhaps all of these XB1-specific features and timings could be made to go away?
 
What an excellent first post! Welcome to Beyond 3D! Many of us would like answers/informed speculation about these questions as well.

I personally feel one of MS' goals with their chosen direction is forward/backward compatibility. I'm not sure why else they would pay the performance penalties that seem to be inherent with such a direction.
 
A hypervisor setup is not new for either platform at this point.
The previous consoles also used a hypervisor setup to isolate high-value parts of the platform from the software running on it.

The hypervisor should not be visible enough to significantly impact features or backwards compatibility.
The hypervisor's job in the Xbox One's setup isn't to intercept every instruction fetched by the CPU, every API call, or every memory access. Instead, it maintains two separate OS installations, and it guards the high-value and low-level system resources that arbitrary code in either domain could do bad things with.

Setting up the ESRAM and using it would entail using the required interfaces with the Game OS to set up pages with the appropriate properties and whatever other GPU commands are related to initialization. The hypervisor has indirect involvement in this since it has final say in managing the virtual memory system, in so far as it keeps memory allocations and operations that might try to modify the actual system from touching anything they shouldn't, but that's not something the game should be aware of or capable of messing with.
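
As a rough illustration of that division of labour, here is a small C++ sketch under stated assumptions: the hypervisor grants the game partition a fixed physical range once, the Game OS hands out ESRAM-backed pages from that grant, and the hypervisor's check only runs when mappings are created or changed. All names, addresses, and sizes are hypothetical, not actual Xbox One interfaces.

```cpp
#include <cstdint>
#include <cstdio>

constexpr std::uint64_t kPageSize  = 64 * 1024;          // 64 KiB GPU page size (assumed)
constexpr std::uint64_t kEsramBase = 0x80000000ULL;      // made-up physical base address
constexpr std::uint64_t kEsramSize = 32 * 1024 * 1024;   // 32 MiB of ESRAM

// Hypervisor-side record of the one-time grant made to the game partition.
struct Grant { std::uint64_t base, size; };
constexpr Grant kGameEsramGrant{kEsramBase, kEsramSize};

// Hypervisor check: consulted when mappings are built or changed,
// not on every individual memory access.
bool hv_mapping_allowed(std::uint64_t phys, std::uint64_t len) {
    return phys >= kGameEsramGrant.base &&
           phys + len <= kGameEsramGrant.base + kGameEsramGrant.size;
}

// Game-OS-side bump allocator handing out pages from the granted range;
// this is where the "appropriate properties" would be recorded.
std::uint64_t gameos_alloc_esram(std::uint64_t bytes) {
    static std::uint64_t next = kEsramBase;
    const std::uint64_t len = (bytes + kPageSize - 1) / kPageSize * kPageSize;
    if (!hv_mapping_allowed(next, len)) return 0;   // would be refused by the hypervisor
    const std::uint64_t phys = next;
    next += len;
    return phys;
}

int main() {
    const std::uint64_t rt = gameos_alloc_esram(8 * 1024 * 1024);  // e.g. a render target
    std::printf("render target placed at ESRAM physical 0x%llx\n",
                static_cast<unsigned long long>(rt));
    return 0;
}
```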

Once the pages and GPU are ready, the use of ESRAM is described as being handled in the same manner as other memory accesses, which the hypervisor generally shouldn't have to care about.

The more complicated three-part system does allow the console to evolve more quickly without risking interference with existing game software. The game's OS keeps things constant from the game's POV, and the virtualization setup maintains the necessary isolation and resource allocation that the game's OS works within. Changes on the application side or changes in the overall platform can happen more quickly.

The Xbox One's dropping of the online requirement changed some of the original assumptions as to how up to date the system OS would be relative to game software, although I'm not sure what the full import of that was beyond that it did add to the effort in reworking things in 2013.

It's hard to tease out how much is due to Microsoft's general software focus versus the architectural decisions to facilitate independent application and game evolution, but the Xbox One does seem to have had a more rapid application and feature update curve since launch.
 
My understanding is the hypervisor setup on Xbox One has a negligible impact on performance.

Great post, 3dilettante.
 
I guess by "performance penalty" I was referring to how some insinuate/imply that the XBone has more software overhead than other setups. Perhaps I'm misinterpreting the situation though.
 
On backward compatibility, I don't really see this to be important from now on; it's far better to be able to sell another copy of the same game, HD remaster or not, for additional profit.
 
On backward compatibility, I don't really see this to be important from now on; it's far better to be able to sell another copy of the same game, HD remaster or not, for additional profit.
Wrong thread. Whether BC should or shouldn't be sought is a subject for the 'importance of BC' thread in general console discussion. This thread is about how to implement BC on Xboxes going forwards.
 
Once the pages and GPU are ready, the use of ESRAM is described as being handled in the same manner as other memory accesses, which the hypervisor generally shouldn't have to care about.

We know the GPU doesn't support full virtualization. MS has basically two choices:

1) Make the Hypervisor deal with HSA for the final paging.
2) Make the System OS (whatever it is called) deal with it. But then you would lose control over the full memory and how it could be exploited, since AMD doesn't support TrustZone-like checks.
 
3dilettante - thanks for the great post.

You mentioned: "Setting up the ESRAM and using it would entail using the required interfaces with the Game OS to set up pages with the appropriate properties and whatever other GPU commands are related to initialization. The hypervisor has indirect involvement in this since it has final say in managing the virtual memory system, in so far as it keeps memory allocations and operations that might try to modify the actual system from touching anything they shouldn't, but that's not something the game should be aware of or capable of messing with."

If the hypervisor has trust domains whose integrity it must maintain, doesn't this by definition mean that it must intercept all memory writes or rely upon an MMU that is aware of the system virtualisation? This would still be an abstraction of the GFX hardware, though.

Damn, I wish MS would give out more information about this part of the implementation, as it's by far the most interesting part as far as I'm concerned.
 
We know the GPU doesn't support full virtualization. MS has basically two choices:

1) Make the Hypervisor deal with HSA for the final paging.
2) Make the System OS (whatever it is called) deal with it. But then you would lose control over the full memory and how it could be exploited, since AMD doesn't support TrustZone-like checks.

I figured the hypervisor would allocate memory to the respective application and game operating systems. The game or application would interact with those. We already know each partition has reserved memory, and the philosophy for their partition is strong isolation.

I haven't seen a public disclosure as to whether there is a static allocation of ESRAM capacity, but we do know there is time slicing going on, since Microsoft is scaling the Kinect reserve back significantly.
There could be tweaks to the hardware to minimize what needs to be reinitialized, like having separate front ends. The rest would be context switching, draining the pipelines, and invalidating page mappings from the other partition.
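
A toy C++ sketch of that switch sequence, purely to make the steps above explicit; the function names and ordering are illustrative assumptions, not the actual driver or hardware behaviour.

```cpp
#include <cstdio>

enum class Partition { Game, System };

struct GpuState {
    Partition owner = Partition::Game;
};

// Placeholder "hardware" steps, modelled as plain functions in this toy.
void drain_gpu_pipelines()               { std::puts("drain in-flight GPU work"); }
void invalidate_gpu_page_mappings()      { std::puts("invalidate GPU TLBs / page mappings"); }
void load_partition_context(Partition p) {
    std::puts(p == Partition::Game ? "load game partition context"
                                   : "load system partition context");
}

void switch_partition(GpuState& gpu, Partition next) {
    if (gpu.owner == next) return;     // already running this partition's work
    drain_gpu_pipelines();             // let outstanding work complete first
    invalidate_gpu_page_mappings();    // the other partition's mappings must not leak over
    load_partition_context(next);      // separate front ends would shrink this step
    gpu.owner = next;
}

int main() {
    GpuState gpu;
    switch_partition(gpu, Partition::System);  // e.g. a time slice for the app/Kinect side
    switch_partition(gpu, Partition::Game);    // and back to the title
    return 0;
}
```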


If the hypervisor has trust domains whose integrity it must maintain, doesn't this by definition mean that it must intercept all memory writes or rely upon an MMU that is aware of the system virtualisation? This would still be an abstraction of the GFX hardware, though.
The hypervisor wouldn't be intercepting all memory writes, particularly not those of the GPU.
It would be physically impossible to do so without slowing everything down by orders of magnitude, since interception and handling would happen on a CPU, and the GPU can trivially stomp all over the CPUs in memory traffic.
There's an IOMMU, and GCN has x86-like page tables.
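
To illustrate why no per-access interception is needed, here is a toy two-stage address translation in C++: the guest (Game OS) table maps virtual to guest-physical pages, and a hypervisor-owned second stage maps guest-physical to machine pages, analogous in spirit to nested page tables and the IOMMU. The table format and numbers are made up; this is not AMD's actual page-table layout.

```cpp
#include <cstdint>
#include <cstdio>
#include <map>
#include <optional>

constexpr std::uint64_t kPage = 4096;

// page number -> page number; a stand-in for a real multi-level page table.
using PageTable = std::map<std::uint64_t, std::uint64_t>;

std::optional<std::uint64_t> walk(const PageTable& pt, std::uint64_t addr) {
    const auto it = pt.find(addr / kPage);
    if (it == pt.end()) return std::nullopt;   // unmapped: the hardware would fault here
    return it->second * kPage + addr % kPage;
}

int main() {
    PageTable guest;  // maintained by the Game OS for its own processes
    PageTable host;   // maintained by the hypervisor, programmed up front

    guest[0x10] = 0x40;   // guest virtual page 0x10  -> guest physical page 0x40
    host[0x40]  = 0x90;   // guest physical page 0x40 -> machine physical page 0x90

    const std::uint64_t va = 0x10 * kPage + 0x123;
    if (const auto gpa = walk(guest, va))
        if (const auto hpa = walk(host, *gpa))
            std::printf("VA 0x%llx -> machine 0x%llx, with no hypervisor trap per access\n",
                        static_cast<unsigned long long>(va),
                        static_cast<unsigned long long>(*hpa));
    return 0;
}
```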
 
I figured the hypervisor would allocate memory to the respective application and game operating systems.
Yep, it must be the hypervisor, for security reasons. But I do not think it is static, since I can hardly imagine Win8 & Kinect working with a fixed GPU buffer; it would be a waste.
 
By definition, the function of the OS is to separate the hardware from the software, and to make it possible to update the hardware without the need to change all the software.

The hypervisor is just another step, not something groundbreaking.

Additionally, the hypervisor's function in this case is to keep a high level of security / system-rule compliance, not to make it easier to port the software.

So the hypervisor is lightweight, to fit into the cache; it runs in ring 0 (-1) and handles all code validation / memory allocation, but nothing more than is really necessary, because every additional function creates another security risk.
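
In the spirit of that "nothing more than necessary" argument, here is a minimal C++ sketch of a deliberately small hypervisor surface: a handful of hypercalls and a dispatcher that rejects everything else. The operations and numbering are assumptions for illustration, not the Xbox One's real interface.

```cpp
#include <cstdint>
#include <cstdio>

// A few hypothetical hypercalls; anything outside this list is refused.
enum class Hypercall : std::uint32_t {
    MapPages    = 0,  // validate and install a second-stage mapping
    UnmapPages  = 1,
    VerifyImage = 2,  // e.g. a code-signature check before execution
};

std::int64_t hv_dispatch(Hypercall call, std::uint64_t arg0, std::uint64_t arg1) {
    switch (call) {
        case Hypercall::MapPages:
            std::printf("map request: base 0x%llx, length 0x%llx\n",
                        static_cast<unsigned long long>(arg0),
                        static_cast<unsigned long long>(arg1));
            return 0;
        case Hypercall::UnmapPages:
            return 0;                 // would tear the mapping down here
        case Hypercall::VerifyImage:
            return 0;                 // would verify a signature here
        default:
            return -1;                // every unknown request is rejected outright
    }
}

int main() {
    hv_dispatch(Hypercall::MapPages, 0x100000, 0x10000);
    return 0;
}
```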
 