Digital Foundry Article Technical Discussion Archive [2013]

Status
Not open for further replies.
It *does* seem plausible that a game gets uninterfered access to the resources it uses, though. Nobody likes frame drops caused by a background app fetching data. The HDD has more capacity, so perhaps devs prefer it over the somewhat limiting 8GB of flash for game-data storage. A game already has a lot of memory available; how much performance could be gained by using the flash as additional storage?

I agree that it seems like a Bad Idea(tm) to ever have the OS or some app steal hardware resources away from a game in the middle of play. Trouble is, it seems kinda like MS has already committed to something like that happening. I'm speaking of that Game DVR thing. If you are continuously recording, you must be accessing the HDD. Unless you've got the game recording to RAM, which is of course an even more precious resource than the hard disk. You certainly can't use the flash as a video buffer, since the continuous writes would eat it alive.

Maybe the video gets written to a buffer over in the OS side of the memory. That means there's not as much memory left over to keep non-running apps in memory, which in turn makes keeping those apps "installed" in the flash a better option. That might make sense.

As for the, "How much can a small flash cache gain you in terms of apparent HDD performance" question, I have no idea. It seems like it might help, if developers could isolate "troublesome" data and get it moved into flash at install time. For instance, they might notice that the game hitches every time the player leaves the forest and takes in a glorious mountain vista. They could then tag those mountain textures as ones that need to be installed to flash. Would that help much? I dunno.

Probably doesn't matter, since it would kill the flash if you switched games too often.
 
I agree it's probably not typical, but that doesn't really matter. The hardware needs to not destroy itself, even when subjected to non-typical usage. In a household full of kids, I can easily see several-to-many games being played throughout a normal, screaming-filled day. Allow me to make up some numbers:

Flash can survive 3000 write cycles
Some customers trigger 5 "game transitions" per day
Their flash cache stops working in less than two years
Class action lawsuit

I'm not positive those numbers are plausible, but I think I'm in the ballpark at least. Even if I'm being ridiculously pessimistic, it's not close to being acceptable. In my house, I'd be fine. I go days to weeks between games, on average. But MS dares not assume that's true for everyone.
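For what it's worth, the made-up numbers above do work out as claimed. A minimal sketch, assuming each game transition rewrites the whole cache exactly once (both figures are the post's hypotheticals, not real Xbox One specs):

```python
# Back-of-envelope flash wear estimate using the made-up numbers above
# (not real Xbox One specs). Assumes each game transition rewrites the
# whole cache once.
WRITE_CYCLES = 3000        # assumed program/erase endurance of the flash
TRANSITIONS_PER_DAY = 5    # assumed full-cache rewrites per day

days = WRITE_CYCLES / TRANSITIONS_PER_DAY
print(f"{days:.0f} days, about {days / 365:.1f} years")  # 600 days, about 1.6 years
```

So under these assumptions the cache wears out in roughly a year and a half, comfortably inside a console's expected lifespan.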

How would a 360 S handle the case where it's using 4GB of flash as an HDD?
 
I agree it's probably not typical, but that doesn't really matter. The hardware needs to not destroy itself, even when subjected to non-typical usage. In a household full of kids, I can easily see several-to-many games being played throughout a normal, screaming-filled day. Allow me to make up some numbers:

Flash can survive 3000 write cycles
Some customers trigger 5 "game transitions" per day
Their flash cache stops working in less than two years
Class action lawsuit

I'm not positive those numbers are plausible, but I think I'm in the ballpark at least. Even if I'm being ridiculously pessimistic, it's not close to being acceptable. In my house, I'd be fine. I go days to weeks between games, on average. But MS dares not assume that's true for everyone.


Err, what about SSDs? And, as mentioned, the 4GB 360 and the 12GB (flash) PS3?

You can also overprovision to extend life.
 
I agree that it seems like a Bad Idea(tm) to ever have the OS or some app steal hardware resources away from a game in the middle of play. Trouble is, it seems kinda like MS has already committed to something like that happening.
Yup. Based on Xbox One being able to snap other apps onscreen, I think we can assume that, as well as CPU cores being reserved for the OS and apps, some of the 12 APU CUs will also be reserved for the OS and apps. In addition, any HDD usage will need to be factored in.

There are two obvious options here: 1) you give the game access to all unused [hardware] resources, but the OS gets priority when it needs them, so devs must design the game to cope with varying resources; 2) you reserve certain hardware and I/O resources for OS/app usage and give devs guaranteed CPU/GPU/HDD/I/O bandwidth based on the hardware in the system and the non-game resource reservation.
 
Yup. Based on Xbox One being able to snap other apps onscreen, I think we can assume that, as well as CPU cores being reserved for the OS and apps, some of the 12 APU CUs will also be reserved for the OS and apps. In addition, any HDD usage will need to be factored in.

There are two obvious options here: 1) you give the game access to all unused [hardware] resources, but the OS gets priority when it needs them, so devs must design the game to cope with varying resources; 2) you reserve certain hardware and I/O resources for OS/app usage and give devs guaranteed CPU/GPU/HDD/I/O bandwidth based on the hardware in the system and the non-game resource reservation.

Hardware virtualization is a friend here. Looks like the "System" gets 2 CPU cores and a CU, probably.

http://kotaku.com/the-five-possible-states-of-xbox-one-games-are-strangel-509597078

Btw, the virtual hardware partitioning has been known about for a very long time, at least as long as we've known about the VM model they've gone with. They use a hypervisor, so that's pretty much how it is.
 
Hardware virtualization is a friend here. Looks like the "System" gets 2 CPU cores and a CU, probably.
Yep, we've known about it for a while and that's the benefit of it, that you can give the gameOS a specific resource allocation.
A game doesn't need to worry about the appOS interrupting it or stealing any resources.
 
Hardware virtualization is a friend here. Looks like the "System" gets 2 CPU cores and a CU, probably.

http://kotaku.com/the-five-possible-states-of-xbox-one-games-are-strangel-509597078

Btw, the virtual hardware partitioning has been known about for a very long time, at least as long as we've known about the VM model they've gone with. They use a hypervisor, so that's pretty much how it is.
Not really, because we know next to nothing about the implementation.

Now, I know next to nothing about Microsoft's virtualization hypervisor or the two virtualized environments in Xbox One. What I do know a lot about is the in-house virtualization abstraction I work with in my day job, which spans server farms numbering in the hundreds of thousands across many geographic locations.

Virtualization can be incredibly flexible, and the devil is in the implementation. We have virtualization solutions and client operating systems that are able to adapt, in realtime, to virtual RAM and processor configurations changing every x microseconds, the goal being to keep the server farm as close to 100% utilisation as is technically possible. We're not Google and don't have their YoY hardware budgets, but I'd put serious money on our servers having better utilisation of hardware.

While there are few applications where it's practical to require environments (virtualized operating systems) to deal with adaptive memory virtualisation, i.e. memory resource being granted and taken away in realtime, it's usual for me to write code that is highly parallelised but also able to deal with a virtual environment that has 40 virtual cores now, only 8 virtual cores in 500ms, but perhaps 160 in 4,000ms. The parallelisation of tasks is generally the steer for the hypervisor to re-allocate resources.
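The 40-then-8-then-160-cores scenario described above can be sketched with a plain worker pool that grows and shrinks as the core count changes. This is a toy illustration, not anyone's actual code: the "hypervisor" here is just a loop handing out new core counts, where real code would receive them from the virtualization layer.

```python
import queue
import threading

# Toy sketch: a task system that tolerates its virtual core count
# changing under it. All names and numbers are illustrative.

tasks = queue.Queue()
for i in range(1000):
    tasks.put(i)

results = []
results_lock = threading.Lock()

def worker(stop_event):
    # Pull tasks until told to stop or the queue runs dry.
    while not stop_event.is_set():
        try:
            n = tasks.get(timeout=0.01)
        except queue.Empty:
            return
        with results_lock:
            results.append(n * n)

def resize_pool(pool, target):
    """Grow or shrink the worker set to match the current core count."""
    while len(pool) < target:
        stop = threading.Event()
        t = threading.Thread(target=worker, args=(stop,))
        t.start()
        pool.append((t, stop))
    while len(pool) > target:
        t, stop = pool.pop()
        stop.set()          # an in-flight task still completes before exit
        t.join()

pool = []
for cores in (8, 2, 16):    # simulated hypervisor re-allocations
    resize_pool(pool, cores)

for t, _ in pool:           # let the remaining workers drain the queue
    t.join()
```

Because a pulled task is always completed before a worker checks its stop flag, no work is lost across resizes; the pool just does it with however many "cores" it currently has.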

Admittedly, we run very little code in virtualized Windows environments, but Microsoft have full control over the Xbox One and could be doing some very clever stuff.
 
It's a stripped down version of Microsoft Hyper-V Server. Also optimized for the exact VM model they're running (One "System" VM and one "Game" VM).
 
It's a stripped down version of Microsoft Hyper-V Server. Also optimized for the exact VM model they're running (One "System" VM and one "Game" VM).
I've read the "stripped down" comment in the gaming press; I've not read Microsoft saying this. I'm generally dubious when journalists, who obviously don't understand virtualisation technologies, explain it for the "lay person" and then get it wrong ;-)

It may be fairly basic as described. It may not.
 
I've read the "stripped down" comment in the gaming press; I've not read Microsoft saying this. I'm generally dubious when journalists, who obviously don't understand virtualisation technologies, explain it for the "lay person" and then get it wrong ;-)

It may be fairly basic as described. It may not.

They said it in their architecture panel after the reveal. I don't know the names of the guys. They said they took their virtualization technology from the server-side business (Microsoft Hyper-V Server) and stripped it down and optimized it because they know exactly how many (two) and which operating systems ("Game", "System") they are running as VMs. They really didn't give many details. That's about as specific as it got.
 
They said it in their architecture panel after the reveal. I don't know the names of the guys. They said they took their virtualization technology from the server-side business (Microsoft Hyper-V Server) and stripped it down and optimized it because they know exactly how many (two) and which operating systems ("Game", "System") they are running as VMs. They really didn't give many details. That's about as specific as it got.
Exactly. "Stripped down" can easily mean "removed the code we don't need" and "optimise" probably means "added and changed it as we needed for the product". Hyper-V in Xbox One could bear very little resemblance to the commercial product.

I'm not suggesting secret virtualization sauce, just saying that we really don't know how it differs from commercial offerings. It could certainly be a lot more flexible/powerful.
 
I've read the "stripped down" comment in the gaming press; I've not read Microsoft saying this. I'm generally dubious when journalists, who obviously don't understand virtualisation technologies, explain it for the "lay person" and then get it wrong ;-)

It may be fairly basic as described. It may not.
Then you were told MS did say it.
Exactly. "Stripped down" can easily mean "removed the code we don't need" and "optimise" probably means "added and changed it as we needed for the product". Hyper-V in Xbox One could bear very little resemblance to the commercial product.

I'm not suggesting secret virtualization sauce, just saying that we really don't know how it differs from commercial offerings. It could certainly be a lot more flexible/powerful.
Not being a dick, but what do you mean, then?
I'm just not sure what you're actually saying.

What would you like to know?
They also said why they did it if that would help.
 
What changes to the commercial Hyper-V product did Microsoft make for the Xbox One implementation?

Their motivation doesn't need explanation; it's apparent.
Cool, I just thought if you knew their motivation that may have been enough to answer your outstanding questions to some degree.

But yeah, we don't know the specific changes made to Hyper-V.
 
Cool, I just thought if you knew their motivation that may have been enough to answer your outstanding questions to some degree.
Much like aspects of the Windows kernel and DirectX, you can safely assume that Microsoft will be motivated to take the useful parts of other code and adapt them to their new consumer product.

But yea we don't know the specific changes made to HYPER-V.
Exactly. I think many folks' experience of virtualization software is probably the commercial Hyper-V, VMware and Parallels products, but these can be really quite limited compared to what's actually in use in a lot of places. Now, it may be that the Xbox One Hyper-V implementation does nothing, or little, more than the commercial product, but I wouldn't want to assume that.

I think that for Microsoft, Xbox One is a cornerstone product, much like PlayStation 3 was for Sony, where it wasn't just about games but about leveraging other technologies (Blu-ray and Cell, in Sony's case) in their box to deliver a compelling product. The virtualization technologies I use every day - but which aren't promoted by commercial virtualization software - aren't complicated; they are merely a niche requirement. Xbox One looks like it would benefit from niche requirements. Would you rather have 10 CUs available for games all the time, or 11-12 CUs available for games 98% of the time using virtualized hardware resource load-balancing between the game OS and app OS?

No need to phone a friend on that one ;-)
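The rhetorical question above is easy to put numbers on. Taking the poster's hypothetical figures at face value (12 CUs 98% of the time, falling back to 10 the rest), the average available compute under load-balancing is:

```python
# Hypothetical numbers from the post above -- not known Xbox One behaviour.
fixed_reservation = 10                  # CUs always available under a static split
load_balanced = 0.98 * 12 + 0.02 * 10   # 12 CUs 98% of the time, 10 otherwise
print(load_balanced)                    # ~11.96 CUs available on average
```

About a 20% average uplift over the flat reservation, under these assumptions.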
 
Not really, because we know next to nothing about the implementation.

Now, I know next to nothing about Microsoft's virtualization hypervisor or the two virtualized environments in Xbox One. What I do know a lot about is the in-house virtualization abstraction I work with in my day job, which spans server farms numbering in the hundreds of thousands across many geographic locations.

Virtualization can be incredibly flexible, and the devil is in the implementation. We have virtualization solutions and client operating systems that are able to adapt, in realtime, to virtual RAM and processor configurations changing every x microseconds, the goal being to keep the server farm as close to 100% utilisation as is technically possible. We're not Google and don't have their YoY hardware budgets, but I'd put serious money on our servers having better utilisation of hardware.

While there are few applications where it's practical to require environments (virtualized operating systems) to deal with adaptive memory virtualisation, i.e. memory resource being granted and taken away in realtime, it's usual for me to write code that is highly parallelised but also able to deal with a virtual environment that has 40 virtual cores now, only 8 virtual cores in 500ms, but perhaps 160 in 4,000ms. The parallelisation of tasks is generally the steer for the hypervisor to re-allocate resources.

Admittedly, we run very little code in virtualized Windows environments, but Microsoft have full control over the Xbox One and could be doing some very clever stuff.
They are doing clever stuff, but none of it is what you describe. The hypervisor can remove CPU cores from a game and fold their work onto other cores, but there is no changing of memory allocations. The System OS always has at least two cores and exactly 2GB of RAM. Changes to the normal Hyper-V include removing unused code, adding drivers that are Hyper-V-aware and can take advantage of it, a way for processes in the two VMs to communicate, and many others.
 
They are doing clever stuff, but none of it is what you describe. The hypervisor can remove CPU cores from a game and fold their work onto other cores, but there is no changing of memory allocations. The System OS always has at least two cores and exactly 2GB of RAM. Changes to the normal Hyper-V include removing unused code, adding drivers that are Hyper-V-aware and can take advantage of it, a way for processes in the two VMs to communicate, and many others.

So up to 6GB for games??
 
How would a 360 S handle the case where it's using 4GB of flash as an HDD?

They "handle it" by assuming that a customer is not going to delete and then re-download 4GB of content multiple times a day. The use cases for the 4GB of storage in the 360 bear very little resemblance to those of a hypothetical "game data cache" on the XB1. In the latter, the cache would need to be emptied and re-filled every time you decided to play a different game. (This assumes that the flash can only hold a single game's cache at a time; I think that's a plausible assumption given the limited size of the flash compared to the size of next-gen games.)
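Beyond the wear problem, a full refill on every game switch would also cost time. A rough sketch with purely hypothetical figures (neither the cache size nor the flash's sustained write speed is public):

```python
# Hypothetical figures -- neither the cache size nor the write speed of
# the XB1's flash is public.
CACHE_MB = 8 * 1024    # assumed 8GB cache, fully rewritten per game switch
WRITE_MBPS = 100       # assumed sustained flash write speed
print(CACHE_MB / WRITE_MBPS, "seconds per game switch")  # ~82 seconds
```

So even before endurance enters the picture, refilling a cache of that size on every switch would add over a minute of write traffic each time, under these assumptions.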
 