Xbox One (Durango) Technical hardware investigation

This also reminds me of what Lottes was saying:
http://societyandreligion.com/ps4-won-xbox-720-battle-windows-8/

My guess is that the real reason for 8GB of memory is that this box is a DVR which actually runs “Windows” (which requires a GB or two or three of “overhead”), but, like Windows RT (Windows on ARM), only exposes a non-desktop UI to the user.

There are a bunch of reasons they might ditch the real-time console OS, one being that not providing low-level access to developers might enable a faster refresh on backwards-compatible hardware. In theory the developer just targets the box as if it were a special DX11 [DirectX 11] “PC” with a few extra changes, like hints for surfaces which should go in ESRAM; then on the next refresh hardware, all prior games just get better FPS, resolution, or AA.
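To make the "hints for surfaces which should go in ESRAM" idea concrete, here is a minimal sketch of what such a placement hint could look like. To be clear, this is purely illustrative: MemoryPool, SurfaceDesc and CreateSurface are made-up names, not a real Durango or D3D11 API, and 32 MB is just the rumored ESRAM size. The point is only that the title states a preference and the runtime decides where the surface actually lives, which is what would let refresh hardware re-map those hints transparently.

Code:
#include <cstdint>
#include <cstdio>
#include <vector>

// Hypothetical placement hint: the title asks for fast on-chip memory,
// but the runtime may fall back to DRAM (or ignore the hint entirely
// on refresh hardware with a different memory layout).
enum class MemoryPool { Default, PreferESRAM };

struct SurfaceDesc {
    uint32_t   width;
    uint32_t   height;
    uint32_t   bytesPerPixel;
    MemoryPool pool;
};

struct Surface {
    SurfaceDesc          desc;
    bool                 inESRAM;   // where the runtime actually put it
    std::vector<uint8_t> storage;   // stand-in for real video memory
};

// Illustrative allocator: 32 MB of "ESRAM" handed out first-come, first-served.
Surface CreateSurface(const SurfaceDesc& desc, size_t& esramBytesLeft) {
    const size_t bytes = size_t(desc.width) * desc.height * desc.bytesPerPixel;
    Surface s{desc, false, std::vector<uint8_t>(bytes)};
    if (desc.pool == MemoryPool::PreferESRAM && bytes <= esramBytesLeft) {
        s.inESRAM = true;
        esramBytesLeft -= bytes;
    }
    return s;
}

int main() {
    size_t esramBytesLeft = 32u * 1024u * 1024u;               // rumored 32 MB of ESRAM
    SurfaceDesc rt{1280, 720, 4, MemoryPool::PreferESRAM};     // render target: hint ESRAM
    SurfaceDesc tex{2048, 2048, 4, MemoryPool::Default};       // plain texture: no hint
    Surface a = CreateSurface(rt, esramBytesLeft);
    Surface b = CreateSurface(tex, esramBytesLeft);
    std::printf("render target in ESRAM: %d, texture in ESRAM: %d\n",
                a.inESRAM, b.inESRAM);
}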

Of course if they do that, then it is just another PC, just lower performance, with all the latency baggage and none of the low-level magic that makes 1st-party games stand out and sell the platform.
 
So I wonder what the performance impact of running VMs is versus letting the game have direct access to the hardware?
 
*AHEM* This is not a thread for PS4/Orbis, nor is it a vs. thread. Keep those items out of this discussion.
 
I thought 5GB was enough when you only have DDR3, but now that the PS4 has 8GB of GDDR5 I'm not so sure whether it'll become an issue for them as time goes on.

Devs definitely don't like it though; they were apparently quite vocal about having access to only a 5GB, 6-core machine.


I honestly thought MS consulted the devs so as to avoid this kind of situation.
Thanks anyway for the insight.
 
So I wonder what the performance impact of running VMs is versus letting the game have direct access to the hardware?
I wonder if it's that much of an issue. If there is something MSFT should do well, it's this kind of thing; they have pretty good expertise for making that kind of decision.

If I compare this to what they were telling devs in some presentations about the use of SIMD units (i.e. use those types of instructions rather than more hand-made solutions, since the compiler does a really good job with them), or to the presentation A. Lauritzen linked a while ago about ISPC, I see cases where the compiler can actually do a better job than a programmer using intrinsics/assembly, and pretty much all the time it does at least as well.

So I wonder if the same could apply here, and ultimately the software (still assuming good coding practices) could in most cases end up doing as well as or better than the programmers would, while being a win in productivity.
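As a small illustration of that compiler-vs-intrinsics point (my own toy example, not taken from the presentations mentioned above), here is the same SAXPY loop written both ways; with optimization and vectorization enabled, a modern compiler will typically auto-vectorize the plain loop to something on par with the hand-written SSE version.

Code:
#include <immintrin.h>   // SSE intrinsics
#include <cstdio>
#include <vector>

// Plain C++ loop: with -O2/-O3 the compiler is generally free to
// emit SSE/AVX code for this on its own.
void saxpy_plain(float a, const float* x, float* y, size_t n) {
    for (size_t i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];
}

// Hand-written SSE version: 4 floats per iteration, scalar tail.
void saxpy_sse(float a, const float* x, float* y, size_t n) {
    const __m128 va = _mm_set1_ps(a);
    size_t i = 0;
    for (; i + 4 <= n; i += 4) {
        __m128 vx = _mm_loadu_ps(x + i);
        __m128 vy = _mm_loadu_ps(y + i);
        _mm_storeu_ps(y + i, _mm_add_ps(_mm_mul_ps(va, vx), vy));
    }
    for (; i < n; ++i)               // remainder
        y[i] = a * x[i] + y[i];
}

int main() {
    std::vector<float> x(1000, 1.0f), y1(1000, 2.0f), y2(1000, 2.0f);
    saxpy_plain(3.0f, x.data(), y1.data(), x.size());
    saxpy_sse(3.0f, x.data(), y2.data(), x.size());
    std::printf("plain: %f  sse: %f\n", y1[999], y2[999]);  // both 5.0
}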

I've been wondering about something; I know really little about virtual machines, so sorry if this is nonsensical. MSFT could present the devs with a given number of cores (Interference hints at six) and then the hypervisor would do its thing and map each virtual core to a physical one.
What I actually wonder is this: I don't know how advanced hypervisors are, but could it be possible, for example, to present the devs with more (virtual) cores than the number of cores physically available, somewhat like hardware threads? You would publish guidelines (you can have this many high-priority threads, this many medium-priority ones, etc.) and have the hypervisor dynamically map whatever thread onto a core (maybe in a work-stealing fashion) to improve hardware utilization.

Going further down that line, and I think this could make more sense, could MSFT not present cores at all? I mean the physical cores would be completely hidden in a "black box" (the virtual machine) and devs would deal with it mostly as a single monolithic "core": they would deal with tasks/threads (not hardware threads, obviously), setting priority and affinity (or whatever helps the hypervisor), and the hypervisor would be responsible for mapping them onto physical resources. I wonder because, if the system is virtualized, why present virtual cores at all?
It would make more sense to go straight to a task-based model (fine-grained multi-threading) and then have the hypervisor do its thing, no?
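For what it's worth, here is a minimal sketch of that task-based idea: devs only submit prioritized tasks, and a scheduler standing in for the hypervisor decides which "physical core" (worker thread) runs what. This is just a toy illustration of the model, nothing Durango-specific.

Code:
#include <condition_variable>
#include <cstdio>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// A task the title submits: just a priority and a function to run.
struct Task {
    int priority;                       // higher runs first
    std::function<void()> work;
};
struct ByPriority {
    bool operator()(const Task& a, const Task& b) const {
        return a.priority < b.priority;
    }
};

// Toy "hypervisor" scheduler: devs never see cores, only submit tasks;
// the pool decides which worker runs what, highest priority first.
class TaskPool {
public:
    explicit TaskPool(unsigned cores) {
        for (unsigned i = 0; i < cores; ++i)
            workers_.emplace_back([this] { Run(); });
    }
    ~TaskPool() {
        {
            std::lock_guard<std::mutex> lock(m_);
            done_ = true;
        }
        cv_.notify_all();
        for (auto& w : workers_) w.join();   // drains remaining tasks first
    }
    void Submit(int priority, std::function<void()> work) {
        {
            std::lock_guard<std::mutex> lock(m_);
            queue_.push(Task{priority, std::move(work)});
        }
        cv_.notify_one();
    }
private:
    void Run() {
        for (;;) {
            Task t;
            {
                std::unique_lock<std::mutex> lock(m_);
                cv_.wait(lock, [this] { return done_ || !queue_.empty(); });
                if (done_ && queue_.empty()) return;
                t = queue_.top();
                queue_.pop();
            }
            t.work();
        }
    }
    std::vector<std::thread> workers_;
    std::priority_queue<Task, std::vector<Task>, ByPriority> queue_;
    std::mutex m_;
    std::condition_variable cv_;
    bool done_ = false;
};

int main() {
    TaskPool pool(6);   // six "game" cores, per the rumored reservation
    for (int i = 0; i < 8; ++i)
        pool.Submit(i % 3, [i] { std::printf("task %d done\n", i); });
    // Pool drains the queue and joins its workers on destruction.
}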
 
I've been wondering about something; I know really little about virtual machines, so sorry if this is nonsensical. MSFT could present the devs with a given number of cores (Interference hints at six) and then the hypervisor would do its thing and map each virtual core to a physical one.
What I actually wonder is this: I don't know how advanced hypervisors are, but could it be possible, for example, to present the devs with more (virtual) cores than the number of cores physically available, somewhat like hardware threads? You would publish guidelines (you can have this many high-priority threads, this many medium-priority ones, etc.) and have the hypervisor dynamically map whatever thread onto a core (maybe in a work-stealing fashion) to improve hardware utilization.

Going further down that line, and I think this could make more sense, could MSFT not present cores at all? I mean the physical cores would be completely hidden in a "black box" (the virtual machine) and devs would deal with it mostly as a single monolithic "core": they would deal with tasks/threads (not hardware threads, obviously), setting priority and affinity (or whatever helps the hypervisor), and the hypervisor would be responsible for mapping them onto physical resources. I wonder because, if the system is virtualized, why present virtual cores at all?
It would make more sense to go straight to a task-based model (fine-grained multi-threading) and then have the hypervisor do its thing, no?

Well, the one thing a VM would do is make it really easy to implement some of the rumored suspend/resume features. You could quite easily bounce between different games (being able to pick up right where you left off in each game) by saving and loading up different machine states.
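Mechanically, "saving and loading machine states" amounts to serializing the guest's CPU and memory image and reading it back later. A toy sketch of the idea (nothing like a real hypervisor's snapshot format, and ignoring GPU and device state, which is where it gets hard):

Code:
#include <cstdint>
#include <cstdio>
#include <fstream>
#include <vector>

// Toy stand-in for a guest's machine state: a register file plus its RAM.
// A real hypervisor snapshot also carries device, GPU, and pending-I/O state.
struct MachineState {
    uint64_t             pc = 0;
    uint64_t             regs[16] = {};
    std::vector<uint8_t> ram;
};

bool SaveSnapshot(const MachineState& s, const char* path) {
    std::ofstream f(path, std::ios::binary);
    uint64_t ramSize = s.ram.size();
    f.write(reinterpret_cast<const char*>(&s.pc), sizeof s.pc);
    f.write(reinterpret_cast<const char*>(s.regs), sizeof s.regs);
    f.write(reinterpret_cast<const char*>(&ramSize), sizeof ramSize);
    f.write(reinterpret_cast<const char*>(s.ram.data()), ramSize);
    return f.good();
}

bool LoadSnapshot(MachineState& s, const char* path) {
    std::ifstream f(path, std::ios::binary);
    uint64_t ramSize = 0;
    f.read(reinterpret_cast<char*>(&s.pc), sizeof s.pc);
    f.read(reinterpret_cast<char*>(s.regs), sizeof s.regs);
    f.read(reinterpret_cast<char*>(&ramSize), sizeof ramSize);
    s.ram.resize(ramSize);
    f.read(reinterpret_cast<char*>(s.ram.data()), ramSize);
    return f.good();
}

int main() {
    MachineState game;
    game.pc = 0x1000;
    game.ram.assign(64 * 1024, 0xAB);          // 64 KB toy "RAM"
    SaveSnapshot(game, "game_a.snap");          // suspend game A

    MachineState resumed;
    if (LoadSnapshot(resumed, "game_a.snap"))   // ...later, resume it
        std::printf("resumed at pc=0x%llx, ram=%zu bytes\n",
                    (unsigned long long)resumed.pc, resumed.ram.size());
}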
 
Well, the one thing a VM would do is make it really easy to implement some of the rumored suspend/resume features. You could quite easily bounce between different games (being able to pick up right where you left off in each game) by saving and loading up different machine states.
Thanks for the info ;)

Actually I've been searching for some literature on the matter; my idea sounds a bit like what these guys are researching:
http://static.usenix.org/event/lisa10/tech/full_papers/Turner.pdf
 
Well, the one thing a VM would do is make it really easy to implement some of the rumored suspend/resume features. You could quite easily bounce between different games (being able to pick up right where you left off in each game) by saving and loading up different machine states.

Good luck context-switching 5+ GB of data. :/

While suspending and resuming VMs is extremely easy, I wouldn't consider it a significant benefit, since even lowly smartphones are very good at it with simple messaging, and that is without rigorous console-like software certification.

I'm hesitant to state the obvious but the real objective (and benefit) seems to be security related.
 
Good luck context-switching 5+ GB of data. :/

While suspending and resuming VMs is extremely easy, I wouldn't consider it a significant benefit, since even lowly smartphones are very good at it with simple messaging, and that is without rigorous console-like software certification.

Smartphone apps aren't really comparable to console games.

And I have a 3GB VM set up to run WinXP. Loading and saving snapshots isn't instantaneous, but it's not that bad.
 
New Rumor

Ok, moving on. Have you read the VGLeaks article about the Durango specs? Yes? Good because everything you read in that article was 100% correct. Except, for one tiny little detail that MS kept guarded from most devs until very recently. That detail being that every Durango ships with a Xbox 360 SOC.

There was a reason why MS hired so many former IBM and AMD employees. I'll admit I'm not an electrical engineer (I'm in software) so I won't pretend to know the ins and outs of how the 360 SOC integrates into the Durango motherboard. All I know, and all I need to know about this new change is that I (or a game dev) can use the 360 SOC in parallel with the original Durango hardware.

What does this mean in basic terms? Well, apart from Durango having 100% BC with the 360, it also increases Durango's processing power a fair amount.


http://www.neogaf.com/forum/showthread.php?t=541176
 
While suspending and resuming VMs is extremely easy,

Actually, it's not in an environment that allows long-running GPU tasks.
The VM doesn't help you with the "you can't stop and restart the GPU where it left off" problem. Though I guess there could be additional hardware in the GPU to help with this.
Of course the issue exists with or without the VM.

On the swapping games front you could use some of that rumored reserved RAM to give the new app some pages immediately, and have it start loading into those as you write the dirty pages from the old app. Or supply a small amount of flash so you could flush to that while the new app loads without impacting its HDD performance, then trickle the flash contents to HDD on free cycles.
There would still be edge cases when continually swapping games, but I would think that for most typical scenarios (swap out game A to resume game B) it would be fairly effective.
It would still take tens of seconds to resume a title, of course.
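Rough numbers support that last estimate. A quick back-of-the-envelope calculation (the throughput figures below are my own assumptions, not known Durango specs):

Code:
#include <cstdio>

int main() {
    // Assumed figures for illustration only; none of these are confirmed specs.
    const double snapshotGB  = 5.0;    // dirty resident set of the old game
    const double hddMBs      = 100.0;  // sustained sequential HDD throughput
    const double flashMBs    = 200.0;  // hypothetical small flash staging buffer

    const double mb          = snapshotGB * 1024.0;
    const double writeOutSec = mb / hddMBs;   // flush old game to HDD
    const double readInSec   = mb / hddMBs;   // load new game from HDD
    const double flashStage  = mb / flashMBs; // flush old game to flash instead

    std::printf("Swap out to HDD:    ~%.0f s\n", writeOutSec);              // ~51 s
    std::printf("Swap in from HDD:   ~%.0f s\n", readInSec);                // ~51 s
    std::printf("Serial total:       ~%.0f s\n", writeOutSec + readInSec);  // ~102 s
    // With a flash staging area, the old game's flush (~26 s here) can overlap
    // the new game's HDD load, so the user mostly waits on the read-in alone.
    std::printf("Stage out to flash: ~%.0f s (overlapped with read-in)\n", flashStage);
}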
 
New Rumor

Ok, moving on. Have you read the VGLeaks article about the Durango specs? Yes? Good because everything you read in that article was 100% correct. Except, for one tiny little detail that MS kept guarded from most devs until very recently. That detail being that every Durango ships with a Xbox 360 SOC.

There was a reason why MS hired so many former IBM and AMD employees. I'll admit I'm not an electrical engineer (I'm in software) so I won't pretend to know the ins and outs of how the 360 SOC integrates into the Durango motherboard. All I know, and all I need to know about this new change is that I (or a game dev) can use the 360 SOC in parallel with the original Durango hardware.

What does this mean in basic terms? Well, apart from Durango having 100% BC with the 360, it also increases Durango's processing power a fair amount.


http://www.neogaf.com/forum/showthread.php?t=541176

The idea of substantial price cuts ever happening goes out the window. Keep in mind that the 360 has been out almost eight years and has only gotten a $100 price cut.
 
The VM doesn't help you with the "you can't stop and restart the GPU where it left off" problem.
"Long-running" GPU tasks in a console game is bound to be extremely relative, as even the most advanced shader program you're likely to run will complete in a tiny fraction of a frame's time, so all you have to do is wait until your shaders have finished running (and not issuing any more GPU jobs), and then switch the VMs...

...Although I must admit I'm at a loss understanding the practical benefits of making games run like VMs on a console. Where's the gain? Is it an anti-piracy measure first and foremost, somehow...?

I can't see the need, seriously. Any decently large game exploiting Durango's large memory space would take upwards of half a minute, if not more, to swap out to the hard drive, then just as much to swap in the other game. That's a lot of thumb-twiddling for the user, and while it's perhaps faster than quitting back to the XMB (or whatever the UIs will be called next gen) and then loading up a game + save from scratch, it's not exactly nippy either. ...And if PS4 can load and run locally stored games from HDD in the same piecemeal fashion that it was claimed to stream games from the cloud during the presentation, it will probably be SLOWER instead.

So, I'm like :?: over this (rumored) feature. (I'm assuming today's CPUs can virtualize stuff without performance penalties, even an el cheapo mobile CPU like Jaguar.)

On the swapping games front you could use some of that rumored reserved RAM to give the new app some pages immediately, and have it start loading into those as you write the dirty pages from the old app.
If there's one thing you don't want on an HDD, it's head thrashing. Doing what you suggest would be immensely slower than just letting one VM save out in one go and then loading everything else back in. Seeks are atrociously slow with mechanical drives, so unless there's a high-performance (meaning expensive) flash drive for caching VMs in every Durango, in addition to a Xenon/Xenos (:roll:), that idea wouldn't work too well.

...So yeah. Maybe there is a way to make Durango cost $500. Just pile on with the craziness: Kinect, hardware BC, and an SSD in every box. Yay!
 
The idea of substantial price cuts ever happening goes out the window. Keep in mind that the 360 has been out almost eight years and has only gotten a $100 price cut.

I agree that this does hinder their ability to cost-reduce the box over time, but the fact that it's only gotten a $100 price cut is because of a few things:

1. They've added value to what's in the box (bigger hard drives, games, Kinect)
2. They haven't needed to
 
New Rumor

Ok, moving on. Have you read the VGLeaks article about the Durango specs? Yes? Good because everything you read in that article was 100% correct. Except, for one tiny little detail that MS kept guarded from most devs until very recently. That detail being that every Durango ships with a Xbox 360 SOC.

There was a reason why MS hired so many former IBM and AMD employees. I'll admit I'm not an electrical engineer (I'm in software) so I won't pretend to know the ins and outs of how the 360 SOC integrates into the Durango motherboard. All I know, and all I need to know about this new change is that I (or a game dev) can use the 360 SOC in parallel with the original Durango hardware.

What does this mean in basic terms? Well, apart from Durango having 100% BC with the 360, it also increases Durango's processing power a fair amount.


http://www.neogaf.com/forum/showthread.php?t=541176
A 360 SoC for co-processing would give Durango the title of the most convoluted, strange, Frankenstein console in history. It makes no sense, and even with it, Durango would still be 300-400 GFLOPS short of its competitor.
Another thing would be a 360-compatible SoC with 12 CUs instead of Xenos, in CrossFire with the main APU. Then we would be talking.
 
I agree that this does hinder their ability to cost-reduce the box over time, but the fact that it's only gotten a $100 price cut is because of a few things:

1. They've added value to what's in the box (bigger hard drives, games, Kinect)
2. They haven't needed to

Eh? Point 1 is silly. At this point, Microsoft could stick in a 500GB hard drive for $60. Having a "bigger" hard drive does not cost them more money compared to launch, when they shipped a 20GB drive and charged $399 for it. They are obviously milking money there. And the Kinect SKU with a hard drive is $399, when Kinect must cost maybe $30 or so worth of parts.

It's all about point #2: they haven't needed to.
 