Xbox One (Durango) Technical hardware investigation

Status
Not open for further replies.
If you watch any of the videos interviewing the engineers and people making the product, you realize they are very smart, passionate people and are very genuine.

to paint them as untrustworthy because execs pushed for DRM or included Kinect or whatever seems like short-selling some very good people in the industry

I think the issue is them painting themselves as untrustworthy. Their fans were supportive originally.

The only real group complaining are the core gamers, and their ire was drawn mostly by technology decisions. DRM, perceived power, Kinect: these are all technology-driven concerns and can only be solved with modifications to the underlying technology. There's also no indication that org changes are a challenge right now; some would even argue that Mattrick was just getting in the way.

Those technical decisions arose from policy and business decisions, not the other way round.

Overclocking the GPU and ESRAM sounds good since it's a win overall if there's no downside.

Whether there are further organizational challenges, we will find out after Mattrick's successor takes the reins.
 
We still don't know the arrangement of environments. Is it a native hypervisor, with the game OS and app OS subordinate to it, or is it a hosted hypervisor, subordinate to one of the console operating systems, i.e. like running VMware under your main OS?

If the former, a large amount of memory may be useful if the hypervisor is arbitrating exchanges of information between the two operating systems. I.e., it would make sense that the game OS could delegate duties to the app OS to free up its resources for running games. How about the other way: what if live TV could be arbitrated into the game OS? You see a TV in your game, and its screen is an actual live TV feed.

It's a native hypervisor; the Title and System OSs are both hosted by the Host OS, which controls all inter-OS communication and access to hardware.
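As a rough sketch of that native (type-1) arrangement, here is a toy model in Python. Every class and method name is invented for illustration, nothing here is Microsoft's actual interface; the point is only that the guest OSs never reach the hardware or each other directly, the Host OS brokers everything:

```python
# Toy model of the native (type-1) arrangement: a Host OS owns the hardware
# and brokers all traffic between the Title (game) and System (app) OSs.
# All class and method names are invented for illustration.

class Partition:
    def __init__(self, name):
        self.name = name
        self.inbox = []  # messages delivered by the host

class HostOS:
    """Runs on bare metal; guests never reach hardware or each other directly."""
    def __init__(self):
        self.partitions = {}

    def create_partition(self, name):
        self.partitions[name] = Partition(name)
        return self.partitions[name]

    def send(self, src, dst, message):
        # Every inter-OS message is routed (and can be policed) by the host.
        if dst not in self.partitions:
            raise KeyError(f"no such partition: {dst}")
        self.partitions[dst].inbox.append((src, message))

host = HostOS()
title = host.create_partition("TitleOS")    # the game OS
system = host.create_partition("SystemOS")  # the app OS
host.send("TitleOS", "SystemOS", "delegate: fetch live TV overlay")
print(system.inbox)  # [('TitleOS', 'delegate: fetch live TV overlay')]
```

In the real console the Host OS would also enforce memory and device partitioning; this toy only models the message brokering.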
 
I think the issue is them painting themselves as untrustworthy. Their fans were supportive originally.



sorry, but that makes no sense... what in the world did passionate engineers working hard on a product that, for all intents and purposes, is a pretty damned fine machine and will bring much joy to many people actually DO to cause mistrust?

the only way I see it is that the Core are resistant to change and afraid they were losing something with DRM.


other than that, your comment is not based in reality. They did nothing wrong or "untrustworthy" short of poorly conveying the information, features and benefits of a new idea.

sorry for the OT, mods... but this revisionist painting of people as "untrustworthy" who did nothing wrong irks me
 
I agree with that. What was the reason for a separate OS from the hypervisor? Why not just have an underlying OS stripped to bare bones, running a Hyper-V instance on top for the game? Why three OSs?

Drawbridge
http://research.microsoft.com/en-us/projects/drawbridge/default.aspx#LibraryOS

Hardware-based Virtual Machines (VMs) have fundamentally changed computing in data centers and enabled the cloud. VMs offer three compelling qualities:

Secure Isolation: isolating applications so that an ill-behaved application can't compromise other applications or its host.

Persistent Compatibility: allowing host and application to evolve separately. Changes in the host don't break applications.

Execution Continuity: allowing applications to be freed of ties to a specific host computer. A running application isn't tied to the computer on which it was started, but can be moved from computer to computer across space and time within a single run.

Drawbridge combines two ideas from the literature, the picoprocess and the library OS, to provide a new form of computing, which retains the benefits of secure isolation, persistent compatibility, and execution continuity, but with drastically lower resource overheads.

The Picoprocess
The Drawbridge picoprocess is a lightweight, secure isolation container. It is built from an OS process address space, but with all traditional OS services removed. The application binary interface (ABI) between code running in the picoprocess and the OS follows the design patterns of hardware VMs; it consists of a closed set of 45 downcalls with fixed semantics that provide a stateless interface. All ABI calls are serviced by the security monitor, which plays a role similar to the hypervisor or VM monitor in traditional hardware VM designs.

While the Drawbridge picoprocess interface follows the design patterns of hardware VM interfaces, it uses a high level of abstraction. The Drawbridge picoprocess interface surfaces threads, private virtual memory, and I/O streams instead of low-level hardware abstractions like CPUs, MMUs, and device registers. These higher-level abstractions allow for much more efficient implementations of OS code hosted within the picoprocess. These higher-level abstractions also allow for much more efficient resource utilization.
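As an illustration of the picoprocess pattern described above, the following Python sketch models a closed set of downcalls serviced by a security monitor. The call names and the monitor class are invented for illustration; Drawbridge's actual ABI is a fixed set of 45 calls, not reproduced here:

```python
# Hypothetical sketch of the picoprocess pattern: application code reaches
# the outside world only through a small, closed set of downcalls, all
# serviced by a security monitor. Call names are invented, not Drawbridge's.

class SecurityMonitor:
    """Services every ABI downcall; plays the hypervisor's role."""
    ABI = {"vm_alloc", "vm_free", "thread_create",
           "stream_open", "stream_read", "stream_write"}

    def downcall(self, name, *args):
        if name not in self.ABI:
            raise PermissionError(f"'{name}' is outside the closed ABI")
        return getattr(self, f"_do_{name}")(*args)

    # Only one handler is sketched here; a real monitor services them all.
    def _do_stream_write(self, stream_id, data):
        return len(data)  # the monitor decides what actually happens

monitor = SecurityMonitor()
print(monitor.downcall("stream_write", 0, b"hello"))  # 5

try:
    monitor.downcall("map_device_register", 0)  # low-level access: refused
except PermissionError as e:
    print("blocked:", e)
```

Note how the sandboxed code sees streams and threads, not device registers: the higher-level abstraction is what lets the monitor stay small and the interface stay stateless.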


The Library OS
A better sandbox container is a necessary, but not sufficient condition for greater scalability of virtualized applications. The key second ingredient is the library OS. A library OS is an operating system refactored to run as a set of libraries within the context of an application.

While Drawbridge can run many possible library OSes, a key contribution of Drawbridge is a version of Windows that has been enlightened to run within a single Drawbridge picoprocess. The Drawbridge Windows library OS consists of a user-mode NT kernel--informally referred to as NTUM--which runs within the picoprocess. NTUM provides the same NT API as the traditional NT kernel that runs on bare hardware and in hardware VMs, but is much smaller as it uses the higher-level abstractions exposed by the Drawbridge ABI. In addition to NTUM, Drawbridge includes a version of the Win32 subsystem that runs as a user-mode library within the picoprocess.

Upon the base services of NTUM and the user-mode Win32 subsystem, Drawbridge can run many of the DLLs and services from the hardware-based versions of Windows. As a result, the Drawbridge prototype can run large classes of Windows desktop and server applications with no modifications to the applications.
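To make the layering concrete, here is a hypothetical Python sketch of the library-OS idea: a familiar OS API implemented in-process, bottoming out in the narrow sandbox ABI. The names (`UserModeKernel`, `WriteFile`, `stream_write`) are illustrative stand-ins, not Drawbridge's real symbols:

```python
# Hypothetical layering sketch of the library OS: an NTUM-like user-mode
# kernel runs inside the picoprocess and translates a familiar OS API into
# the narrow sandbox ABI. All names here are illustrative stand-ins.

class UserModeKernel:
    """OS services refactored to run as a library within the application."""
    def __init__(self, monitor_downcall):
        self.downcall = monitor_downcall  # the only exit from the sandbox

    def WriteFile(self, handle, data):
        # Rich OS semantics live in-process; only the final I/O crosses
        # the sandbox boundary as a stateless downcall.
        return self.downcall("stream_write", handle, data)

def fake_monitor(name, *args):
    # Stand-in for a security monitor that services downcalls.
    assert name == "stream_write"
    return len(args[1])

ntum = UserModeKernel(fake_monitor)
print(ntum.WriteFile(1, b"unmodified app"))  # 14
```

The application calls the same API it always did; everything between that call and the downcall is just library code inside the process, which is why unmodified applications can run.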
 
If they went to 12, then having 4 for the OS would make sense, because the only way I could see it working without reducing the overall RAM bandwidth would be for that 4GB to be on a 128-bit bus, with the game's 8GB on a 256-bit bus.
Not necessarily; they could add a second rank, which DDR3 allows. We haven't seen the underside of the dev board; it could have 16 2Gb chips, which would be full width on the same lanes (selected with a chip-select). Easy board layout, it's mostly just vias directly under the chips.

That would preserve the full 256-bit width for the entire address space. It adds electrical load on the controller, but it's no different than a PC with more than one unbuffered DIMM per channel. It would not need any board rework if we accept the rumor that devkits already have 12GB, which I guess would have to be made that way.
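The arithmetic behind that layout can be checked directly. The chip organizations below (16 x16-wide chips per rank, 4Gb on the visible side and a hypothetical 2Gb rank underneath) are the rumored figures, not confirmed specs:

```python
# Capacity/width arithmetic for the speculated two-rank DDR3 layout.
# Chip organizations here are rumored, not confirmed specs.

def rank(chips, gbit_per_chip, bits_per_chip=16):
    """Return (capacity in GB, bus width in bits) for one rank of DRAM."""
    capacity_gb = chips * gbit_per_chip / 8  # gigabits -> gigabytes
    bus_bits = chips * bits_per_chip         # chips in a rank drive the bus side by side
    return capacity_gb, bus_bits

front = rank(16, 4)  # 16 x 4Gb x16 chips on the visible side
back = rank(16, 2)   # hypothetical second rank on the underside

print(front)  # (8.0, 256)
print(back)   # (4.0, 256)
# The two ranks share the same 256-bit lanes (chip-select picks the rank),
# so capacity adds while the width stays 256-bit:
print(front[0] + back[0])  # 12.0
```

This is why a second rank gets to 12GB without splitting the bus: capacity stacks behind the same lanes rather than beside them.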
 
On the original Xbox, the serial breakout was on a riser board, and the points to connect it were still on retail units. Did that change with 360?
I've honestly never bothered dismantling a 360 to check.

Always thought it was one of the more ingenious things MS brought to the table: bringing retail silicon and boards to the world of devkits.

You literally wouldn't recognize the board in a PS2 or PS3 devkit as being the same piece of hardware.
Yeah, you're probably right about 90% of the devkits. I had two, one of them the same kind we gave to developers, and one that was internal only. The one we gave to developers was 1GB, and a snazzy black and metallic blue. The other had a serial port breakout, but was also different in some way to the retail box, in that a retail box/normal devkit could never be made to break out the serial port. It may have been a simple efuse, leaving the rest of the board identical.
If I had to guess at why three OSs (and I have no insight):
I'd suggest it was a technical solution to a political problem. They were probably required to be able to run Windows RT apps, and they likely wanted to keep the Game OS as close to the existing bare-bones OS as possible, with minimum isolation from the hardware.
That would dictate that the hypervisor sit under both OSs.
Partial bingo. It was also originally suggested as a way to control the resource allocations in a way that developers didn't have to think about them, the VM architecture just enforces it transparently.
 
Not necessarily; they could add a second rank, which DDR3 allows. We haven't seen the underside of the dev board; it could have 16 2Gb chips, which would be full width on the same lanes (selected with a chip-select). Easy board layout, it's mostly just vias directly under the chips.

That would preserve the full 256-bit width for the entire address space. It adds electrical load on the controller, but it's no different than a PC with more than one unbuffered DIMM per channel. It would not need any board rework if we accept the rumor that devkits already have 12GB, which I guess would have to be made that way.

I am worried that it might have another side effect. Some AMD CPUs are only certified to work with two DIMMs per channel at 1333 MHz. I am pretty sure two at 2133 MHz will be a problem.

Now maybe it works fine on this new APU. Maybe it can drive larger loads at higher speeds. Not sure.

But I am not sure which ones die out at which frequency; that likely varies with the memory vendor, process node and design.

So I am worried that the 256 bit section would not be able to run at 2133 MHz anymore with twice the load.

Maybe it is no longer an issue with newer AMD memory controllers/processes. Anyone know?
 
Yeah, you're probably right about 90% of the devkits. I had two, one of them the same kind we gave to developers, and one that was internal only. The one we gave to developers was 1GB, and a snazzy black and metallic blue. The other had a serial port breakout, but was also different in some way to the retail box, in that a retail box/normal devkit could never be made to break out the serial port. It may have been a simple efuse, leaving the rest of the board identical.
Partial bingo. It was also originally suggested as a way to control the resource allocations in a way that developers didn't have to think about them, the VM architecture just enforces it transparently.

Do you know if the VM can effectively control the various "variable" CPU load effects that are sometimes seen when gaming on Windows XP/Vista/7/8?

I would hate to see the Xbox One displaying little glitches here and there depending on which apps you installed and which were running. That is one reason I often prefer my 360 or PS3, even though I have a couple of gaming PCs.

I don't want any of the variability and inconsistency of the gaming PC experience creeping into my consoles. And I don't want to fiddle around to minimize, troubleshoot and fix such stuff.


Also, I am used to about 1.36GB used (just looked) on my Windows 7 & 8 gaming machines for the OS and other stuff. Not sure what the number is on the XP box. I find 5GB way out of balance on a console.
 
I am worried that it might have another side effect. Some AMD CPUs are only certified to work with two DIMMs per channel at 1333 MHz. I am pretty sure two at 2133 MHz will be a problem.

Now maybe it works fine on this new APU. Maybe it can drive larger loads at higher speeds. Not sure.

But I am not sure which ones die out at which frequency; that likely varies with the memory vendor, process node and design.

So I am worried that the 256 bit section would not be able to run at 2133 MHz anymore with twice the load.

Maybe it is no longer an issue with newer AMD memory controllers/processes. Anyone know?
When the chips are soldered directly on the board close to the SoC, it gets rid of all the issues of going through a DIMM connector with questionable contacts and impedance. More foolproof and less painful.

BTW, I seriously doubt they'll have 12GB. I'm just arguing for argument's sake.
 
5GB does sound a bit overboard...

but that would possibly allow indie devs more flexibility with their games. Instead of being stuck with 1 GB or something ludicrous like that, it affords them a little breathing room.

It also may be due to additional OS requirements and features. We don't have the full picture yet, so anything is really a possibility. I'd be pretty happy if they did up the RAM count while keeping the balance of the machine's parts, somewhat like what Epic told Microsoft before the final silicon. Who's gonna complain about more cache space?

Here's a thought: they were talking about flash RAM to quickly pull data from the HDD. What if that additional 1GB is reserved for that purpose?
 
How did the 12GB rumour even start up again? Was there a source?

I think it started here:

My source at Microsoft says they are playing with overclocking the GPU to 900 MHz, and the ESRAM to 900 MHz. There is talk of upping the memory to 12 gigs of DDR3, leaving 7 gigs for gaming and 5 for the OS. The memory upgrade is being discussed because the 3 OSs are clunky and taking up a lot of memory to run as smoothly as they want.

They are waiting for the higher-ups to give the go on it. The hold-up, I was told, was that they were not getting a consistent percentage on yields.

Incidentally, Shifty, this is the sort of thing I would expect with the Windows group leading the console project. (I am talking about 5 (!) GB for the OS.) I think another mindset is needed to straighten out such situations before they go too far.

I hope it is not true; that is way out there. But if they were expecting to go up against 2GB or 4GB, then 3GB for gaming and 5GB for their monster OS might be what was being thought. Maybe they can get the guy who wrote the Amiga multitasking multi-user OS in tens of kB to help them out. (I am kidding, a little bit.)


John C. Dvorak, writing in 1996, said:
The AmigaOS "remains one of the great operating systems of the past 20 years, incorporating a small kernel and tremendous multitasking capabilities the likes of which have only recently been developed in OS/2 and Windows NT. The biggest difference is that the AmigaOS could operate fully and multitask in as little as 250 K of address space. Even today, the OS is only about 1MB in size. And to this day, there is very little a memory-hogging CD-ROM-loading OS can do the Amiga can't. Tight code — there's nothing like it.
I've had an Amiga for maybe a decade. It's the single most reliable piece of equipment I've ever owned. It's amazing! You can easily understand why so many fanatics are out there wondering why they are alone in their love of the thing. The Amiga continues to inspire a vibrant — albeit cultlike — community, not unlike that which you have with Linux, the Unix clone."
 
Actually, the Amiga OS required just above 100k in normal resolution. Yes, my Amiga still works; it has a 25-year-old Quantum SCSI HDD and a 1084 NTSC monitor. :LOL: They achieved that with lots of cheating and 100% assembly. It had zero memory protection; it was just a big, beautiful pile of hacks. If you needed a driver for some new peripheral, you just dragged and dropped a .device file into the dev directory, and it just worked. It was a single 15kb file, not .NET crap with a useless UI from a 200MB download that trashes the Windows registry and makes you reboot twice, for absolutely no reason.

Still, they were cheating. No memory protection; we have to be honest about it. Combined with its unified memory, a crash would "display sounds" on your monitor and "play back graphics" through your sound system. No system ever crashed so spectacularly. Ever.

Back on topic, as soon as you need a web browser, any attempt at making a super lean and mean OS will fail. I fully appreciate the memory reserves they want to have, I just think much more than 1GB is a bit crazy.
 
Actually, the Amiga OS required just above 100k in normal resolution.

...snip...

No system ever crashed so spectacularly. Ever.

Back on topic, as soon as you need a web browser, any attempt at making a super lean and mean OS will fail. I fully appreciate the memory reserves they want to have, I just think much more than 1GB is a bit crazy.

I agree. True on all points. I think the multitasking exec was a couple of tens of kB. But I said (twice) that I was kidding. Where I was not kidding: it can certainly be done well in less than 5GB.

If my W7 & 8 gaming machines are at 1.36 GB with no effort to optimize... I think I read recently that it was/is 32MB on the 360? Even if they went 10x, that is 320MB. 20x is 640MB. 100x is 3200MB. So 5GB? :oops: So I think this must not be true.
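The multiplier comparison is easy to verify; the ~32MB figure for the 360's OS reservation is the oft-quoted rumor, not an official spec:

```python
# Scaling the 360's reported ~32 MB OS reservation (a rumored figure)
# by the factors quoted above.
base_mb = 32
for factor in (10, 20, 100):
    print(f"{factor}x -> {base_mb * factor} MB")
# Even 100x the 360 reservation falls well short of the rumored 5 GB:
print(5 * 1024 / base_mb)  # 160.0 (i.e. 5 GB is 160x the 360's 32 MB)
```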

People will throw truckloads of rotten tomatoes at me, but I would prefer a second tiny 2- or 4-core Jaguar APU package (fanless) with 128/192 ALUs and 1 or 2GB of LPDDR3 for the OS and apps. I don't trust the 3 OSs and Windows baggage to deliver a smooth and consistent experience.

So... ...will we be looking at "automatic OS updates" every Tuesday on the Xbox One?



Aside... ...are you sure the Windows black screen of death isn't more spectacular? My reaction as I tried to fix it was pretty spectacular.
 
No effort to optimize Windows?

I don't see what the issue with the hypervisor setup is. It should be very low overhead. I can see arguing that 3 GB for the System OS is too much, but running the game in a separate, extremely lightweight game OS seems like it could be a very good decision.
 
Stupidly simple question: what is stopping whatever memory arrangement is in the Xbox One devkit from being used in retail? I mean, the PS4 devkits had 8GB long before it was announced that retail units would have that much space, right? And if devkits have more RAM than retail units, what amount would a final PS4 devkit have, 12GB?

What's also odd is that if a devkit does need more RAM than what a system has, why wouldn't devkit tools just use the system-reserved RAM during testing? I doubt you'd need more than 1-2GB; the only reason it was impractical this time around was that 32MB is tiny!

Or maybe I'm just missing something here...
 
No effort to optimize Windows?

I don't see what the issue with the hypervisor setup is. It should be very low overhead. I can see arguing that 3 GB for the System OS is too much, but running the game in a separate, extremely lightweight game OS seems like it could be a very good decision.

The extremely light-weight game OS does seem like a very good decision.

What I am concerned about is how well the hypervisor ensures a consistent availability of the CPU, GPU and memory resources. [Memory bandwidth and latency, not quantity.] I am assuming the big bloated OS and apps are still running in the background. Or is that a wrong assumption? I am assuming they are not frozen/suspended as the game runs.

I don't want inconsistent micro-glitches or any stuttering-type behavior on my console.

I don't want to dig through forums that tell me which apps to kill before I start my game for the smoothest experience.

But maybe I seriously misunderstand the 3 OS/Hypervisor situation.

Is it a frozen/suspended-apps situation? Or is it reserved CPU cores? What about the GPU? There isn't a huge amount of GPU to split off and reserve.
 
Stupidly simple question: what is stopping whatever memory arrangement is in the Xbox One devkit from being used in retail? I mean, the PS4 devkits had 8GB long before it was announced that retail units would have that much space, right? And if devkits have more RAM than retail units, what amount would a final PS4 devkit have, 12GB?

What's also odd is that if a devkit does need more RAM than what a system has, why wouldn't devkit tools just use the system-reserved RAM during testing? I doubt you'd need more than 1-2GB; the only reason it was impractical this time around was that 32MB is tiny!

Or maybe I'm just missing something here...

I think they are saying that the 12GB dev kit was a PC (something like an 8-core Sandy Bridge-E, a 7970 GPU, and 2x4GB DDR3 + 2x2GB DDR3; not sure, just what I read and some interpolation).

Could someone clarify? Was/is there a dev kit like the one in the Wired photos, with the actual SoC and 12GB? Like the 360 dev kit with 1GB and Xenon/Xenos, not a Power Mac.

Now perhaps they end up with 16 4Gb modules plus 16 2Gb modules on the other side of the board. Maybe they already have it, and it is in the dev kit. Maybe the LED and threaded SMA connectors in the Wired photos mark a dev kit, not the production version. Maybe if you unscrew that motherboard and look at the other side, there are land patterns for 16 more memory modules already. But 32 memory modules total :oops:

Maybe they already thought about this and did it. Perhaps that Wired photo is the 2nd-gen dev kit, with the alpha LED, 2x threaded SMA and 4 more GB of DDR3 soldered on the other side of the board where we can't see it.

So when the panic broke out, the engineers said: "OK, build with the dev kit BOM minus the alpha LEDs, minus the two threaded SMAs, and change the clock multiplier to X. Time for lunch."
 