Xbox One (Durango) Technical hardware investigation

Yeah, I appreciate that. But you raised the issue of XB1's heatsink, implying it was that big to deal with heat issues.
I commented it was "comically large" because it is comically large! :smile: But I assume that Microsoft's engineers aren't fools and that it's that size for a reason - to cool the system at the leaked target 1.6GHz/800MHz APU clocks while being "nearly silent".

But what I was trying to reconcile was the rationale that Sony/TSMC may have yield issues with their moderately simple Jaguar derivative at 1.6GHz while Microsoft/TSMC were fine, and could even ramp up the clocks, with their design. That's nuts.

But back to the speculated upclock, and that comically large cooling solution: a last-minute upclock will result in more heat, which means an upgraded cooling solution (which they'll need millions of) or running the existing solution at a higher rate - if that's even possible. Someone will always suggest "just run the fan at a higher rate, easy!" but often it's not that easy. How much noise do the fans make running faster than planned? If the fans are rated to last 8 years at x RPM, what is their reduced lifespan at x+1000 RPM? Engineering is all about choices like this, often related to component costs. There's no such thing as a small change.
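To put rough numbers on the fan question - a back-of-envelope sketch using the standard fan affinity laws, where the RPM figures are invented and the noise term is only a rule of thumb:

```python
# Back-of-envelope fan trade-offs using the standard fan affinity laws:
# airflow scales ~linearly with RPM, motor power ~cubically, and a common
# acoustics rule of thumb puts the noise change at ~50*log10(n2/n1) dB.
# All RPM figures here are invented for illustration.
import math

def fan_scaling(rpm_old: float, rpm_new: float) -> dict:
    ratio = rpm_new / rpm_old
    return {
        "airflow_gain": ratio,                      # CFM ~ RPM
        "power_gain": ratio ** 3,                   # motor power ~ RPM^3
        "noise_delta_db": 50 * math.log10(ratio),   # rule-of-thumb acoustics
    }

print(fan_scaling(1500, 2500))
# ~1.67x airflow, ~4.6x motor power, ~+11 dB: "just spin it faster" is
# clearly audible, and bearing wear rises with the extra load too.
```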

Assuming the heat difference between PS4 and XB1 isn't massive, then the cooling requirements would be about the same, and MS's choice of an oversized HSF could point to wanting quieter operation (potentially Sony could have a better overall airflow and cooling design), which means MS has more capacity to deal with heat by running louder.

I'm assuming the difference in internal thermal environment between the two consoles will be fairly significant. Sony have a much smaller case, an internal PSU, more cores, GDDR5, and allowing the user to slot in any HDD means it'll have to deal with hot 7200rpm drives to boot. The case looks as though it has numerous side intakes, a design which is popular with Apple and which works well to keep MacBook Pros and Mac Minis cool. I've no doubt Sony have a more sophisticated (and expensive) cooling solution - they simply have to - but I'd very much like to see a teardown of both cooling solutions.

I'm not really arguing in favour of that point - only that, the way you phrased it, if case design is an issue, I'd give MS the advantage (at a given volume level, of course. Nothing to stop Sony putting a leaf-blower in there...).
I don't think it's a case design issue. I don't think there is anything Microsoft can do, this late in the day, to increase clocks if they've contracted fabrication of millions and millions of APUs binned for the original target clocks. Because if they upclock these, they will know that a percentage of these chips will have been binned for reasons not related to thermal tolerances, and that they will fail. And after RRoD? I do not believe Microsoft would take that decision. And nor would Sony.

And any sane prospective Xbox One owner would not want Microsoft to do this, because it could be their Xbox One that conks out on the day they get it home. Not cool :nope:
 
Yep. The same folks were probably convinced the 360 was way weaker than the PS3 too.

Depends on the devkit. Internal devkits have serial breakout cables that allowed us to debug the hypervisor, and my devkit also had 1GB of RAM and a couple of other bells and whistles. (Talking about the 360 here, in case anyone wonders.)

On the original Xbox, the serial breakout was on a riser board, and the points to connect it were still on retail units. Did that change with 360?
I've honestly never bothered dismantling a 360 to check.

Always thought it was one of the more ingenious things MS brought to the table: bringing retail silicon and boards to the world of devkits.

You literally wouldn't recognize the board in a PS2 or PS3 devkit as being the same piece of hardware.
 
My source at Microsoft says they are playing with overclocking the GPU to 900 MHz, and the ESRAM to 900 MHz. There is talk of upping the memory to 12 gigs of DDR3, leaving 7 gigs for gaming and 5 for the OS. The memory upgrade is being discussed because the 3 OSes are clunky and taking up a lot of memory to run as smoothly as they want them to!

They are waiting for the higher-ups to give the go on it. The hold-up, I was told, was that they were not getting a consistent percentage on yields.
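For scale, here's what that rumoured bump works out to if you take the leaked Durango figures (12 CUs, 102.4 GB/s ESRAM at 800 MHz) at face value - a sketch of the arithmetic, not a confirmed spec:

```python
# What a rumoured 800 -> 900 MHz bump buys, taking the leaked Durango
# figures (12 CUs, 102.4 GB/s ESRAM at 800 MHz) at face value.
def gpu_tflops(cus: int, mhz: float) -> float:
    # 64 ALUs per GCN CU, 2 ops (FMA) per ALU per clock
    return cus * 64 * 2 * mhz * 1e6 / 1e12

for mhz in (800, 900):
    print(f"{mhz} MHz: {gpu_tflops(12, mhz):.3f} TFLOPS, "
          f"ESRAM ~{102.4 * mhz / 800:.1f} GB/s")
# 800 MHz: 1.229 TFLOPS, ESRAM ~102.4 GB/s
# 900 MHz: 1.382 TFLOPS, ESRAM ~115.2 GB/s  (a flat +12.5% on both)
```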

I assume the yield consistency makes it more of an impact on cost, and thus needs a sign-off from the "higher-ups."

So the GPU/ESRAM are possibly being clocked higher but not the CPU?
 
My source at Microsoft says they are playing with overclocking the GPU to 900 MHz, and the ESRAM to 900 MHz. There is talk of upping the memory to 12 gigs of DDR3, leaving 7 gigs for gaming and 5 for the OS. The memory upgrade is being discussed because the 3 OSes are clunky and taking up a lot of memory to run as smoothly as they want them to!

They are waiting for the higher-ups to give the go on it. The hold-up, I was told, was that they were not getting a consistent percentage on yields.

5 for the OS? You may need some RAM to keep games in standby while browsing the dash, but even Windows 8 runs with 2GB of RAM. The hypervisor in X1 shouldn't eat a lot of resources.
 
I usually don't say this sort of thing, but it seems to fit here "5GB for an App OS is insane". Now perhaps the thought is that the game devs only have 7/7.5 GB on the PS4 so why spend more memory on the games when it may not be used by cross-platform devs. Perhaps 7.5GB game, 3.5GB OS, and 1GB hypervisor ... though a proper hypervisor shouldn't even need 256-512Meg.
 
Hmm... I don't know. It sounds like a risky move (that is not difficult to counter). If the software is inefficient, improve the software.

If they keep stacking on the hardware to hide/accommodate issues, it's just shifting the problem. Their competition will be able to provide a cheaper and more focused platform. They can introduce add-ons, shove non-gaming stuff to the servers, or partner with iOS/Android.

If true, it sounds like an early plan from engineering that needs marketing + business approval?
 
If they went to 12, then having 4 for the OS would make sense, because the only way I could see it working without reducing the overall RAM bandwidth would be for that 4 to be on a 128-bit bus, with the games' 8 on a 256-bit bus.
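Rough numbers for that split, assuming the rumoured DDR3-2133 parts (the 8 GB/4 GB arrangement is the hypothetical above, not a spec):

```python
# Peak DDR3 bandwidth for the suggested split, assuming the rumoured
# DDR3-2133 (2133 MT/s) parts. bandwidth = bytes/transfer * transfers/s.
def ddr3_gbps(bus_bits: int, mts: float = 2133) -> float:
    return bus_bits / 8 * mts * 1e6 / 1e9

print(f"256-bit game pool: {ddr3_gbps(256):.1f} GB/s")  # ~68.3 GB/s
print(f"128-bit OS pool:   {ddr3_gbps(128):.1f} GB/s")  # ~34.1 GB/s
```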

I would assume MS is aware of what Sony's OS reserve actually is, the same way I know Sony is aware of MS's via 3rd parties. Though I guess they had more of a motivation to tell Sony than the other way around.

I just have to wonder how bad loading times are going to be this generation; spinning disks didn't suddenly get a lot faster, and we're now talking about 20x the data to load to start a game.
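Some ballpark arithmetic on that - the sustained-throughput figures below are assumptions, not measurements:

```python
# How long a level load might take from a laptop-class HDD vs. the optical
# drive, for a few payload sizes. Throughput numbers are ballpark sustained
# rates assumed for illustration, not measured figures.
def load_seconds(payload_gb: float, mb_per_s: float) -> float:
    return payload_gb * 1024 / mb_per_s

for gb in (1, 3, 6):
    hdd = load_seconds(gb, 100)   # ~100 MB/s sustained for a 2.5" HDD
    bd  = load_seconds(gb, 27)    # ~27 MB/s for a 6x Blu-ray drive
    print(f"{gb} GB: HDD ~{hdd:.0f}s, Blu-ray ~{bd:.0f}s")
# 1 GB: HDD ~10s, Blu-ray ~38s ... 6 GB: HDD ~61s, Blu-ray ~228s
```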
 
I usually don't say this sort of thing, but it seems to fit here "5GB for an App OS is insane". Now perhaps the thought is that the game devs only have 7/7.5 GB on the PS4 so why spend more memory on the games when it may not be used by cross-platform devs. Perhaps 7.5GB game, 3.5GB OS, and 1GB hypervisor ... though a proper hypervisor shouldn't even need 256-512Meg.

5 gigs is a lot for an OS, but I wonder if some of it is for future features that can be added without draining the original 5-gigs-for-gaming / 3-gigs-for-OS pool? I don't think the OS runs in the whole 5 gigs. I think it's there to give a cushion for the switching between apps and suspend states. I think it was more a move towards more memory for gaming and a smoother OS in all states. I don't know all the details 100 percent, just passing on what I was told.
 
If they went to 12, then having 4 for the OS would make sense, because the only way I could see it working without reducing the overall RAM bandwidth would be for that 4 to be on a 128-bit bus, with the games' 8 on a 256-bit bus.

I would assume MS is aware of what Sony's OS reserve actually is, the same way I know Sony is aware of MS's via 3rd parties. Though I guess they had more of a motivation to tell Sony than the other way around.

I just have to wonder how bad loading times are going to be this generation; spinning disks didn't suddenly get a lot faster, and we're now talking about 20x the data to load to start a game.

Mandatory install will help a bit but they should've included SSDs.
 
I usually don't say this sort of thing, but it seems to fit here "5GB for an App OS is insane". Now perhaps the thought is that the game devs only have 7/7.5 GB on the PS4 so why spend more memory on the games when it may not be used by cross-platform devs. Perhaps 7.5GB game, 3.5GB OS, and 1GB hypervisor ... though a proper hypervisor shouldn't even need 256-512Meg.

I agree with that. What was the reason for a separate OS from the hypervisor? Why not just have an underlying OS, stripped to barebones, running a Hyper-V instance on top for the game? Why 3 OSes?
 
Mandatory install will help a bit but they should've included SSDs.

A large-capacity SSD is just not feasible cost-wise.
And I don't think the mandatory install helps that much; there were several PS3 games that had better performance off Blu-ray than HDD because they could read from both simultaneously.
 
If they went to 12, then having 4 for the OS would make sense, because the only way I could see it working without reducing the overall RAM bandwidth would be for that 4 to be on a 128-bit bus, with the games' 8 on a 256-bit bus.

I would assume MS is aware of what Sony's OS reserve actually is, the same way I know Sony is aware of MS's via 3rd parties. Though I guess they had more of a motivation to tell Sony than the other way around.

I just have to wonder how bad loading times are going to be this generation; spinning disks didn't suddenly get a lot faster, and we're now talking about 20x the data to load to start a game.

Is there a point where these systems can't make worthwhile use of "more RAM" due to some other bottleneck in their hardware?
 
I agree with that. What was the reason for a separate OS from the hypervisor? Why not just have an underlying OS, stripped to barebones, running a Hyper-V instance on top for the game? Why 3 OSes?

If I had to guess at why 3 OSes (and I have no insight), I'd suggest it was a technical solution to a political problem. They were probably required to be able to run Windows RT apps, and they likely wanted to keep the GameOS as close to the existing bare-bones OS as possible, with minimum isolation from the hardware.
That would dictate that the hypervisor sit under both OSes.
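A toy sketch of that guess - a thin type-1 hypervisor owning the hardware with both OSes as peer partitions. All partition names and reservation sizes here are invented for illustration:

```python
# Toy model of the type-1 ("native") arrangement guessed at above: a thin
# hypervisor owns the hardware and two guest partitions sit beside each
# other. Names and reservation sizes are invented, not leaked figures.
from dataclasses import dataclass

@dataclass
class Partition:
    name: str
    ram_gb: float
    cpu_cores: int

class Hypervisor:
    def __init__(self, total_ram_gb: float, total_cores: int):
        self.free_ram, self.free_cores = total_ram_gb, total_cores
        self.partitions: list[Partition] = []

    def create(self, name: str, ram_gb: float, cores: int) -> Partition:
        # Reservations are carved out of the hypervisor's pool up front.
        assert ram_gb <= self.free_ram and cores <= self.free_cores
        self.free_ram -= ram_gb
        self.free_cores -= cores
        p = Partition(name, ram_gb, cores)
        self.partitions.append(p)
        return p

hv = Hypervisor(total_ram_gb=8, total_cores=8)
game = hv.create("GameOS (stripped-down kernel)", ram_gb=5, cores=6)
apps = hv.create("AppOS (WinRT-style shell)", ram_gb=3, cores=2)
# Neither guest can see the other's reservation; only the hypervisor
# arbitrates, which matches the "sits under both OSes" guess above.
```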
 
MS's fundamental issues are not technical or technology related. It's the lack of (consumer) trust, and internal organizational challenges.
 
MS's fundamental issues are not technical or technology related. It's the lack of (consumer) trust, and internal organizational challenges.


If you watch any of the videos interviewing the engineers and people making the product, you realize they are very smart, passionate people and are very genuine.

To paint them as untrustworthy due to execs pushing for DRM or including Kinect or whatever seems like short-selling some very good people in the industry.
 
If the dev kits have 12GB then they could roll with that layout for the retail kit, perhaps. Maybe it's a mix of 4GB/8GB modules.

Good point BRiT, that is what they did when they doubled the RAM for 360 prior to launch - just use the dev kit board.

As far as this potential upclock is concerned, is it the CPU and GPU or just the GPU?
AFAIK, just GPU

But cboat never said there was a downclock; that's what people with absolutely no reading comprehension, conflating multiple posts by multiple people, came up with. He said there were yield problems.

Look, I don't know what the whole story is, but one of the original posters of the downclock rumour has told me that they got that info from CBOAT himself.
Rangers has heard that CBOAT was behind it as well.

Now obviously he didn't state it publicly; however, he never once addressed the downclock rumours (e.g. saying he doesn't know about a downclock, only the yield issues), so his behaviour can be taken as tacitly supporting the downclock rumour.

So, all that considered, CBOAT being behind the rumours seems quite plausible (it would certainly explain why all the GAFsiders started parroting it while no one outside GAF had heard it).

Unless an overclock is incredibly easy and falls well into what they consider safe limits, I can't see them bothering.

That seems to be the case, otherwise why would they bother indeed.

This all goes to what Microsoft internally considers good-enough yields and where the actual yield distribution ends up.

If they can hit their "good enough" line at a higher performance target and see a higher upside than the incremental good die increase from staying put, it's their call. We're not going to deduce that with the information on hand.

Yup, I've been told, well before these current rumours, that the hw does have enough headroom for a moderate clock increase if MS want to eat the cost of yields.
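A toy model of that trade-off, assuming (purely for illustration) that each die's maximum stable clock is normally distributed around some mean - the mean and sigma below are invented:

```python
# Toy binning model for the "eat the cost of yields" trade-off: assume each
# die's maximum stable clock is normally distributed (mean/sigma invented),
# then compare the fraction passing at an 800 vs. 900 MHz bin.
import random

random.seed(1)
max_clock = [random.gauss(mu=950, sigma=80) for _ in range(100_000)]

for target in (800, 900):
    passing = sum(c >= target for c in max_clock) / len(max_clock)
    print(f"{target} MHz bin: {passing:.1%} of dice pass")
# With these made-up numbers ~97% pass at 800 MHz but only ~73% at 900 MHz.
# Whether +12.5% performance is worth binning away roughly a quarter of the
# dice is exactly the "good enough" call described above.
```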
 
MS's fundamental issues are not technical or technology related. It's the lack of (consumer) trust, and internal organizational challenges.

The only real group that is complaining is the core gamers, and their ire was drawn mostly by technology decisions. DRM, perceived power, Kinect - these are all technology-driven concerns and can only be solved with modifications to the underlying technology. There's also no indication that org changes are a challenge right now; some would even argue that Mattrick was just getting in the way.
 
Yes, I know that. But leakage and heat and clocks and voltage are interrelated. What is the voltage and clock? At such a low clock one would expect a correspondingly low voltage and leakage. Look at the voltages the related commercial products run at versus the clocks they can run at.
Heat is certainly related to voltage. But a chip not being stable at certain clocks because of its physical structure - the interconnects or gates just not being as sound as on the next chip - is a different matter. Almost nothing manufactured is over-engineered these days, not like 20-30 years ago when things were built to last. Things are built to spec and nothing more.
True, but I think they have imposed a limit well below what the chip can do. Tahiti is solidly in the regime you are talking about. A chip of similar die size, with a higher transistor count, one year later on the same TSMC process, but at 100W rather than 250W (not sure exactly how to divide that between Tahiti and the GDDR5), is not running near max clock and thus is also not near max voltage and leakage.
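The first-order model behind this exchange, with invented constants - only the scaling behaviour matters, not the absolute numbers:

```python
# First-order CMOS power model: dynamic power scales with C*V^2*f, while
# leakage grows roughly exponentially with voltage. All constants below
# are invented for illustration; only the scaling behaviour matters.
import math

def chip_watts(v: float, ghz: float,
               c_eff: float = 30.0,   # effective switched capacitance (arb.)
               leak0: float = 8.0) -> float:
    dynamic = c_eff * v**2 * ghz
    leakage = leak0 * math.exp(3.0 * (v - 1.0))  # steep in V, hence the
    return dynamic + leakage                      # payoff of low-V binning

print(f"1.6 GHz @ 0.90 V: {chip_watts(0.90, 1.6):.0f} W")  # ~45 W
print(f"1.6 GHz @ 1.10 V: {chip_watts(1.10, 1.6):.0f} W")  # ~69 W
# Same clock, +0.2 V: dynamic power up ~49% and leakage up ~82%, which is
# why a low-clocked part is expected to ship at a correspondingly low V.
```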
If Microsoft contract TSMC, say for example, to produce 6 million APUs clocked at 1.6GHz/800MHz, then TSMC will fab and test to that specification. It's pass or fail. They will not spend time testing at 1.7GHz/800MHz or 1.6GHz/900MHz or any other variation that Microsoft has not specified, because that's not what is required.

Of course it's possible that Microsoft really did contract with the fabricators for this extra testing and binning, but - given the advance timescales on fabricating millions of chips - they would have had to have had doubts about the final specification, probably as early as last year and certainly before Sony announced the PlayStation 4. You don't just go to your fab guys and say you want all this extra binning done; they have production schedules for other customers. It's not like ordering a six-inch sub then quickly changing your mind and going for the full foot-long!
 
I usually don't say this sort of thing, but it seems to fit here "5GB for an App OS is insane". Now perhaps the thought is that the game devs only have 7/7.5 GB on the PS4 so why spend more memory on the games when it may not be used by cross-platform devs. Perhaps 7.5GB game, 3.5GB OS, and 1GB hypervisor ... though a proper hypervisor shouldn't even need 256-512Meg.

We still don't know the arrangement of environments. Is it a native hypervisor, under which the game OS and app OS are subordinate, or is it a hosted hypervisor, subordinate to one of the console operating systems, i.e. like running VMware under your main OS?

If the former, a large amount of memory may be useful if the hypervisor is arbitrating exchanges of information between the two operating systems. I.e. it would make sense that the game OS could delegate duties to the app OS to free up its resources for running games. How about the other way - what if live TV could be arbitrated into the game OS? You see a TV in your game and the screen shows an actual live TV feed.
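A minimal sketch of that arbitration idea, assuming the native-hypervisor case; the pool names and sizes are invented:

```python
# Sketch of the "hypervisor arbitrates exchanges" idea above: a native
# (type-1) hypervisor can reassign memory between the game and app
# partitions at runtime. Pool names and sizes are invented for illustration.
class Arbiter:
    def __init__(self, pools: dict[str, float]):
        self.pools = pools  # partition name -> RAM in GB

    def delegate(self, src: str, dst: str, gb: float) -> None:
        # Only the layer below both OSes can legally move pages like this;
        # a hosted (type-2) hypervisor inside one OS couldn't shrink its host.
        assert self.pools[src] >= gb, "source partition too small"
        self.pools[src] -= gb
        self.pools[dst] += gb

hv = Arbiter({"GameOS": 5.0, "AppOS": 3.0})
hv.delegate("AppOS", "GameOS", 1.0)   # game borrows RAM while the TV app idles
print(hv.pools)  # {'GameOS': 6.0, 'AppOS': 2.0}
```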
 