Xbox One (Durango) Technical hardware investigation

All interesting theories, but the fact of the matter is that there is only 47 MB of USABLE memory/cache on the Xbox One SoC. And that ESRAM block between the CPUs in the die shot doesn't have some esoteric, undocumented use case. It's simply for yield redundancy.

That wouldn't make sense; it is too far away from the rest. Also, there is about 4 MB whose purpose we don't know yet.
Given how many ways they tried to offload work from the CPU, GPU, and main memory, SRAM for the page table would make sense: it would be much faster than keeping it in main memory.
 
It's simply for yield redundancy.
I don't think you'll convince anyone here that that's the case without something as indisputable as the original design documents stating, "2 MB of SRAM here for redundancy, as it looks prettier than adding that capacity into the cache blocks."
 
Was just answering the question. You'll find that it's often difficult to convince anyone of anything once they have their mind made up. There are people still waiting for the turbo button to be pressed on the Xbox One. All I can do is provide the facts; it's up to the reader to decide whether or not they want to believe them.

Although SRAM memory cells can have redundant lines, there are control, timing, and I/O circuits which cannot. Most yield analyses I've looked at would push people to PN cells without redundancy, simply including more entire subarrays instead. The layout decision could have been down to wanting the block physically separate from other defect regions, and/or to leveraging module layout to compress the die size in some way.
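
For what it's worth, here is a minimal back-of-envelope sketch of that argument, using the classic Poisson defect model with invented numbers (the defect density, macro area, and subarray counts are all illustrative assumptions, not Microsoft's figures):

```cpp
// Hypothetical yield model: compares a monolithic SRAM macro against one
// built from subarrays plus whole spare subarrays, using Y = exp(-D * A).
#include <cmath>
#include <cstdio>

int main() {
    const double D = 0.25;  // assumed defect density, defects per cm^2
    const double A = 0.50;  // assumed SRAM macro area, cm^2

    // No redundancy: a single defect anywhere kills the macro.
    double y_plain = std::exp(-D * A);

    // Redundancy: 16 subarrays plus 2 spares; the macro survives
    // as long as no more than 2 subarrays are defective.
    const int n = 16, spares = 2, total = n + spares;
    double p_good = std::exp(-D * (A / n));  // one subarray yields
    double p_bad  = 1.0 - p_good;

    double y_redundant = 0.0;
    double binom = 1.0;                      // running C(total, k)
    for (int k = 0; k <= spares; ++k) {
        y_redundant += binom * std::pow(p_bad, k) * std::pow(p_good, total - k);
        binom = binom * (total - k) / (k + 1);
    }

    std::printf("monolithic macro yield: %.4f\n", y_plain);
    std::printf("spare-subarray yield:   %.4f\n", y_redundant);
}
```

The exact numbers don't matter; the point is that a handful of whole spare subarrays recovers most of the yield a large macro would otherwise lose, which is why fine-grained redundancy everywhere isn't necessarily needed.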

When MS talked about it at Hot Chips, their memory total was a sum of even the smallest caches available. They would not leave megabytes of page-table cache or anything else out when talking up their chip if it was there to be used. But as you say, I'm probably not going to convince anyone without doing something that would get me in trouble, which is fine with me.
 
It is not about whether people have made up their minds already, and it's also not about one system versus another. People are having trouble believing you because your answer doesn't make complete sense, and also because you provide absolutely no proof.
 
If you listen to the Hot Chips presentation, the engineers explain that the ESRAM is repairable, so it has little impact on yields despite its size. Being repairable, there is little need for an absolutely massive 2-4 MB of redundant cache on top of that. Also, its location on the chip makes no sense for redundancy.

Here the developers discuss the ESRAM cache:
http://youtu.be/IZ-S3Trhg7Y?t=1h10m36s

http://youtu.be/IZ-S3Trhg7Y?t=1h13m20s
 
In this video (2:21) the presenter talks about resource tables for binding a complex set of resources, a feature that is already available on XB1 while being new to Direct3D 12, and that is useful alongside bundles for more efficiency.
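
For readers unfamiliar with the feature, here is a rough sketch of what a resource (descriptor) table looks like through the public desktop D3D12 API; the XB1 toolchain predates public D3D12, so take the exact shape as an assumption, and the register count and shader visibility here are arbitrary:

```cpp
// Minimal sketch: a root signature with one descriptor table of 8 SRVs.
// Device creation, descriptor heap setup, and error handling are omitted.
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

ComPtr<ID3D12RootSignature> MakeRootSignature(ID3D12Device* device) {
    // One contiguous range of 8 shader resource views starting at t0.
    D3D12_DESCRIPTOR_RANGE range = {};
    range.RangeType          = D3D12_DESCRIPTOR_RANGE_TYPE_SRV;
    range.NumDescriptors     = 8;
    range.BaseShaderRegister = 0;

    // The whole range is exposed as a single root parameter (the "table").
    D3D12_ROOT_PARAMETER param = {};
    param.ParameterType = D3D12_ROOT_PARAMETER_TYPE_DESCRIPTOR_TABLE;
    param.DescriptorTable.NumDescriptorRanges = 1;
    param.DescriptorTable.pDescriptorRanges   = &range;
    param.ShaderVisibility = D3D12_SHADER_VISIBILITY_PIXEL;

    D3D12_ROOT_SIGNATURE_DESC desc = {};
    desc.NumParameters = 1;
    desc.pParameters   = &param;

    ComPtr<ID3DBlob> blob, error;
    D3D12SerializeRootSignature(&desc, D3D_ROOT_SIGNATURE_VERSION_1,
                                &blob, &error);
    ComPtr<ID3D12RootSignature> rootSig;
    device->CreateRootSignature(0, blob->GetBufferPointer(),
                                blob->GetBufferSize(), IID_PPV_ARGS(&rootSig));
    return rootSig;
}

// At draw time the whole set of resources is switched with one call:
//   cmdList->SetGraphicsRootDescriptorTable(0, tableBaseGpuHandle);
```

The efficiency win the presenter alludes to is that rebinding eight resources costs one table pointer update rather than eight individual binds.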
 
This is random... but why does the SoC need such an immense array of bypass caps for such a low power requirement? It looks like they expected transient or noise issues and solved them with brute force. Or maybe they decided to spend more on caps to save even more on regulators. (The PS4 is the opposite design, with very few caps, but its regulators are probably more expensive.)
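
One back-of-envelope way to see why brute force works: n identical caps in parallel divide the effective inductance (ESL) and resistance (ESR) by n, so the impedance the SoC's supply pins see drops with the count. All of the component values below are illustrative assumptions:

```cpp
// Impedance of n identical bypass caps in parallel, modelled as a series
// L-C per cap (ESR ignored for simplicity): |Z| = |1/(wCn) - wL/n|.
#include <cmath>
#include <cstdio>

int main() {
    const double PI  = 3.14159265358979;
    const double C   = 100e-9;  // assumed 100 nF ceramic per cap
    const double ESL = 1e-9;    // assumed 1 nH per cap, incl. mounting
    const double f   = 100e6;   // assumed 100 MHz transient content
    const double w   = 2 * PI * f;

    const int counts[] = {1, 10, 100};
    for (int n : counts) {
        double z = std::fabs(1.0 / (w * C * n) - w * ESL / n);
        std::printf("%3d caps -> |Z| ~ %.4f ohm at 100 MHz\n", n, z);
    }
}
```

At these frequencies the parasitic inductance dominates, so going from 10 to 100 caps cuts the supply impedance roughly tenfold, which is exactly the kind of margin you'd buy with a cheap cap array instead of fancier regulation.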
 
Maybe just to make sure there will be no problems at the release date. In later designs they can redesign the thing and use fewer.
 
Well, the device is designed to run nonstop for a decade. No console was ever engineered to run 24/7/365 and be very reliable with low repair rates. Maybe they want to move their name away from RROD as much as possible and have Xbox be linked with reliability. The power supply is over-specified as well, IIRC, which will help with lifespan, and it is external too, maybe for heat and longevity.
 
Is this real?

[attached image: h8IzfDw.png]
 
Is it possible for XB1/Microsoft to go from "32MB eSRAM & 8GB DDR3" (16 × 4Gbit chips) to "32MB eDRAM & 8GB DDR4" (8 × 8Gbit chips) in the near future (1-2 years)? And what would be the effects of such a move? A smaller chip? A cheaper cooling system? A smaller/cheaper XB1?
 
Micron have a page explaining the benefits of switching from DDR3 to DDR4; however, such a drastic change in performance and behaviour could have unpredictable consequences for software. Generally, you don't want any timing-critical part of the system to perform differently.
 
There's no point in going to eDRAM; ESRAM should scale very well with smaller nodes. If anything, eDRAM may not be manufacturable on the newer processes.

I think they will move to LPDDR4. Micron's roadmap for LPDDR4 calls for 64-bit-wide devices with 34 GB/s of bandwidth. That's exactly half of the Xbox One's memory bus bandwidth, so with two such devices they should be able to replace the 16 DDR3 chips they have now with a smaller bus, using what will be a very high-volume part, since it's very likely to show up in mobile devices like tablets.

So yeah, that should result in a smaller and cheaper Xbox. I doubt they could get that out by the end of 2015, though; if this has any potential, I bet it's a 2016+ design. They'll have to find ways to cost-reduce without major hardware revisions until then.
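
The bandwidth arithmetic checks out against the stock configuration; a quick sanity check, assuming the known 256-bit DDR3-2133 bus:

```cpp
// Peak bandwidth comparison: current 256-bit DDR3-2133 bus vs. two of the
// 64-bit, 34 GB/s LPDDR4 devices from Micron's roadmap.
#include <cstdio>

int main() {
    double ddr3   = 2133e6 * (256 / 8) / 1e9;  // MT/s * bytes per transfer
    double lpddr4 = 2 * 34.0;                  // two roadmap devices

    std::printf("16x DDR3 chips, 256-bit bus: %.1f GB/s\n", ddr3);   // ~68.3
    std::printf(" 2x LPDDR4 devices:          %.1f GB/s\n", lpddr4); //  68.0
}
```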
 
This is the crux of it: all of the tricks around balancing ESRAM and DDR3 would break. In the best case, every released title would need to be retested and validated; more likely, a significant number of titles would just not work unless MS ponied up the cash to pay for patches (and even that isn't enough if the original team is no longer there to pay).

The XB1 is ESRAM + DDR3 for life. They will have signed long-lead supply contracts to ensure that they have supply for the next 5-8 years as the rest of the industry moves to DDR4. They need these because plants are typically stripped out and refitted for newer production lines and technologies unless there is a customer willing to pay to keep 'obsolete' tech in production. It's the same as with GDDR3 last time, and with Sony and Rambus: you don't rely on spot-market pricing for components like this; you pay a manufacturer for a certain guaranteed annual volume and get flexible pricing for any quantities above that.
 
I'm just spitballing here, but if adopting DDR4 brings about sufficient cost savings, it may be possible for Microsoft/AMD to revisit the memory bridges and modify them so that, to the rest of the APU, the DDR4 performance and behaviour are indistinguishable from DDR3.

Of course, Microsoft could migrate to DDR4 and leave devs to just deal with testing on two systems. Most game code isn't timing/cycle/latency critical; you just want it to be as fast as possible, and a change in RAM latency won't be noticeable unless you are running really close to the metal or using software loop timings.
 
The 360 had to accommodate the same timing issues when they integrated the CPU and GPU on the same die. It's the reason the 360 now basically sports an on-die FSB.
 
Is changing the bus that paramount?

Judging from the die shot, it'll be a long time before the 256-bit bus gets in the way of die shrinks. Besides, they have yet to use 8Gbit DDR3 chips. I know Micron didn't list x32-width DDR3, but that might be down to component ordering, as Samsung has a catalogue showing it's a possible product. Even the Wii U is using four x16 chips at 1600 when it could have used two x32 parts from Samsung.

i.e. it's not impossible for them to just halve the number of DDR3 chips for retail units while maintaining bus width. A relatively easy switch vs. replacing the memory controller entirely.
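
To put numbers on that halving (chip counts, widths, and densities as described above):

```cpp
// Bus width is chips x bits-per-chip, so wider x32 parts keep the 256-bit
// bus with half the packages, and 8Gbit density keeps capacity at 8 GB.
#include <cstdio>

int main() {
    struct Config { const char* name; int chips; int width_bits; int gbit; };
    const Config cfgs[] = {
        {"current: 16x 4Gbit x16", 16, 16, 4},
        {"halved:   8x 8Gbit x32",  8, 32, 8},
    };
    for (const Config& c : cfgs)
        std::printf("%s -> %d-bit bus, %d GB total\n",
                    c.name, c.chips * c.width_bits, c.chips * c.gbit / 8);
}
```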
 
I don't know how paramount it is, but cutting the bus in half should reduce chip size and packaging cost due to a smaller pin-out, simplify the motherboard, and reduce power consumption, since there is less I/O to drive.

When I look at the X1 and think about reducing cost (ignoring Kinect), the obvious target seems to be the die size of the APU, but after that I think you'd want to reduce the number of components on the motherboard and the cost of the cooling/power supply.
 
I wonder why we never see x32 anywhere. If even the Wii U didn't use it, it's probably less expensive to use x16 in most cases.

Maybe a heat issue? That would explain why the few x32 parts available are low-frequency, low-voltage. Maybe there's a yield advantage: twice the chips with half the die area cost less than lower-yield, bigger chips... or maybe it's just supply and demand: whatever DIMMs use will be the lowest cost, and that's x8/x16.
 