Xbox One (Durango) Technical hardware investigation

Maybe not even that, but from what I have read here, ESRAM can be combined into one APU while EDRAM has to be separate, and ESRAM can be fabbed anywhere versus limited fab options for EDRAM. Those seem to be favorable cost points for ESRAM that may negate, or at least mitigate, the unfavorable size issue; when you also factor in it being more flexible technically, it could be the better overall choice by MS's reasoning.
Yeah, you're right, and that may have been the emphasis in the decision.
 
I think you'd still want some level of MSAA (2x,4x) coupled with post AA next gen.
Post AA is poor at things like subpixel aliasing (eg. chain link fences, thin geometry viewed from far away etc).
 
As a side note, if it is better this gen to have a single big chip, the same may have been true last gen too.
Nvidia's G80 launched at 400+ mm^2, scaled down to 334mm^2 (@65nm) and finished at 270mm^2 on a 55nm process.

eDRAM served the 360 well, but when all is said and done one may wonder if it was the right choice (not that the PS3 is a perfect piece of kit either). The PS3 did quite well with a lot less bandwidth, and we will never know what a system with a plain UMA setup à la PS4 could have achieved (even looking at costs).

At the moment Nintendo takes a lot of bashing, but I wonder if the CPU they chose for the Wii and Wii U could have proved a better choice than MSFT's and Sony's speed-demon, throughput-oriented designs. It was damned tiny and power efficient; even a 90nm process allowed for a sane multicore setup (Broadway is 19mm^2 according to the numbers on that very board).

OK, I'll quit the a posteriori thinking; it's not an option in the real world, not to mention I'm going off topic.
MSFT's choices for Durango are sound and allowed for 8GB at a reasonable cost; if there are production issues, that is another matter which might have nothing to do with the merits of the design.
 
I think you'd still want some level of MSAA (2x,4x) coupled with post AA next gen.
Post AA is poor at things like subpixel aliasing (eg. chain link fences, thin geometry viewed from far away etc).

Eh? I was under the impression that MSAA does nothing for chain-link fences or grass, as they're usually rendered as a texture (possibly leading to heavy GPU lifting like alpha blending). That's why NV and ATI came up with all these new AA modes like AAA/TXAA [sic?] and other acronyms I can't remember.

Do game engines do things differently these days?
 
As a side note, if it is better this gen to have a single big chip, the same may have been true last gen too.
Nvidia's G80 launched at 400+ mm^2, scaled down to 334mm^2 (@65nm) and finished at 270mm^2 on a 55nm process.
I don't see how G80 was in any way related to last gen consoles? Plus, I remember the original die comprising some ~480mm² @ 90nm (if memory serves well - too lazy to check).

And finally: G80 in my perception basically was the last "big" GPU chip that didn't run into considerable manufacturing problems of some kind. AMD's RV770 (following their R600 disaster) basically marked the beginning of the reign of "sweet-spot" chips - and Nvidia hasn't exactly been very lucky with any of their huge monolithic dies after G80.

Returning to the topic at hand: Given the track record of large GPUs and the added complexity of the integrated APU design + their heavy customizations, going for a mainstream-chip (and we have to regard console chips as such - it's not some kind of small-volume-high-price-product on which you can accept/compensate below average yields) that is considerably bigger than ~300mm² (maybe even in the range of ~400mm²) basically was a tremendous risk for Microsoft/AMD.
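To put a rough number on that risk, here is a minimal Poisson yield sketch in Python. The defect density is an assumed, purely illustrative figure (real foundry numbers are confidential), but it shows how steeply yield falls as a die grows past the ~300mm² mark:

```python
import math

def poisson_yield(die_area_mm2, defects_per_cm2):
    """Classic Poisson yield model: Y = exp(-D * A)."""
    return math.exp(-defects_per_cm2 * die_area_mm2 / 100.0)

D = 0.5  # assumed killer-defect density in defects/cm^2, illustrative only
for area in (150, 250, 350, 450):
    print(f"{area:>3} mm^2 die -> ~{poisson_yield(area, D):.0%} yield")
```

With that made-up defect density the model gives roughly 47%, 29%, 17% and 11% yield respectively; reality is more forgiving (defects cluster, parts can be salvaged), but the trend against very large monolithic dies is the same.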
 
I don't remember anyone saying it was 50% less powerful; people said the PS4 was 50% more powerful, which was true, and now it looks like it's going to be about 100% more powerful than the Xbox One.
There you have it!

onQ just double confirmed that Xbone is 100% underclocked!!!11!1

:LOL:
 
Eh? I was under the impression that MSAA does nothing for chain-link fences or grass, as they're usually rendered as a texture (possibly leading to heavy GPU lifting like alpha blending). That's why NV and ATI came up with all these new AA modes like AAA/TXAA [sic?] and other acronyms I can't remember.

Do game engines do things differently these days?

I might have got confused between alpha textures used for chain-link fences, which MSAA can't help with, and subpixel features, where it can.

"We think the next step involves hybrid approaches combining MSAA (with low sample counts) with filter-based anti-aliasing techniques," says the Jimenez MLAA team.

"This would allow having a good trade-off between sub-pixel features, smooth gradients and low processing time."
http://www.eurogamer.net/articles/digital-foundry-future-of-anti-aliasing?page=2
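As a very rough sketch of that hybrid idea (not the Jimenez team's implementation, just an illustration in Python/NumPy with invented buffer shapes and thresholds): resolve a low-sample-count MSAA buffer to keep the sub-pixel coverage, then run a cheap luma-based edge filter over the result for the long gradients post-AA handles well.

```python
import numpy as np

def resolve_msaa(samples):
    """Box-resolve an MSAA colour buffer of shape (H, W, S, 3) to (H, W, 3).
    Even 2x/4x captures sub-pixel geometry a pure post-process pass never sees."""
    return samples.mean(axis=2)

def luma(rgb):
    return rgb @ np.array([0.2126, 0.7152, 0.0722])  # Rec. 709 weights

def post_filter(img, edge_threshold=0.1):
    """Crude filter-based pass: find luma discontinuities and blend those
    pixels with their 3x3 neighbourhood (a stand-in for MLAA/SMAA/FXAA-style
    filtering, not any shipping algorithm)."""
    l = luma(img)
    dx = np.abs(np.diff(l, axis=1, prepend=l[:, :1]))
    dy = np.abs(np.diff(l, axis=0, prepend=l[:1, :]))
    edges = (np.maximum(dx, dy) > edge_threshold)[..., None]

    blurred = np.zeros_like(img)
    for sy in (-1, 0, 1):
        for sx in (-1, 0, 1):
            blurred += np.roll(np.roll(img, sy, axis=0), sx, axis=1)
    blurred /= 9.0
    return np.where(edges, blurred, img)

# Hypothetical 720p, 4x MSAA framebuffer filled with placeholder data.
framebuffer = np.random.rand(720, 1280, 4, 3)
antialiased = post_filter(resolve_msaa(framebuffer))
```

The division of labour is the point: the MSAA samples carry the sub-pixel information, while the filter pass cleans up the remaining stair-stepping cheaply.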
 
I don't see how G80 was in any way related to last gen consoles? Plus, I remember the original die comprising some ~480mm² @ 90nm (if memory serves well - too lazy to check).

And finally: G80 in my perception basically was the last "big" GPU chip that didn't run into considerable manufacturing problems of some kind. AMD's RV770 (following their R600 disaster) basically marked the beginning of the reign of "sweet-spot" chips - and Nvidia hasn't exactly been very lucky with any of their huge monolithic dies after G80.

Returning to the topic at hand: Given the track record of large GPUs and the added complexity of the integrated APU design + their heavy customizations, going for a mainstream-chip (and we have to regard console chips as such - it's not some kind of small-volume-high-price-product on which you can accept/compensate below average yields) that is considerably bigger than ~300mm² (maybe even in the range of ~400mm²) basically was a tremendous risk for Microsoft/AMD.
OT
I will just answer you quickly, as the link I made between the ongoing discussion about eDRAM/scratchpads in Durango (and in the 360) is indeed not that obvious. I was just pointing out that if a single chip is an option this gen, when wafer costs, costs for implementing silicon and overall R&D are at an all-time high (with the associated risks), it was also an option last gen, when all of those costs were lower. eDRAM is sort of parallel to that, as a single chip would have meant a chip bigger than either Cell, Xenon, Xenos or RSX, which allows for a wider bus even after a couple of shrinks (that is why I referred to the G80 and its heirs as an example; I did not mean that a single chip would have to be that big).
I think the cost of a wider bus is overstated; it certainly was during the speculation about the upcoming consoles, and ultimately both systems use a 256-bit bus. I think it was an option last gen, along with a single chip.

Anyway, this is not a "redo last-gen hardware" thread. Though the Durango solution seems pretty good (putting possible production issues aside as noise), I'm not sure that referring to last-gen systems is a good way to sell eDRAM or an eSRAM scratchpad, as I think that both the 360 and the PS3 were in fact suboptimal on more than one account (cf. my reference to Nintendo, which may have chosen a more appropriate CPU architecture than both MSFT and Sony, who have gone for more similar CPUs this gen; I would say Jaguar is closer to Broadway than to either Cell or Xenon).
 
Can you show me something that is actually eSRAM at the scale of 32MB?

The Wii U's 32MB cache is EDRAM, IBM's massive 80MB is EDRAM, Haswell's 128MB is EDRAM.

Do you not see a pattern? There's a reason they are all EDRAM.

AFAIK, the Wii uses a separate die for DRAM mounted side by side in a single package with the CPU and GPU, much like the original Xbox 360.

Microsoft apparently went with 32 MB of fast, refresh-free static RAM running synchronously (I guess synced to the on-chip bus). It will save you power (less static current), but it's a killer for yield. As someone explained, you can repair SRAM, but only so much. If the yield is horrible, they have to throw away the complete die, which for a big die is quite costly.

SRAM also saves you extra mask costs compared to on-chip eDRAM, so it's not surprising they went this route. If latency was a concern, they also could have gone with a small DRAM inside the package.
 
AFAIK, the Wii uses a separate die for DRAM mounted side by side in a single package with the CPU and GPU, much like the original Xbox 360.
No, Wii U integrates it on the GPU die.
Microsoft apparently went with 32 MB of fast, refresh-free static RAM running synchronously (I guess synced to the on-chip bus). It will save you power (less static current), but it's a killer for yield. As someone explained, you can repair SRAM, but only so much. If the yield is horrible, they have to throw away the complete die, which for a big die is quite costly.
Then it's a severe planning mistake: not enough redundancy. If the yield were so bad that some redundancy in the SRAM wasn't helping, one wouldn't get a single CPU core or the GPU working (as there are a lot of non-redundant parts there). As others have said already, getting SRAM to yield is one of the easier problems.
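A small sketch of that redundancy argument, with invented failure probabilities: if each SRAM macro carries a handful of spare rows, the array only becomes scrap when more rows are defective than there are spares, which is a far rarer event than a single defect landing in the non-redundant logic.

```python
from math import comb

def repairable(rows, spares, p_row_fail):
    """Probability the array can be repaired, i.e. at most `spares` rows
    carry a defect. p_row_fail is an invented per-row failure probability."""
    return sum(comb(rows, k) * p_row_fail**k * (1 - p_row_fail)**(rows - k)
               for k in range(spares + 1))

ROWS, P_FAIL = 4096, 0.0005   # illustrative numbers only
print(f"no spare rows: {repairable(ROWS, 0, P_FAIL):.1%}")   # ~13%
print(f"8 spare rows:  {repairable(ROWS, 8, P_FAIL):.1%}")   # ~100%
```

Same array, same made-up defect rate: without spares only about one die in eight has a clean array, while with eight spares virtually all of them are repairable.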
 
Microsoft apparently went with 32 MB of fast, refresh-free static RAM running synchronously (I guess synced to the on-chip bus). It will save you power (less static current), but it's a killer for yield. As someone explained, you can repair SRAM, but only so much. If the yield is horrible, they have to throw away the complete die, which for a big die is quite costly.

SRAM will have much higher leakage than eDRAM would even with refresh, and it will only get worse as nodes shrink.

IBM paper on Big Blue.

Standby power: As semiconductor scaling proceeds beyond the 130-nm generation, the device off currents show an alarming increase; as shown in Figure 4 [2], the standby power density begins to approach the active power density of the chip. In the case of memory, a static memory cell with six transistors tends to have ~1000 times more leakage current on a per-cell basis than a dynamic cell. At the memory macro level, one has to consider the standby current of the peripheral devices and the various internally generated power-supply circuits, and the refresh current needed for the dynamic memory. When all of these are included, embedded DRAM tends to have a 6x to 8x advantage over embedded SRAM. Active power tends to be comparable and is dictated by the performance and memory bandwidth used.

http://www.d.umn.edu/~tkwon/course/5315/HW/BG/BG_DRAM.pdf
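The arithmetic behind that excerpt, as a hedged sketch: only the ~1000x per-cell ratio and the 6x-8x macro-level conclusion come from the paper; the overhead factor below is an assumed number chosen purely to illustrate how periphery and refresh close most of the per-cell gap.

```python
CELLS = 32 * 1024 * 1024 * 8        # bit cells in a 32 MB macro

dram_cell_leak = 1.0                # arbitrary units
sram_cell_leak = 1000.0             # ~1000x per cell, per the excerpt

sram_macro = sram_cell_leak * CELLS           # array leakage dominates
dram_array = dram_cell_leak * CELLS
dram_overhead = 140.0 * dram_array            # assumed periphery + refresh
dram_macro = dram_array + dram_overhead

print(f"per-cell ratio: ~{sram_cell_leak / dram_cell_leak:.0f}x")
print(f"macro ratio:    ~{sram_macro / dram_macro:.1f}x")    # lands near 7x
```

So the 1000x headline number and the 6x-8x macro number are not in conflict; they just measure different things.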
 
So far it's still unfounded rumors originating from NeoGAF. Not something to worry about or take stock in unless it's being repeated by more places with their own sources.
 
Can you show me something that is actually eSRAM at the scale of 32MB?

The Wii U's 32MB cache is EDRAM, IBM's massive 80MB is EDRAM, Haswell's 128MB is EDRAM.

Do you not see a pattern? There's a reason they are all EDRAM.




You're making a large assumption that the IBM guys are actually working on the XBONE. Isn't it more likely they are being used for Oban, the 360 shrink?

Yes, I do see the pattern. Hence what I have been saying about the 1T-SRAM. I know there is a reason they are using EDRAM; that is part of my point. While it is possible that it is 6T or 8T based on a transistor-count argument, I think there is another good argument that it is a type of SRAM physical IP that is actually EDRAM at the core:

Due to its one-transistor bit cell, 1T-SRAM is smaller than conventional (six-transistor, or “6T”) SRAM, and closer in size and density to embedded DRAM (eDRAM). At the same time, 1T-SRAM has performance comparable to SRAM at multi-megabit densities, uses less power than eDRAM and is manufactured in a standard CMOS logic process like conventional SRAM.

MoSys markets 1T-SRAM as physical IP for embedded (on-die) use in System-on-a-chip (SoC) applications. It is available on a variety of foundry processes, including Chartered, SMIC, TSMC, and UMC. Some engineers use the terms 1T-SRAM and "embedded DRAM" interchangeably, as some foundries provide MoSys's 1T-SRAM as “eDRAM”. However, other foundries provide 1T-SRAM as a distinct offering.

In other words it is EDRAM at the memory cell level but is called eSRAM in the industry.
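A back-of-envelope on why the cell type matters so much for area. The bit-cell sizes below are assumed ballpark values for a 28/32nm-class process, not figures for any specific chip, and real arrays add sizeable overhead for sense amps, decoders and redundancy:

```python
bits = 32 * 1024 * 1024 * 8          # a 32 MB scratchpad

cell_area_um2 = {
    "6T SRAM":         0.15,   # assumed bit-cell size
    "eDRAM / 1T-SRAM": 0.03,   # assumed bit-cell size
}

for kind, cell in cell_area_um2.items():
    raw_mm2 = bits * cell / 1e6
    print(f"{kind:>16}: ~{raw_mm2:.0f} mm^2 raw cell area (plus overhead)")
```

Roughly ~40 mm² versus ~8 mm² of raw cells under those assumptions, which is why a dense 1T-style cell marketed as "eSRAM" would be an attractive way to fit 32 MB on the APU die.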



I think the reasons the others use eDRAM apply to the Xbox One too. You see it from pretty low-cost Wii U designs all the way up to big PPC server chips.



As for IBM, I do not know, but I saw the LinkedIn profile mentioning a 32nm Xbox chip, so if that is real then we can say they worked on that particular project. But I think it is reasonable that they helped with the other project too. MS certainly built up their own team, but that does not mean they turn away help from a good partner. I know that a company can have a great design and still benefit from IBM's help in other areas related to manufacturing and yield (I know from first-hand experience). MS's team is not the size of IBM's, and it can really help to use proven blocks and to get help from the big guys regarding yield and manufacturability. MS has no fab, for example; that is a huge additional area of expertise that IBM has, and it makes a big difference when you input that expertise before tapeout through reviews, etc.
 
Controller Question - I assume this is the correct thread at this point.
From the following story - http://news.xbox.com/2013/06/xbox-one-controller-feature

The story only mentions rumble/vibration motors and then uses the word "haptic" to describe the feedback. Do we know for a fact that the ability to control the pressure required on the triggers is present, or are there just rumble motors that affect the triggers? There is a big difference between the two. Having force feedback / pressure feedback in the triggers could be very nice; if it is just rumble motors, I am nowhere near as intrigued.
 
Controller Question - I assume this is the correct thread at this point.
From the following story - http://news.xbox.com/2013/06/xbox-one-controller-feature

The story only mentions rumble/vibration motors and then uses the word "haptic" to describe the feedback. Do we know for a fact that the ability to control the pressure required on the triggers is present, or are there just rumble motors that affect the triggers? There is a big difference between the two. Having force feedback / pressure feedback in the triggers could be very nice; if it is just rumble motors, I am nowhere near as intrigued.
As far as I know, it's just tiny rumble motors in the triggers, no actual force feedback.
 
That seems odd considering the talk of "OMG they can make gun triggers that feel like REAL triggers." Something must have been mistaken somewhere. Bummer.
 
Allegedly there is now a magnetic connector in the triggers and no more springs or dead zone, so I do not know about the technology or how that would affect the trigger feel.
 