Xbox One (Durango) Technical hardware investigation

Of course, but it'll be better than nothing - which is what we have at the moment.

And ESRAM definitely wasn't included just to make DDR3 bearable - I've been told the designers realised that ESRAM could give the system a significant performance boost, especially for compute.

If that's the case, you might wonder why AMD wouldn't use it and cash in on the profits which go to the GDDR5 producers instead.
 
That would be up to the customer, per AMD's description of their semi-custom design process.
(edit: using what Microsoft paid for would be up to them, that is.)
 
Supposedly, according to intereference, who has an MS source, the 8GB of DDR3 was adopted in late 2011, after it was decided the app OS wouldn't fit.

Presumably the console previously had 4GB on a 128-bit bus? I guess that would still have necessitated ESRAM.
It was 4GB on a 256-bit bus.
 
That makes the choice of DDR3 + eSRAM at the expense of CUs very puzzling.

I guess the bottom line is we don't know which implementation (DDR3+ESRAM or GDDR5+more CUs) is cheaper, or indeed more effective. It's seeming more and more like the consoles are at basic performance parity anyway (although more needs to be known on this front). So at that point it boils down to cost, now and in the future.

If what ERP posted in the other thread is an indication, it may be Kinect weighing more heavily on the XB1 BOM than I expected.
 
If that's the case, you might wonder why AMD wouldn't use it and cash in on the profits which go to the GDDR5 producers instead.

Because it requires more development effort to utilise properly? Especially when only a small subset of cards would be using ESRAM.

The XB1 architecture is going to be harder for devs to utilise to its full extent than the PS4 - a reversal of this gen. Quick and dirty ports will look better on PS4, and devs will have to put in extra effort to get the XB1 version close to parity - and like this gen, I guess most devs will just go down the easy route.
 
I guess the bottom line is we don't know which implementation (DDR3+ESRAM or GDDR5+more CUs) is cheaper, or indeed more effective. It's seeming more and more like the consoles are at basic performance parity anyway (although more needs to be known on this front). So at that point it boils down to cost, now and in the future.

If what ERP posted in the other thread is an indication, it may be Kinect weighing more heavily on the XB1 BOM than I expected.

You can't look at the cost now, but at what Microsoft projected both solutions would cost 1-2 years ago. Sony completely lucked out on having 8GB of GDDR5.
 
There were other parties aware of, and working towards, the density upgrade.
The density trend was one reason the AMD Radeon 7790 didn't increase its memory bus width and launched with a constrained 1 GB memory loadout, when the design could have widened the bus for the sake of capacity.

The question comes back to the strengths Microsoft thought it could leverage, and haggling with Samsung and Hynix is probably not at the top.
 
Yes, I understand they are made for different design points. However, if 8GB of GDDR5 had been available two years ago when they were designing the console, it doesn't seem too far-fetched that Microsoft would have scrapped the eSRAM entirely and just used GDDR5.

As I said, GDDR5 offers more bandwidth per pin, but uses a lot more power and has a worse long-term cost profile.

Capacity per chip is a very secondary concern.
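
To put rough numbers on the bandwidth-per-pin point, here's a quick Python sketch. It assumes the commonly reported retail specs (DDR3-2133 and 5.5 Gbps GDDR5, both on 256-bit buses) and deliberately ignores the ESRAM:

```python
# Peak theoretical bandwidth for the two main memory pools.
# Assumed data rates: DDR3-2133 (XB1) and 5.5 Gbps GDDR5 (PS4);
# the XB1's ESRAM is left out of this comparison on purpose.
def peak_bandwidth_gbs(bus_width_bits, data_rate_gbps):
    """Peak bandwidth in GB/s: pins * per-pin rate / 8 bits per byte."""
    return bus_width_bits * data_rate_gbps / 8

print(f"DDR3-2133, 256-bit:      {peak_bandwidth_gbs(256, 2.133):.1f} GB/s")  # ~68
print(f"GDDR5 5.5 Gbps, 256-bit: {peak_bandwidth_gbs(256, 5.5):.1f} GB/s")    # ~176
print(f"Per-pin advantage:       {5.5 / 2.133:.2f}x")                         # ~2.6x
```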

It doesn't seem logical to me that the eSRAM is a "core feature". If it were a core feature, it would have been at the top of the design priorities.

I'm sorry, what?

What makes you think it isn't? It's what enables them to have comparable bandwidth to Sony at a lower price point and with lower power consumption.

Cheers
 
What makes you think it isn't? It's what enables them to have comparable bandwidth to Sony at a lower price point and with lower power consumption.

Cheers

Is either the lower price point or the lower power consumption actually true right now?

If we look at the design end result of using eSRAM, we see the following:

The good
+ Large pool of RAM

The good, but with footnotes
+/- Good bandwidth, at the cost of certain restrictions and complexity

The bad
- Reduced GPU processing power

The "I don't know what happened to the original plans, but apparently things didn't work out so well" category
? Cost
? Power consumption/heat profile



My point is that having a lot of RAM (and therefore using DDR3 at the time, as GDDR5 only had 2GB capacity) is a core function that Microsoft put at the top of their list, because they want to run tons of apps and allow seamless switching as an entertainment hub. It's the "core design". The Xbox One, as we can all see, was very much built around this design goal, and they achieved it with the 8GB.

On the contrary, eSRAM and its low latency are NOT part of this main design goal, as they don't add much to this function. It is Microsoft trying to retain bandwidth for games while maintaining a large pool of RAM, most likely at the cost of processing power (as it takes up APU die area, and to keep costs level something had to be sacrificed). If gaming and graphics had been at the top of their design goals, we would see very different trade-offs, and that different set of trade-offs ended up materializing in the form of the competitor's console.


It's clear what the design goal is from the trade-offs they made.
 
I guess the bottom line is we don't know which implementation (DDR3+ESRAM or GDDR5+more CUs) is cheaper

We can offer a guess. Assuming XB1's SOC is 400mm^2, MS is looking at 120 dies from a 300 mm wafer. Assuming PS4's SOC is 300mm^2, Sony is getting 165 dies from a wafer.

Assuming yield is 65%, a wafer is $5000 a pop, 8GB of DDR3 is $40, and 8GB of GDDR5 is double that ($80), we get:
XB1: $64+$40 = $104
PS4: $47+$80 = $127

Cheers
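
For concreteness, a small Python sketch that reproduces the arithmetic above (every input is one of the stated assumptions, not a known figure):

```python
# Per-console silicon + memory cost under the assumptions above:
# $5000 wafers, 65% yield, 120 (XB1) vs 165 (PS4) dies per wafer,
# $40 for 8GB of DDR3 vs $80 for 8GB of GDDR5.
WAFER_PRICE = 5000
YIELD = 0.65

def cost_per_good_die(dies_per_wafer):
    """Spread the wafer price over the dies that actually work."""
    return WAFER_PRICE / (dies_per_wafer * YIELD)

xb1 = cost_per_good_die(120) + 40  # ~$64 die + DDR3
ps4 = cost_per_good_die(165) + 80  # ~$47 die + GDDR5
print(f"XB1: ${xb1:.0f}")  # ~$104
print(f"PS4: ${ps4:.0f}")  # ~$127
```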
 
We can offer a guess. Assuming XB1's SOC is 400mm^2, MS is looking at 120 dies from a 300 mm wafer. Assuming PS4's SOC is 300mm^2, Sony is getting 165 dies from a wafer.

Assuming yield is 65%, a wafer is $5000 a pop, 8GB of DDR3 is $40, and 8GB of GDDR5 is double that ($80), we get:
XB1: $64+$40 = $104
PS4: $47+$80 = $127

Cheers
You don't get consistent yields with larger die sizes.
The larger the die, the bigger the hit you take on yield.

According to
http://www.soiconsortium.org/pdf/Economic_Impact_of_the_Technology_Choices_at_28nm_20nm.pdf
and taking only the die cost ratios (which are consistent across the same wafers, from 50% to 70% yields):

Doubling the size of the APU results in roughly 2.7 times the cost (e^1);
adding 30% to the size would thus be around 1.35 times the cost (e^0.3).
As a result, your comparison would be:

XB1: $63+$40 = $103
PS4: $47+$80 = $127
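
A sketch of the scaling rule as applied here (growing die area by a fraction d is taken to multiply cost by e^d; the 300mm^2 to 400mm^2 step is rounded to +30%, as above):

```python
import math

# Cost-ratio rule taken from the die-cost figures above: growing the
# area by a fraction d multiplies the cost by e^d, so doubling (d=1)
# gives ~2.7x and +30% (d=0.3) gives ~1.35x.
def cost_multiplier(d):
    return math.exp(d)

print(f"double the area: {cost_multiplier(1.0):.2f}x")  # ~2.72x
print(f"+30% area:       {cost_multiplier(0.3):.2f}x")  # ~1.35x
# Applied to the $47 PS4-sized die, rounding 300 -> 400 mm^2 to +30%:
print(f"XB1-sized die:   ${47 * cost_multiplier(0.3):.0f}")  # ~$63
```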

We're also not considering the apparent issues eSRAM is bringing into the manufacturing process here.


... I just figured your math is totally weird. How are you getting $47 for PS4?
$5000/165*0.65 = $19.7

We also have to consider that current reports on BOM pricing actually favor the PS4 over the Xbox One by a healthy 30~50 dollars for some reason. Taking out Kinect doesn't seem to put the Xbox One much lower than the PS4, if it's lower at all.
 
We can offer a guess. Assuming XB1's SOC is 400mm^2, MS is looking at 120 dies from a 300 mm wafer. Assuming PS4's SOC is 300mm^2, Sony is getting 165 dies from a wafer.

Assuming yield is 65%, a wafer is $5000 a pop, 8GB of DDR3 is $40, and 8GB of GDDR5 is double that ($80), we get:
XB1: $64+$40 = $104
PS4: $47+$80 = $127

Cheers

Assuming the yields are the same.
 
That paper has certain assumptions baked in that make me dubious that yields can be that terrible, and we don't know the fab or the designs used for those numbers.

Why assume 0.8 Vdd? That takes away a common way to boost yields, and putting it that low can really hit a variation-prone process pretty badly.
I'm not sure which chips being produced in this segment even try idling that low.
 
That paper has certain assumptions baked in that make me dubious that yields can be that terrible, and we don't know the fab or the designs used for those numbers.

Why assume 0.8 Vdd? That takes away a common way to boost yields, and putting it that low can really hit a variation-prone process pretty badly.
I'm not sure which chips being produced in this segment even try idling that low.

I'm taking the cost-increase multiplier that is a direct function of how large your SOC is, and this multiplier is independent of any particular yield (terrible or good) or process. It doesn't matter whether the yield is 50% or 90%; the cost-increase formula should be reasonably sound.

Or I've got my math messed up.



Using another yield formula, Y = 1 / (1 + A*D/2)^2 (where A is the chip area in mm^2 and D is the defect density per mm^2): if we assume the PS4's 300mm^2 die yields 65%, we end up with 400mm^2 chips yielding about 57%.
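
That calculation can be checked directly (a sketch; the 300mm^2 / 65% starting point is the assumption above, and D is back-solved from it):

```python
import math

# Yield model from above: Y = 1 / (1 + A*D/2)^2, where A is die area
# in mm^2 and D is defect density in defects per mm^2.
def die_yield(area_mm2, defects_per_mm2):
    return 1 / (1 + area_mm2 * defects_per_mm2 / 2) ** 2

# Back-solve D from the assumed 300 mm^2 die yielding 65%...
D = 2 * (1 / math.sqrt(0.65) - 1) / 300  # ~0.0016 defects/mm^2
# ...then apply the same defect density to a 400 mm^2 die.
print(f"{die_yield(400, D):.0%}")  # ~57%
```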
 
The numbers may not apply to any realistic solution developed for the consoles.
Other than just looking like they're pretty bad for rather small chips, there's a parametric component to the yields that complicates the area/failure relationship.

For that matter, there's an uncertain contribution to the Durango SOC's size that comes from the large amount of highly redundant SRAM. If there aren't general manufacturability issues, the total area of non-redundant logic that cannot tolerate defects may not be as unfavorable as pure size would indicate.
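
One way to express that intuition, purely as an illustration (the redundant-SRAM area and repair rate below are made-up parameters, not known Durango figures), using the same yield model as earlier in the thread:

```python
# Illustrative only: if part of the die is redundant SRAM that can be
# repaired around defects, the defect-sensitive ("critical") area is
# smaller than the physical area. All parameters here are hypothetical.
def die_yield(critical_area_mm2, defects_per_mm2=0.0016):
    return 1 / (1 + critical_area_mm2 * defects_per_mm2 / 2) ** 2

TOTAL_AREA = 400   # mm^2, the assumed Durango SOC size
SRAM_AREA = 80     # hypothetical redundant-SRAM share of the die
REPAIRABLE = 0.9   # hypothetical fraction of SRAM defects repaired

critical = TOTAL_AREA - SRAM_AREA * REPAIRABLE
print(f"naive (full area):   {die_yield(TOTAL_AREA):.0%}")  # ~57%
print(f"redundancy-adjusted: {die_yield(critical):.0%}")    # ~63%
```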
 
That paper has certain assumptions baked in that make me dubious that yields can be that terrible, and we don't know the fab or the designs used for those numbers.

Why assume 0.8 Vdd? That takes away a common way to boost yields, and putting it that low can really hit a variation-prone process pretty badly.
I'm not sure which chips being produced in this segment even try idling that low.

The paper is also a year old. A lot has happened since then (especially at TSMC).

Cheers
 
The paper is also a year old. A lot has happened since then (especially at TSMC).

Cheers

Defects don't magically disappear and let you maintain yields when you increase the SOC size.
Unless wafer manufacturing has been turned upside down and we're throwing away 30+ years of experience, yield percentage and SOC size are always negatively correlated, and there are formulas that model this correctly.
 