Xbox One (Durango) Technical hardware investigation

Great article. Clearly MS are presenting their own view.

They seem to confirm my theory that next-gen systems will be CPU bound and bandwidth bound.

1) The CPU seems to be central, and next-gen games will be CPU bottlenecked more than anything.
They chose CPU over GPGPU (because there are "quite a number of workloads that do not run efficiently on GPGPU"), which is why they created SHAPE, the up-clock, the move engines, and other things.

2) Bandwidth will be the limit and will impact the usage and efficiency of the CUs and ROPs.
Regarding the ROPs, their example seems to imply that 32 ROPs would be bottlenecked by bandwidth.
They also seem to imply that the X1 will have high coherent read bandwidth, but I really do not know what that means.
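The ROP point can be sanity-checked with back-of-envelope arithmetic. A rough sketch (the 800MHz clock and 32-bit pixel format are illustrative assumptions, not figures from the article):

```python
# Rough fill-rate bandwidth estimate: ROPs * clock * bytes per pixel.
# The clock and pixel format are illustrative assumptions,
# not confirmed Durango specs.
GPU_CLOCK_HZ = 800e6
BYTES_PER_PIXEL = 4        # 32-bit colour target

def rop_bandwidth_gbs(num_rops, blending=False):
    """GB/s needed to keep every ROP writing one pixel per cycle."""
    traffic = num_rops * GPU_CLOCK_HZ * BYTES_PER_PIXEL
    if blending:
        traffic *= 2       # blending reads the target as well as writing it
    return traffic / 1e9

print(rop_bandwidth_gbs(16))                 # 51.2 GB/s
print(rop_bandwidth_gbs(32, blending=True))  # 204.8 GB/s
```

Even without blending, 32 ROPs at those assumptions would want over 100 GB/s of write traffic alone, which is why a plain DDR3 bus looks like the bottleneck.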

Overall they seem to be very confident in their memory setup and seem to have chosen the amount of each component according to the bandwidth limits. The balanced-system theory.

In the end, to be fair, I liked what I read about the X1.

MOD: Removed platform comparison stuff
 
So is the 30GB/s coherent read for the GPU shared with the rest of the CPU bus? It would seem to me from the Hot Chips diagrams that it is. Does this also mean that the GPU has no way of writing coherently? There doesn't seem to be a GPU coherent write bus.
 
I am surprised about the CPU bound comments. Would still like to see what parts of a game bottleneck it.
 
I am surprised about the CPU bound comments. Would still like to see what parts of a game bottleneck it.
6 Jaguar cores at 1.6GHz for a game (when the games were profiled).
Not that hard to believe at all, especially for launch games.

CPUs are relatively weak, so games are going to have to take that into account moving forward.
Not compared to the previous generation, obviously, but compared to PCs and to the console as a whole, in my view.
But as with every console of the past (usually memory limitations), developers will work around it, and I think there's enough power in them for the CPUs not to become cement shoes.
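For a sense of scale, a back-of-envelope peak-throughput number for that CPU setup (the 8 FLOPs/cycle figure assumes Jaguar's 128-bit FP pipes retire a 4-wide SP multiply plus a 4-wide SP add each cycle; treat that as an assumption, not a profiled result):

```python
# Peak single-precision throughput for the game-reserved CPU cores.
CORES = 6                # cores available to a game, per the post
CLOCK_HZ = 1.6e9
FLOPS_PER_CYCLE = 8      # assumed: 128-bit SP multiply + add per cycle

peak_gflops = CORES * CLOCK_HZ * FLOPS_PER_CYCLE / 1e9
print(f"~{peak_gflops:.1f} GFLOPS peak")  # ~76.8 GFLOPS peak
```

Peak numbers like this are never reached in practice, but they show why nobody expects these cores to brute-force their way through a badly threaded engine.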
 
I am surprised about the CPU bound comments. Would still like to see what parts of a game bottleneck it.

It's usually the thread submitting geometry that becomes the issue.
Balancing the granularity of geometry submissions against the associated number of state changes is complicated.
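A minimal sketch of that trade-off: sorting submissions by render state cuts the number of state changes the submitting thread pays for, at the cost of coarser-grained batches. The `Draw` record and state key here are hypothetical, not any real API:

```python
from collections import namedtuple

# Hypothetical draw record: a mesh plus the pipeline state it needs.
Draw = namedtuple("Draw", ["mesh", "shader", "texture"])

def state_changes(draws, sort_by_state=True):
    """Count the state changes the submitting thread would issue."""
    if sort_by_state:
        draws = sorted(draws, key=lambda d: (d.shader, d.texture))
    changes, current = 0, None
    for d in draws:
        key = (d.shader, d.texture)
        if key != current:   # binding a new shader/texture costs CPU time
            changes += 1
            current = key
    return changes

frame = [Draw("rock", "lit", "stone"), Draw("tree", "lit", "bark"),
         Draw("rock2", "lit", "stone"), Draw("tree2", "lit", "bark")]
print(state_changes(frame, sort_by_state=False))  # 4
print(state_changes(frame))                       # 2
```

Real engines also have to weigh sort cost and draw order constraints (transparency, depth), which is where the "complicated" part comes in.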
 
So is the 30GB/s coherent read for the GPU shared with the rest of the CPU bus? It would seem to me from the Hot Chips diagrams that it is. Does this also mean that the GPU has no way of writing coherently? There doesn't seem to be a GPU coherent write bus.

Say what?
 
Most interesting new points to me:

-Talk about why ESRAM was chosen.

-Mention that 8GB was in mind early on (contrary to reliable sources saying the spec was 4GB as late as late 2011)
Late 2011 _was_ early on. I think the decision was made in mid-2011 or so.

They still play this ridiculous power consumption game while the PSU specs are known.
You do know that a PSU is allowed to supply less than its rated amount, yes? Also, beta/devkit PSUs are often hugely overpowered. My 360 devkit PSU was something like 300 watts.
 
Indeed, they do seem to imply that eSRAM was chosen to allow for 8GB of main memory within a reasonable cost/power envelope while maintaining reasonable bandwidth, as opposed to for any latency-specific advantages.

No. You can see in the Yukon leak (which predates Baker's commentary in the article, btw; Yukon was from mid-2010) that they wanted either 32MB of eDRAM or eSRAM back when 4GB of DDR4 RAM was the plan.

I'm also loving the (finally) clear explanation around the eSRAM bandwidth. We now have a real peak figure, an explanation for the missing write cycle, and a believable sustainable bandwidth utilisation rate.

I guess you didn't see the first version of the article? Been known for a couple weeks now.
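For those catching up, the arithmetic behind those eSRAM figures works out like this (assuming a 128-byte interface at the 853MHz GPU clock, with the write path usable on 7 of every 8 cycles; the exact interface width is my inference from the quoted numbers):

```python
# eSRAM bandwidth arithmetic matching the figures in the article.
CLOCK_HZ = 853e6    # GPU/eSRAM clock
BUS_BYTES = 128     # bytes per cycle in one direction (assumed width)

read_gbs = BUS_BYTES * CLOCK_HZ / 1e9    # ~109 GB/s one-direction minimum
write_gbs = read_gbs * 7 / 8             # the "missing" 8th write cycle
peak_gbs = read_gbs + write_gbs          # ~204 GB/s simultaneous read+write

print(f"read {read_gbs:.1f} GB/s, peak {peak_gbs:.1f} GB/s")
```

That's where the ~109 GB/s minimum and ~204 GB/s peak both come from, and why the sustainable figure sits somewhere in between.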
 
Say what?

They mention a coherent read advantage for the GPU but never a coherent write advantage. Why forgo mentioning it at all? It appears it doesn't have a coherent link to the DDR3 to write via.

No. You can see in the Yukon leak (which predates Baker's commentary in the article, btw; Yukon was from mid-2010) that they wanted either 32MB of eDRAM or eSRAM back when 4GB of DDR4 RAM was the plan.



I guess you didn't see the first version of the article? Been known for a couple weeks now.

They never went with Yukon; we never got an early leak about the roadmap for Durango, which is what they are commenting on.
 
No. You can see in the Yukon leak (which predates Baker's commentary in the article, btw; Yukon was from mid-2010) that they wanted either 32MB of eDRAM or eSRAM back when 4GB of DDR4 RAM was the plan.

As I mentioned before, DDR4 RAM had even worse bandwidth than the DDR3 they have now. So it shouldn't be a surprise that they were aiming for as much RAM as possible; back then the viable option was 4GB of DDR4, then 8GB of DDR3 started to look like a better option, and they switched, quite possibly between mid-2011 and late 2011.

Both the 4GB DDR4 and 8GB DDR3 choices fit the argument that they went for as much capacity as possible, while at the same time having the eSRAM to supplement whatever main memory there is to resolve bandwidth issues. Saying it was 4GB of DDR4 doesn't change one bit why the eSRAM is there.



In any event, it is clear as day in the interview that the eSRAM is there to resolve bandwidth issues rather than to provide a latency benefit, as many have claimed.
 
They mention a coherent read advantage for the GPU but never a coherent write advantage. Why forgo mentioning it at all? It appears it doesn't have a coherent link to the DDR3 to write via.

What exactly are you suggesting?
That the coherent address space would somehow turn into ROM when the GPU accesses it?

That would just be ridiculously silly... try to learn what memory coherence means... and read the lines, not between the lines ;-)
 
What exactly are you suggesting?
That the coherent address space would somehow turn into ROM when the GPU accesses it?

That would just be ridiculously silly... try to learn what memory coherence means... and read the lines, not between the lines ;-)

I'm asking why they are not mentioning a coherent write advantage for the GPU if they had one. They seem to be talking up _everything_ they have, yet they forgo talking about GPU coherent writes at all. Why? If it uses the same link, they clearly have the same advantage.

Coherent reads require that they snoop the CPU's L2; coherent writes require them to either bypass the GPU caches or flush the cache, AFAIK.
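A toy model of that asymmetry, assuming write-back caches (purely illustrative; not how the real hardware is organised):

```python
# Toy model: coherent reads can snoop another cache's dirty lines,
# but a write-back cache's writes stay invisible until flushed.
class Cache:
    def __init__(self, memory):
        self.memory = memory
        self.lines = {}          # addr -> value, dirty until flushed

    def write(self, addr, value):
        self.lines[addr] = value     # stays in cache, not yet in memory

    def snoop(self, addr):
        return self.lines.get(addr)  # coherent read path: peek at cached data

    def flush(self):
        self.memory.update(self.lines)  # make writes globally visible
        self.lines.clear()

memory = {0x100: 0}
cpu_l2 = Cache(memory)
gpu_cache = Cache(memory)

cpu_l2.write(0x100, 42)
# GPU coherent read: snoop the CPU L2 and see 42 without any CPU flush.
gpu_read = cpu_l2.snoop(0x100)

gpu_cache.write(0x200, 7)
# Memory sees nothing at 0x200 until the GPU cache is flushed (or bypassed).
before_flush = memory.get(0x200)
gpu_cache.flush()
after_flush = memory.get(0x200)
print(gpu_read, before_flush, after_flush)  # 42 None 7
```

The point being: a read path can be made coherent by snooping alone, whereas coherent writes force extra work (flushes or cache bypass) on the writer's side.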
 
I'm asking why they are not mentioning a coherent write advantage for the GPU if they had one. They seem to be talking up _everything_ they have, yet they forgo talking about GPU coherent writes at all. Why? If it uses the same link, they clearly have the same advantage.

Coherent reads require that they snoop the CPU's L2; coherent writes require them to either bypass the GPU caches or flush the cache, AFAIK.

They aren't necessarily 'talking up everything they have'. They are answering the questions posed by DF.

They never went with Yukon; we never got an early leak about the roadmap for Durango, which is what they are commenting on.
I don't know what you are trying to say. Obviously they 'didn't go with' Yukon, or we'd have that instead. It was the first iteration of their broad planning for the platform as a whole, and it very clearly had input from the engineers, as you can see from the leak's details. If you are trying to suggest that Yukon in no way represented their thinking on the inclusion of the eSRAM, then you're simply wrong, because it was clearly mentioned as an option they were looking into even back then.

I know you're eager to perpetuate the narrative that the eSRAM is only there as a band-aid on a poor design by MS, but it's simply not true. It's there by virtue of the success of the eDRAM on the 360. It's explicitly spelled out in the DF article. No dancing around it.




Strange,

Both the 4GB DDR4 and 8GB DDR3 choices fit the argument that they went for as much capacity as possible, while at the same time having the eSRAM to supplement whatever main memory there is to resolve bandwidth issues. Saying it was 4GB of DDR4 doesn't change one bit why the eSRAM is there.
There is no intelligent argument. We have the word right from the horse's mouth! It's there because MS's engineers felt it was a great opportunity to take the eDRAM model from the 360 further and improve upon it. Yes, it comes with loads of other benefits too (latency, bandwidth), but trying to paint it as a band-aid over the 'wound' of DDR3 is stupid.

People just refuse to give MS credit where it's due on the memory front. Their setup has higher bandwidth than *that other console*, costs less, has the same capacity for gaming/OS applications, has lower latency, and is arguably in a better position for cost-reduction measures going forward. Yet nobody seems to want to give MS credit. So silly.
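The bandwidth part of that claim rests on arithmetic like this (data rates and bus widths are as publicly reported at the time; treat them as approximations):

```python
# Peak bandwidth = data rate (MT/s) * bus width (bytes per transfer).
def peak_gbs(mt_per_s, bus_bits):
    return mt_per_s * 1e6 * (bus_bits / 8) / 1e9

ddr3 = peak_gbs(2133, 256)    # X1 main RAM: ~68.3 GB/s
gddr5 = peak_gbs(5500, 256)   # the other console: 176 GB/s
esram_peak = 204.0            # eSRAM peak figure from the DF article

print(ddr3, ddr3 + esram_peak, gddr5)
```

Whether simply adding the two X1 pools together is a fair comparison is, of course, exactly what's being argued in this thread.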
 
They aren't necessarily 'talking up everything they have'. They are answering the questions posed by DF.

Whereas we've said that we find it very important to have bandwidth for the GPGPU workload and so this is one of the reasons why we've made the big bet on very high coherent read bandwidth that we have on our system.

A constant mention of reads, and a complete absence of them even saying they have a write advantage, when it's relevant to the question? It seems they didn't mention it because they don't have it.
 
I'm asking why they are not mentioning a coherent write advantage for the GPU if they had one. They seem to be talking up _everything_ they have, yet they forgo talking about GPU coherent writes at all. Why? If it uses the same link, they clearly have the same advantage.

Coherent reads require that they snoop the CPU's L2; coherent writes require them to either bypass the GPU caches or flush the cache, AFAIK.

and I just don't understand why saying it's coherent is not enough already....
 
There is no intelligent argument. We have the word right from the horse's mouth! It's there because MS's engineers felt it was a great opportunity to take the eDRAM model from the 360 further and improve upon it. Yes, it comes with loads of other benefits too (latency, bandwidth), but trying to paint it as a band-aid over the 'wound' of DDR3 is stupid.

You're just saying the same thing. I acknowledge it's a package, and I didn't suggest that the eSRAM is a band-aid solution; it is, however, there to provide the bandwidth that the system would otherwise lack once DDR3/DDR4 was chosen. If you think this is a "band-aid" solution, then you will find many engineering designs to be full of band-aids.

The fact is that the fast eSRAM and the large DDR3/DDR4 pool were decided on at the same time. Deciding on one comes with the decision to use the other. You don't choose DDR3 and then ask "hmm, do I add eSRAM to fix bandwidth or not?", or choose eSRAM and then ask "hmm, do I add a large pool of DDR3/DDR4 or not?". They most likely (as they have stated) faced the same design choice as Sony, between one large pool of GDDR5 and DDR3/4 + eSRAM, and decided that DDR3/4 + eSRAM was a better fit for their business goals, design goals, and experience, and went from there.

The large memory pool opens up multimedia opportunities, and they expect the eSRAM to do what the eDRAM did for the Xbox 360, which is neither a bad choice nor a bad expectation. Which was finalized first doesn't change the fact that the eSRAM is there to provide bandwidth and the main RAM is there to provide capacity.


People just refuse to give MS credit where it's due on the memory front. Their setup has higher bandwidth than *that other console*, costs less, has the same capacity for gaming/OS applications, has lower latency, and is arguably in a better position for cost-reduction measures going forward. Yet nobody seems to want to give MS credit. So silly.

You're only pointing out the advantages of one system (and ignoring some of the consequences and the relevance of those advantages) while ignoring its disadvantages. Why not list the advantages of the other design?
There is a reason why people are giving a thumbs up to one console for having 8GB of GDDR5.
 