Xbox One (Durango) Technical hardware investigation

People complain that all they know about is peak figures, therefore it's MS PR & FUD.
They get told how the peak is calculated and what the actual measured usage is. People still complain it's MS PR & FUD.

Guess they have to release every single detail possible, but I have a feeling it will still simply be MS PR & FUD.

Details could be released by 3rd-party developers, and it will be an MS money hat if it doesn't crap on the X1.

If Sony would pony up some more details, that would be cool too :) I sure don't doubt 150 GB/s under whatever circumstances; I am just curious as to what those circumstances are, and whether MS will be doing anything to help devs increase the range of circumstances in which they get these numbers. If it is a lock that 150+ is going to be common, with or without MS intervention (in the form of drivers and such), then that will be news.
 
Seriously, just take the 109 GB/s + 68 GB/s and believe whatever you like.
But then you probably would say that you can't add the BW together. ;)
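For reference, here's where those headline peak figures come from, as a quick back-of-the-envelope sketch using the publicly reported clocks and bus widths (the "7 of every 8 cycles" read/write pairing for the eSRAM is the explanation given in the DF material, so treat it as reported rather than confirmed):

```python
# Back-of-the-envelope derivation of the headline peak figures, using the
# publicly reported clocks and bus widths. Purely illustrative arithmetic.

DDR3_TRANSFERS_PER_S = 2133e6        # DDR3-2133
DDR3_BUS_BYTES       = 256 // 8      # 256-bit bus -> 32 bytes per transfer

ESRAM_CLOCK_HZ       = 853e6         # GPU/eSRAM clock after the upclock
ESRAM_BYTES_PER_CLK  = 128           # reported 128 bytes per cycle in one direction
RW_PAIR_FRACTION     = 7 / 8         # reported: a write can pair with a read on ~7 of 8 cycles

ddr3_peak   = DDR3_TRANSFERS_PER_S * DDR3_BUS_BYTES        # ~68 GB/s
esram_1way  = ESRAM_CLOCK_HZ * ESRAM_BYTES_PER_CLK         # ~109 GB/s, one direction
esram_peak  = esram_1way * (1 + RW_PAIR_FRACTION)          # ~204 GB/s, read + write combined

print(f"DDR3 peak:       {ddr3_peak / 1e9:6.1f} GB/s")
print(f"eSRAM one-way:   {esram_1way / 1e9:6.1f} GB/s")
print(f"eSRAM combined:  {esram_peak / 1e9:6.1f} GB/s")
print(f"sum of peaks:    {(ddr3_peak + esram_peak) / 1e9:6.1f} GB/s")  # the ~270 GB/s figure
```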

This was what the MS Fellow said as well. IMHO the MS guys seem to throw in enough caveats that it's not a major PR thing. Major Nelson's bandwidth figure for the 360 is a milestone in that department :D
 
No, the 200 GB/s is real world... according to the DF doc, anyway. The combined peak of DDR3 and eSRAM is something like 270 GB/s.

That 0.4%, if used correctly, will be used really heavily, as the stages in the pipeline use it to store intermediate results. It punches well above its weight for such a little guy...

What?

176 GB/s is real world too.

You don't have to travel 60 miles in an hour in order to go 60 mph.

The eSRAM can't sustain its 200 GB/s over a full second, nor can GDDR5 sustain 176 GB/s over a full second.
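To put that speedometer analogy in numbers, here's a tiny sketch with entirely made-up figures: a short burst can run at a very high instantaneous rate while contributing almost nothing to a full-second average.

```python
# The speedometer analogy in numbers. All values are hypothetical and chosen
# only to show that a burst rate and a per-second average are different things.

burst_bytes   = 8 * 1024 * 1024     # say an 8 MB surface is moved in one burst
burst_seconds = 50e-6               # over 50 microseconds

rate_during_burst = burst_bytes / burst_seconds   # what you'd "measure" during the burst
one_second_avg    = burst_bytes / 1.0             # its contribution to a 1-second average,
                                                  # if this were the only traffic that second

print(f"rate during the burst: {rate_during_burst / 1e9:6.1f} GB/s")  # ~167.8 GB/s
print(f"averaged over 1 s:     {one_second_avg / 1e9:6.3f} GB/s")     # ~0.008 GB/s
```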

I don't think you can readily compare the two systems, since:

1. We know relatively little of the eSRAM in terms of timing or configuration.

2. We know relatively little about how either system deals with the memory contention introduced by having off-chip DRAM that services a CPU, a GPU and a number of other processors that may or may not exist on either chip. The memory system backed by GDDR5 on a discrete GPU doesn't have to worry about addressing the needs of a CPU in a timely manner. And CPUs aren't used to having to share off-chip memory with thousands upon thousands of GPU threads vying for access to memory.

3. One system has a unified memory system while the other has a split memory system where two memory pools play different roles.

4. THE MODS WILL BAN YOUR SORRY HIDE

There are too many variables and too many unknowns to readily form an informed opinion.

MOD: and too many eager bansticks terminating the conversation before it can become suitably well informed (within the limits of public data), and hence it ends up full of fanboy trolling rather than real technical debate.
 
I thought that was what they were getting. Why mention ~150 GB/s as typical usage and not just leave the peak bandwidth figure out there?

Perhaps I misread the claims but I never saw anything about typical usage.

If they had specifically used the words "typical" or "average" in relation to the 150 GB/s figure, then the last few posts would make more sense, but all I've read is that this is what they've measured.

So take an average game scene over the space of 10 minutes, measure esram bandwidth utilization over that period and plot it in a line chart. How often do you think the measured bandwidth would hit 150GB/s? Would it hover fairly flat around the 150GB/s mark? Would it peak and trough above and below that mark with 150GB/s being roughly average? Or would it peak and trough with the peaks occasionally (or even extremely rarely) hitting that mark but on average being much lower?

I don't know the answer to the above, but without that, Microsoft's statement can mean anything from very rare 150 GB/s peaks with a much lower average to a 150 GB/s average with much higher peaks.
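As a sketch of why the wording matters, here are two invented utilization traces for which "we measured 150 GB/s" is equally true, even though they describe very different behaviour:

```python
# Two invented eSRAM utilization traces (GB/s, e.g. one sample per frame).
# "We measured 150 GB/s" is literally true for both, yet only the average
# distinguishes a rare spike from steady near-peak usage.

trace_spiky = [60, 70, 65, 150, 72, 68, 75, 63, 70, 66]               # rare peak, low average
trace_flat  = [148, 152, 149, 151, 150, 147, 153, 150, 149, 151]      # hovers around 150

for name, trace in [("spiky", trace_spiky), ("flat", trace_flat)]:
    print(f"{name:5s}: peak = {max(trace):3d} GB/s, "
          f"average = {sum(trace) / len(trace):5.1f} GB/s")
```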
 
3. One system has a unified memory system while the other has a split memory system where two memory pools play different roles.

There are too many variables and too many unknowns to readily form an informed opinion.

Isn't 8 GB of DDR3 unified memory? Isn't the 512 MB on the Xbox 360 a unified memory system?
 
Perhaps I misread the claims but I never saw anything about typical usage.

If they had specifically used the words "typical" or "average" in relation to the 150 GB/s figure, then the last few posts would make more sense, but all I've read is that this is what they've measured.

So take an average game scene over the space of 10 minutes, measure esram bandwidth utilization over that period and plot it in a line chart. How often do you think the measured bandwidth would hit 150GB/s? Would it hover fairly flat around the 150GB/s mark? Would it peak and trough above and below that mark with 150GB/s being roughly average? Or would it peak and trough with the peaks occasionally (or even extremely rarely) hitting that mark but on average being much lower?

I don't know the answer to the above, but without that, Microsoft's statement can mean anything from very rare 150 GB/s peaks with a much lower average to a 150 GB/s average with much higher peaks.

Seriously, they've said that real applications measure 150 GB/s rather than the theoretical peak of 204 GB/s. What more do you want?

Of course if you're hitting the same area over and over and over again, you don't get to spread out your bandwidth and so that's one of the reasons why in real testing you get 140-150GB/s rather than the peak 204GB/s.
This is getting into conspiracy theory territory, where basically you are trying too hard to look for "clues" to fit your own conclusion.

And you know what, they will always find the "evidence" no matter what facts are presented.
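To put the engineers' "hitting the same area over and over" remark in concrete terms, here's a toy model; the lane count and the even per-lane split of the peak are pure assumptions, since the real eSRAM organisation hasn't been documented publicly:

```python
# Toy model of access locality vs. achievable eSRAM bandwidth. The lane count
# and the even per-lane share of the peak are assumptions for illustration
# only; the real eSRAM organisation hasn't been made public.

NUM_LANES = 4            # assumed number of independent lanes/banks
PEAK_GBPS = 204.0        # the combined read+write peak under discussion

def achievable_bandwidth(share_per_lane):
    """share_per_lane: fraction of the traffic landing on each lane (sums to 1).
    Each lane delivers at most PEAK/NUM_LANES, so the busiest lane sets the pace."""
    busiest = max(share_per_lane)
    return (PEAK_GBPS / NUM_LANES) / busiest if busiest > 0 else 0.0

print(achievable_bandwidth([0.25, 0.25, 0.25, 0.25]))   # evenly spread  -> 204.0 GB/s
print(achievable_bandwidth([1.00, 0.00, 0.00, 0.00]))   # one hot lane   ->  51.0 GB/s
print(achievable_bandwidth([0.40, 0.30, 0.20, 0.10]))   # skewed traffic -> 127.5 GB/s
```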
 
Seriously, they've said that real applications measure 150 GB/s rather than the theoretical peak of 204 GB/s. What more do you want?

If the chart measuring bandwidth over 10 minutes averages 100GB/s and peaks at 150GB/s once then they can say with complete validity that they measured 150GB/s in a real application.
 
If the chart measuring bandwidth over 10 minutes averages 100GB/s and peaks at 150GB/s once then they can say with complete validity that they measured 150GB/s in a real application.

But people would probably say that 10-minute chart might be the best-case 10 minutes and the rest of the session runs like crap, so we want to see a 1-hour chart.

Here's a hint: it's very easy to write some code that does nothing useful yet chews up the bandwidth, so what does that prove?
 
If the chart measuring bandwidth over 10 minutes averages 100GB/s and peaks at 150GB/s once then they can say with complete validity that they measured 150GB/s in a real application.
And it may have hit 200, so maybe we should be saying that they're seriously downgrading their own number. misterxmedia and you seem to have the same view on things, just from opposite ends.
 
Isn't 8 GB of DDR3 unified memory? Isn't the 512 MB on the Xbox 360 a unified memory system?

It's unified memory in the sense that it's shared by the CPU and the GPU. But it's not unified in the sense of a single pool handling all the memory accesses normally handled by off-chip DRAM. The eSRAM introduces a difference in how the XB1 utilizes off-chip DRAM versus how the PS4 uses it: GDDR5 has to deal with post-processing on the PS4, whereas the XB1 seems to keep at least some of the memory needs of post-processing on chip.

The DDR3 will deal with a subset of the memory accesses dealt with by the GDDR5 on the PS4. Without a full understanding of how the eSRAM services the processors, you can't readily compare the two systems.
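One way to picture the split-pool situation is the placement decision developers face: which surfaces go into the 32 MB of eSRAM and which stay in DDR3. The surface list, sizes and greedy policy below are invented purely for illustration:

```python
# Naive sketch of the placement call a developer faces with a split pool:
# bandwidth-hungry render targets go into the 32 MB eSRAM until it is full,
# everything else stays in DDR3. The surface list, sizes and greedy policy
# are invented for illustration; real engines juggle this per-pass.

ESRAM_BUDGET = 32 * 1024 * 1024   # 32 MB on-chip

surfaces = [                       # (name, size in bytes, rough bandwidth demand)
    ("color_fp16",   1920 * 1080 * 8,   "very high"),   # blended colour target
    ("depth",        1920 * 1080 * 4,   "high"),
    ("gbuffer_0",    1920 * 1080 * 4,   "high"),
    ("gbuffer_1",    1920 * 1080 * 4,   "high"),
    ("shadow_map",   2048 * 2048 * 4,   "medium"),
    ("texture_pool", 512 * 1024 * 1024, "streamed"),     # bulk assets live in DRAM
]

used = 0
for name, size, demand in surfaces:
    if demand != "streamed" and used + size <= ESRAM_BUDGET:
        used += size
        pool = "eSRAM"
    else:
        pool = "DDR3"
    print(f"{name:12s} {size / 2**20:7.1f} MB -> {pool}")

print(f"eSRAM used: {used / 2**20:.1f} MB of 32 MB")
```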
 
What interests me about the numbers, including the 133-140 GB/s realized from blending, is that the Orbis thread did bring up the utilization numbers for discrete GPUs, and for ROP operations their memory subsystems are tuned for higher utilization than that.

It's curious because it would seem like the higher potential speeds on-die and the existence of separate data paths would make the job easier.

Both DRAM and the eSRAM are banked, although we don't know the full nature of the eSRAM's subdivisions, so why the difference if both controller types are able to reorder accesses as needed?
Could this be a sign of a play for latency on the part of the eSRAM controllers by reducing the coalescing and reordering functions GPU controllers perform?
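For context on the utilization comparison, here are the ratios implied by the figures already quoted in this thread; it's just arithmetic on public numbers, nothing new:

```python
# Utilization ratios implied by the figures already quoted in this thread.
# Nothing new is being claimed; this is just arithmetic on public numbers.

ESRAM_PEAK = 204.0   # GB/s, combined read+write theoretical peak

figures = {
    "FP16 blend test (DF article)": 133.0,
    "upper end of the blend range": 140.0,
    "MS 'real game' figure":        150.0,
}

for label, gbps in figures.items():
    print(f"{label:30s}: {gbps:5.1f} GB/s = {gbps / ESRAM_PEAK:5.1%} of peak")
```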
 
What?

176 GB/s is real world too.

The 176 GB/s is a theoretical figure, not real world. Real world is less. An interesting article is here: http://archive.arstechnica.com/paedia/b/bandwidth-latency/bandwidth-latency-1.html
It explains a lot, though despite the author's best efforts, it's not for the faint-hearted!

MS engineers have stated that whilst monitoring real games running, they have measured a real-world 150 GB/s from eSRAM and 50 GB/s from DDR3. They can be added together, so in the real world the X1 has been measured running real code at 200 GB/s. The relevant quotes were included here a page or so ago.

The rest of your points are fair enough, we don't know all the ins and outs, but there would have to be a proper curve ball to throw these figures right out.

I am interested in your comment about thousands of GPU threads vying for bandwidth with the CPU, because to my mind the more that happens, the more contention you get, which will not be good for overall bandwidth utilisation or for reducing CPU stalls.
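On the "they can be added" point, here's a sketch of the distinction: concurrent traffic to separate pools does add, while CPU and GPU traffic within the single DDR3 pool contend. The CPU/GPU split below is entirely hypothetical:

```python
# When bandwidths add and when they contend. The 150 and 50 GB/s figures are
# the measured numbers quoted above; the CPU/GPU split of the DDR3 traffic is
# a made-up example, purely to illustrate the contention point.

esram_measured = 150.0   # GB/s of eSRAM traffic (GPU only, separate bus)
ddr3_measured  = 50.0    # GB/s of total DDR3 traffic in the same capture

# Separate pools on separate buses: concurrent traffic simply adds.
print(f"aggregate traffic: {esram_measured + ddr3_measured:.0f} GB/s")

# Within the single DDR3 pool, CPU and GPU share the same 68 GB/s peak.
DDR3_PEAK = 68.0
cpu_demand = 15.0                         # hypothetical CPU demand, GB/s
gpu_ceiling = DDR3_PEAK - cpu_demand      # ignores arbitration/efficiency losses
print(f"DDR3 left for the GPU (toy model): {gpu_ceiling:.0f} GB/s")
```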
 
I guess 150+GB/s can be achieved in some cases.

Exactly, to think that is the mean bandwidth over the course of some macroscopic time frame is ridiculous. 99% of the data is in the DDR3 pool, but magically the eSRAM is going to be reading/writing 150 GB/s full time? Remember the original DF article about the eSRAM with the "holes"? They actually gave an example of how they achieved that near-max bandwidth number, some FP16 blend operation I think. Are we to expect the eSRAM is always doing some operation that maxes out its bandwidth?
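As a rough sanity check on why an FP16 blend pass is the natural bandwidth-saturation test: blending reads the destination and writes the result, so both directions get exercised at once. The 16-ROP count and 853 MHz clock are the public specs; one blend per ROP per clock is an assumption, so treat the total as a ballpark demand figure only:

```python
# Rough bandwidth demand of an FP16 RGBA blend pass. Blending reads the
# destination pixel and writes the blended result, so both read and write
# paths are exercised every cycle. 16 ROPs and 853 MHz are the public specs;
# one blend per ROP per clock is an assumption, so treat this as a ballpark.

ROPS            = 16
CLOCK_HZ        = 853e6
BYTES_PER_PIXEL = 8 + 8      # FP16 RGBA: 8-byte read + 8-byte write

demand_gbps = ROPS * CLOCK_HZ * BYTES_PER_PIXEL / 1e9
print(f"blend bandwidth demand: ~{demand_gbps:.0f} GB/s")   # ~218 GB/s, above the 204 GB/s peak

# Demand above the peak means the pass is bandwidth-bound, which is exactly
# the situation in which you find out how much of the 204 GB/s is reachable.
```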
 
Exactly, to think that is the mean bandwidth over the course of some macroscopic time frame is ridiculous. 99% of the data is in the DDR3 pool, but magically the eSRAM is going to be reading/writing 150 GB/s full time? Remember the original DF article about the eSRAM with the "holes"? They actually gave an example of how they achieved that near-max bandwidth number, some FP16 blend operation I think. Are we to expect the eSRAM is always doing some operation that maxes out its bandwidth?

What's important is what the maximum bandwidth is, so that it doesn't become a bottleneck during peak demand. It won't run at 150 GB/s all the time because it would never have to.
 
But people would probably say that 10-minute chart might be the best-case 10 minutes and the rest of the session runs like crap, so we want to see a 1-hour chart.

Here's a hint: it's very easy to write some code that does nothing useful yet chews up the bandwidth, so what does that prove?

There's a world of difference between saying something like "over a typical 10, 20, 30, whatever, minute gaming session in a real game we hit an average of 150 GB/s utilization" and saying "we've measured 150 GB/s in a real game". They are two completely different things to say. While one can mean the same as the other, it can also mean something very different. People seem to be assuming Microsoft's statement means whatever they want it to mean rather than accepting the reality that it doesn't really tell us an awful lot.
 
And it may have hit 200, so maybe we should be saying that they're seriously downgrading their own number. misterxmedia and you seem to have the same view on things, just from opposite ends.

I haven't made a judgement either way as yet. My post clearly stated that I don't know what the real interpretation of Microsoft's statement is, simply that there is more than one interpretation. The ones with the agenda are those insisting on the one specific interpretation that best suits their worldview.
 
There's a world of difference between saying something like "over a typical 10, 20, 30, whatever, minute gaming session in a real game we hit an average of 150 GB/s utilization" and saying "we've measured 150 GB/s in a real game". They are two completely different things to say. While one can mean the same as the other, it can also mean something very different. People seem to be assuming Microsoft's statement means whatever they want it to mean rather than accepting the reality that it doesn't really tell us an awful lot.

Again, if you are looking for clues to fit your conclusion, you are always going to find them.

Here's another hint: using more BW is not better; it's actually worse. By suggesting that the 150 GB/s is somehow a manufactured stat, what you are saying is that in real life the average BW utilized is actually less, meaning that there is even more bandwidth available to be used. ;)
 
What's important is what the maximum bandwidth is, so that it doesn't become a bottleneck during peak demand. It won't run at 150 GB/s all the time because it would never have to.

Well, I'm not the one painting the XB1 as a "200 GB/s" system. The system has 99% of the game data in 68 GB/s memory, with the eSRAM to assist as a separate, independent, small high-speed pool. The aggregate system bandwidth is going to be difficult to characterize, but there must be a meaningful and honest way to do it other than "we measured...".
 
This thread is making me laugh and cry at the same time. I wish there was a quick and easy way to filter out all the trolling because there's a lot of really good info, but one has to work really hard to find it.
 