Xbox One (Durango) Technical hardware investigation

Status
Not open for further replies.
That is not very meaningful, IMO. Is that an average or a particular operation? We won't know these numbers until developers start leaking real-world experiences with real engines. For now we just have some MS PR.

That is the real-world number for read plus write. Read or write alone is 109GB/s.

Add either of those to the 68GB/s of main memory.
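As a minimal sketch of that addition (figures as quoted later in the thread: 109GB/s per ESRAM direction, 68GB/s for DDR3):

```python
# Adding one direction of ESRAM traffic to DDR3 main memory bandwidth.
# These are peak figures quoted in the thread, not measured numbers.
esram_one_way = 109  # GB/s, ESRAM read OR write peak
ddr3 = 68            # GB/s, DDR3 main memory peak
print(esram_one_way + ddr3)  # → 177 (GB/s combined)
```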
 
Not getting into direct comparisons, the answer is going to be more nuanced than all this.
Quoting a real-world average utilization factor -- even assuming complete accuracy -- isn't always helpful, because that average mashes together a wide array of different scenarios.

More enlightening would be to see the numbers drawn for specific activities, which can hit the strengths and weaknesses of the system.
Even better would be to see if a game using these activities together is benchmarked, or if the engine devs present their results and analysis.

The platforms peak differently in different scenarios and to different degrees. This is one source of complication.
Unless you know the relative importance of each bottleneck for a specific game, you can't say when those peaks matter or how much. This compounds the difficulty in saying anything definitive beyond the refrain that we'll need to wait until the rubber hits the road.
This in turn relates only to the performance outcome -- and even then possibly only in the context of a portion of a complex system, with all other factors like features, power, economics, developer politics, and so on considered equal or irrelevant.
 
Devs CAN try to get to that number or they may not. It's all about the devs and how "lazy" they are :LOL: How much MS does to make that CAN more of a WILL is a different story.

I like how the devs would be very lazy on one system but will sacrifice sleep to write the best code they can on the other.
 
Come on guys, we can have a more conducive dialogue without referring to things such as "PS4 is not balanced", "X1 is suddenly a bandwidth monster", "MS is all PR", etc., or downplaying X1's or PS4's strengths while up-playing the competition.

Let's leave PS4 out of the thread.
 
Devs CAN try to get to that number or they may not. It's all about the devs and how "lazy" they are :LOL: How much MS does to make that CAN more of a WILL is a different story.

And then if you say what can you achieve out of an application - we've measured about 140-150GB/s for ESRAM.
"That's real code running. That's not some diagnostic or some simulation case or something like that. That is real code that is running at that bandwidth. You can add that to the external memory and say that that probably achieves in similar conditions 50-55GB/s and add those two together you're getting in the order of 200GB/s across the main memory and internally."
So 140-150GB/s is a realistic target and DDR3 bandwidth can really be added on top?
"Yes. That's been measured."

I'm not sure what exactly people want to hear from Microsoft at this point. They're giving actual numbers from running actual games. They've explained every question about the RAM and the bandwidth in more excruciating detail than the public ever gets.
 
That is not very meaningful, IMO. Is that an average or a particular operation? We won't know these numbers until developers start leaking real-world experiences with real engines. For now we just have some MS PR.

From the DF article:

"[ESRAM has four memory controllers and each lane] is 256-bit making up a total of 1024 bits and that in each direction. 1024 bits for write will give you a max of 109GB/s and then there's separate read paths again running at peak would give you 109GB/s.

"What is the equivalent bandwidth of the ESRAM if you were doing the same kind of accounting that you do for external memory? With DDR3 you pretty much take the number of bits on the interface, multiply by the speed and that's how you get 68GB/s. That equivalent on ESRAM would be 218GB/s. However just like main memory, it's rare to be able to achieve that over long periods of time so typically an external memory interface you run at 70-80 per cent efficiency.

"The same discussion with ESRAM as well - the 204GB/s number that was presented at Hot Chips is taking known limitations of the logic around the ESRAM into account. You can't sustain writes for absolutely every single cycle. The writes is known to insert a bubble [a dead cycle] occasionally... one out of every eight cycles is a bubble so that's how you get the combined 204GB/s as the raw peak that we can really achieve over the ESRAM. And then if you say what can you achieve out of an application - we've measured about 140-150GB/s for ESRAM.

"That's real code running. That's not some diagnostic or some simulation case or something like that. That is real code that is running at that bandwidth. You can add that to the external memory and say that that probably achieves in similar conditions 50-55GB/s and add those two together you're getting in the order of 200GB/s across the main memory and internally."

So 140-150GB/s is a realistic target and DDR3 bandwidth can really be added on top?

"Yes. That's been measured."

I guess 150+GB/s can be achieved in some cases.
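As a sanity check, the bandwidth accounting described in the quote can be reproduced from bus widths and clocks. This is a back-of-envelope sketch; the 853MHz ESRAM clock and 2133MT/s DDR3 transfer rate are assumptions based on publicly reported Xbox One specs, not figures given in this thread:

```python
# Reproduce the peak-bandwidth accounting from the DF quote.
# Assumed clocks: 853 MHz ESRAM/GPU clock, 2133 MT/s DDR3 (not stated here).

def peak_gbs(bus_bits: int, rate_ghz: float) -> float:
    """Peak bandwidth in GB/s: bus width in bytes times transfer rate."""
    return bus_bits / 8 * rate_ghz

ddr3 = peak_gbs(256, 2.133)            # ~68 GB/s main memory peak
esram_one_way = peak_gbs(1024, 0.853)  # ~109 GB/s read OR write
esram_both = 2 * esram_one_way         # ~218 GB/s theoretical equivalent
# Writes insert a bubble one cycle in eight, so the write path only
# sustains 7/8 of its peak; reads are unaffected:
esram_raw = esram_one_way + esram_one_way * 7 / 8  # ~205 GB/s (quoted as 204)

print(f"DDR3 peak:       {ddr3:.1f} GB/s")
print(f"ESRAM one-way:   {esram_one_way:.1f} GB/s")
print(f"ESRAM two-way:   {esram_both:.1f} GB/s")
print(f"ESRAM raw peak:  {esram_raw:.1f} GB/s")
```

The raw-peak figure lands within a GB/s of the 204GB/s Hot Chips number, which suggests the bubble accounting above matches what the quote describes.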
 
We have all read all that; I'm not sure why it is gospel, though. MS has measured something, great. How often, and under what particular circumstances? Do you think they are reading/writing to the eSRAM at 150GB/s 100% of the time?

I've measured 42mpg with my car, but that doesn't tell the whole story.
 
Devs CAN try to get to that number or they may not. It's all about the devs and how "lazy" they are :LOL: How much MS does to make that CAN more of a WILL is a different story.
I thought that was what they were getting. Why mention ~150GB/s as typical usage and not just leave the peak bandwidth figure out there?
 
We have all read all that; I'm not sure why it is gospel, though. MS has measured something, great. How often, and under what particular circumstances? Do you think they are reading/writing to the eSRAM at 150GB/s 100% of the time?

I've measured 42mpg with my car, but that doesn't tell the whole story.

That's pretty forthcoming of them don't you think? More info than we normally get. There is another followup article as well.
 
I'm not sure what exactly people want to hear from Microsoft at this point. They're giving actual numbers from running actual games. They've explained every question about the RAM and the bandwidth in more excruciating detail than the public ever gets.


So devs are getting 150 routinely, as a matter of course, like say a GPU using GDDR5 or some other unified memory system? Zounds.

So it is not a matter of WILL then; any dev NOT getting 150+ is choosing NOT to :LOL:
 
We have all read all that; I'm not sure why it is gospel, though. MS has measured something, great. How often, and under what particular circumstances? Do you think they are reading/writing to the eSRAM at 150GB/s 100% of the time?

I've measured 42mpg with my car, but that doesn't tell the whole story.

Seriously, just take the 109GB/s + 68GB/s and believe whatever you like.
But then you would probably say that you can't add the BW together. ;)
 
We have all read all that; I'm not sure why it is gospel, though. MS has measured something, great. How often, and under what particular circumstances? Do you think they are reading/writing to the eSRAM at 150GB/s 100% of the time?

I've measured 42mpg with my car, but that doesn't tell the whole story.

What's your point exactly? Does any memory subsystem read/write to its memory at 100% of its peak, or "measured peak"?

What else do we have besides what's actually been measured? Is your point that 68GB/s, 218GB/s, 140-150GB/s and 176GB/s are all useless numbers that give us no indication of the memory subsystem performance in the new consoles? Should we all just move on and not discuss memory bandwidth at all for a year or so, until developers can comfortably tell us what they are seeing?
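For what it's worth, the quoted numbers can be compared directly: dividing the measured application bandwidths by the corresponding peaks gives the efficiency ratios the interview talks about, using only figures from the DF quote. A quick sketch:

```python
# Efficiency implied by the figures quoted in this thread:
# measured application bandwidth divided by the corresponding peak.
esram_measured = (140 + 150) / 2  # GB/s, "about 140-150GB/s" in real code
esram_peak = 204                  # GB/s, Hot Chips raw peak
ddr3_measured = (50 + 55) / 2     # GB/s, "probably achieves ... 50-55GB/s"
ddr3_peak = 68                    # GB/s, DDR3 peak

print(f"ESRAM: {esram_measured / esram_peak:.0%}")          # ~71%
print(f"DDR3:  {ddr3_measured / ddr3_peak:.0%}")            # ~77%
print(f"Total: {esram_measured + ddr3_measured:.0f} GB/s")  # ~198 GB/s
```

Both ratios land inside the 70-80 per cent efficiency band the interview cites for external memory interfaces, and the total matches the "in the order of 200GB/s" claim.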
 
People complain that all they know about is peak figures, therefore it's MS PR & FUD.
They get told how the peak is calculated and what the actual measured usage is. People still complain: MS PR & FUD.

I guess they have to release every single detail possible, but I have a feeling it will still simply be MS PR & FUD.

Details could be released by 3rd-party developers, and it will be "MS money hat" if it doesn't crap on X1.
 
People complain that all they know about is peak figures, therefore it's MS PR & FUD.
They get told how the peak is calculated and what the actual measured usage is. People still complain: MS PR & FUD.

I guess they have to release every single detail possible, but I have a feeling it will still simply be MS PR & FUD.

Details could be released by 3rd-party developers, and it will be "MS money hat" if it doesn't crap on X1.

Then this thread is pointless; why can't we take the official information?
 
Then this thread is pointless; why can't we take the official information?
I don't think things should be taken as gospel, but unless you're willing to have a reasonable discussion without dismissing what we have been told out of hand, I'm not sure what the point is.

Thread seems to have gone to pot very quickly again :LOL:
 
My brain feels dizzy after so many bandwidth numbers being thrown around in your posts. I would simply like to mention that I trust the engineers who worked on the hardware until proven otherwise. bkilian, for instance, makes me feel there is hope, so yeah.

Never mind. I will take this opportunity to mention that there is a countdown (40 minutes left!!) on YouTube related to the AMD conference and their GPU 14 Showcase.

http://www.youtube.com/watch?v=bHfmM6QYWNM

They say there is going to be some announcement related to the next gen consoles. Hype!

http://tech2.in.com/news/graphics-cards/amds-gpu14-tech-day-to-showcase-nextgen-gpus-and-more/915716
 