Understanding XB1's internal memory bandwidth *spawn

No, I'm saying that reading from and writing to DRAM is the same 68GBps whether that data is going to / coming from the ESRAM, the CPU, the HDD, or "da cloud". You don't get to stand in the middle of the stream and add the flow rate in both directions.

He was talking about the ESRAM BW, not DDR3. You can count the ESRAM separately because it is independent.
 
No, I'm saying that reading from and writing to DRAM is the same 68GBps whether that data is going to / coming from the ESRAM, the CPU, the HDD, or "da cloud". You don't get to stand in the middle of the stream and add the flow rate in both directions.

Why not? They are different memory pools and have their own bus.

DRAM->(68GBps)->GPU->(68GBps)->ESRAM
Sure, the data flow is at 68GBps, but it would consume 136GBps of BW to achieve that.

This is different than, say:
DRAM->(68GBps)->GPU and do nothing with it, where the flow rate is 68GBps but it consumes only 68GBps of BW.

You are making the argument that these 2 scenarios consume the exact same amount of BW.
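To make the accounting concrete, here's a rough sketch of what's being claimed (Python, just adding up what each bus carries; the 68GBps is the DDR3 figure from this thread, nothing else here is official):

```python
# Bus-traffic accounting for the two scenarios above (rough sketch).
DRAM_BW = 68   # GB/s read off the DDR3 bus
ESRAM_BW = 68  # GB/s actually written over the ESRAM bus in this example

# Scenario 1: DRAM -> GPU -> ESRAM copy. The data crosses two separate
# buses, so each bus carries 68 GB/s of traffic.
scenario1 = {"DRAM bus (read)": DRAM_BW, "ESRAM bus (write)": ESRAM_BW}

# Scenario 2: DRAM -> GPU and do nothing with it afterwards.
scenario2 = {"DRAM bus (read)": DRAM_BW}

for name, traffic in (("DRAM->GPU->ESRAM", scenario1),
                      ("DRAM->GPU only", scenario2)):
    print(f"{name}: data moves at {DRAM_BW} GB/s, "
          f"total bus BW consumed = {sum(traffic.values())} GB/s")
# Prints 136 GB/s consumed for the copy, 68 GB/s for the plain read.
```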
 
DDR bandwidth is quoted in transfers per second, and each transfer is a read or a write, not both. Otherwise the bandwidth would be published as 136GB/s.


taisui said:
DRAM doesn't do simultaneous read AND write.
256b = 32B, at 2133MHz = 68256MBps = ~68GBps.

Thank you for confirming my beliefs.
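For anyone who wants to double-check that arithmetic, a quick sketch (nothing in it beyond the numbers already in the post):

```python
# Peak DDR3 bandwidth = bus width in bytes x transfer rate.
bus_width_bits = 256
transfer_rate_mts = 2133                  # DDR3-2133: ~2133 mega-transfers/s
bytes_per_transfer = bus_width_bits // 8  # 32 bytes per transfer

peak_mb_s = bytes_per_transfer * transfer_rate_mts
print(peak_mb_s, "MB/s ->", round(peak_mb_s / 1000), "GB/s")
# 68256 MB/s, i.e. ~68 GB/s total, shared between reads and writes.
```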

taisui said:
The DRAM and the ESRAM are different memory pools; they have their own pipes and their own BW. It is also stated that the table is for a copy operation, to demonstrate max BW.

DRAM to DRAM, half read, half write, gives you 34GBps R and 34GBps W, at 68GBps total.

ESRAM to ESRAM (assuming the early leak doesn't include simultaneous R/W), 51GBps each for R and W, total 102GBps.

ESRAM to/from DRAM is limited by the DRAM max BW of 68GBps, so 68GBps on the DRAM plus 68GBps on the ESRAM gives you 136GBps.

It doesn't matter. When explicitly describing the bandwidth available between eSRAM and DRAM, the DDR3 bandwidth is the limiting factor.

68GBs of reads or 68GBs of writes from eSRAM to DRAM is an either/or proposition, so summing those bandwidths is nonsensical.

DRAM to DRAM is accommodating the 68 GBs with half the bandwidth utilized by writes and half utilized by reads. 136 GBs of bandwidth using DRAM requires both reads and writes to independently utilize the entire bandwidth allowed by DRAM simultaneously. A 512-bit interface and 2133 MHz DDR3 could drive read and write bandwidths at a total of 136 GBs; a 256-bit interface cannot. The DDR3 can provide 68 GBs of bandwidth of reads or writes, not 68 GBs of reads and 68 GBs of writes.

In fact, if MS's 204.8 GB/s bandwidth figure between the GPU and eSRAM were derived in such a fashion, we all here, and any of our console-savvy, technically enlightened mothers, would pitch a fit and go "GTFOH".
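To put rough numbers on the interface-width point (a hypothetical comparison using the DDR3-2133 rate discussed here, not any published spec):

```python
# A single DDR3 bus turns around between reads and writes, so both have to
# come out of one total. Doubling the bus width is what 136 GBs would take.
def ddr3_total_gb_s(bus_width_bits, rate_mts=2133):
    return bus_width_bits / 8 * rate_mts / 1000

for width in (256, 512):
    total = int(ddr3_total_gb_s(width))
    print(f"{width}-bit DDR3-2133: ~{total} GB/s total "
          f"(e.g. ~{total // 2} GB/s reads + ~{total // 2} GB/s writes)")
# 256-bit -> ~68 GB/s total; 512-bit -> ~136 GB/s total.
```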
 
136 GBs of bandwidth using DRAM requires both reads and writes to independently utilize the entire bandwidth allowed by DRAM simultaneously. A 512-bit interface and 2133 MHz DDR3 could drive read and write bandwidths at a total of 136 GBs; a 256-bit interface cannot. The DDR3 can provide 68 GBs of bandwidth of reads or writes, not 68 GBs of reads and 68 GBs of writes simultaneously.

The ESRAM BW is separate from the DRAM BW. They are physically separated blocks of chips and have their own dedicated interface to the GPU.

The 136 GBps figure comes from 68GBps READ from the DRAM + the 68GBps WRITE to the ESRAM. You are thinking (incorrectly) that the ESRAM BW is somehow part of the DRAM BW.

Judging by the GAF Q&A, it seems to have been confirmed that the ESRAM is a 2-port design, hence a max BW of 218, not 204GBps.
 
Why not? They are different memory pools and have their own bus.

DRAM->(68GBps)->GPU->(68GBps)->ESRAM
Sure, the data flow is at 68GBps, but it would consume 136GBps of BW to achieve that.

This is different than, say:
DRAM->(68GBps)->GPU and do nothing with it, where the flow rate is 68GBps but it consumes only 68GBps of BW.

You are making the argument that these 2 scenarios consume the exact same amount of BW.

Just because you drove 68 mph to McDonalds and then 68 mph from McDonalds to your job doesn't mean you drove 136 mph.
 
Just because you drove 68 mph to McDonalds and then 68 mph from McDonalds to your job doesn't mean you drove 136 mph.

That's not bandwidth, however. No one's claiming the propagation speed is doubled.
In addition, the trip to the destination and the trip back are effectively happening simultaneously for the copy operation for the purposes of bandwidth calculation.
It's not physically meaningful to be discussing driving to and from a place at the same time.

A more appropriate version of that scenario is counting the number of people being carried on this trip and back.
If it's just the driver, the number of people carried per unit of time in a driving scenario is one person per time unit.
If you create the strange scenario where the drive to and from happen at the same time, it's two people.
 
Just because you drove 68 mph to McDonalds and then 68 mph from McDonalds to your job doesn't mean you drove 136 mph.

So, same argument: how exactly does me driving to McDonalds at 34 mph and coming back at 34 mph become 68 mph then?

(Referring to the DRAM->DRAM scenario: 34GBps read, 34GBps write, total 68GBps; you didn't seem to have a problem with that...)

dobwal said:
DRAM to DRAM is accommodating the 68 GBs with half the bandwidth utilized by writes and half utilized by reads.
 
The ESRAM BW is separate from the DRAM BW. They are physically separated blocks of chips and have their own dedicated interface to the GPU.

The 136 GBps figure comes from 68GBps READ from the DRAM + the 68GBps WRITE to the ESRAM. You are thinking (incorrectly) that the ESRAM BW is somehow part of the DRAM BW.

Judging by the GAF Q&A, it seems to have been confirmed that the ESRAM is a 2-port design, hence a max BW of 218, not 204GBps.

Why must data coming from the eSRAM and meant for the DRAM traverse the GPU? What are the DMEs for?
 
So, same argument: how exactly does me driving to McDonalds at 34 mph and coming back at 34 mph become 68 mph then?

(Referring to the DRAM->DRAM scenario: 34GBps read, 34GBps write, total 68GBps; you didn't seem to have a problem with that...)
Doesn't this metaphor depend on how many Big Macs you can carry in your car?
 
68 Big Macs per second en route to McDonalds and 68 Big Macs per second leaving with customers requires 136 Big Macs per second of total lardwidth at McDonalds.
 
So, same argument: how exactly does me driving to McDonalds at 34 mph and coming back at 34 mph become 68 mph then?

(Referring to the DRAM->DRAM scenario: 34GBps read, 34GBps write, total 68GBps; you didn't seem to have a problem with that...)

No, it's like ordering ready-made food for delivery. The delivery driver can drive at 34 mph from point A to B and still match the time you would need driving at 68 mph from point B to A and then back from A to B, had you chosen takeout instead.

DRAM to DRAM requires data to be read from one location and written into the other. In simple terms, you are reading in one step and writing in another, which means two steps to perform the operation. Since reads and writes alternate, both reading and writing operate at half rate. 136 GBs doesn't imply that scenario; it implies that reads and writes occur during the same step, each and every step, which the DRAM doesn't accommodate.
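A minimal way to model that alternating-step point (just restating the 68GBps DDR3 figure from this thread):

```python
# A copy inside one pool spends the bus's cycles on both the reads and the
# writes, so the copy itself moves data at half the bus's peak rate.
DRAM_BW = 68  # GB/s, total DDR3 bus bandwidth

def copy_rate_within_one_pool(bus_bw):
    # Every byte copied crosses the same bus twice: once read, once written.
    return bus_bw // 2

print("DRAM->DRAM copy:", copy_rate_within_one_pool(DRAM_BW), "GB/s of data moved")
print("Bus utilisation :", DRAM_BW, "GB/s total (34 read + 34 write)")
```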
 
Correct, so I'm assuming the GPU is reading from the ESRAM pool at 68 GB/s (because it can't transfer more data than that per second through the DRAM bus to main memory) and the GPU is also then writing to the DRAM pool at 68 GB/s for a total of 136 GB/s consumed over 2 buses. Is that not the correct interpretation of the table?

The ESRAM reads and writes are higher than 68GB/s, up to 109GB/s in each direction. That's why your numbers are off.
 
No, it's like ordering ready-made food for delivery. The delivery driver can drive at 34 mph from point A to B and still match the time you would need driving at 68 mph from point B to A and then back from A to B, had you chosen takeout instead.

DRAM to DRAM requires data to be read from one location and written into the other. In simple terms, you are reading in one step and writing in another, which means two steps to perform the operation. Since reads and writes alternate, both reading and writing operate at half rate. 136 GBs doesn't imply that scenario; it implies that reads and writes occur during the same step, each and every step, which the DRAM doesn't accommodate.

I don't know if you are trying too hard to disprove this with a bad analogy or you just didn't understand.
(Miles per hour is a velocity, so the right analogy would actually be latency. Bandwidth would be how much food I ordered from the takeout, i.e. lardwidth.)

The 136GBps is not DRAM->DRAM in simultaneous R/W.

It's for DRAM->ESRAM, or ESRAM->DRAM, and because you can't write more data than you can read in a copy operation, the transfer rate is bound by the DRAM max BW, which is 68GBps, hence 68GBps * 2 = 136GBps.
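As a tiny sketch of that accounting (the 68GBps DDR3 and 109GBps-per-direction ESRAM figures are simply the ones quoted in this thread):

```python
# Copy between two pools: the write side can't outrun the read side, so the
# data rate is bounded by the slower bus, and each bus carries that rate.
def copy_between_pools(src_bw, dst_bw):
    rate = min(src_bw, dst_bw)  # GB/s of data actually moved
    return rate, rate * 2       # (data rate, combined traffic on both buses)

DRAM_BW = 68
ESRAM_DIR_BW = 109  # per direction

rate, traffic = copy_between_pools(DRAM_BW, ESRAM_DIR_BW)
print(f"DRAM<->ESRAM copy: {rate} GB/s of data, {traffic} GB/s of combined bus traffic")
# -> 68 GB/s of data moved, 136 GB/s summed across the two buses.
```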
 
Bandwidth is the maximum capacity of the restaurant at any given time.

If a restaurant has drive-through windows in addition to in-store service, then it has separate capacity at each.

PS4 has a capacity of 176 patrons.

XB1 can only serve 68 patrons in the store, while it can serve 109 patrons each at the drive-up order window and the pick-up window (you called ahead).

It can theoretically service a total of 286 persons at any time, and customers at the pick-up and drive-through windows need not go through the store.

However, parking is a logistical nightmare, so it's rare that they will ever actually service all 286 people. The average service count is 68 customers inside plus 140 at the drive-through and pick-up windows.
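Translating the restaurant back into GB/s (every figure here is one already used in this thread; the 140 is the average quoted above, and none of this is an official spec):

```python
# Peak vs. typical aggregate bandwidth, per the analogy above.
PS4_GDDR5 = 176          # GB/s, single unified pool
XB1_DDR3 = 68            # GB/s
XB1_ESRAM_READ = 109     # GB/s, one port
XB1_ESRAM_WRITE = 109    # GB/s, the other port

theoretical_xb1 = XB1_DDR3 + XB1_ESRAM_READ + XB1_ESRAM_WRITE  # 286
typical_xb1 = XB1_DDR3 + 140  # ESRAM rarely sustains both ports flat out

print("PS4 peak:            ", PS4_GDDR5, "GB/s")
print("XB1 theoretical peak:", theoretical_xb1, "GB/s")
print("XB1 typical estimate:", typical_xb1, "GB/s")
```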
 