Metal_Spirit
You don't. You're either doing a 320-bit access or a 192-bit access depending on which client is asking for that particular segment of data and which pool it exists in.
I'm not getting your point. Can you explain a bit better?
Each module is connected over a 32-bit bus. Accessing all 10 modules in parallel gives a 320-bit bus. The extra gigabyte on each of the six 2 GB modules is accessed over that same 32-bit bus. So if you are accessing both pools, the bandwidth is divided: those 32-bit buses are shared, and you will not get a full 192-bit bus dedicated to that memory.
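To put rough numbers on that sharing, here's a minimal sketch. The per-pin rate, chip width, and chip counts are my assumptions (Series X-style figures: ten 32-bit GDDR6 chips at 14 Gbps per pin), not something stated in the thread:

```python
# Sketch of the shared-bus arithmetic, assuming ten 32-bit GDDR6 chips
# at 14 Gbps per pin (hypothetical Series-X-style figures).
PIN_RATE_GBPS = 14    # Gb/s per pin (assumed)
CHIP_WIDTH_BITS = 32  # bus width per module
CHIPS_TOTAL = 10      # all ten modules together -> 320-bit bus
CHIPS_SLOW = 6        # the six 2 GB modules also hold the second pool

per_chip = PIN_RATE_GBPS * CHIP_WIDTH_BITS / 8  # GB/s per module
fast_peak = per_chip * CHIPS_TOTAL              # peak over 320 bits
slow_peak = per_chip * CHIPS_SLOW               # peak over 192 bits

def split(slow_share):
    """Bandwidth each pool sees when the six shared chips spend
    `slow_share` of their cycles serving the slow pool."""
    slow_bw = slow_peak * slow_share
    # Fast-pool accesses need all ten chips at once, so they can only
    # run in the cycles the shared chips are NOT serving the slow pool.
    fast_bw = fast_peak * (1 - slow_share)
    return fast_bw, slow_bw

print(per_chip, fast_peak, slow_peak)  # 56.0 560.0 336.0
print(split(0.5))                      # (280.0, 168.0)
```

The point the code makes: the two peaks never add up, because the slow pool's 192 bits are the same physical wires as part of the fast pool's 320 bits.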
Hope I'm making myself clear.
To get 560 GB/s you would have to exclusively read from one pool at full load for an entire second. Where you are getting tripped up is that you are taking a metric of capacity to do work over a period of time and trying to apply it to moment-to-moment, cycle-to-cycle usage. If you were to precisely track the amount of data actually transferred over any given second while running a game, I'll bet it would be some lesser number than the theoretical max bandwidth. So whether this theoretical max number goes up or down with any particular usage pattern is irrelevant. What's relevant is whether this particular setup delivers sufficient bandwidth to meet the needs of the system as it needs it.
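To illustrate that: the data actually moved in a second is a weighted average over the access mix, not the headline peak. A minimal sketch, where the 560/336 GB/s peaks and the example mix are assumed figures for illustration:

```python
# Realized throughput over one second as a weighted mix of accesses,
# using assumed peak rates (560 GB/s fast pool, 336 GB/s slow pool).
FAST_PEAK = 560.0  # GB/s when all ten chips serve the fast pool
SLOW_PEAK = 336.0  # GB/s when the six shared chips serve the slow pool

def realized_gb(fast_frac, slow_frac, idle_frac):
    """GB actually moved in one second, given the fraction of time spent
    on fast-pool accesses, slow-pool accesses, and idle cycles."""
    assert abs(fast_frac + slow_frac + idle_frac - 1.0) < 1e-9
    return fast_frac * FAST_PEAK + slow_frac * SLOW_PEAK

# Hitting 560 requires reading the fast pool exclusively, all second:
print(realized_gb(1.0, 0.0, 0.0))  # 560.0
# A more game-like mix (hypothetical) lands well under the peak:
print(realized_gb(0.6, 0.2, 0.2))  # ~403.2
```

Any idle cycles or slow-pool traffic pull the realized number below the theoretical maximum, which is the distinction between capacity over time and instantaneous usage.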
Never questioned that.
But having 560 GB/s available is not the same thing as 292+168 or 244+366! Especially when the bandwidth for each of the fast and slow pools can change at any moment.