R600 nugget...

Look at the bandwidth and memory configuration of the Xbox 360. The GPU would be completely memory starved with just the shared 128-bit (I guess two 64-bit channels?) DDR memory. Even the combined bandwidth of the EDRAM and the shared memory (without even taking the CPU into account) isn't that high compared with the bandwidth of current, and even more so future, graphics cards. All top-end PC graphics cards implement four 64-bit DDR channels, not two, so at the same memory frequency they have double the bandwidth. Basically, with the EDRAM the GPU would have had about as much bandwidth as a middle-to-low-end PC graphics card. That is the first reason why using a separate memory (EDRAM or not) is a good approach for the Xbox 360 GPU. The second reason has already been mentioned: console framebuffer sizes are smaller (not by much anymore, unless 2048+ monitors become cheap soon) than what a PC graphics card is expected to support, so you need less EDRAM if you want to put the framebuffer there.
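
To put rough numbers on that, here is a quick back-of-the-envelope sketch in Python. The transfer rate and channel widths are illustrative assumptions for the sake of the arithmetic, not confirmed specs:

```python
# Back-of-the-envelope numbers; the 1400 MT/s DDR rate below is an
# illustrative assumption, not a confirmed Xbox 360 or PC spec.

def peak_bandwidth_gbs(bus_width_bits, transfer_rate_mts):
    """Peak bandwidth in GB/s: bytes per transfer * transfers per second."""
    return (bus_width_bits / 8) * transfer_rate_mts * 1e6 / 1e9

# Shared 128-bit bus (two 64-bit channels), shared with the CPU:
console = peak_bandwidth_gbs(128, 1400)   # ~22.4 GB/s

# A top-end PC card with four 64-bit channels (256-bit total) at the
# same memory frequency gets exactly double:
pc = peak_bandwidth_gbs(256, 1400)        # ~44.8 GB/s

# A console-resolution framebuffer is small: 1280x720 with 32-bit color
# plus 32-bit Z (ignoring multisampling) fits in a small EDRAM pool:
fb_bytes = 1280 * 720 * (4 + 4)           # ~7 MB

print(f"console shared bus: {console:.1f} GB/s")
print(f"PC 256-bit bus:     {pc:.1f} GB/s")
print(f"720p color+Z:       {fb_bytes / 2**20:.1f} MB")
```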

Latency isn't an issue as long as the access pattern is predictable, which is exactly what GPUs are designed for. I would guess that if the EDRAM were implemented as a second chip the latencies would be quite similar. More likely, having separate buses and request queues for the two kinds of memory and data types (framebuffer accesses wouldn't conflict with texture accesses, for example) would do more to reduce latency than any difference between the two types of memory.
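
As a toy illustration of that last point (purely hypothetical, nothing like the real memory controller), separate queues mean a burst of framebuffer traffic never sits in front of a texture read:

```python
from collections import deque

# Illustrative only: one request queue per traffic type, so a burst of
# framebuffer writes cannot delay a pending texture fetch.

texture_q = deque()      # texture fetches -> shared/main memory
framebuffer_q = deque()  # color/Z traffic -> EDRAM

def submit(kind, addr):
    """Route each request to the queue for its memory type."""
    (framebuffer_q if kind == "fb" else texture_q).append(addr)

# A burst of framebuffer writes followed by one texture read:
for addr in range(0, 64, 8):
    submit("fb", addr)
submit("tex", 0x1000)

# With a single shared queue the texture read would wait behind eight
# framebuffer writes; with separate queues (and separate buses) both
# controllers can issue their oldest request in the same cycle:
print("next FB request: ", hex(framebuffer_q.popleft()))
print("next TEX request:", hex(texture_q.popleft()))
```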
 