@ MD,
That is how most people are understanding it.
@ Shifty, while it may not be "system bandwidth" it is very legitimate to take this bandwidth into consideration. E.g. on the RSX the backbuffer WILL be using the system memory's bandwidth, while on the Xbox 360 the eDRAM isolates all the high-bandwidth backbuffer traffic (AA samples, Z, alpha, blending) away from the main memory pool.
On the RSX, if you use 10GB/s for the backbuffer on the GDDR3 memory (let's ignore the XDR for the moment to make this easy), you have done two things: first, you have almost cut your GDDR3 bandwidth in half; second, you are running the risk of treating the GDDR3 pool as little more than a framebuffer. If you use ~60MB of the GDDR3's space but most of its bandwidth for a framebuffer--well, that is one expensive framebuffer!
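Just to put rough numbers on that, here is a quick back-of-envelope Python sketch. The ~22.4GB/s figure is the commonly quoted number for the RSX's 128-bit GDDR3 bus (an assumption on my part), and the 10GB/s / ~60MB load is just my made-up example from above:

[code]
# Back-of-envelope sketch (my numbers, not official specs): how much of the
# RSX's GDDR3 bus and space a given backbuffer load would eat up.

GDDR3_BANDWIDTH_GBS = 22.4   # commonly quoted figure for RSX's 128-bit GDDR3 (assumption)
GDDR3_SIZE_MB = 256.0

def framebuffer_cost(bb_bandwidth_gbs, bb_size_mb):
    """Fraction of the GDDR3 bandwidth and space a backbuffer load takes."""
    bw_fraction = bb_bandwidth_gbs / GDDR3_BANDWIDTH_GBS
    space_fraction = bb_size_mb / GDDR3_SIZE_MB
    return bw_fraction, space_fraction

# The 10GB/s / ~60MB example from above:
bw, space = framebuffer_cost(10.0, 60.0)
print("backbuffer uses %.0f%% of the GDDR3 bandwidth" % (bw * 100))    # ~45%
print("...but only %.0f%% of the GDDR3 space" % (space * 100))         # ~23%
[/code]

The point being: the framebuffer eats a much bigger slice of the bandwidth than of the space.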
The eDRAM isolates all those high bandwidth tasks away from the general memory pool, and since the bandwidth won't be wasted on the backbuffer you have more access to the memory contents (instead of a 256MB pool of memory that is under-utilized because you are saturating the bandwidth with your buffer).
Basically, the eDRAM gives real savings. The traffic the eDRAM soaks up is real bandwidth that has to be spent out of main memory on the PS3.
That being said, I agree that the entire 256GB/s total should not have been added in arbitrarily. Instead, I would have liked to see a comparison of the typical needs of 1080i @ 60fps with HDR, 4x AA, and so forth: look at what the bandwidth requirements would be on each machine and compare them side by side. That would have been fair in my opinion.
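For what it's worth, here is the kind of crude estimate I mean. All of the parameters (FP16 colour with blending, a 32-bit Z read+write, 2.5x overdraw, and so on) are my own guesses rather than measured numbers, so treat this purely as a sketch of the method:

[code]
# Very crude estimate (lots of assumptions!) of raw backbuffer bandwidth for
# different setups. Assumes FP16 HDR colour with blending (read+write) plus a
# 32-bit Z read+write per sample; overdraw is a guess.

def backbuffer_gbs(width, height, fps, aa_samples=1, overdraw=2.5,
                   color_bytes=8, z_bytes=4, blending=True):
    color_traffic = color_bytes * (2 if blending else 1)  # colour read+write when blending
    z_traffic = z_bytes * 2                               # Z read + Z write
    bytes_per_pixel_per_frame = (color_traffic + z_traffic) * aa_samples * overdraw
    return width * height * fps * bytes_per_pixel_per_frame / 1e9

# 1080i: 60 fields of 1920x540, HDR, 4x AA
print("1080i/60 HDR 4xAA: %.1f GB/s" % backbuffer_gbs(1920, 540, 60, aa_samples=4))
# 720p/60, HDR, 4x AA
print("720p/60  HDR 4xAA: %.1f GB/s" % backbuffer_gbs(1280, 720, 60, aa_samples=4))
[/code]

With those guessed parameters a 4x AA HDR backbuffer lands in the mid-teens of GB/s, which is the sort of number you would then put next to each machine's real memory bandwidth.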
I am not too worried about the PS3 though. The RSX has almost 38GB/s of bandwidth to use and access to all 512MB of memory. While there may need to be some tradeoffs at times (4x AA with HDR at 1080p seems like a system killer to me), overall I do not see the PS3 having issues. The eDRAM on the Xbox 360 was a way to keep the UMA free of the backbuffer's bandwidth needs (and thus they were able to go with cheaper 128-bit memory) and a neat way to make some nice effects, like 4x AA, almost free.
Different methods, different philosophies, similar results. But the bandwidth the eDRAM saves is VERY real. So both sides are wrong: Sony was wrong for counting the bandwidth as if the comparison were apples-to-apples; MS was wrong for adding the bandwidth figures together.
Instead, a fair and honest way to really look at it would have been to look at what the backbuffer savings are. Whether it be 1GB/s or 30GB/s does not really matter, but knowing what that savings is tells us a lot more about the system bandwidth in general.
(EDIT: Just an example of why I think that is more fair: if a "game" uses 15GB/s of backbuffer bandwidth, that leaves the RSX with 23GB/s of bandwidth. If the R500 can do that 15GB/s of backbuffer work in the eDRAM, that also leaves 23GB/s [real stupid numbers, because the CPU memory pools are different... but ignore that for this stupid example]. In the scenario I just gave, both systems are left with the same amount of remaining system bandwidth. The question of course is a game-by-game one, but games with features that require a large amount of bandwidth for the framebuffer will benefit from the eDRAM. So the bandwidth and savings are real... the question is how to do a fair apples-to-apples comparison. So far I have not seen one from either side.)
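And the same stupid example as a Python sketch, just to show the shape of the comparison I am after. The 38GB/s and 23GB/s figures are the rough ones used above, and I am ignoring the CPU pools and eDRAM tiling entirely:

[code]
# The "apples-to-apples" comparison I'd like to see, with the same rough
# illustrative numbers as above (not official specs).

RSX_TOTAL_GBS = 38.0      # RSX's GDDR3 + XDR access, roughly
X360_MAIN_MEM_GBS = 23.0  # Xbox 360 unified GDDR3 pool, rounded

def remaining_bandwidth(backbuffer_load_gbs):
    # PS3: the backbuffer traffic comes straight out of the shared pool.
    ps3_left = RSX_TOTAL_GBS - backbuffer_load_gbs
    # 360: the eDRAM soaks up the backbuffer traffic, the UMA is untouched
    # (assuming the whole backbuffer job fits in the eDRAM, tiling ignored).
    x360_left = X360_MAIN_MEM_GBS
    return ps3_left, x360_left

for load in (5.0, 15.0, 30.0):
    ps3, x360 = remaining_bandwidth(load)
    print("%4.0f GB/s backbuffer -> PS3 has %4.0f GB/s left, 360 has %4.0f GB/s left"
          % (load, ps3, x360))
[/code]

The heavier the backbuffer load, the more the eDRAM savings matter; at light loads the two come out about the same.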