It's basically just the rate at which the ROPs can be fed. They used a wide enough bus to clock data to the ROPs at the rate the ROPs could use it.

As I know, only buffers can be stored in EDRAM on Xbox 360. Bandwidth between the main core and the EDRAM core is 32 GB/s, so that is 16 GB/s in each direction. If we divide that by 30 (for a 30 fps game), that is almost 550 MB per frame. But why did ATI make a GPU with so much bandwidth? I mean, if a game uses 1 tile, that is 10 MB per frame, so 10 MB will be moved in each direction. Can anyone explain, please?
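For reference, here is the arithmetic from the post above as a quick sketch (the 30 fps target and the even 16 GB/s-per-direction split are the poster's assumptions, not official figures):

```python
# Per-frame data budget implied by the poster's numbers.
bus_total_gb_s = 32.0                      # Xenos <-> EDRAM daughter-die bus (GB/s)
per_direction_gb_s = bus_total_gb_s / 2    # poster's assumption: split evenly per direction
fps = 30                                   # poster's assumed frame rate

bytes_per_frame = per_direction_gb_s * 1e9 / fps
# ~533 MB (roughly the poster's "almost 550 MB", depending on unit convention)
print(f"~{bytes_per_frame / 1e6:.0f} MB per frame in each direction")
```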
10 MB as a "per frame" number is meaningless here. ROPs often have to perform many operations on a given pixel, and they also often perform operations on buffers that never end up in the final framebuffer (such as when rendering shadow maps).
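To make that concrete, here is a back-of-the-envelope sketch of per-frame ROP traffic. The 720p target and 32-bit color/depth formats are typical; the overdraw factor and the assumption that every fragment blends are illustrative guesses, not measured numbers:

```python
# Rough estimate of how much data the ROPs touch per frame, versus the
# 10 MB a single finished color buffer occupies.
width, height = 1280, 720            # 720p render target
pixels = width * height

color_bytes = 4                      # 8:8:8:8 color
depth_bytes = 4                      # 24-bit depth + 8-bit stencil
overdraw = 4                         # ASSUMED: each pixel covered ~4 times

# A fragment that reaches the ROPs does a depth read, and if it passes,
# a depth write and a color write; a blended fragment also reads the
# existing color. Assume the worst case (blending on) for every fragment.
bytes_per_fragment = depth_bytes + depth_bytes + color_bytes + color_bytes

main_pass = pixels * overdraw * bytes_per_fragment
print(f"main pass: ~{main_pass / 1e6:.0f} MB of ROP traffic")      # ~59 MB

# 4x MSAA multiplies the per-sample work, and off-screen passes such as
# shadow maps add traffic that never appears in the final framebuffer.
print(f"with 4x MSAA: ~{4 * main_pass / 1e6:.0f} MB per frame")    # ~236 MB
```

Even this crude estimate lands well above 10 MB per frame, which is why the path to the ROPs is sized to their fill rate rather than to the size of the finished buffer.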
PS2 is an extreme case: the graphics architecture of that console needed to be able to do a huge number of operations on the framebuffer, because it couldn't do as many (or as complex) operations on a fragment upstream of the framebuffer. The ROPs were, essentially, performing some of the role that we would typically attribute to "pixel shaders" (a rough fill-rate sketch follows below).

For PS2 that is understandable. All that massive bandwidth was used for multipass rendering, because data would be moved many, many times, and for textures, which were also read many, many times. But I don't understand what that bandwidth was used for on Xbox 360.
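To put a rough number on the PS2 point above: the GS's pixel pipes were built to do blended, Z-tested fill every clock, i.e. a read-modify-write of the framebuffer per pixel. The 2.4 Gpixel/s fill rate and 48 GB/s eDRAM bandwidth are the commonly cited GS figures; the per-pixel byte counts below assume 32-bit color and 32-bit Z:

```python
# Framebuffer traffic at the GS's peak (untextured) fill rate.
fill_rate = 2.4e9                    # pixels per second

color_bytes = 4
depth_bytes = 4
# Blended, Z-tested pixel: color read + color write + Z read + Z write.
bytes_per_pixel = 2 * color_bytes + 2 * depth_bytes   # 16 bytes

traffic_gb_s = fill_rate * bytes_per_pixel / 1e9
print(f"~{traffic_gb_s:.0f} GB/s of framebuffer traffic at peak fill, "
      f"against 48 GB/s of eDRAM bandwidth")          # ~38 GB/s
```

Multipass rendering just keeps the pipes near that peak for more of the frame, so the bandwidth has to be there even though a single finished frame is only a few megabytes.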
But none of this really matters. You have to look at the context. It wasn't that expensive for the 360 to provide a high-throughput bus between Xenos and the EDRAM; it was natural to the architecture. Much in the same way that the neck on a jar of Better Than Bouillon allows you to empty most of the contents quickly should you choose to do so: it's not hard for them to design it that way, and there's no good reason to make it narrower, even if people will often just scoop out a teaspoon.