Why would double buffering double it? Seems to me that the front buffer could be kept around post-sample-resolve, and you don't need the z-buffer for the front buffer either; nor would the MRTs be double-buffered. A 1280x720 2xMSAA backbuffer should take up about 15M with Z/Stencil. If you resolve the backbuffer and keep it around, you need only about 3M more.

Maybe nAo can shed light on this, but I recall from years ago talking with an NVidia employee about how the G7x architecture is capable of custom MSAA resolves; it's just not exposed by any public API (to prove it, he implemented a custom gamma resolve which I had proposed). This seems like it would be ideal in the deferred renderer case, since you can do it as a post-process.
So the difference between the X360 and the RSX in this case seems to be 8M: all of the buffers take a total of 18M, and the X360 has 10M of EDRAM, leaving a shortfall of 8M, or about 1.5% of total memory.
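For what it's worth, here's the arithmetic behind those figures as a quick C sketch. The 4-byte color and 4-byte Z/Stencil per sample, the 512M total-memory figure, and the use of decimal megabytes are my assumptions, not anything stated above:

```c
#include <stdio.h>

int main(void)
{
    const double MB = 1000.0 * 1000.0;   /* decimal "M", to match the loose figures above */
    const int w = 1280, h = 720, msaa = 2;

    /* Assumed formats: 4-byte color (e.g. ARGB8) + 4-byte Z/Stencil (e.g. D24S8) per sample. */
    double backbuffer = (double)w * h * msaa * (4 + 4);
    double resolved   = (double)w * h * 4;   /* single-sample resolve target kept as front buffer */

    printf("2xMSAA backbuffer + Z/Stencil: %.1f MB\n", backbuffer / MB);  /* ~14.7 MB, "about 15M"    */
    printf("resolved front buffer:         %.1f MB\n", resolved / MB);    /* ~3.7 MB, "about 3M more" */

    /* 18M of buffers total, minus the 10M of EDRAM, against an assumed 512M of memory. */
    double shortfall = 18.0 - 10.0;
    printf("shortfall vs. EDRAM:           %.1f MB (%.2f%% of 512M)\n",
           shortfall, 100.0 * shortfall / 512.0);   /* 8 MB, ~1.56% */
    return 0;
}
```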
The amazing thing to me is, the X360 does have an advantage. It has unified shaders, a more functional GPU, vastly more framebuffer bandwidth, and an easier CPU programming model, plus excellent MS dev support, yet despite this, no X360 developer seems to be pushing the X360 HW the way it's being pushed on the PS3. Could be the laziness of a large market with built-in zillions of sales for top-tier devs, but it seems devs are definitely working on a time-to-market basis, pushing out wares quickly rather than spending years on expensive tech development. The ubiquity of UE3 titles shows that.