It's a good design feature for a GPU using eDRAM. The alternative is to place the ROPs outside the eDRAM and have them share the same GPU<>eDRAM bus. I'd be surprised if MS doesn't go with the same design, but maybe there's a downside I'm unaware of? Though I'd expect the eDRAM to be taken further, with full read/write access from the GPU.
I expect MS to use eDRAM as a very high bandwidth victim cache for main memory. Possibly with controls to segment the eDRAM for various uses, e.g. locking parts for texture caching, render targets, compute jobs, etc.
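To illustrate what I mean by a victim cache: lines evicted from the primary cache drop into a small secondary cache instead of going straight back to main memory, so a quick re-access is still cheap. This is just a software sketch of the policy (the `VictimCache` class, sizes, and LRU choice are all my own illustration, nothing to do with real hardware):

```python
from collections import OrderedDict

class VictimCache:
    """Toy model: a primary LRU cache backed by a smaller
    LRU victim cache that catches its evictions."""

    def __init__(self, main_size, victim_size):
        self.main = OrderedDict()    # primary cache (LRU order)
        self.victim = OrderedDict()  # victim cache (LRU order)
        self.main_size = main_size
        self.victim_size = victim_size

    def access(self, addr):
        if addr in self.main:
            self.main.move_to_end(addr)      # refresh LRU position
            return "main hit"
        if addr in self.victim:
            self.victim.pop(addr)            # promote back into main
            self._insert_main(addr)
            return "victim hit"
        self._insert_main(addr)              # miss: fetch from main memory
        return "miss"

    def _insert_main(self, addr):
        if len(self.main) >= self.main_size:
            evicted, _ = self.main.popitem(last=False)  # evict main's LRU line
            self.victim[evicted] = True                 # ...into the victim cache
            if len(self.victim) > self.victim_size:
                self.victim.popitem(last=False)         # oldest victim falls out
        self.main[addr] = True
```

E.g. with a 2-line main cache, accessing A, B, C pushes A into the victim cache, so a follow-up access to A is a "victim hit" rather than a full miss.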
That way you can allocate massive buffers for memory-expensive AA (like MSAA) in main memory, but only have to store the fragments actually used in eDRAM.
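Back-of-envelope numbers on why that helps: with MSAA, interior pixels have identical samples and only edge pixels need unique per-sample data. The resolution, sample count, and the 10% edge-pixel fraction below are purely illustrative assumptions, not figures from any real title or hardware:

```python
# All numbers illustrative: 1080p, 4x MSAA, 32-bit colour,
# and an assumed 10% of pixels touching triangle edges.
width, height = 1920, 1080
samples = 4
bytes_per_sample = 4

pixels = width * height
# Naive buffer: every sample of every pixel stored.
full_buffer = pixels * samples * bytes_per_sample
# Fragment-only storage: one colour per pixel, plus the extra
# samples only for the assumed edge pixels.
edge_fraction = 0.10
compressed = (pixels * bytes_per_sample
              + int(pixels * edge_fraction) * (samples - 1) * bytes_per_sample)

print(full_buffer / 2**20)   # ~31.6 MiB for the full 4x buffer
print(compressed / 2**20)    # ~10.3 MiB storing only the used fragments
```

So under these toy assumptions the working set that actually needs the eDRAM's bandwidth is roughly a third of the naive buffer, which is the whole appeal of keeping the big allocation in main memory.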
Cheers