As nao said, what would you need eDRAM for on a typical rasterizer?
If you've got an IMR (heck, even a TBDR as I argued in the past, though probably to a lesser extent) then using eDRAM means you won't have to go off-chip as much, saving power. This makes it a viable architecture for IMR handhelds... And it's exactly what NV is doing there next round (and for, let us say, something else), but heh, I'm digressing.
As for a desktop part where the power cost of going off-chip doesn't matter as much - well, one advantage is that it might allow you to have cheaper RAM (or a narrower memory bus) for a given amount of performance. So it might give better performance/dollar for the consumer, and it would certainly increase the IHV's ASPs: instead of selling a $65 GPU with $35 of VRAM, they could sell a $90 GPU with $10 of VRAM. Clearly, that makes them more money.
As for the technical advantages - well, indeed, not that much. However, some eDRAM could also be used as an on-chip buffer for the geometry shader (rather than just the framebuffer), potentially accelerating that nicely. And it might come in handy for GPGPU too, as I argued in the past (for Z-RAM, but it's the same thing really - even more so with, for example, Hynix replacing DRAM entirely with Z-RAM for commodity chips!)
Also, it's important to realize you don't need a truckload of eDRAM, nor do you need tiling. If you're smart, you can compress what you're writing to eDRAM too. So, say your framebuffer is 40MiB and you get ~4:1 color compression with MSAA. You could use just 10MiB of eDRAM: in the worst case (no compression), you're caching 1/4th of the full color information, saving 25% of framebuffer bandwidth; in the best case, 4:1 compression is achieved, everything fits into eDRAM, and you save 100% of framebuffer bandwidth.
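To make the sizing argument concrete, here's a back-of-envelope sketch (my own made-up function names and the same hypothetical 40MiB/10MiB numbers from above, not any real design): the fraction of framebuffer traffic you keep on-chip just depends on how well the compressed framebuffer fits into the eDRAM.

```python
FRAMEBUFFER_MIB = 40  # hypothetical framebuffer size from the example above
EDRAM_MIB = 10        # hypothetical eDRAM capacity

def fb_bandwidth_saved(compression_ratio: float) -> float:
    """Fraction of framebuffer bandwidth kept on-chip, assuming the
    compressed framebuffer is cached in eDRAM up to its capacity."""
    compressed_size = FRAMEBUFFER_MIB / compression_ratio
    return min(1.0, EDRAM_MIB / compressed_size)

# Worst case: no compression -> only 10/40 of traffic stays on-chip.
print(fb_bandwidth_saved(1.0))  # 0.25
# Best case: full 4:1 compression -> the 10MiB compressed buffer fits entirely.
print(fb_bandwidth_saved(4.0))  # 1.0
```

Anything between those two ratios degrades gracefully, which is the whole point: you size the eDRAM for the compressed common case, not the uncompressed worst case.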
eDRAM is a lot more attractive once you realize all past and current designs are awfully naive and you could be way smarter about it.