Well, people managed to do incredible things with the PS2's 4MB of EDRAM, using it as a sort of manually managed texture cache - although games of that era tended to rely heavily on small, tiled, repeating textures.
Still, it should be possible to work something like this on the Wii U (a rough sketch in code follows the list):
- start rendering the frame; allocate, say, 16MB of texture memory, 8-10MB for the frame buffer, and 6MB for shadows or something
- calculate shadow buffers for the ground
- upload textures for the ground, render ground polygons
- calculate shadows for vegetation
- upload textures for vegetation, render that
- calculate shadows for characters
- upload textures for character #1, render that etc. etc.
- upload textures for smoke / fire / dust effects, render that
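Here's a minimal sketch of that frame structure, assuming a trivial linear allocator over the 32MB of EDRAM. Every name in it (EdramRegion, upload_textures, draw_batch and so on) is made up for illustration - the actual Wii U API will look nothing like this:

```cpp
#include <cstdint>
#include <cstdio>

// All names hypothetical -- the real Wii U API differs.
struct EdramRegion { uint32_t offset, size; };

enum Batch { GROUND, VEGETATION, CHARACTERS, EFFECTS };

static const uint32_t EDRAM_SIZE = 32u * 1024 * 1024; // 32MB on-die memory
static uint32_t edram_cursor = 0;

// Trivial linear allocator over EDRAM, reset at the start of each frame.
static EdramRegion edram_alloc(uint32_t size) {
    EdramRegion r = { edram_cursor, size };
    edram_cursor += size;
    if (edram_cursor > EDRAM_SIZE) std::printf("EDRAM budget blown!\n");
    return r;
}

// Stubs standing in for shadow rendering, DMA from main RAM, and draw calls.
static void render_shadows(EdramRegion, Batch) {}
static void upload_textures(EdramRegion, Batch) {}
static void draw_batch(EdramRegion, Batch)     {}

static void render_frame() {
    edram_cursor = 0;
    EdramRegion framebuffer = edram_alloc(10u * 1024 * 1024); // color + depth
    EdramRegion shadow_buf  = edram_alloc( 6u * 1024 * 1024);
    EdramRegion tex_pool    = edram_alloc(16u * 1024 * 1024); // refilled per batch

    // Each pass reuses tex_pool: stream the batch's textures in from
    // main RAM, render, then overwrite the pool for the next batch.
    const Batch order[] = { GROUND, VEGETATION, CHARACTERS, EFFECTS };
    for (Batch b : order) {
        render_shadows(shadow_buf, b);
        upload_textures(tex_pool, b); // DMA main RAM -> EDRAM
        draw_batch(framebuffer, b);
    }
}

int main() { render_frame(); }
```

In practice you'd want to double-buffer the texture pool so the DMA for batch N+1 overlaps the draw for batch N, otherwise the GPU stalls on every refill.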
I suppose there are state changes and such anyway as the GPU works through the various objects in the scene, so it should be possible to structure the memory into a set of pools that get refilled many times over the course of a single frame.
It would take a lot of painstaking optimization to split the scene into chunks that won't overload these relatively small pools (something like the sketch below), and it could hurt efficiency and GPU utilization; but it might still beat reading textures straight from main RAM.
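The chunking itself could be as dumb as a greedy pack over the texture budget. A sketch, with DrawCall and the byte counts as made-up stand-ins; it also ignores textures shared between calls, which a real version would have to deduplicate:

```cpp
#include <cstdint>
#include <vector>

struct DrawCall { uint32_t textureBytes; /* plus geometry, state, etc. */ };

// Greedily pack draw calls into chunks whose combined texture footprint
// fits the EDRAM pool, so each chunk needs exactly one pool refill.
// (A single call bigger than the pool would have to be split or rejected.)
std::vector<std::vector<DrawCall>> chunk_by_budget(
        const std::vector<DrawCall>& calls, uint32_t poolBytes) {
    std::vector<std::vector<DrawCall>> chunks;
    uint32_t used = poolBytes; // forces a fresh chunk on the first call
    for (const DrawCall& dc : calls) {
        if (used + dc.textureBytes > poolBytes) {
            chunks.push_back({});
            used = 0;
        }
        chunks.back().push_back(dc);
        used += dc.textureBytes;
    }
    return chunks;
}
```

How good this is depends almost entirely on how the calls are sorted before packing - draws that share textures need to land in the same chunk, or you pay for the same upload several times per frame.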
Or maybe it would work well with a virtual texturing approach, even without fully unique textures like Rage's; the whole reason to implement such a system is limited video memory. Carmack first wrote about it back when we had 16-32MB video cards running at 1024×768 in 32-bit color, and he specifically mentioned EDRAM. Granted, that was in March of 2000 (!) and a lot has happened in the twelve years since...
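For reference, the core of the virtual texturing idea fits in a few dozen lines: a small pool of texture pages resident in fast memory, a table mapping virtual page IDs to pool slots, LRU eviction. This is just the textbook idea, not Rage's actual implementation, and it omits the feedback pass that decides which pages are visible:

```cpp
#include <cstdint>
#include <list>
#include <unordered_map>

// A pool of texture pages lives in fast memory (EDRAM, say); a page table
// maps virtual page IDs to pool slots. Pages stream in on demand and are
// evicted least-recently-used. Assumes slots > 0.
class PageCache {
public:
    explicit PageCache(uint32_t slots) : capacity(slots) {}

    // Returns the pool slot for a virtual page, loading/evicting as needed.
    uint32_t resolve(uint64_t virtualPageId) {
        auto it = table.find(virtualPageId);
        if (it != table.end()) {              // hit: mark most recently used
            lru.splice(lru.begin(), lru, it->second.lruPos);
            return it->second.slot;
        }
        uint32_t slot;
        if (table.size() < capacity) {
            slot = (uint32_t)table.size();    // pool not yet full
        } else {                              // evict least recently used
            uint64_t victim = lru.back();
            lru.pop_back();
            slot = table[victim].slot;
            table.erase(victim);
        }
        upload_page(virtualPageId, slot);     // DMA from disk/RAM (stub)
        lru.push_front(virtualPageId);
        table[virtualPageId] = { slot, lru.begin() };
        return slot;
    }

private:
    struct Entry { uint32_t slot; std::list<uint64_t>::iterator lruPos; };
    static void upload_page(uint64_t, uint32_t) {}
    uint32_t capacity;
    std::list<uint64_t> lru;                  // front = most recently used
    std::unordered_map<uint64_t, Entry> table;
};
```

With something like `PageCache cache(256);` the renderer just calls `cache.resolve(pageId)` for every page the current frame touches, and the physical pool never grows past its fixed budget - which is exactly the property that makes the idea attractive for a small fast memory.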