I can't speak to the rest of your post, but I can categorically state that the display planes have nothing to do with 4K and the problem of nasty cheap monitor ASICs. That problem is going away over time (there are already high-end 4K screens with 4K @ 60 Hz support), and it isn't relevant for the consumer market anyway. The consumer HDMI 1.4 standard only supports 3840×2160 (4K Ultra HD) at 24/25/30 Hz, or 4096×2160 at 24 Hz (thanks, Wikipedia!). So your PS4 and your TV are just not fit for 4K gaming right now.
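For the why, a rough back-of-the-envelope helps (my numbers, not from the spec text above): HDMI 1.4 tops out at a 340 MHz TMDS clock, which after 8b/10b encoding leaves roughly 8.16 Gbit/s for pixel data, and 4K at 60 Hz simply needs more than that. A minimal sketch, with the ~10% blanking overhead as an assumption:

```python
# Rough bandwidth check for 4K over HDMI 1.4 (approximate figures; real pixel
# clocks depend on the exact blanking timings, so treat this as an estimate).
HDMI14_EFFECTIVE_GBPS = 340e6 * 3 * 8 / 1e9  # 340 MHz TMDS, 3 channels, 8 data bits per 10-bit symbol ~= 8.16 Gbit/s

def needed_gbps(width, height, hz, bpp=24, blanking_overhead=1.10):
    """Approximate link bandwidth for a mode, padding ~10% for blanking intervals."""
    return width * height * hz * bpp * blanking_overhead / 1e9

for hz in (30, 60):
    need = needed_gbps(3840, 2160, hz)
    verdict = "fits" if need <= HDMI14_EFFECTIVE_GBPS else "does NOT fit"
    print(f"3840x2160 @ {hz} Hz needs ~{need:.1f} Gbit/s -> {verdict} in ~{HDMI14_EFFECTIVE_GBPS:.1f} Gbit/s")
```

Run it and 4K30 comes in around 6.6 Gbit/s (fine), while 4K60 needs roughly 13 Gbit/s, which is why it had to wait for DisplayPort 1.2 / HDMI 2.0.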
For 4K @ 60 Hz you need a good monitor with DisplayPort 1.2, and then you can have a single rendering surface that is your entire display. As of now we deal with the tiling by faking two displays in a dual-monitor setup streamed over DisplayPort MST, which is a nightmare because nobody remembered to add a 'hint bit' to MST that says 'this stream is the upper-left tile'. So on reboot an MST array is likely to spray your desktops everywhere, which is very annoying.
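To illustrate the annoyance (a purely hypothetical sketch, not any real driver's API): since MST itself carries no tile-position hint, about the best software can do is remember an arrangement keyed on something stable like each half's EDID serial and reapply it after reboot, instead of trusting enumeration order:

```python
# Hypothetical sketch: persist the tile layout of an MST-tiled 4K monitor,
# because the link itself doesn't say which stream is the left or right half.
import json
from pathlib import Path

LAYOUT_FILE = Path("mst_layout.json")  # assumed location for this sketch

def save_layout(tiles):
    """tiles: {edid_serial: (x_offset, y_offset)} chosen once by the user."""
    LAYOUT_FILE.write_text(json.dumps(tiles))

def restore_layout(detected_serials):
    """After reboot the two MST streams may enumerate in either order; look each
    serial up in the saved layout rather than assuming 'first stream = left'."""
    saved = json.loads(LAYOUT_FILE.read_text())
    return {serial: tuple(saved[serial]) for serial in detected_serials if serial in saved}

# Example: two 1920x2160 halves of a 3840x2160 MST panel.
save_layout({"SERIAL-A": (0, 0), "SERIAL-B": (1920, 0)})
print(restore_layout(["SERIAL-B", "SERIAL-A"]))  # enumeration order no longer matters
```

That is essentially what you end up doing by hand in your display settings every time the array shuffles itself.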
Does even upscaling to 4K on the GPU have nothing to do with display planes?!
===============
How big is the chance that the 2 MB (or 4 MB?) of eSRAM between the two CPU clusters is there for CPU-assisted GPGPU? I accidentally found this article, which AMD sponsored and co-authored:
http://www.extremetech.com/computin...md-cpu-performance-by-20-without-overclocking

"Updated @ 04:11 Some further clarification: Basically, the research paper is a bit cryptic. It seems the engineers wrote some real code, but executed it on a simulated AMD CPU with L3 cache (i.e. probably Trinity). It does seem like their working is correct. In other words, this is still a good example of the speed-ups that heterogeneous systems will bring… in a year or two."
And it reminds me of this leak that didn't come true:
Is it possible?!