I doubt that would matter too much. That hasn't been an issue for 3D gaming, after all. It would look pretty bad though... it'd be straight-up weird, since you'd see low-res, blurry imagery all over the place except for some circular region darting around the screen to follow the player's eye.
I doubt the Tobii system they're using for eye tracking there relies on a depth camera anyhow. That seems like a strange approach for tracking what are largely 2D eye movements. I don't see why it couldn't work via Kinect 2.0 by just running image analysis on a zoomed-in RGB feed, except that in dark conditions this obviously wouldn't work.
Again though, the eye tracking angle here isn't relevant to my point. I'd really rather discuss people's ideas for implementing the display planes in interesting ways, or the potential for performance gains there.
Supposedly the display planes are dedicated hardware in Durango for doing exactly that.
I think you misunderstand why I posted the foveated rendering stuff. I posted it because in both that case and the display planes we've heard about in Durango, MS is leveraging 3 different layers that dynamically adjust their visual fidelity across a host of parameters to net major performance gains. At least in concept.
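To make the "major performance gains" part concrete, here's a rough back-of-the-envelope sketch of why three nested layers save so much shading work. The layer sizes below are purely illustrative assumptions on my part (not Durango or Tobii specs): a small full-resolution foveal layer, a wider half-resolution layer, and a quarter-resolution layer covering the whole screen.

```python
# Hypothetical layer sizes - illustrative assumptions, not actual hardware specs.
NATIVE_W, NATIVE_H = 1920, 1080

# (buffer_width, buffer_height) actually shaded for each layer.
layers = {
    "inner (full res, small foveal region)":   (400, 400),  # 1:1 sampling
    "middle (half res, covers ~800x800)":      (400, 400),  # upscaled 2x on screen
    "outer (quarter res, whole screen)":       (480, 270),  # upscaled to 1920x1080
}

native_pixels = NATIVE_W * NATIVE_H
foveated_pixels = sum(w * h for w, h in layers.values())

print(f"native:   {native_pixels:,} pixels shaded")
print(f"foveated: {foveated_pixels:,} pixels shaded")
print(f"shading work: {100 * foveated_pixels / native_pixels:.1f}% of native")
```

With these made-up numbers you'd shade roughly a fifth of the pixels of a native 1080p frame, which is the basic shape of the win being claimed, even if the real layer sizes and blending costs differ.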
I didn't mean to suggest that the display planes work by embedding rectangles or any other regular shapes within one another. In the foveated rendering setup that is only done to approximate how the human eye works (the falloff is actually more elliptical than straight-up circular, but they ignored that for the study).
I see no reason to think the display planes in Durango would be limited to displaying rectangles. Otherwise it wouldn't make sense to have foreground/background distinctions, since no modern 3D game has sharp rectangular boundaries on screen separating foreground from background. The same goes for the HUD, or even OS overlays; those won't always be rectangles either. So we don't need to limit ourselves the way you've suggested here. When these 3 distinct sets of imagery get composited/blended together, they can then be upscaled to fit the TV as necessary.
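The key point about non-rectangular boundaries is that compositing is per-pixel alpha, not shape-based. A minimal sketch of that idea, assuming a standard "over" blend of three planes (background, foreground, HUD) on a tiny made-up frame; nothing here is the actual Durango pipeline, just the general technique:

```python
import numpy as np

def over(src_rgba, dst_rgb):
    """Standard 'over' alpha blend. Because alpha is per-pixel, a plane's
    visible region can be any shape at all, not just a rectangle."""
    a = src_rgba[..., 3:4]
    return src_rgba[..., :3] * a + dst_rgb * (1 - a)

H, W = 4, 4  # tiny frame for illustration

background = np.full((H, W, 3), 0.2)         # e.g. an upscaled low-res scene plane
foreground = np.zeros((H, W, 4))
foreground[1:3, 1:3] = [1.0, 0.0, 0.0, 1.0]  # opaque region (a square here for brevity,
                                             # but the alpha mask could trace any silhouette)
hud = np.zeros((H, W, 4))
hud[0, :] = [0.0, 1.0, 0.0, 0.5]             # semi-transparent HUD strip along the top

# Composite back-to-front: foreground over background, then HUD over that.
frame = over(hud, over(foreground, background))
```

After blending, `frame` could be upscaled to the TV's resolution as a final step; each plane only "wins" where its own alpha says so, which is why sharp rectangular boundaries are never required.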