I first read Brad's post the way you seem to have, but he has some legitimate points that caused me to scrap my reply. If the background is decided by a depth test, and everything beyond a certain distance is drawn in the low-res buffer, then you'll get an abrupt sharp-to-blurry transition right at the threshold, if it's implemented that way.
You already have that in games anyway. Also, the backgrounds are still being scaled PRIOR to compositing them. See the link I posted.
Here's an example of what I mean when I say games do that anyhow:
http://s.pro-gmedia.com/videogamer/...hadow_fall/screens/killzone_shadow_fall_3.jpg
It's a bit awkward since the camera is at a tilt, but you can easily see the very definite demarcation between foreground and background, via the notably different image clarity, just behind the debris there. We aren't talking about a scenario where there's nothing to lay on top of the background, which would expose the demarcation line (if there were no foreground there, that part of the screen would just be rendered in the foreground plane as is). If devs are already doing DoF stuff anyhow, then lowering the res shouldn't be problematic for aliasing, since those pixels, even if they were visible, would be blurred over anyhow.
If the threshold is far enough away, like the skybox or a mile out, then it won't matter, but if the distance is set very far then the savings will be minimal. Of course, one could render the foreground with a transparency mask covering a range of transition depths, eliminating the hard transition.
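To make the "transparency mask over a range of transition depths" idea concrete, here's a minimal sketch. Everything in it is hypothetical (function names, the band values, the smoothstep ramp); it just shows how a soft coverage band between the full-res and low-res planes would replace a hard depth cutoff:

```python
# Hypothetical sketch: instead of a hard depth cutoff, foreground coverage
# fades out across [band_start, band_end], so the full-res plane blends
# smoothly into the upscaled low-res plane. Not any engine's actual API.

def foreground_coverage(depth, band_start, band_end):
    """1.0 = fully in the full-res plane, 0.0 = fully in the low-res plane."""
    if depth <= band_start:
        return 1.0
    if depth >= band_end:
        return 0.0
    # Smoothstep ramp rather than a linear one, so the edges of the band
    # don't themselves show as visible creases.
    t = (depth - band_start) / (band_end - band_start)
    return 1.0 - t * t * (3.0 - 2.0 * t)

def composite_pixel(fg_color, bg_color, depth, band_start=50.0, band_end=60.0):
    """Blend a full-res foreground sample over an upscaled low-res background sample."""
    a = foreground_coverage(depth, band_start, band_end)
    return tuple(f * a + b * (1.0 - a) for f, b in zip(fg_color, bg_color))
```

So a pixel at depth 50 or closer comes entirely from the full-res plane, one at 60 or beyond entirely from the low-res plane, and anything in between is a mix, which is what kills the hard transition line.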
This is kinda what I was getting at, depending on the scene. I'd actually be interested to know what kind of resolution drops one could get away with on backgrounds, as a function of distance from the camera, before it looks bad. I'd imagine you could REALLY cull some pixels/processing potentially.
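As a rough starting point for that "resolution as a function of distance" question: under perspective projection, on-screen detail falls off roughly as 1/distance, so a toy heuristic (entirely my own guess, not anything a shipping engine does) might look like:

```python
# Toy heuristic, purely illustrative: keep full render scale out to some
# "sharp" distance, then let the scale fall off ~1/depth, clamped to a floor
# so the background never gets absurdly chunky.

def background_scale(depth, sharp_until=30.0, min_scale=0.25):
    """Fraction of full resolution to render a surface at the given depth."""
    if depth <= sharp_until:
        return 1.0
    return max(min_scale, sharp_until / depth)
```

With those made-up numbers, something at twice the sharp distance renders at half res, and everything past 4x is pinned at quarter res. Where it actually starts to look bad would be the empirical question.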
If I were designing the system (and as such, whatever I suggest is not going to be how it actually is, because my clever ideas never match reality), I'd have a UI API with vector- and bitmap-based UI components. Rather than devs having to roll their own, they'd just plug into the OS UI layer. The OS would update the UI layer at 60 fps regardless, allowing for smooth animations and transitions.
Eh, you'd most likely be wasting those limited resources on that kind of framerate. HUDs can run at a low fps without issue. You could probably push a tad further to max out that 10% OS GPU reserve.
My own ideal that I'd like to see devs play with is the gun, vehicle, arm/hand, etc. in an FPS: call that part of the "HUD" plane and see how that works. OR do the more distant stuff like you mentioned (for relatively minimal savings), OR do something less distant like the KZ pic I posted. It can easily be on a case-by-case basis too (hell, even a frame-by-frame basis if need be). Of course, then it comes down to dev input.
My only concern with what you suggest by putting the HUD and OS together is what happens when you need that OS overlay for something like, say, Skype? Or better yet, snapping the Twitch app? You'd still have the game running in 75% of the screen, but what happens to its HUD now that the app is using the OS plane's GPU resources?
I do like the concept of customized game HUDs though, and maybe that can become more common on X1 as is, but it may go against the theme MS has put forward of running apps side by side with games. Then again, perhaps devs could run that as a companion app or something? Seems a bit convoluted to me.