Dynamic resolution on XB1


Jay

Veteran
Given the following from the XB1 architect:
http://www.eurogamer.net/articles/digitalfoundry-the-complete-xbox-one-interview
We have two independent layers we can give to the titles where one can be 3D content, one can be the HUD. We have a higher quality scaler than we had on Xbox 360. What this does is that we actually allow you to change the scaler parameters on a frame-by-frame basis. I talked about CPU glitches causing frame glitches... GPU workloads tend to be more coherent frame to frame. There doesn't tend to be big spikes like you get on the CPU and so you can adapt to that.
What we're seeing in titles is adopting the notion of dynamic resolution scaling to avoid glitching frame-rate. As they start getting into an area where they're starting to hit on the margin there where they could potentially go over their frame budget, they could start dynamically scaling back on resolution and they can keep their HUD in terms of true resolution and the 3D content is squeezing. Again, from my aspect as a gamer I'd rather have a consistent frame-rate and some squeezing on the number of pixels than have those frame-rate glitches.
Prior to launch I was interested in this aspect of the XB1 and was looking forward to its use and the impact on fps and image quality.
I don't believe any games have currently used it and I am curious as to why.
A couple of reasons I can think of are:
1. API/SDK support hasn't been available (for scaler)
2. Engines need to be fundamentally designed from scratch to make use of it
3. MS was simply wrong in what they thought they were seeing in terms of the direction of engine design
4. Something broken in the hardware scaler

Also, what are the pros and cons of using dynamic resolution, especially when it has supplementary hardware support in the scaler?
It was pretty interesting that MS needed to make the change to the sharpening filter on the scaler, when I'd thought it would be fully under dev control.
 
Dynamic resolution is like the holy grail for the Xbox One, but very few people aside from Carmack are going to use it.

Is it that hard to implement? It seems to be, as only geniuses like Carmack know how to use it.

But all games should run at 60fps, sacrificing 1080p when necessary, and people wouldn't even notice.

I just hope that Halo 5 will use dynamic resolution.
 
Mate, I don't know if it was you, but someone here has bleated about this being the worst gen ever due to the frame rates, which ignores the reality that frame rates in general are far better than they were last gen. Apart from Dead Rising 3, name another title with bad frame rates?
And why weren't you complaining last gen?
 
Dynamic resolution is like the holy grail for the Xbox One, but very few people aside from Carmack are going to use it.

Is it that hard to implement? It seems to be, as only geniuses like Carmack know how to use it.

It can't be that difficult, as our very own Graham made use of it in a very popular PS Vita game. :LOL:
 
Given the following from the XB1 architect:
http://www.eurogamer.net/articles/digitalfoundry-the-complete-xbox-one-interview
Prior to launch I was interested in this aspect of the XB1 and was looking forward to its use and the impact on fps and image quality.
I don't believe any games have currently used it and I am curious as to why.
A couple of reasons I can think of are:
1. API/SDK support hasn't been available (for scaler)
2. Engines need to be fundamentally designed from scratch to make use of it
3. MS was simply wrong in what they thought they were seeing in terms of the direction of engine design
4. Something broken in the hardware scaler

Also, what are the pros and cons of using dynamic resolution, especially when it has supplementary hardware support in the scaler?
It was pretty interesting that MS needed to make the change to the sharpening filter on the scaler, when I'd thought it would be fully under dev control.

I think Wolfenstein: The New Order also did this, IIRC?
 
Mate, I don't know if it was you, but someone here has bleated about this being the worst gen ever due to the frame rates, which ignores the reality that frame rates in general are far better than they were last gen. Apart from Dead Rising 3, name another title with bad frame rates?
And why weren't you complaining last gen?
I'm not sure if you're talking about me or Cyan; I've never said anything of the sort.
This wasn't meant as any sort of criticism of current frame rates either.
Using dynamic resolution you could possibly increase the standard/default resolution, or upgrade the graphics, or simply smooth out the framerate.
Currently the engine has to make sure the minimum framerate doesn't tank too low, so the average framerate could be a lot higher than the lock you are aiming for.
Incorporating dynamic res, the min framerate could be higher at the cost of resolution, which may be a very reasonable trade off.
Also this is about what the XB1 architect said and why we haven't seen it used as of yet.
don't forget the dynamic frame rate too :mrgreen:
I believe the most well-known implementation is in WipEout.
So Graham, do you have any input on why we haven't seen it used more often, and any insight on my original post? You're in a good position to give us (well, me) some much-needed insight :D
 
So Graham, do you have any input on why we haven't seen it used more often, and any insight on my original post? You're in a good position to give us (well, me) some much-needed insight :D

Well. IMO, from what I understand of the 'display planes' feature, it isn't really going to be of any practical use; it feels to me like it's a 'because we could' feature.
All games render the UI over the output from the game scene render, and if there is a resolution difference, it just means an intermediate upscale. It's pretty trivial/cheap; on modern hardware it's going to be a tiny fraction of your frame time. Having the 'display planes', to me, just means the same thing is being done by the system UI instead of the game. I'm not aware of any advantage other than that the system's upscale shader gets used (arguably a disadvantage). The system UI isn't free, obviously - and neither will be compositing the display planes and using an upscale filter. It's trivial, sure, but I'd rather just leave it up to the developer, as that's the entirely standard way of doing things for basically every platform ever.

As for dynamic res itself, the problem isn't exactly trivial. The first part is simply determining that you need to lower resolution - which can only be done in a reactive way - and furthermore determining how much to lower resolution and when to raise it back up. (You also have to make up for the slow frame that indicated reducing res was required.)
It's not a trivial problem because it doesn't have a correct answer - you can only guess what a 25% drop in resolution will do to the render time of the following frame (it almost certainly won't make it exactly 25% faster...)
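
To make that concrete, here's a minimal sketch of the kind of reactive loop being described - not from any shipped engine, and all names and constants are hypothetical: measure the previous frame's GPU time, guess a new render scale from its ratio to the frame budget, and smooth/clamp the result so the resolution doesn't oscillate.

```cpp
// Minimal sketch of a reactive dynamic-resolution controller.
// All names and constants are hypothetical; a real engine would tune these
// and drive the GPU time from timestamp queries.
#include <algorithm>
#include <cmath>

struct DynamicResController {
    float frameBudgetMs = 16.6f;   // 60 fps target
    float minScale      = 0.75f;   // never drop below 75% width
    float scale         = 1.0f;    // current horizontal render scale

    // Called once per frame with the measured GPU time of the previous frame.
    void update(float lastGpuTimeMs) {
        // Leave ~10% headroom so we react before we actually miss vsync.
        const float target = frameBudgetMs * 0.9f;

        // Assume GPU time tracks pixel count - it almost never does exactly,
        // so this is only a guess at the scale that would hit the target.
        float desired = scale * (target / lastGpuTimeMs);

        // Move slowly towards the desired scale to avoid visible oscillation.
        scale += (desired - scale) * 0.25f;
        scale = std::clamp(scale, minScale, 1.0f);
    }

    // Width used for the main scene render this frame (e.g. 1920 -> 1440).
    int renderWidth(int nativeWidth) const {
        return static_cast<int>(std::round(nativeWidth * scale));
    }
};
```

Even a loop this simple shows the "no correct answer" problem: the target/lastGpuTimeMs guess assumes render time tracks pixel count, which it usually doesn't exactly.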

Secondly, and more significantly, dynamic res drops are best performed horizontally. This means a changing aspect ratio, which has potentially huge knock-on effects through the renderer (more so than just resolution). Consider all the post-process effects that might look different at different resolutions or different aspect ratios (hint: basically all of them).

So in KZ:M only the main scene render was scaled dynamically, but even that wasn't trivial. The downscale of the main scene/depth render to quarter res (most of the post chain was quarter res or lower) had to always reduce to a fixed resolution regardless of the main scene dynamic res (and it had to be precise, not blurring or missing samples), which meant (in the worst case) it had to downscale 840x544 to 240x136 which is a ratio of 3.5 on the horizontal axis - a total bastard to do efficiently for numerous reasons.

Finally, deep in the post chain, an approximate luminance difference was computed between the current and previous frame (at ultra low res). This was used to determine how much the image was changing - so if the image wasn't changing much (eg, the camera wasn't moving) it allowed the frame rate to drop - but during high motion it favored resolution drops. Hence dynamic frame rate.
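
As a rough illustration of that last trade-off (this is not the KZ:M code; the names and threshold are invented), the decision boils down to something like:

```cpp
// Hypothetical sketch of trading frame rate against resolution based on how
// much the image is changing. 'lumaDelta' would come from comparing
// ultra-low-res luminance buffers of the current and previous frame.
enum class DropStrategy { AllowFrameRateDrop, DropResolution };

DropStrategy chooseDrop(float lumaDelta, float changeThreshold = 0.05f) {
    // Static-ish image (camera barely moving): a slower frame is less
    // noticeable than a blurrier one, so let the frame rate dip.
    if (lumaDelta < changeThreshold)
        return DropStrategy::AllowFrameRateDrop;

    // High motion: extra blur is masked by the movement, so drop resolution
    // and keep the frame rate up.
    return DropStrategy::DropResolution;
}
```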
 
Dynamic scaling is hard because computing the render time for an arbitrary scene ahead of time is hard.

Usually, the only precise way of knowing how long a frame will take to render is to actually render it and by then it's too late. You have to have some idea of what scale to use beforehand.

It might be possible to come up with a good heuristic that gives a scale by running a number of experiments, seeing what most determines the render time, and developing a formula from that. For cut-scenes, exact values could be computed ahead of time.

The scaling must also be somewhat conservative, since it would be a terrible error to downscale to the point where the frame rate could exceed 60fps.
 
Dynamic res on the X1 would have the added benefit of allowing buffers to retreat from partial DDR3 residency and fall back entirely into the massively faster ESRAM for a solid baseline, and doing this might also alleviate CPU bottlenecks caused by main memory contention with the GPU.

The benefits of dynamic res on X1 would appear to be greater in theory than for PS4.

Secondly, and more significantly, dynamic res drops are best performed horizontally.

Why is this?

I understand that linear upscaling might be faster, and that vertical res is slightly more valuable in terms of perceived resolution, but given the problems with a changing aspect ratio, and wanting scaling blur that is consistent and uniform in both directions, why limit yourself to horizontal?

It might be possible to come up with a good heuristic that gives a scale by running a number of experiments, seeing what most determines the render time, and developing a formula from that.

I've read proposals that talk of doing exactly this.

Effects such as alpha from explosions tend to spread to cover more of the screen, additional light sources (should) have a fairly predictable impact (worst case, at least), and so the time to render the last frame + some heuristic value should give a pretty reliable indicator of where to target for your next frame.

Plus there's always the option of tearing in the overscan area of the next frame if you were a little too conservative on the last. The point at which tearing took place could also be used to alter the target resolution of the next.
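
For what it's worth, a sketch of that kind of "last frame time + heuristic" predictor might look like the following; the hint names and weights are purely illustrative:

```cpp
// Illustrative feed-forward predictor along the lines described above:
// last frame's GPU time plus crude cost estimates for things expected to
// change next frame. The weights are invented; a real engine would measure
// per-platform costs.
struct FrameDeltaHints {
    float extraAlphaCoverage;  // 0..1, additional expected particle/alpha coverage
    int   extraVisibleLights;  // lights expected to enter view next frame
};

float predictNextFrameMs(float lastFrameMs, const FrameDeltaHints& h) {
    const float fullScreenAlphaCostMs = 4.0f;  // worst-case full-screen alpha cost
    const float perLightCostMs        = 0.2f;  // rough cost per added light

    return lastFrameMs
         + h.extraAlphaCoverage * fullScreenAlphaCostMs
         + h.extraVisibleLights * perLightCostMs;
}
// The prediction would then drive the render scale for the next frame,
// e.g. only dropping resolution when it exceeds the frame budget.
```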

The scaling must also be somewhat conservative, since it would be a terrible error to downscale to the point where the frame rate could exceed 60fps.

How so? Couldn't you catch cases where frame rate might exceed 60 fps and cap, like on a 'normal' game?
 
In the XB1, a perfect example of the display planes in action, I believe, is:

Display plane 1 - GAME
Display plane 2 - Snapped App (to the right of the game)
Display plane 3 - Both the Game & App (display planes 1 & 2) sitting in the dashboard

All 3 display planes with their own swapchains, I believe, rendering in realtime... You can see this because the Dashboard, Game and App are all still running when all 3 are in view...

[Attached image: WP_20140902_001.jpg]



An example of how you would do this with the GPU overlay support introduced in DX11.2 is here:

http://msdn.microsoft.com/en-us/library/windows/apps/dn448913.aspx
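
For anyone curious what that looks like on the PC/WinRT side, here's a rough sketch along the lines of that MSDN article: a smaller scene swap chain that gets stretched (by the overlay hardware where supported) plus a native-resolution foreground swap chain for the UI. Error handling is omitted and the helper name is mine, so treat it as an outline rather than production code.

```cpp
#include <d3d11_2.h>
#include <dxgi1_3.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

// Creates a reduced-resolution scene swap chain (stretched up by the overlay
// hardware where available) plus a native-resolution foreground swap chain
// for the UI. 'device', 'factory' and 'coreWindow' are assumed to already
// exist; error handling is omitted for brevity.
void CreateLayeredSwapChains(ID3D11Device* device,
                             IDXGIFactory2* factory,
                             IUnknown* coreWindow,
                             ComPtr<IDXGISwapChain1>& sceneSwapChain,
                             ComPtr<IDXGISwapChain1>& uiSwapChain)
{
    // Scene swap chain: rendered at e.g. 720p and stretched to the output size.
    DXGI_SWAP_CHAIN_DESC1 sceneDesc = {};
    sceneDesc.Width            = 1280;
    sceneDesc.Height           = 720;
    sceneDesc.Format           = DXGI_FORMAT_B8G8R8A8_UNORM;
    sceneDesc.SampleDesc.Count = 1;
    sceneDesc.BufferUsage      = DXGI_USAGE_RENDER_TARGET_OUTPUT;
    sceneDesc.BufferCount      = 2;
    sceneDesc.SwapEffect       = DXGI_SWAP_EFFECT_FLIP_SEQUENTIAL;
    sceneDesc.Scaling          = DXGI_SCALING_STRETCH;

    factory->CreateSwapChainForCoreWindow(device, coreWindow, &sceneDesc,
                                          nullptr, &sceneSwapChain);

    // Foreground (UI) swap chain: full native resolution, alpha-blended on
    // top. Already at the output size, so the stretch scaling is a no-op.
    DXGI_SWAP_CHAIN_DESC1 uiDesc = sceneDesc;
    uiDesc.Width     = 1920;
    uiDesc.Height    = 1080;
    uiDesc.AlphaMode = DXGI_ALPHA_MODE_PREMULTIPLIED;
    uiDesc.Flags     = DXGI_SWAP_CHAIN_FLAG_FOREGROUND_LAYER;

    factory->CreateSwapChainForCoreWindow(device, coreWindow, &uiDesc,
                                          nullptr, &uiSwapChain);

    // Each frame: render the 3D scene into sceneSwapChain and the HUD into
    // uiSwapChain, then Present() both.
}
```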


For me, as an application developer, I love this feature because it would let me create a game/app with the following layout:

1. Direct3D - game/3D world for rendering AutoCAD models etc.
2. Direct2D/XAML/HTML - HUD/UI layer that overlays the 3D content.

I have always wanted multiple swap chains that I can use to compose my apps/games and leverage D3D/D2D/XAML/HTML at the layer that makes sense.

P.S. We also now have DirectComposition, which allows us to mix different windowing technologies (Win32 + modern UI stacks)... We have both the overlay hardware and DirectComposition now to let us build these hybrid UIs.

P.P.S. In the above example, when the game, app and dashboard are all in view, it may make energy-saving sense to reduce the frame rates of the game & app whilst keeping the dashboard at 60fps.


Here is an example of how, given the chance, I would create Warcraft and its HUD etc... I would create the 3D world, then layer on top of it D2D/XAML for the HUD, forms-based UIs, and possibly sprite-based animations...

[Attached image: WarcraftDx.png]
 
mc6809e said:
The scaling must also be somewhat conservative, since it would be a terrible error to downscale to the point where the frame rate could exceed 60fps.
How so? Couldn't you catch cases where frame rate might exceed 60 fps and cap, like on a 'normal' game?

It's the capping that's the problem.

Suppose a heuristic comes up with a scale that causes a frame to render in 1/120th of a second. The reduced scale damages image quality but the increased frame rate gives no benefit (assuming a display with a 60Hz refresh).
 
It's been around a while. They started with just Skyrim and Intel IGPs, but I can't find comments now saying it's Intel IGP only anymore, so it looks like it will work for all GPUs.
 
Well. IMO, from what I understand of the 'display planes' feature, it isn't really going to be of any practical use; it feels to me like it's a 'because we could' feature.
All games render the UI over the output from the game scene render, and if there is a resolution difference, it just means an intermediate upscale. It's pretty trivial/cheap; on modern hardware it's going to be a tiny fraction of your frame time. Having the 'display planes', to me, just means the same thing is being done by the system UI instead of the game. I'm not aware of any advantage other than that the system's upscale shader gets used (arguably a disadvantage). The system UI isn't free, obviously - and neither will be compositing the display planes and using an upscale filter. It's trivial, sure, but I'd rather just leave it up to the developer, as that's the entirely standard way of doing things for basically every platform ever.

Question for you: wouldn't it be better to let the system do all the scaling, since that processing time comes from the system's reserved CPU/GPU time slice anyway? Also, wouldn't it be better to let the system do it to avoid scaling twice, like when the game is minimised in the Xbox dashboard or snapped to one side? If the system handled scaling there would be one scaling step to the size of a tile or snap window, whereas if the devs did the scaling their code would do an upscale and then the system would do another scaling step to fit the window to the snapped size or to a tile on the dashboard.
 
Thanks Graham for the reply, and also for the DF interview http://www.eurogamer.net/articles/digitalfoundry-inside-killzone-mercenary
You went into a lot of depth in your answer to me.
Are you planning on using dynamic res & framerate in future titles, and if not, is there any reason why not?

Thanks that was an interesting read and watch.
Temporal AA, Dynamic res, dynamic framerate. :LOL:

I understand that it's not a simple thing to do, but it's obviously possible and has been done before, as highlighted by Graham.
I'm curious as to why we haven't seen or heard more about engines making use of it in either the latest titles or upcoming ones.
Considering the big and first-party studios must have had access to the XB1's architecture docs, and therefore knew about the ESRAM, scaler update, compute power (although that doesn't only apply to XB1) etc., it sounds like something you would consider implementing from the start.
The architects themselves said they started to see titles employing dynamic res; what titles were they talking about?
I could be reading it wrong but it sounded like they made changes to the scaler to accommodate precisely this. Pretty sure I heard something like that from another interview somewhere with them as well.
 
Thanks Graham for the reply, and also for the DF interview http://www.eurogamer.net/articles/digitalfoundry-inside-killzone-mercenary
You went into a lot of depth in your answer to me.
Are you planning on using dynamic res & framerate in future titles, and if not, is there any reason why not?


Thanks that was an interesting read and watch.
Temporal AA, Dynamic res, dynamic framerate. :LOL:

I understand that it's not a simple thing to do, but it's obviously possible and has been done before, as highlighted by Graham.
I'm curious as to why we haven't seen or heard more about engines making use of it in either the latest titles or upcoming ones.
Considering the big and first-party studios must have had access to the XB1's architecture docs, and therefore knew about the ESRAM, scaler update, compute power (although that doesn't only apply to XB1) etc., it sounds like something you would consider implementing from the start.
The architects themselves said they started to see titles employing dynamic res; what titles were they talking about?
I could be reading it wrong but it sounded like they made changes to the scaler to accommodate precisely this. Pretty sure I heard something like that from another interview somewhere with them as well.

The article you are referring to is the full Digital Foundry Xbox One architects interview.
What I gathered from the interview is that they built in the ability to dynamically scale any sub-1080p frame to 1080p on the fly, making it easier for devs making games with dynamic resolution (they won't need to implement the upscale themselves in software). As far as them seeing titles employing dynamic res, I believe they are referring to titles on the Xbox 360.
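
As a purely PC-side illustration (the XB1's scaler sits in the display hardware, so this isn't the same mechanism), DXGI 1.3's IDXGISwapChain2::SetSourceSize gives a similar effect: render into a sub-rectangle of a full-size back buffer and let presentation stretch it to the output each frame. A minimal sketch, with the helper name being my own:

```cpp
#include <dxgi1_3.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

// 'swapChain1' is assumed to be a flip-model swap chain created at the native
// (e.g. 1080p) size; the render viewport for the frame must match the reduced
// renderWidth/renderHeight passed in here.
void PresentAtDynamicResolution(IDXGISwapChain1* swapChain1,
                                UINT renderWidth, UINT renderHeight)
{
    ComPtr<IDXGISwapChain2> swapChain2;
    if (SUCCEEDED(swapChain1->QueryInterface(IID_PPV_ARGS(&swapChain2))))
    {
        // Tell DXGI how much of the back buffer was actually rendered into;
        // that region is scaled to the full output on present.
        swapChain2->SetSourceSize(renderWidth, renderHeight);
    }
    swapChain1->Present(1, 0);
}
```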
 