Digital Foundry Article Technical Discussion [2021]

VRS is already in the game, though only Tier 1. The developer is also working on adding ray tracing and possibly DLSS too.

Perhaps that could be a factor in not using Dynamic Res. From The Coalition's posting on their VRS Tier 2 implementation, it seems that dynamic res doesn't play nicely with Tier 1 VRS, so maybe it was a lower priority and they chose to focus on other areas at this time. DF would probably have spotted if console Hitman 3 was running Tier 1 VRS.

The Coalition described their Tier 2 solution as having "no noticeable" loss of quality at highest quality settings, while still being up to 14% faster. Works fine with dynamic res too. Seems like it took a fair amount of effort to get it working so well though:

https://devblogs.microsoft.com/directx/gears-vrs-tier2/
 
Yeah, it's typically calculated on a frame-by-frame basis. The implementations I've read about all talk about measuring how long it takes to render your current frame, then guesstimating how long you'll need for the next. So you adjust the resolution of your next frame based on how long the last one took.

Something like a smoke bomb going off, an object moving onto or off screen, or a flamethrower firing tends to grow or shrink over time, so the cost grows or reduces over a number of frames. That way the current frame is somewhat indicative of the next. If you're conservative enough you'll always, or almost always, finish the next frame in time; if you push it a bit close you might still miss completing the next frame in time, and so have to drop a frame or tear (normally at the top of the screen, so it shouldn't be too visible).

Some dynamic res games have a point they won't reduce below - a minimum threshold - at which point they'll drop frames or tear. I think it's likely that below a certain point the time savings from shrinking your res will reduce, as there are certain costs that don't reduce linearly with resolution (GPUs tend to become less efficient at lower resolutions).

Plus there's probably a point where being blurry as heck is seen as worse than losing a few frames....
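
To make that a bit more concrete, here's a minimal sketch of that kind of reactive controller (not from any particular engine - the budget, smoothing factor and clamps are made-up numbers). It takes the previous frame's GPU time and nudges a resolution scale toward whatever should land just under the frame budget, with a floor so it never drops below a minimum:

```cpp
#include <algorithm>

// Minimal reactive dynamic-resolution controller (illustrative only).
// Call update() once per frame with the previous frame's GPU time in ms.
struct DynamicResController {
    float targetMs = 16.6f;  // frame budget (60 fps)
    float headroom = 0.9f;   // aim slightly under budget to be conservative
    float minScale = 0.5f;   // never drop below 50% resolution per axis
    float maxScale = 1.0f;
    float scale    = 1.0f;   // current resolution scale

    float update(float lastFrameMs) {
        // Ratio > 1 means we had time to spare, < 1 means we overshot.
        float ratio = (targetMs * headroom) / std::max(lastFrameMs, 0.1f);
        // Move only part of the way toward the implied scale to avoid oscillation.
        float desired = scale * ratio;
        scale += 0.25f * (desired - scale);
        scale = std::clamp(scale, minScale, maxScale);
        return scale; // multiply native width/height by this for the next frame
    }
};
```

Feed it a 20ms frame against a 16.6ms target and the scale walks down over a few frames; feed it 10-12ms frames and it creeps back up and just sits at 1.0 - which is also why a much faster GPU ends up holding maximum resolution.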

A-ha! So there is some predictability.

I’ve been wondering about this for ages: when exactly does the engine decide to render a frame at a lower res, and how does it make sure it’s not too late by then?

If there is a level of predictability then that explains a lot!
 
Last Gen there was a racing game that had "static-dynamic resolution" that would adjust based on the track position, where the dev had to manually tune it. I can't recall the game, but it had a lot of forum chatter at the time. Maybe it was boat racing too?

We've come a long way since then.
 
A-ha! So there is some predictability.

I’ve been wondering about this for ages: when exactly does the engine decide to render a frame at a lower res, and how does it make sure it’s not too late by then?

If there is a level of predictability then that explains a lot!
If I were to take a shot in the dark, they likely run a step right before rasterization to determine the size of the output buffer before it is rendered.
So they must do some calculation over the vertices etc. and make the call based on that. I think the bigger problem is doing dynamic resolution on alpha effects, because I believe those effects happen later in the pipeline and it's too late to resize the buffer again. So either you run lower-resolution alphas or you suffer a frame dip. At least this is what I think is happening.

This may not apply to compute based engines however, just the traditional 3d pipeline based ones.
 
Last Gen there was a racing game that had "static-dynamic resolution" that would adjust based on the track position, where the dev had to manually tune it. I can't recall the game, but it had a lot of forum chatter at the time. Maybe it was boat racing too?

We've come a long way since then.

Redout - it was quite the feather ruffler when DF didn't get it right and the devs got angry. They eventually posted some details:
http://34bigthings.com/redout-a-technical-analysis-on-the-offline-dynamic-resolution-scaler/
 
Last Gen there was a racing game that had "static-dynamic resolution" that would adjust based on the track position, where the dev had to manually tune it. I can't recall the game, but it had a lot of forum chatter at the time. Maybe it was boat racing too?

We've come a long way since then.

Yeah, I was thinking of exactly that game when I said "typically". :D

As TheAISpark points out, it was called Redout and it caused quite a kerfuffle! Long before Redout, games like Rage had been using dynamic res (real time, per frame!) since the 360 days.

If I were to take a shot in the dark, they likely run a step right before rasterization to determine the size of the output buffer before it is rendered.
So they must do some calculation over the vertices etc. and make the call based on that. I think the bigger problem is doing dynamic resolution on alpha effects, because I believe those effects happen later in the pipeline and it's too late to resize the buffer again. So either you run lower-resolution alphas or you suffer a frame dip. At least this is what I think is happening.

This may not apply to compute based engines however, just the traditional 3d pipeline based ones.

All the stuff I've read talks about using the last frame's time to complete, and then setting the res for the next frame so you do all the next frame's rasterisation at the new res. I haven't read about anyone performing some kind of vertex pre-processing for the current frame to set resolution (though it could be the case), and as you say it would miss some expensive events like transparency (a real performance sink), most lighting, and post process effects. It could also come at a bit of a cost, I guess. The great thing about the last frame is it gives you a really good idea about what the next frame will be like, alpha and lighting and post processing and all - it's a very cheap calculation (literally just a count) and gives a fairly small window as to where your next frame will land.

Things may now be a bit more complex than that in implementation - perhaps using a curve of frame times against pixel counts to try and extrapolate where the next one will land and how it might need to be adjusted. Perhaps we'll end up with some kind of funky ML-based solution that looks at past frames but also uses hints about what you're planning for the next frame to make predictions more accurate and less conservative.

I was thinking about alpha too, given its impact, and was thinking you might be able to do some of that as a separate pass and measure the cost of that separately. If you could scale something like low contrast alpha separately it might mean you could preserve your sharper, more well defined opaque geometry. Maybe. I suppose it depends on what you're using alpha for...?
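
On the "curve of frame times against pixel counts" idea, here's a rough sketch of what that could look like (purely illustrative, and assuming GPU time is roughly linear in pixel count plus a fixed overhead - which, as above, isn't quite true at very low resolutions):

```cpp
#include <algorithm>
#include <cmath>
#include <deque>
#include <utility>

// Illustrative sketch: fit gpuMs ~ a * megapixels + b over recent frames,
// then invert the fit to find the pixel count that should land on budget.
struct FrameTimeModel {
    std::deque<std::pair<float, float>> samples; // (megapixels rendered, gpuMs)

    void addSample(float megapixels, float gpuMs) {
        samples.push_back({megapixels, gpuMs});
        if (samples.size() > 30) samples.pop_front(); // short sliding window
    }

    float megapixelsForBudget(float budgetMs, float minMp, float maxMp) const {
        float n = float(samples.size());
        if (n < 2.0f) return maxMp; // not enough data yet: stay at max res
        float sx = 0, sy = 0, sxx = 0, sxy = 0;
        for (const auto& s : samples) {
            sx  += s.first;            sy  += s.second;
            sxx += s.first * s.first;  sxy += s.first * s.second;
        }
        float denom = n * sxx - sx * sx;
        if (std::fabs(denom) < 1e-6f) return samples.back().first; // degenerate fit
        float a = (n * sxy - sx * sy) / denom; // ms per megapixel
        float b = (sy - a * sx) / n;           // fixed per-frame overhead in ms
        if (a <= 0.0f) return maxMp;           // time isn't growing with pixels
        return std::clamp((budgetMs - b) / a, minMp, maxMp);
    }
};
```

The fixed term b is effectively those costs that don't shrink with resolution, which is also why the savings flatten out the lower you go.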
 
All the stuff I've read talks about using the last frame's time to complete, and then setting the res for the next frame so you do all the next frame's rasterisation at the new res. I haven't read about anyone performing some kind of vertex pre-processing for the current frame to set resolution (though it could be the case), and as you say it would miss some expensive events like transparency (a real performance sink), most lighting, and post process effects. It could also come at a bit of a cost, I guess. The great thing about the last frame is it gives you a really good idea about what the next frame will be like, alpha and lighting and post processing and all - it's a very cheap calculation (literally just a count) and gives a fairly small window as to where your next frame will land.

Things may now be a bit more complex than that in implementation - perhaps using a curve of frame times against pixel counts to try and extrapolate where the next one will land and how it might need to be adjusted. Perhaps we'll end up with some kind of funky ML-based solution that looks at past frames but also uses hints about what you're planning for the next frame to make predictions more accurate and less conservative.

I was thinking about alpha too, given its impact, and was thinking you might be able to do some of that as a separate pass and measure the cost of that separately. If you could scale something like low contrast alpha separately it might mean you could preserve your sharper, more well defined opaque geometry. Maybe. I suppose it depends on what you're using alpha for...?

edit:
Yes, your answer is correct and mine is wrong. The reason is fairly simple: if it's based on a pre-load before rasterization, more powerful GPUs will still downscale the resolution for no reason. If you base it on a calculation from the previous frame time, then as it gets closer to the frame limit (say 16.6ms) the danger zone will be a few ms before that, so you start reducing the buffers to hold it so you always hit that 16.6ms. If you are ripping through frames on newer GPUs at 10-12ms, it's just going to keep holding that maximum resolution.

My method would perpetually keep the DRS low, which is wrong.

I think ML-based solutions would take too long and be largely unnecessary; you need deep learning for very specific tasks that people are good at and computers are naturally bad at. But computers are champs at simple math algorithms, so they should be able to do this without anything too exotic. Today it looks like it works fine that way so far, though it may be more difficult to implement than it's given credit for, and may be very difficult or not necessarily useful to apply in all engines depending on how they were designed.
 
A-ha! So there is some predictability.

Dynamic adaptive algorithms generally come in two types: reactive and predictive. In terms of resolution targets, reactive means you assess the time it took to render the previous frame(s), and when you exceed the target (33ms, 16ms etc.) you lower the resolution. Likewise, if you're rendering below maximum resolution and frames are coming in well within target, you increase the resolution. This is a really simple approach and the most common one.

The other type is predictive, as in a pre-render assessment is done and the engine makes a best guess at the target resolution based on what it knows about the frame. This can be factors like interior vs exterior, number and type of assets on screen, number and type of light sources etc. - the fundamental elements that, when they increase, will impact render time - as well as the times it's taken to render previous frames if they're similar in complexity. You don't only want to look forward, you want to look back as well.
 
Dynamic adaptive algorithms generally come in two types: reactive and predictive. In terms of resolution targets, reactive means you assess the time it took to render the previous frame(s), and when you exceed the target (33ms, 16ms etc.) you lower the resolution. Likewise, if you're rendering below maximum resolution and frames are coming in well within target, you increase the resolution. This is a really simple approach and the most common one.

The other type is predictive, as in a pre-render assessment is done and the engine makes a best guess at the target resolution based on what it knows about the frame. This can be factors like interior vs exterior, number and type of assets on screen, number and type of light sources etc. - the fundamental elements that, when they increase, will impact render time - as well as the times it's taken to render previous frames if they're similar in complexity. You don't only want to look forward, you want to look back as well.
What do you think is the cost of a forward predictive assessment? That was sort of my worry: if it's too large a calculation, then it may not be worthwhile.
 
What do you think is the cost of a forward predictive assessment? That was sort of my worry: if it's too large a calculation, then it may not be worthwhile.

Basic element scoring and counting can be surprisingly instructive and is very light on resource.
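
For a sense of how light that can be, here's a toy sketch of that kind of element scoring (every field name and weight here is hypothetical). The engine just bumps a few counters while building the frame and turns them into a starting resolution bias, which a reactive controller can then correct after the fact:

```cpp
#include <algorithm>

// Toy "element scoring" for a predictive resolution estimate.
// All weights are made up; the point is it's just a handful of counters.
struct SceneCounts {
    int   visibleLights    = 0;
    int   particleEmitters = 0;
    int   skinnedMeshes    = 0;
    float alphaCoverage    = 0.0f;  // rough 0..1 estimate of transparent overdraw
    bool  isExterior       = false;
};

// Returns a starting resolution scale chosen before the frame is rendered.
float predictiveScale(const SceneCounts& c) {
    float score = 0.01f  * c.visibleLights
                + 0.02f  * c.particleEmitters
                + 0.005f * c.skinnedMeshes
                + 0.30f  * c.alphaCoverage
                + (c.isExterior ? 0.10f : 0.0f);
    // Heavier scenes start lower; measured frame times refine it afterwards.
    return std::clamp(1.0f - score, 0.6f, 1.0f);
}
```

It's a few adds and multiplies per frame, so the assessment itself costs essentially nothing - the hard part is picking weights that actually track render cost.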
 
I'm playing Hitman 3 now and I see that it doesn't have the missions from 1 and 2 - I have to buy H2 or H1 to play them. Now I wonder whether the Miami mission from Hitman 2 in the DF video wasn't just backwards compatibility.
 
I'm playing Hitman 3 now and I see that it doesn't have the missions from 1 and 2 - I have to buy H2 or H1 to play them. Now I wonder whether the Miami mission from Hitman 2 in the DF video wasn't just backwards compatibility.
No, they basically remastered all of the levels from Hitman 1 and 2 using the same engine as Hitman 3, so they should have the same performance as a regular Hitman 3 mission.
 
Really good work from Remedy with the performance adjustments for PS5, as this game is very demanding even on a strong PC. The settings they chose still look really good during gameplay, and zooming is necessary to catch the differences (also, the 1.8x pixel difference and 2x fps vs PS4 Pro is impressive).
 
Well, that was to be expected. It's a demanding game even for being 'last gen'.

Alex notes that you definitely can see the differences without zooming etc. - medium and everything else low (some lower than low), plus resolution and checkerboarding artifacts. Framerates aside, of course.

Interesting that in loading they're about as fast - and that's before DirectStorage has hit.

Again, around 2070 performance head to head before DLSS for this one.
 
https://www.eurogamer.net/articles/digitalfoundry-2021-control-ultimate-edition-next-gen-upgrade
The basics of the upgrade are already out there, courtesy of Remedy PR. There's feature parity between PlayStation 5 and Xbox Series X with Control rendering at a native 1440p resolution, with temporal upsampling to 4K. Two modes are on offer - a 30fps capped experience featuring ray traced reflections (including transparencies) along with a 60fps performance mode without RT features. Meanwhile, Xbox Series S has no RT features, meaning a performance mode as the standard that renders natively at 900p, with a 1080p output. Dynamic resolution scaling is not implemented in this game.

But before we go into the specifics, I feel it's important to recap why this is a landmark game. From my perspective, Control was a glimpse at the future of rendering technology - and even next-generation gameplay. Even factoring out ray tracing, Control is doing a hell of a lot behind the scenes. Take the destruction system, where nearly every object can be broken down into its constituent parts. Then there's the sheer wealth of those objects in any given scene - an all-out firefight with the physics system in full effect is an astonishing spectacle. Then there's the fluid rendering simulation for the Hiss smoke: when objects or enemies traverse through this semi-transparent fluid, there's visible turbulence - a dazzling dance of colour with waves.

And even without hardware RT, Control still uses a form of ray tracing on all systems: signed distance fields are used to deliver coarse, but accurate reflections to augment the standard screen-space effect. Basically, when screen-space data is not available, fall-back reflections are generated by a trace of sorts into a simplified game scene. All told, this is a lot of tech that's not often seen on last generation consoles, and it was born out on the last-gen systems where PS4 Pro ran at native 1080p, while One X topped out at 1440p. Meanwhile, there was the sense that the old Jaguar CPU cores were pushed to breaking point - performance in Control improved via patches, but overall consistency was still an issue.

On the next-gen consoles, PS5 delivers 1.8x the pixel density of PS4 Pro and does so with either double the frame-rate or hardware accelerated ray tracing - a spec matched by Series X. There'll be a lot of discussion about whether to play with RT or to run at 60fps, but Control is an action-packed game and requires some rather fast inputs at times, so for sheer playability, the performance mode is going to be difficult to top. Even so, all modes benefit from extra polish and quality of life improvements - loading times are dramatically improved to the point where PS5 can even stream in data a touch faster than a Core i9 10900K paired with a fast 3.5GB/s NVMe SSD. It's a night and day improvement compared to the last-gen consoles.

We'll be talking specifically about performance in a different piece. Thus far, we've only played PS5 with the day one patch, but we have taken a look at Xbox Series consoles with gold master code. On PlayStation 5, the 60fps mode is mostly solid, with slowdown only really manifesting in the most effects-heavy combat, where the screen is filled with taxing effects work. Meanwhile, the RT mode at a capped 30fps is consistent, properly frame-paced and sticks doggedly to its target for the vast majority of play with only minor deviations. Xbox Series consoles are similar, but what looks like occasional I/O stutter seen in last-gen versions (and also in PC) is present. We'll cover this in more depth in a separate piece with more detailed analysis.
 
Really good work from Remedy with the performance adjustments for PS5, as this game is very demanding even on a strong PC. The settings they chose still look really good during gameplay, and zooming is necessary to catch the differences (also, the 1.8x pixel difference and 2x fps vs PS4 Pro is impressive).
Don't miss out on this one if you haven't played it.
Control was a memorable and enjoyable title for me. I don't love it so much that I'd get an Ultimate Edition just to play it at 60fps or with RT. But if you've never played it before and these features are available (60fps being a big one), I think the game would be even more enjoyable to play.

I still wish they had just released this as a patch for original Control owners, but it's a great upgrade here for PS5.
 