DavidGraham
VRS is already in the game, it's Tier 1 though. The developer is also working on adding Ray Tracing and possibly DLSS too. They also mention adding VRS to the PC version of the game.
Yeah, it's typically calculated on a frame-by-frame basis. The implementations I've read about all talk about measuring how long it takes to render your current frame, then guesstimating how long you'll need for the next. So you adjust the resolution of your next frame based on how long the last one took.
Things like a smoke bomb going off, an object moving on or off screen, or a flamethrower firing tend to grow or shrink over time, so the cost grows or shrinks over a number of frames. That way the current frame is somewhat indicative of the next. If you're conservative enough you'll always, or almost always, complete the next frame in time; if you push a bit closer to the limit you might still miss completing the next frame in time, and have to drop a frame or tear (normally at the top part of the screen, so it shouldn't be too visible).
Some dynamic res games have a point they won't reduce below - a minimum threshold - at which point they'll drop frames or tear. I think it's likely that below a certain point the time savings from shrinking your res will reduce, as there are certain costs that don't reduce linearly with resolution (GPUs tend to become less efficient at lower resolutions).
Plus there's probably a point where being blurry as heck is seen as worse than losing a few frames....
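For what it's worth, the reactive scheme described above is simple enough to sketch. Here's a minimal, illustrative controller in C++ - all names, budgets and constants are invented for illustration, not taken from any particular engine:

```cpp
#include <algorithm>
#include <cmath>

struct DynResController {
    double targetMs = 16.6;   // frame budget (60 fps)
    double headroom = 0.90;   // aim under budget to stay conservative
    float  minScale = 0.60f;  // floor: below this, drop frames or tear instead
    float  maxScale = 1.00f;
    float  scale    = 1.00f;  // current per-axis resolution scale

    // Call once per frame with the measured GPU time of the frame just completed.
    void update(double lastFrameMs) {
        // Cost is assumed roughly proportional to pixel count (scale^2),
        // so correct the scale by the square root of the time ratio.
        double ratio   = (targetMs * headroom) / lastFrameMs;
        float proposed = scale * static_cast<float>(std::sqrt(ratio));
        // Move gradually toward the proposal to avoid visible resolution "pumping".
        scale = std::clamp(scale + 0.25f * (proposed - scale), minScale, maxScale);
    }
};
```

The square-root step is the key detail: halving the per-axis scale quarters the pixel count, so time ratios map to scale via sqrt, at least to the extent that cost actually tracks resolution (which, as noted above, breaks down at the low end).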
If I were to take a shot in the dark, they likely run a step right before rasterization to determine the size of the output buffer before it is rendered.

A-ha! So there is some predictability.
I've been wondering this for ages: when exactly does the engine decide to render a frame at a lower res, and how does it make sure that it's not too late by then?
If there is a level of predictability then that explains a lot!
Last Gen there was a racing game that had "static-dynamic resolution" that would adjust based on the track position, where the dev had to manually tune it. I can't recall the game, but it had a lot of forum chatter at the time. Maybe it was boat racing too?
We've come a long way since then.
Redout - it was quite the feather ruffler when DF didn't get it right and the devs got angry. They eventually posted some details:
http://34bigthings.com/redout-a-technical-analysis-on-the-offline-dynamic-resolution-scaler/
If I were to take a shot in the dark, they likely run a step right before rasterization to determine the size of the output buffer before it is rendered.
So they must make a calculation of the vertices etc. and call it based on that. I think it has a greater problem with doing dynamic resolution on alpha effects, because I believe those effects happen later down the pipeline and it's too late to resize the buffer again. So either you run lower-resolution alphas or you suffer a frame dip. At least this is what I think is happening.
This may not apply to compute-based engines however, just the traditional 3D pipeline based ones.
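To make the speculation above concrete, a pre-rasterization guess could look something like this toy sketch - the cost model, the factor weights and the function name are entirely made up for illustration:

```cpp
#include <cmath>
#include <cstdint>

struct SceneStats {
    uint32_t visibleTriangles;
    uint32_t dynamicLights;
};

// Pick the render-target scale before any geometry is drawn.
float chooseRenderScale(const SceneStats& s, double budgetMs) {
    // Hypothetical linear cost model; real weights would need to be
    // calibrated offline per platform.
    double estimatedMs = 4.0                         // fixed per-frame overhead
                       + s.visibleTriangles * 4e-6   // per-triangle cost
                       + s.dynamicLights * 0.35;     // per-light cost
    if (estimatedMs <= budgetMs)
        return 1.0f;
    // Shrink pixel count in proportion to the overshoot (scale^2 ~ pixels).
    return static_cast<float>(std::sqrt(budgetMs / estimatedMs));
}
```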
All the stuff I've read talks about using the last frame's time to complete, and then setting the res for the next frame so you do all the next frame's rasterisation at the new res. I haven't read about anyone performing some kind of vertex pre-processing for the current frame to set resolution (though it could be the case), and as you say it would miss some expensive events like transparency (a real performance sink), most lighting, and post process effects. It could also come at a bit of a cost, I guess. The great thing about the last frame is it gives you a really good idea about what the next frame will be like, alpha and lighting and post processing and all - it's a very cheap calculation (literally just a count) and gives a fairly small window as to where your next frame will land.
Things may now be a bit more complex than that in implementation - perhaps using a curve of frame times against pixel counts to try and extrapolate where the next one will land and how it might need to be adjusted. Perhaps we'll end up with some kind of funky ML based solution that looks at past frames but also uses hints about what you're planning for the next frame to make predictions more accurate and less conservative.
I was thinking about alpha too, given its impact, and was thinking you might be able to do some of that as a separate pass and measure the cost of that separately. If you could scale something like low contrast alpha separately it might mean you could preserve your sharper, more well defined opaque geometry. Maybe. I suppose it depends on what you're using alpha for...?
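That separate-alpha-pass idea could be sketched roughly like this - a hypothetical controller that scales only the transparency target, driven by a GPU timestamp around just that pass; names and budgets are invented, and real code would sit on a concrete graphics API:

```cpp
#include <algorithm>

struct AlphaResController {
    float  scale    = 1.0f;  // applied to the transparency target only
    double budgetMs = 3.0;   // slice of the frame reserved for alpha effects

    // Feed it the measured GPU time of the last frame's transparency pass.
    void update(double alphaPassMs) {
        if (alphaPassMs > budgetMs)
            scale = std::max(0.25f, scale - 0.05f);  // shrink gently
        else if (alphaPassMs < budgetMs * 0.7)
            scale = std::min(1.0f, scale + 0.05f);   // recover slowly
    }
};

// Per-frame flow (pseudocode):
//   1. draw opaque geometry at full resolution
//   2. draw transparencies into a (w * scale, h * scale) target, GPU-timestamped
//   3. upsample and composite the alpha target over the opaque buffer
//   4. controller.update(readBackAlphaPassTimeMs())
```

The point is that the opaque buffer never reacts to alpha cost, so sharp geometry is preserved while smoke and fire quietly degrade.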
what do you think is the cost of a forward predictive assessment? That was sort of my worry, if it's too large of a calculation, then it may not be worthwhile

Dynamic adaptive algorithms generally come in two types: reactive and predictive. In terms of resolution targets, reactive means you assess the time it took to render the previous frame(s), and when you exceed the target (33ms, 16ms, etc.) you lower the resolution. Likewise, if you're rendering below maximum resolution and you are rendering frames well within target, then you increase the resolution. This is a really simple approach and the most common one.
The other type is predictive, as in there is a pre-render assessment done and the engine makes a best guess at the target resolution based on what it knows about the frame. This can be factors like interior vs exterior, the number and type of assets on screen, the number and type of light sources etc. - the fundamental elements that, when they increase, will impact render time - including the time it's taken to render previous frames if they're similar in complexity. You don't only want to look forward, you want to look back as well.
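A hedged sketch of that forward-plus-backward hybrid might look like the following - the frame descriptor fields and per-factor costs are invented and would need per-platform calibration:

```cpp
#include <array>
#include <cmath>
#include <cstdint>

struct FrameDescriptor {           // what the engine knows about the next frame
    bool     interior;
    uint32_t visibleObjects;
    uint32_t shadowedLights;
};

class HybridDynRes {
    std::array<double, 4> history{{16.6, 16.6, 16.6, 16.6}}; // recent frame times
    size_t head = 0;
public:
    void recordFrame(double ms) { history[head++ % history.size()] = ms; }

    float scaleFor(const FrameDescriptor& f, double budgetMs) const {
        // Predictive term: hypothetical per-factor costs.
        double predicted = (f.interior ? 5.0 : 8.0)
                         + f.visibleObjects * 2e-3
                         + f.shadowedLights * 0.8;
        // Reactive term: average of recent history (looking back).
        double recent = 0.0;
        for (double ms : history) recent += ms;
        recent /= history.size();
        // Trust the past more than the model, but let the model pull the
        // estimate when the upcoming frame looks very different.
        double estimate = 0.7 * recent + 0.3 * predicted;
        return estimate <= budgetMs
             ? 1.0f
             : static_cast<float>(std::sqrt(budgetMs / estimate));
    }
};
```

As for the cost worry in the quoted question: the assessment itself is just a handful of multiplies over counts the engine already tracks for culling, so the expense is less the per-frame maths and more the offline calibration of the weights.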
Yeah, it sounded like they set the resolution per section of track based on what they calculated was necessary to hit 60fps on the target HW.

So how does it perform now in BC modes, still adjusted down in resolution despite having more GPU?
no they basically remastered all of the levels from Hitman 1 and 2 using the same engine as Hitman 3, so they should have the same performance as a regular Hitman 3 mission

I'm playing Hitman 3 now and see that it doesn't have the missions from 1 and 2, and I have to buy H1 or H2 to play them. Now I wonder whether the Miami mission from Hitman 2 in the DF video wasn't just backwards compatibility.
The basics of the upgrade are already out there, courtesy of Remedy PR. There's feature parity between PlayStation 5 and Xbox Series X with Control rendering at a native 1440p resolution, with temporal upsampling to 4K. Two modes are on offer - a 30fps capped experience featuring ray traced reflections (including transparencies) along with a 60fps performance mode without RT features. Meanwhile, Xbox Series S has no RT features, meaning a performance mode as the standard that renders natively at 900p, with a 1080p output. Dynamic resolution scaling is not implemented in this game.
But before we go into the specifics, I feel it's important to recap why this is a landmark game. From my perspective, Control was a glimpse at the future of rendering technology - and even next-generation gameplay. Even factoring out ray tracing, Control is doing a hell of a lot behind the scenes. Take the destruction system, where nearly every object can be broken down into its constituent parts. Then there's the sheer wealth of those objects in any given scene - an all-out firefight with the physics system in full effect is an astonishing spectacle. Then there's the fluid rendering simulation for the Hiss smoke: when objects or enemies traverse through this semi-transparent fluid, there's visible turbulence - a dazzling dance of colour with waves.
And even without hardware RT, Control still uses a form of ray tracing on all systems: signed distance fields are used to deliver coarse, but accurate reflections to augment the standard screen-space effect. Basically, when screen-space data is not available, fall-back reflections are generated by a trace of sorts into a simplified game scene. All told, this is a lot of tech that's not often seen on last generation consoles, and it was borne out on the last-gen systems where PS4 Pro ran at native 1080p, while One X topped out at 1440p. Meanwhile, there was the sense that the old Jaguar CPU cores were pushed to breaking point - performance in Control improved via patches, but overall consistency was still an issue.
On the next-gen consoles, PS5 delivers 1.8x the pixel density of PS4 Pro and does so with either double the frame-rate or hardware accelerated ray tracing - a spec matched by Series X. There'll be a lot of discussion about whether to play with RT or to run at 60fps, but Control is an action-packed game and requires some rather fast inputs at times, so for sheer playability, the performance mode is going to be difficult to top. Even so, all modes benefit from extra polish and quality of life improvements - loading times are dramatically improved to the point where PS5 can even stream in data a touch faster than a Core i9 10900K paired with a fast 3.5GB/s NVMe SSD. It's a night and day improvement compared to the last-gen consoles.
We'll be talking specifically about performance in a different piece. Thus far, we've only played PS5 with the day one patch, but we have taken a look at Xbox Series consoles with gold master code. On PlayStation 5, the 60fps mode is mostly solid, with slowdown only really manifesting in the most effects-heavy combat, where the screen is filled with taxing effects work. Meanwhile, the RT mode at a capped 30fps is consistent, properly frame-paced and sticks doggedly to its target for the vast majority of play with only minor deviations. Xbox Series consoles are similar, but what looks like occasional I/O stutter seen in the last-gen versions (and also on PC) is present. We'll cover this in more depth in a separate piece with more detailed analysis.
Don't miss out on this one if you haven't played it.

Really good work from Remedy on the performance adjustments for PS5, as this game is very demanding even on a strong PC, and the settings they chose still look really good during gameplay - zooming is necessary to catch the differences. (Also, the 1.8x pixel difference and 2x fps vs PS4 Pro is impressive.)