Unreal Engine 5, [UE5 Developer Availability 2022-04-05]

The gains of a few milliseconds outlined in the presentation are nowhere near TSR's multiplicative factors, so the only realistic use case for VRS in games with advanced graphics is in conjunction with TSR. Regarding quality, I wouldn't say that VRS has any strengths: it uses a regular pixel grid for the low-res samples, essentially functioning like integer upscaling by clustering pixels, which results in a very visible quality loss compared to native resolution. That's another reason why it should work together with a temporal upscaler. However, given that you still need to render the high-resolution gbuffer with VRS, more attractive alternatives may exist, such as rendering a coverage mask at a higher resolution and using it to guide an upscaler to produce perfect high-resolution edges, potentially providing even better scaling factors.
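To make the "integer upscaling by clustering pixels" point concrete, here's a toy CPU sketch (my own illustration, not how the hardware or UE5 actually schedules shading; ShadePixel and the fixed 2x2 rate are assumptions): one shader invocation covers a whole 2x2 block on the regular grid, and its result is simply broadcast to all four pixels.

```cpp
// Toy CPU sketch of 2x2 coarse shading broadcast onto a regular pixel grid.
// Purely illustrative: real VRS runs in hardware during rasterization.
#include <vector>

struct Color { float r, g, b; };

// Stand-in for a real pixel shader (hypothetical).
static Color ShadePixel(int x, int y) {
    return { x * 0.001f, y * 0.001f, 0.5f };
}

static void ShadeCoarse2x2(std::vector<Color>& frame, int width, int height) {
    for (int y = 0; y < height; y += 2) {
        for (int x = 0; x < width; x += 2) {
            // One shader invocation per 2x2 block...
            const Color c = ShadePixel(x, y);
            // ...broadcast to every covered pixel: the shading signal is
            // effectively integer-upscaled onto the full-resolution grid,
            // which is where the blocky quality loss comes from.
            for (int dy = 0; dy < 2 && y + dy < height; ++dy)
                for (int dx = 0; dx < 2 && x + dx < width; ++dx)
                    frame[(y + dy) * width + (x + dx)] = c;
        }
    }
}

int main() {
    std::vector<Color> frame(1920 * 1080);
    ShadeCoarse2x2(frame, 1920, 1080);
}
```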

That's just for shading; temporal super resolution effectively undersamples a lot of other things as well. For VRS to match one to one, you'd have to turn down rays per pixel, shadowmap resolution, etc., until they match the sampling resolution of the lower base resolution TSR operates at. Which, obviously, you can do, and TAA can then upscale that stuff as normal.

And I believe you're mistaken about how much performance TSR actually manages to claw back. Looking at a quick benchmark of Deathloop, frame time goes from 16.6ms with DLSS Quality (1440p base) to 14.9ms with Balanced (1080p base). That's a gain of just 1.7ms, while toggling VRS at 1620p gains about a millisecond here, without dropping RT, shadowmap, or any other setting. Yes, DLSS has bigger gains initially, but we'd expect bigger gains from VRS the higher the base resolution is as well, since we'd tend towards exponentially more subshaded tiles, and probably a further gain from not overflowing the available caches; the 2080 is relatively better at 1440p than at 4K as it is.

In an ideal scenario the only additional cost VRS has over TSR is the higher gbuffer fill; otherwise settings can be matched one to one, and VRS tile classification ends up cheaper than a lot of the upscaling algorithms, while VRS is a much better option for image quality. Certainly the gbuffer overhead means VRS is always going to be more expensive one to one, but for a given image quality target, VRS alone will win on the higher end.
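For reference, this is roughly the kind of tile classification pass meant above, as a minimal CPU sketch assuming an 8x8 tile size and a luminance-variance heuristic with made-up thresholds (a real implementation runs as a compute shader and writes a shading-rate image for the hardware):

```cpp
// Sketch of a VRS tile-classification pass (assumed 8x8 tiles, luminance
// variance as the metric, illustrative thresholds). Engines do this on the
// GPU and feed the result to the rasterizer as a shading-rate image.
#include <algorithm>
#include <cstdint>
#include <vector>

enum class ShadingRate : uint8_t { Rate1x1, Rate2x2, Rate4x4 }; // simplified

std::vector<ShadingRate> ClassifyTiles(const std::vector<float>& luma,
                                       int width, int height, int tile = 8) {
    const int tilesX = (width + tile - 1) / tile;
    const int tilesY = (height + tile - 1) / tile;
    std::vector<ShadingRate> rates(tilesX * tilesY);

    for (int ty = 0; ty < tilesY; ++ty) {
        for (int tx = 0; tx < tilesX; ++tx) {
            // Mean and variance of luminance inside the tile.
            float sum = 0.f, sumSq = 0.f; int n = 0;
            for (int y = ty * tile; y < std::min((ty + 1) * tile, height); ++y)
                for (int x = tx * tile; x < std::min((tx + 1) * tile, width); ++x) {
                    const float l = luma[y * width + x];
                    sum += l; sumSq += l * l; ++n;
                }
            const float mean = sum / n;
            const float var  = sumSq / n - mean * mean;

            // Low-contrast tiles get coarser shading; thresholds are made up.
            ShadingRate r = ShadingRate::Rate1x1;
            if (var < 0.0005f)      r = ShadingRate::Rate4x4;
            else if (var < 0.005f)  r = ShadingRate::Rate2x2;
            rates[ty * tilesX + tx] = r;
        }
    }
    return rates;
}
```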

In a more realistic scenario we're likely to see exactly what the presentation shows: TSR used to upscale, say, 2:1 (1620p for 4K), which is generally the sweet spot for TSR in terms of avoiding major artifacts. Meanwhile VRS can still be turned on for another millisecond of gain, getting you back as much performance as dropping the base resolution to, say, 1350p, and if spatiotemporally stochastic shading selection is used, the image quality difference between VRS on and off will be almost non-existent. That being said, Playground Games (Forza Horizon, Fable) are trying really hard to get VRS-only to work (at least on Series X), as they like their image quality turned way up high.
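To illustrate what spatiotemporally stochastic shading selection could look like (my own sketch under assumptions, not Playground's or Epic's actual technique): rather than always shading the same corner of each coarse block, the shaded sample's position is varied per block and per frame, so a temporal accumulator eventually sees every pixel position and the coarse-shading error averages out.

```cpp
// Sketch: per-frame jitter of which pixel inside a 2x2 coarse block actually
// gets shaded, so temporal accumulation averages out the coarse-shading error.
// The hash and offset scheme are illustrative assumptions.
#include <cstdint>

// Small integer hash (Wang hash) used to decorrelate blocks.
static uint32_t Hash(uint32_t v) {
    v = (v ^ 61u) ^ (v >> 16);
    v *= 9u;
    v = v ^ (v >> 4);
    v *= 0x27d4eb2du;
    v = v ^ (v >> 15);
    return v;
}

// Returns the (dx, dy) offset of the shaded sample within a 2x2 block
// for a given block coordinate and frame index.
void StochasticOffset2x2(int blockX, int blockY, uint32_t frame,
                         int& dx, int& dy) {
    const uint32_t h = Hash(Hash(uint32_t(blockX)) ^ Hash(uint32_t(blockY)) ^ frame);
    dx = int(h & 1u);         // 0 or 1
    dy = int((h >> 1) & 1u);  // 0 or 1
}
```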
 
[Image: julian-calle-city-scifi.jpg]

What the next Cyberpunk could look like.
 
The 5.4 improvements to threading and CPU utilization are great, but they're a little concerning at the same time. We've had many-core CPUs for a very long time now. How is it that a premier, well-funded engine like UE5 isn't designed from day one with this in mind? Is multi-threaded rendering still a hard nut to crack?
 
The 5.4 improvements to threading and CPU utilization are great, but they're a little concerning at the same time. We've had many-core CPUs for a very long time now. How is it that a premier, well-funded engine like UE5 isn't designed from day one with this in mind? Is multi-threaded rendering still a hard nut to crack?
It doesn't really help that UE's MassEntity framework is still half-baked today ...
 
The 5.4 improvements to threading and CPU utilization are great, but they're a little concerning at the same time. We've had many-core CPUs for a very long time now. How is it that a premier, well-funded engine like UE5 isn't designed from day one with this in mind? Is multi-threaded rendering still a hard nut to crack?

Some things just don't scale well above a certain number of cores, as the increase in latency starts causing a performance regression.

And the more systems you have in your game engine, the higher the risk that you'll run into latency issues.
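To put a rough number on that, here's a toy scaling model (the 5% serial fraction and the per-thread sync cost are illustrative values, not measurements from UE5 or any engine): Amdahl's law caps the speedup at one over the serial fraction, and once per-thread coordination cost grows with thread count, the curve peaks and then regresses.

```cpp
// Toy scaling model: Amdahl's law plus a per-thread coordination cost.
// speedup(n) = 1 / (serial + (1 - serial)/n + sync * (n - 1))
// The 5% serial fraction and 0.4% per-thread sync cost are illustrative.
#include <cstdio>

int main() {
    const double serial = 0.05;   // fraction of the frame that stays single-threaded
    const double sync   = 0.004;  // coordination/latency cost added per extra thread

    for (int n = 1; n <= 32; n *= 2) {
        const double speedup = 1.0 / (serial + (1.0 - serial) / n + sync * (n - 1));
        std::printf("%2d threads: %.2fx\n", n, speedup);
    }
    return 0;
}
```

With these made-up numbers the model peaks around 16 threads (~5.9x) and gets slower again at 32 (~4.9x), which is the kind of regression being described.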
 
Some things just don't scale well above a certain number of cores, as the increase in latency starts causing a performance regression.

And the more systems you have in your game engine, the higher the risk that you'll run into latency issues.

Clearly Epic was able to find significant gains from threading. My question was why this is an afterthought and not a fundamental tenet of a modern engine.
 
UE5 carries loads of legacy code, as it's not really a clean-slate modern engine but a continuous upgrade project going all the way back to UE1 (probably).

Doesn't really explain it. We've had 8-core CPUs on PC for over a decade and on consoles for an entire generation.
 
There are some newer engines like Flax, Unigine, and Stride. They don't have legacy baggage, so they can use modern techniques, but they also don't have the toolchains and features of the biggest engines.
 
Doesn't really explain it. We've had 8-core CPUs on PC for over a decade and on consoles for an entire generation.

Last-gen consoles having 8 cores wasn't going to change anything, as they had piss-poor clocks (1.6GHz), which meant a first-gen i7 from 2008 was just as quick (if not slightly faster).
 
In my opinion, the newer Snowdrop delivers the best performance on consoles: 1800p at 40FPS with a mesh-shading geometry pipeline. Avatar looks insane! But even this is not a completely new engine.
Snowdrop has had some of the best multi-core rendering and engine workload balancing for a long time, since the early Division days at least.
 