Nvidia DLSS 1 and 2 antialiasing discussion *spawn*

I think ML is going to be huge no matter what anyway, be it for reconstruction tech and other uses in gaming, or for non-gaming uses where there's an even bigger market. It's the same with ray tracing: it's a graphical feature that's going to matter going forward, in gaming and other uses. It might be costly for NV (and AMD, Intel etc.) but those are investments in the IHVs' future.
 
The results need to start paying off to reflect this, because it's not a good sign if we're still seeing DLSS support being added post-game launch a year later ...

The pattern so far is that DLSS 2.0 is mostly added after the game's initial launch, which suggests that they are constantly retraining their model ...

Or it could indicate that devs of games that had been in development for years didn't want to tweak their pipelines in the middle of a project.

DLSS was an unproven tech with little support, limited to a small fraction of newly released GPUs, when it was first introduced. I think only now are devs seeing its utility, as RTX GPUs are far more widespread and the heavy cost of RT is practically unavoidable.

DLSS inclusion at the start of a project is far more likely now. And we will probably see fewer games adding it as an afterthought.
 
Do they need to tweak their pipelines to make DLSS work?

"To implement DLSS2, a game designer will need to use Nvidia’s library in place of their native TAA. This library requires as input: the lower resolution rendered frame, the motion vectors, the depth buffer, and the jitter for each frame. It feeds these into the deep learning algorithm and returns a higher resolution image. The game engine will also need to change the jitter of the lower resolution render each frame and use high resolution textures. Finally, the game’s post processing effects, like depth of field and motion blur, will need to be scaled up to run on the higher resolution output from DLSS. These changes are relatively small, especially for a game already using TAA or dynamic resolution. However, they will require work from the developer and cannot be implemented by Nvidia. Furthermore, DLSS2 is an Nvidia specific blackbox and only works on their newest graphics cards, so that could be limit adoption."
 
"To implement DLSS2, a game designer will need to use Nvidia’s library in place of their native TAA. This library requires as input: the lower resolution rendered frame, the motion vectors, the depth buffer, and the jitter for each frame. It feeds these into the deep learning algorithm and returns a higher resolution image. The game engine will also need to change the jitter of the lower resolution render each frame and use high resolution textures. Finally, the game’s post processing effects, like depth of field and motion blur, will need to be scaled up to run on the higher resolution output from DLSS. These changes are relatively small, especially for a game already using TAA or dynamic resolution. However, they will require work from the developer and cannot be implemented by Nvidia. Furthermore, DLSS2 is an Nvidia specific blackbox and only works on their newest graphics cards, so that could be limit adoption."
It's amazing that devs can now do this in half a day!
 
April 14, 2021 - GTC 2021 Video
Before the end of 2021, NVIDIA DLSS (Deep Learning Super Sampling) will be natively supported for HDRP in Unity 2021.2.
 
Outriders' DLSS does a lot more than just improve performance | Rock Paper Shotgun
April 15, 2021
It also adds in extra details missing at native resolution
...
Admittedly, at a quick glance, it doesn't look like much has really changed with DLSS Quality enabled, and in motion you'd probably be hard-pushed to notice the difference as well. But if you like to take your time in Outriders once you've finished clearing its rooms of goons and trying on all the different kind of space trousers they've left behind, then it's definitely worth switching on, as I think it not only makes textures look sharper and more defined, but it also adds in extra details you just don't get when playing without it.
 
Energy loss artifacts in Nvidia's own demo?!
To be honest, in this case, things seem to be the opposite.

There is RTXGI in this demo, and since it's GI, it's additive to the lighting, i.e. it adds more light to the scene, so the brighter image with DLSS might well be caused by this.
In this case, the decoupled probes should be updated at 2x the speed with DLSS, and the per-probe ray budget is usually something like 144 rays per probe per frame.
This would obviously lead to faster lighting convergence with DLSS On, and probably a little more energy with DLSS On.
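A quick back-of-the-envelope sketch of that probe arithmetic; the 144 rays/probe/frame figure is the one above, while the frame rates are purely hypothetical:
Code:
#include <cstdio>

// Rough arithmetic for resolution-decoupled probe GI (RTXGI-style):
// the per-probe ray budget is per *frame*, so the convergence rate in
// wall-clock time scales with frame rate, not with screen resolution.
int main() {
    const int raysPerProbePerFrame = 144;
    const float fpsNative = 30.0f;  // hypothetical DLSS Off frame rate
    const float fpsDlss   = 60.0f;  // hypothetical DLSS On frame rate (~2x)

    printf("DLSS Off: %.0f rays/probe/sec\n", raysPerProbePerFrame * fpsNative);
    printf("DLSS On : %.0f rays/probe/sec\n", raysPerProbePerFrame * fpsDlss);
    // Twice the samples per second -> the GI history converges roughly twice
    // as fast, which can read as slightly brighter, more "filled in" lighting
    // in a short capture with DLSS On.
    return 0;
}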

Though, unlike RTXGI, which is decoupled from screen resolution and doesn't require screen-space denoising, there is also RTXDI, which relies heavily on sampling in screen space and on denoising.
Usually, modern denoisers filter out very bright signal to suppress the shimmering "fireflies" artifacts that would otherwise be visible, hence the image becomes darker due to the energy loss you mentioned (though "bias" is the more common term here, since it covers both energy loss and other systematic errors in the lighting).
Denoisers' blurring can also introduce some additional bias, and this bias is typically higher at lower resolutions, so reconstructing the signal from a lower resolution can add to the energy loss.

Unfortunately, we don't have unbiased path-traced reference images to compare against, so it's impossible to tell which image, DLSS On or Off, has more bias here, but I wouldn't worry about that anyway since there are literally thousands of other things that introduce discrepancies from an unbiased path-traced render.
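For a concrete picture of how firefly suppression turns into energy loss, here's a minimal sketch of the kind of generic luminance clamp a denoiser might apply before accumulation; the threshold and the clamp itself are illustrative assumptions, not any particular RTXDI/denoiser implementation:
Code:
#include <cstdio>

struct Color { float r, g, b; };

static float Luminance(const Color& c) {
    // Rec. 709 luma weights.
    return 0.2126f * c.r + 0.7152f * c.g + 0.0722f * c.b;
}

// Generic firefly clamp: scale any sample whose luminance exceeds a threshold
// back down to that threshold. This kills shimmering outliers but throws away
// real energy, so the filtered image ends up slightly darker (biased).
static Color ClampFirefly(const Color& c, float maxLuminance) {
    float lum = Luminance(c);
    if (lum <= maxLuminance) return c;
    float scale = maxLuminance / lum;
    return { c.r * scale, c.g * scale, c.b * scale };
}

int main() {
    // Two noisy samples of the same pixel: one ordinary, one a bright firefly.
    Color samples[2] = { {0.8f, 0.7f, 0.6f}, {40.0f, 35.0f, 30.0f} };
    const float threshold = 4.0f; // illustrative, not from any shipped denoiser

    float meanBefore = 0.0f, meanAfter = 0.0f;
    for (const Color& s : samples) {
        meanBefore += Luminance(s) * 0.5f;
        meanAfter  += Luminance(ClampFirefly(s, threshold)) * 0.5f;
    }
    printf("mean luminance before clamp: %.3f\n", meanBefore);
    printf("mean luminance after  clamp: %.3f\n", meanAfter);
    // The clamped average is lower: that lost energy is the "bias" discussed above.
    return 0;
}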
 
Fewer pixels = fewer rays. DLSS's internal resolution is lower, which likely affects lighting quality and causes the image to lean more on DLSS and the denoiser for reconstruction.

It would be a huge surprise if ray-traced effects didn't scale with the rendering resolution, because I don't see how the performance gains would be anywhere near as convincing with DLSS enabled if games were tracing just as many rays at both the native and the internal resolution for DLSS ...
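To put rough numbers on that, assuming one ray (or one sample) per pixel, the ray count simply tracks the internal pixel count; the 4K-output / 1440p-internal pairing below is the commonly published DLSS Quality ratio, used purely for illustration:
Code:
#include <cstdio>

int main() {
    // Assume 1 ray/sample per pixel for a screen-space RT effect.
    const long long nativeW = 3840, nativeH = 2160;      // 4K output
    const long long internalW = 2560, internalH = 1440;  // DLSS "Quality" internal res

    long long raysNative   = nativeW * nativeH;     // ~8.3 million
    long long raysInternal = internalW * internalH; // ~3.7 million

    printf("rays at native res:   %lld\n", raysNative);
    printf("rays at internal res: %lld\n", raysInternal);
    printf("ratio: %.2fx fewer rays with DLSS\n",
           (double)raysNative / (double)raysInternal);
    return 0;
}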


It is true that we don't have reference path-traced results, but I would at least think that the "unfiltered" result (DLSS disabled) would be closer to the ground truth, because filtering can introduce bias like you raised. I think the ethos here would be that the fewer the filters, the better? In this case, it's the DLSS image that's noticeably dimmer compared to the original image in the results above, since we're missing patches of high-intensity light on the couch and on the blanket if we take a closer look.
 