Nvidia DLSS 1 and 2 antialiasing discussion *spawn*

Temporal AA also requires motion vectors

Ideally you wouldn't have motion vectors but a vector to the previous frame, pointing to where the same point of the tri was visible then (with some out-of-bounds value if it wasn't; you might also want a flag for highly reflective surfaces). That gives a spatiotemporal filter the optimum information to work with. It would require rather invasive changes to the game shaders to determine and store that information, though.
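Roughly this, as a toy CPU-side numpy sketch of the idea (all names and the simple disocclusion test are mine, purely for illustration; a real implementation would do this in the game's shaders and would also carry the reflective-surface flag):

```python
import numpy as np

NO_HISTORY = np.float32(1e9)  # out-of-bounds sentinel: "this point wasn't visible last frame"

def reprojection_vectors(world_pos, prev_view_proj, prev_depth, depth_eps=1e-3):
    """Per-pixel vector to where the same surface point was on screen in the
    previous frame, or NO_HISTORY if it was off screen / occluded then.

    world_pos      : (H, W, 3) world-space position of the surface under each pixel
    prev_view_proj : (4, 4) previous frame's view-projection matrix
    prev_depth     : (H, W) previous frame's depth buffer (NDC z)
    """
    H, W, _ = world_pos.shape

    # Project the current surface points with last frame's camera
    p = np.concatenate([world_pos, np.ones((H, W, 1), np.float32)], axis=-1)
    clip = p @ prev_view_proj.T
    ndc = clip[..., :3] / clip[..., 3:4]
    prev_x = (ndc[..., 0] * 0.5 + 0.5) * W
    prev_y = (ndc[..., 1] * -0.5 + 0.5) * H

    # Vector from the current pixel centre to the previous-frame position
    cur_y, cur_x = np.mgrid[0:H, 0:W].astype(np.float32) + 0.5
    vec = np.stack([prev_x - cur_x, prev_y - cur_y], axis=-1)

    # Invalidate pixels that were off screen last frame ...
    off_screen = (prev_x < 0) | (prev_x >= W) | (prev_y < 0) | (prev_y >= H)
    # ... or occluded (depth mismatch means the point wasn't actually visible)
    ix = np.clip(prev_x.astype(np.int32), 0, W - 1)
    iy = np.clip(prev_y.astype(np.int32), 0, H - 1)
    occluded = np.abs(prev_depth[iy, ix] - ndc[..., 2]) > depth_eps

    vec[off_screen | occluded] = NO_HISTORY
    return vec
```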
 
The update is live, and DLSS has indeed vastly improved, though it now works only at 4K on the 2080 Ti.
4K60 Ultra DXR is attainable with DLSS.


If 4K Ultra DXR were attainable, DLSS would not be needed, right?
It's easy to mislead consumers that way, as most don't understand the difference between the native rendering resolution and the upscaled output resolution.
Inform consumers correctly, for example:
1440p rendering upscaled by DLSS to 4K.
 
If 4K Ultra DXR were attainable, DLSS would not be needed, right?
It's easy to mislead consumers that way, as most don't understand the difference between the native rendering resolution and the upscaled output resolution.
Inform consumers correctly, for example:
1440p rendering upscaled by DLSS to 4K.
I respect the position here and I agree. However, it's also important that this technology keep some distance from traditional upscaling methods, or even reconstruction methods. Lumping them together as the same thing would be a disservice just so we could label it "upscaling" for easier consumer understanding. DLSS reconstructs the image and adds in detail that upscaling could not, and it does not suffer from ghosting the way reconstruction algorithms do.

Inherently, DLSS stands for deep learning super sampling; to me, Nvidia is not overreaching with misleading marketing here.
 
Lumping them together as the same thing would be a disservice just so we could label it "upscaling" for easier consumer understanding. DLSS reconstructs the image and adds in detail that upscaling could not.

Inherently, DLSS stands for deep learning super sampling; to me, Nvidia is not overreaching with misleading marketing here.

I think hallucination is a more appropriate term than reconstruction.
 
Approximation is probably the more technical term. It’s not making stuff out of thin air :). It had to be trained.

Hallucination IS a technical term for infering high frequency detail out of a lower frequency signal using some kind of algorithm. It does not imply the detail comes out of thin air. Which is exactly what DLSS is doing, and the algorithm they use is of the example-based variety, more specifically neural networks, which is now called under the more marketable blanket term AI.
For example:
Google for Image Hallucination and this is the first result:
Patch-Based Image Hallucination for Super Resolution with Detail
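To make the terminology concrete, "hallucinating" detail with a neural net looks roughly like this SRCNN-style toy. This is not Nvidia's network, just the general shape of example-based super resolution; every name in it is made up:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DetailHallucinator(nn.Module):
    """Toy example-based super resolution: naively upscale, then let a small
    conv net infer (hallucinate) the high-frequency detail the upscale lacks."""
    def __init__(self, scale=2):
        super().__init__()
        self.scale = scale
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 9, padding=4), nn.ReLU(),
            nn.Conv2d(64, 32, 5, padding=2), nn.ReLU(),
            nn.Conv2d(32, 3, 5, padding=2),
        )

    def forward(self, low_res):
        up = F.interpolate(low_res, scale_factor=self.scale,
                           mode='bilinear', align_corners=False)
        # The residual is detail that isn't in the input signal; it comes from
        # what the net learned on training examples, not out of thin air.
        return up + self.net(up)

model = DetailHallucinator(scale=2)
low = torch.rand(1, 3, 180, 320)    # stand-in for a low-res frame
high = model(low)                   # (1, 3, 360, 640)
```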
 
Hallucination IS a technical term for inferring high-frequency detail from a lower-frequency signal using some kind of algorithm. It does not imply the detail comes out of thin air. That is exactly what DLSS is doing, and the algorithm it uses is of the example-based variety, more specifically neural networks, which now go under the more marketable blanket term AI.
For example:
Patch-Based Image Hallucination for Super Resolution with Detail
It's the first I've heard of that term being used, but it's still appropriate. Some general differences between that and DLSS:
"(d) Our algorithm, on the other hand, uses an improved patch-based optimization that leverages sample images from a large image database which enables it to synthesize plausible novel detail."

"To do this, the input image is first upsampled naïvely to the target resolution and then segmented into texturally-similar segments. For a given pixel p, they use its surrounding texture information to search for 10 similarly textured segments in a universal image database using a filter bank, and then choose the closest patch to p's from these segments."
^ I'm 99% positive DLSS does _not_ do this.
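For comparison, that patch-based approach boils down to something like this toy grayscale sketch. The database, the naive upsample and the blending rule here are my simplifications, not the paper's actual optimization:

```python
import numpy as np

def patch_hallucinate(low_res, patch_db, scale=2, patch=8):
    """Toy patch-based hallucination: naively upsample, then for each patch
    borrow high-frequency detail from the most similar patch in an external
    example database (the stand-in for a 'universal image database')."""
    up = np.kron(low_res, np.ones((scale, scale), low_res.dtype))  # naive upsample
    out = up.copy()
    H, W = up.shape
    for y in range(0, H - patch + 1, patch):
        for x in range(0, W - patch + 1, patch):
            query = up[y:y + patch, x:x + patch]
            # Nearest neighbour in the example database (plain L2 distance)
            dist = ((patch_db - query) ** 2).sum(axis=(1, 2))
            best = patch_db[dist.argmin()]
            # Keep the local brightness, paste in the example's detail
            out[y:y + patch, x:x + patch] = query.mean() + (best - best.mean())
    return out

# patch_db: (N, 8, 8) high-res example patches gathered from other images
patch_db = np.random.rand(1000, 8, 8).astype(np.float32)
low = np.random.rand(64, 64).astype(np.float32)
high = patch_hallucinate(low, patch_db)   # (128, 128)
```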

DLSS aims to perform antialiasing and upscaling/inference based on the source image (as I read it).
The hallucination is putting in what it thinks could be there based on a different training set, so it's quite a bit more creative in terms of its freedom.

On this, my analogy would be that DLSS is closer to a standard language translator (it waits for the sentence to complete before translating it), whereas hallucination might be closer to a real-time language translator (translating while the sentence is still being fed in as input), taking some big leaps without the data to fill in the gaps.

edit: but this is cool shit
 
I just made a couple of quick zoomed screenshot crops of the same rendered point of view,
one at native 4K and one at 1440p upscaled by DLSS to 4K (using the latest patch).
I probably don't need to tell you which is which :)
Hint:
on the left, in the middle of the pylon, you see two straight lines close together going up;
on the right you see one mangled-up line
 

Attachments

  • 4kvsDLSS4k.png
Video at 4K with DLSS enabled. Pretty impressive for the fps gained.
 
DLUS would be more honest for an undersampling filter. Does it upscale, or does it try to anti-alias an image?

PS. Upscaling it is; yeah, calling that supersampling is taking the piss.

PPS. If the filter isn't spatio-temporal, it's going to be severely restricted in introducing new high-frequency detail.
 
DLUS would be more honest for an undersampling filter. Does it upscale, or does it try to anti-alias an image?

PS. Upscaling it is; yeah, calling that supersampling is taking the piss.
For training, the first pass is antialiasing the image; the second pass is bringing the image up to 4K.
The model is trained to do both when you do 4K DLSS.
 
I just made a couple of quick zoomed screenshot crops of the same rendered point of view,
one at native 4K and one at 1440p upscaled by DLSS to 4K (using the latest patch).
I probably don't need to tell you which is which :)
Hint:
on the left, in the middle of the pylon, you see two straight lines close together going up;
on the right you see one mangled-up line
Yeah, that's not bad. I would like to say it could be better, but 16 ms is a tight window. What does 1440p look like, just for curiosity's sake?
 
For training, the first pass is antialiasing the image; the second pass is bringing the image up to 4K.
The model is trained to do both when you do 4K DLSS.

Sorry, not buying it; the user gets an upscaling filter ... calling it supersampling is just a plain old lie. Even calling it AA would be pushing it ... SS is just fucking disgusting.
 
Sorry, not buying it; the user gets an upscaling filter ... calling it supersampling is just a plain old lie. Even calling it AA would be pushing it ... SS is just fucking disgusting.
ok.
There's a big difference between saying DLSS has poor performance and saying what the algorithm is doing is a straight-up lie.
A: The DLSS team first extracts many aliased frames from the target game, and then for each one we generate a matching “perfect frame” using either super-sampling or accumulation rendering. These paired frames are fed to NVIDIA’s supercomputer. The supercomputer trains the DLSS model to recognize aliased inputs and generate high quality anti-aliased images that match the “perfect frame” as closely as possible. We then repeat the process, but this time we train the model to generate additional pixels rather than applying AA. This has the effect of increasing the resolution of the input. Combining both techniques enables the GPU to render the full monitor resolution at higher frame rates.
https://www.nvidia.com/en-us/geforce/news/nvidia-dlss-your-questions-answered/
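Stripped of the supercomputer, that training step is just supervised regression on (aliased, "perfect") frame pairs, roughly along these lines. This is only a sketch with a stand-in network and fake data, since the actual model, loss and data pipeline aren't public:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Stand-in image-to-image network; the real DLSS model is Nvidia's and unknown.
model = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 3, 3, padding=1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

# Pairs of (aliased frame, matching "perfect frame" rendered with
# supersampling / accumulation). Random tensors stand in for real captures.
pairs = [(torch.rand(1, 3, 256, 256), torch.rand(1, 3, 256, 256)) for _ in range(4)]

for aliased, perfect in pairs:
    pred = model(aliased)
    loss = F.mse_loss(pred, perfect)   # match the "perfect frame" as closely as possible
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The second, resolution-increasing pass the FAQ describes would be trained the same way, just with the target at a higher resolution than the input.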
 