Nvidia DLSS 1 and 2 antialiasing discussion *spawn*

I haven't used DSR in quite a while, but from what I remember when it first launched, the impression was that anything other than a factor of 4 (e.g. 4K -> 1080p) downscaling was less than ideal in terms of the end results, especially without a lot of manual adjustment of the smoothing setting. This of course meant there was an inherent usability issue, in that the performance requirement jump to reach 4x would be quite big. From the preview screenshot it seems like this is focused on the in-between steps, with 1.78x and 2.25x scaling done with DL.
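For reference on those factors (assuming the multiplier refers to total pixel count, so the per-axis scale is its square root), a quick sketch of what each one renders at on a 1080p display:

```python
# Map DSR/DLDSR factors to render resolutions for an assumed 1920x1080 display.
# Assumption: the factor multiplies the total pixel count, so each axis scales
# by its square root (2.25x -> 1.5x per axis, 4x -> 2x per axis).
DISPLAY = (1920, 1080)
PER_AXIS_SCALE = {"1.78x (DL)": 4 / 3, "2.25x (DL)": 3 / 2, "4.00x (DSR)": 2}

for label, s in PER_AXIS_SCALE.items():
    w, h = round(DISPLAY[0] * s), round(DISPLAY[1] * s)
    print(f"{label}: render {w}x{h}, downscale to {DISPLAY[0]}x{DISPLAY[1]}")
```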

What would be interesting is to compare DLDSR on a lower resolution display against DLSS on a higher resolution display. For instance, 1080p DLDSR vs. 1440p DLSS/FSR/XeSS, or 1440p DLDSR vs. 4K DLSS/FSR/XeSS.
 
What would be interesting is to compare DLDSR on a lower resolution display against DLSS on a higher resolution display. For instance, 1080p DLDSR vs. 1440p DLSS/FSR/XeSS, or 1440p DLDSR vs. 4K DLSS/FSR/XeSS.

DLSS will be much better. It works with more input information and has more output pixels. Even downsampling with DSR 4x and DLSS@Performance will be a better option.
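To put rough numbers on the "DSR 4x + DLSS@Performance" route (assuming a 1080p display, and assuming DLSS Performance renders at 50% of the target resolution per axis):

```python
# Rough pixel-budget comparison on an assumed 1080p display.
# Assumption: DLSS Performance mode renders at 50% of the target per axis.
display = (1920, 1080)
dsr4x_target = (display[0] * 2, display[1] * 2)                      # 3840x2160 intermediate
dlss_perf_internal = (dsr4x_target[0] // 2, dsr4x_target[1] // 2)    # 1920x1080 shaded
dldsr_225_internal = (int(display[0] * 1.5), int(display[1] * 1.5))  # 2880x1620 shaded

mp = lambda r: r[0] * r[1] / 1e6
print(f"DSR 4x + DLSS Performance: shade ~{mp(dlss_perf_internal):.1f} MP, "
      f"reconstruct a {mp(dsr4x_target):.1f} MP image, then downscale to {mp(display):.1f} MP")
print(f"DLDSR 2.25x: shade {mp(dldsr_225_internal):.1f} MP, then downscale to {mp(display):.1f} MP")
```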
 
Very curious about this Deep Learning based downsampling - interesting.
I wonder what they are doing exactly and if it will be any different from doing downsampling + DLAA on top of the downsampled image.

DLSS will be much better. It works with more input information and has more output pixels. Even downsampling with DSR 4x and DLSS@Performance will be a better option.
Yeah, but DSR doesn't require game-side integration, so it can't be DLSS-up-to-a-res-and-then-downsample.
DLDSR has to work with all games.
 
DLSS will be much better. It works with more input information and has more output pixels. Even downsampling with DSR 4x and DLSS@Performance will be a better option.

I suspect so, but I'd still be interested in how these things compare.

I know GPU coverage tends to slant towards head-to-head comparisons between the IHVs. However, what I'm more interested in with all these new scaling technologies is how they affect the single native-resolution lock that we've basically been accustomed to since the transition to LCDs.

I wonder what they are doing exactly and if it will be any different from doing downsampling + DLAA on top of the downsampled image.

From the given information it doesn't require specific game support (my understanding is that it's a driver-side setting that works with any game DSR works with), which would suggest something completely separate from DLSS/DLAA, as no motion vectors would be used.

The downsampling used by DSR, from what I know, is very basic, and by extension likely not very efficient in terms of working with what it has. This is why, result-wise, it was ideal to essentially brute force 4 pixels into 1.
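As an illustration of what "brute forcing 4 pixels into 1" amounts to, here's a plain 2x2 box average (not necessarily NVIDIA's exact filter):

```python
import numpy as np

def box_downsample_2x(img: np.ndarray) -> np.ndarray:
    """Average each 2x2 block into one pixel (the '4 pixels into 1' case).

    img: float array of shape (H, W, C) with H and W even.
    """
    h, w, c = img.shape
    return img.reshape(h // 2, 2, w // 2, 2, c).mean(axis=(1, 3))

# e.g. a 4K frame averaged down to 1080p:
frame_4k = np.random.rand(2160, 3840, 3).astype(np.float32)
frame_1080p = box_downsample_2x(frame_4k)   # shape (1080, 1920, 3)
```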
 
I haven't used DSR in quite a while, but from what I remember when it first launched, the impression was that anything other than a factor of 4 (e.g. 4K -> 1080p) downscaling was less than ideal in terms of the end results, especially without a lot of manual adjustment of the smoothing setting. This of course meant there was an inherent usability issue, in that the performance requirement jump to reach 4x would be quite big. From the preview screenshot it seems like this is focused on the in-between steps, with 1.78x and 2.25x scaling done with DL.

What would be interesting is to compare DLDSR on a lower resolution display against DLSS on a higher resolution display. For instance, 1080p DLDSR vs. 1440p DLSS/FSR/XeSS, or 1440p DLDSR vs. 4K DLSS/FSR/XeSS.

I use DSR all the time and yeah anything aside from 4x looks bad.

Is the downsampling in DLDSR AI based or is it just using DLSS to upscale to 4K and using the same downsampling technique that DSR currently uses? That would be cool but not as impressive.
 
Is the downsampling in DLDSR AI based or is it just using DLSS to upscale to 4K and using the same downsampling technique that DSR currently uses? That would be cool but not as impressive.
The second option would be game specific since DLSS must be integrated into the engine. It is far more likely an AI based downsample - which could be cool but then I never really understood why DSR's downsample was so bad to begin with.
 
The second option would be game specific since DLSS must be integrated into the engine. It is far more likely an AI based downsample - which could be cool but then I never really understood why DSR's downsample was so bad to begin with.

That makes sense. So is the benefit then that you can render at 1620p and AI downsample to 1080p with similar quality to 4K rendering downsampled to 1080p using the old method? In that case there’s really no DLSS upsampling involved.
 
I assume the non-AI version will still be available for us non-RTX peasants? I used to use it all the time on my old 1080p monitor and thought it definitely improved the image at lower scaling factors than 4x (which was too much for my 1070 in most cases).
 
That makes sense. So is the benefit then that you can render at 1620p and AI downsample to 1080p with similar quality to 4K rendering downsampled to 1080p using the old method? In that case there’s really no DLSS upsampling involved.

My understanding of DSR is that the image is effectively blurred with a 13-tap Gaussian filter before being scaled down to provide the AA.
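A minimal sketch of that idea, assuming a separable 13-tap Gaussian followed by simple point-sampled decimation (the sigma and the resample step here are my guesses, not NVIDIA's published filter):

```python
import numpy as np

def gaussian_kernel_1d(taps: int = 13, sigma: float = 2.0) -> np.ndarray:
    """Normalized 1D Gaussian kernel (13 taps by default)."""
    x = np.arange(taps) - (taps - 1) / 2
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def blur_then_downscale(img: np.ndarray, factor: float = 1.5) -> np.ndarray:
    """Separable Gaussian blur, then point-sample down by 'factor' per axis."""
    k = gaussian_kernel_1d()
    # blur rows, then columns (separable filter), per channel
    blurred = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    blurred = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, blurred)
    h, w = img.shape[:2]
    ys = (np.arange(int(h / factor)) * factor).astype(int)
    xs = (np.arange(int(w / factor)) * factor).astype(int)
    return blurred[np.ix_(ys, xs)]

# e.g. a 1620p render taken down to ~1080p after the blur:
out = blur_then_downscale(np.random.rand(1620, 2880, 3).astype(np.float32), factor=1.5)
```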

Is this technique any use in normal (non-VR) rendering?
https://uploadvr.com/quest-2-application-spacewarp/

The problem inherent in frame interpolation techniques is the input lag issue.

I've seen discussion about the possibility of various methods of doing this being applied in the "regular" gaming space in order to support very high refresh displays. 240 Hz (much less anything higher) for AAA single-player gaming is likely not going to be possible without something like this, due to CPU/memory limitations regardless of the GPU. By extension, the practical implications of the increased latency would also be more manageable (e.g. doubling 120 fps vs. 60 fps).
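Rough frame-time arithmetic behind that last point (purely illustrative; the actual latency cost depends on the interpolation scheme):

```python
# If interpolation needs the *next* real frame before it can present the in-between
# frame, the added wait is roughly one real frame time (an assumption for illustration).
for base_fps in (60, 120):
    real_frame_ms = 1000 / base_fps
    print(f"{base_fps} fps -> {base_fps * 2} fps interpolated: "
          f"~{real_frame_ms:.1f} ms of extra buffering per real frame")
```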
 
Hang on, are they actually claiming that DLDSR can provide a supersampled image at pretty much native performance - on virtually any game??? That seems to be what they're claiming in this screenshot, but that would be a ridiculous game changer. Comparable to DLSS but applicable to any game...

[Image: nvidia-dldsr-ai-deep-learning-dynamic-super-resolution-performance-image-quality-comparison.jpg]
 
Hang on, are they actually claiming that DLDSR can provide a supersampled image at pretty much native performance - on virtually any game??? That seems to be what they're claiming in this screenshot, but that would be a ridiculous game changer. Comparable to DLSS but applicable to any game...

The performance numbers seem off, but it's nothing to do with DLDSR specifically. Just compare the DSR 4X (4K) drop vs. native 1080p. 143 fps (DLDSR 2.25x, 1620p) vs. 105 fps (DSR 4X, 4K) is actually plausible. It might just be that the 1080p number is too low, or it's very CPU/memory limited. These are TPU's numbers for the RTX 2060 and Prey - https://www.techpowerup.com/review/nvidia-geforce-rtx-2060-founders-edition/22.html

It's also interesting that the DLDSR result seems much better than the DSR 4X result.
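As a rough sanity check on those two figures (assuming, generously, that performance scales no worse than linearly with shaded pixels):

```python
# Compare the quoted fps ratio to the shaded-pixel ratio.
px_1620p = 2880 * 1620           # DLDSR 2.25x intermediate
px_4k    = 3840 * 2160           # DSR 4x intermediate
fps_dldsr, fps_dsr4x = 143, 105  # figures quoted from the screenshot

print(f"pixel ratio 4K / 1620p: {px_4k / px_1620p:.2f}x")
print(f"fps ratio DLDSR / DSR 4x: {fps_dldsr / fps_dsr4x:.2f}x")
# The fps gap (~1.36x) being smaller than the pixel gap (~1.78x) is what you'd
# expect when part of the frame cost doesn't scale with resolution.
```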
 
Is DLAA actually implemented anywhere?

This DLDSR might be cool, but I’m finding it very easy to be GPU-limited on a 3080 for the newest games. Great for old games though.
 
Not entirely sure what to think here. The SSRTGI business makes the comparo more difficult than it needs to be, and for a proper DLDSR vs Native Rez comparison it's (IMO) bad form.

At the end of the day, I'd actually prefer a more DLSS-like ability to do naive upscaling on "any" app -- specifically, I want MSFS at far higher settings while I'm using my Oculus VR headset and maintaining high FPS.
 
Not entirely sure what to think here. The SSRTGI business makes the comparo more difficult than it needs to be, and for a proper DLDSR vs Native Rez comparison it's (IMO) bad form.

At the end of the day, I'd actually prefer a more DLSS-like ability to do naive upscaling on "any" app -- specifically, I want MSFS at far higher settings while I'm using my Oculus VR headset and maintaining high FPS.

That shot wasn't supposed to be a DLDSR vs native only comparison. On NV's website it's billed as an example of how you can use the features in combination to "re-master" older games.
 