Nvidia DLSS 1 and 2 antialiasing discussion *spawn*

https://www.computerbase.de/2022-01/god-of-war-benchmark-test/3/
DLSS vs FSR vs native in GoW
DLSS generally gives better IQ than even native (w/ TAA) but has some ghosting issues on camera pans

Despite the small weaknesses of DLSS in image stability, DLSS delivers a very good result in the analysis of God of War so far. Up to this point it has to be said that DLSS on Quality in this game simply looks better than the native resolution, despite rendering significantly fewer pixels. For inexplicable reasons, however, an old DLSS bogeyman returns in God of War, and quite severely at that. We are talking about smearing, which hardly exists any more with the latest DLSS versions.

In God of War the smearing is suddenly clearly pronounced again; you see it, sometimes more and sometimes less, in many sequences. Thin objects, primarily branches, are the problem, and there are a lot of them in God of War. Hair is also problematic, and with certain camera motion vectors it can smear as badly as DLSS did in its worst smearing days.
 
Complete analysis by CB in short:
Pro: better sharpness than native TAA, without oversharpening, already in DLSS Performance, and it gets even better in DLSS Quality; better reconstruction of very fine lines (a small advantage in this game, they say)

Con: stability worse than native TAA; some edges flicker already on DLSS Quality and it gets worse on DLSS Performance; smearing on thin objects like twigs, of which there are a lot in GoW, and on hair, which is sometimes a real issue.

Their - from what I can tell - subjective conclusion: DLSS Q looks better than native.
 
Downloaded the new driver and gave the new SSRTGI and DLDSR options a try. Rig is a 5950X at ~5GHz on water, 64GB of CL14 RAM at 3800MHz with a 1900MHz fabric clock, and an EVGA 3080Ti FTW3 at +95MHz core / +1GHz mem with the power limit set to 117%, displaying on my old Dell U2711 at its native 2560x1440 @ 60Hz. The games were all on the older side: Fallout 3 NV (with a stack of mods installed), Prey (no mods, all the DLC) and The Outer Worlds (also no mods and all the DLC). The games were using whatever max video settings are available, and I disabled TAA/FXAA/MSAA wherever the option existed, since (DL)DSR will obviously be taking care of that.

First: Fallout 3 NV. The visible differences between DSR 3x/4x and DLDSR 2.25x are pretty much nil in my opinion. DSR 2x and DLDSR 1.78x also seemed pretty much the same, and both are obviously lesser than the others. I think I'll stick with DLDSR 2.25x for now, although FoNV isn't graphically demanding enough to show a framerate difference between DLDSR and DSR. Speaking of graphically demanding, SSRTGI at anything higher than Medium is a framerate killer regardless of resolution or (DL)DSR factor. Sadly, like any other screen-space lighting effect, it has quirky issues where moving the camera even slightly can drastically change the lighting of a room. I also noticed FoNV needed the z-buffer switch reversed from the default, or else the lighting effects didn't work and I got strange shadow artifacts where the 2D character labels were "shaded into" my view. It was bizarre.
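
One note for anyone confused by the factor names: DSR/DLDSR factors multiply the total pixel count, not each axis. A rough sketch of what that works out to on my 2560x1440 panel (the driver's own rounding may differ by a pixel or two):

```python
# DSR/DLDSR factors scale pixel COUNT, so each axis scales by sqrt(factor).
# Resolutions below assume a 2560x1440 native panel; the driver may round slightly differently.
native_w, native_h = 2560, 1440

for name, factor in [("DLDSR 1.78x", 1.78), ("DLDSR 2.25x", 2.25),
                     ("DSR 2x", 2.00), ("DSR 3x", 3.00), ("DSR 4x", 4.00)]:
    axis = factor ** 0.5
    w, h = round(native_w * axis), round(native_h * axis)
    print(f"{name}: {w}x{h} ({w * h / 1e6:.1f} MP)")

# Roughly: 1.78x -> ~3413x1920, 2.25x -> 3840x2160 (4K), 2x -> ~3620x2036,
#          3x -> ~4434x2494, 4x -> 5120x2880 (5K)
```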

Second: Prey. The first time I enabled DLDSR, the screen showed only the upper-left corner of the overall display buffer -- as if the downscaling never took place, so my monitor could only see roughly a quarter of the whole view. Hitting ctrl+enter to force windowed mode centered the display but didn't fix the missing downscaling. I exited the game and re-entered, and all was well with the world. In terms of graphical fidelity, I think I prefer the older DSR 4x to the new DLDSR 2.25x. In the pathological cases (e.g. specular highlights on bright gold floor grates viewed from a distance, causing serious aliasing) the standard DSR did a "better" job, which is to say it still breaks down pretty badly but it's the best of the worst. Separately, SSRTGI is simply unavailable in the NVIDIA settings for this game; wtf?

Third: The Outer Worlds. Much the same as Prey: DLDSR seemed fine, however DSR 4x seemed better in edge (ha) cases. And again, SSRTGI is not available to choose. Just to triple-check, I went back to FoNV and SSRTGI is still available and still working -- something in the driver's game-detection logic is apparently preventing The Outer Worlds and Prey from picking up the SSRTGI functionality.

Which seems strange, because they were just showing off pictures of SSRTGI in Prey with DLDSR in this very thread. What gives?

Anyway, my $0.02 USD.

Gonna go try Skyrim, Fallout 4, and maybe one or two of the Borderlands next. Maybe I'll even get nostalgic and pull some HL:2 out :)
 
I did test Fallout 4 with both DLDSR and SSRTGI. Starting with DLDSR: I still think it's a good general solution, and the combination of IQ and performance of DLDSR 2.25x is good in most cases. However, in the pathological worst cases, the older-school DSR 3x and 4x modes work better on high-contrast specular highlights around alpha textures. Maybe a future release (the way DLSS has enjoyed multiple iterations) can work more on these corner cases. Given the fillrate available to my 3080Ti and my 60Hz monitor, the lack of any practical performance difference between DLDSR 2.25x and DSR 4x makes the older DSR more worth it to me.

Now for SSRTGI: Fallout 4 already has an HBAO+ (high quality) / SSAO (low quality) option built into the engine, so I tested HBAO off + SSRTGI on as well as HBAO on + SSRTGI on. To my eyes, the game looks best with both enabled. SSRTGI alone seemed to miss some things that HBAO picked up, and HBAO alone doesn't provide (for example) nice directional shadowing, such as point lights in front of debris casting shadows. Also, same as in my other SSRTGI attempts, this GI shader is crazy performance-intensive. Even at native resolution (2560x1440) with AA off, HBAO off, and DSR/DLDSR off, enabling SSRTGI beyond the Medium setting results in a slideshow. The better question, though: is anything more than Medium really necessary? Honestly there isn't much difference between Medium and High, and even more honestly I can't find any difference from High to Ultra. So perhaps Medium is "good enough", but even then the framerate still dips below 60Hz on occasion, which is still pretty crazy.

Some day I'll dig out Skyrim and HL, but I've been having some fun playing (gasp!) Cyberpunk 2077! I did go test DLDSR and it does work, however even at the 1.78x scale it's enough of a perf hit that the 3080Ti can't keep up with everything at the max slider setting :)
 
Just installed the latest Nvidia driver to check out DLDSR. Why is there no 4x option? I just see 1.78x and 2.25x. Native monitor resolution is 1440p.

Edit: Ok tested with Far Cry 4 and it's pretty amazing.

DSR 4x (5K): 99fps
DSR 2.25x (4K), 0% smoothing: 140fps (capped), lots of jaggies and shimmering
DSR 2.25x (4K), 20% smoothing: 140fps (capped), better, but jaggies and shimmering are still very visible and the smoothing adds blur
DLDSR 2.25x (4K): 138fps, can't tell the difference from DSR 4x, crisp as hell.

Same IQ as 4x supersampling at ~40% higher performance. Not bad, AI dude.
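
Just to show the arithmetic behind that claim, using only the fps numbers above and the pixel counts being rendered before downscaling to 1440p:

```python
# Sanity check on the "same IQ, ~40% higher performance" comparison above.
dsr4x_fps, dldsr_fps = 99, 138
print(f"speedup: {dldsr_fps / dsr4x_fps - 1:.0%}")     # ~39%

# Pixels rendered before downscaling to the 2560x1440 display:
dsr4x_pixels = 5120 * 2880    # DSR 4x ("5K")
dldsr_pixels = 3840 * 2160    # DLDSR 2.25x ("4K")
print(f"DLDSR renders {dldsr_pixels / dsr4x_pixels:.0%} of the pixels DSR 4x does")   # ~56%
```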
 
Can I use DLSS and DLDSR at the same time, and how does the result compare to using just one of them vs. native?

Like... does DLSS Ultra Performance mode with DLDSR result in better IQ, maybe comparable to DLSS Quality mode?
 
Probably not. DLDSR maxes out at 2.25x, while DLSS Ultra Performance renders at 1/9th of the output resolution. Combine them and you'll end up rendering at 1/4 of native resolution, which is the same as DLSS Performance alone. DLSS Quality will likely still win, as it has more pixels to work with.

DLDSR improves downscaling quality; it doesn't help with upscaling.
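
To put rough numbers on that, assuming a 2560x1440 display and the usual DLSS scale factors:

```python
# Back-of-the-envelope: DLDSR 2.25x combined with DLSS Ultra Performance at 1440p.
native = 2560 * 1440               # ~3.7 MP display
dldsr_output = native * 2.25       # DLSS targets the DLDSR resolution (3840x2160)
internal = dldsr_output / 9        # Ultra Performance renders 1/9th of the output pixels
print(internal / native)           # 0.25 -> same 1/4-of-native input as plain DLSS Performance
```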
 
You can use DLSS to upscale from native to the DSR-oversampled resolution, and then DSR will downscale that back to native.
This way you're getting something akin to DLAA.
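
Rough pixel math for that combo, assuming a 1440p display, DLDSR 2.25x and the usual ~2/3-per-axis DLSS Quality factor:

```python
# DLSS Quality targeting the DLDSR 2.25x resolution ends up rendering at native res.
native_w, native_h = 2560, 1440
dldsr_w, dldsr_h = int(native_w * 1.5), int(native_h * 1.5)   # 2.25x pixels = 1.5x per axis -> 3840x2160
dlss_quality_axis = 2 / 3                                     # DLSS Quality renders ~2/3 per axis
render_w, render_h = int(dldsr_w * dlss_quality_axis), int(dldsr_h * dlss_quality_axis)
print(render_w, render_h)   # 2560 1440: native internal render, reconstructed to 4K,
                            # then DLDSR downscales it back to the display -- DLAA-ish
```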
 
Sure but if you're upscaling from too low a resolution the end result can be worse than a higher quality DLSS setting.
 
Just to answer the question: Yes, you can enable DLDSR and DLSS at the same time -- I tried it with Cyberpunk 2077 and it worked fine. I mean, other than choking out the framerate.

 

For me it doesn't choke the frame rate (I'm sure it's lower than usual since it felt a bit laggier, but it still looks smooth, so presumably still above 60fps), but it does result in an incorrect aspect ratio and rain flickering.
 
The framerate change obviously depends on the settings you're using ;)

I have every setting at the highest, even DLSS is set to max quality, and the 3080Ti chews through it just fine at 2560x1440. However, it doesn't chew through it quite the same way with DLDSR at 1.78x... the frame rate dropped from being capped at my screen's 60Hz maximum to something closer to the mid-40s.
 
I'm no luddite, but all these scaling options are getting too complicated. There's a clear ranking between them, but they are *all* better than vanilla monitor/bilinear scaling. So I just want to set my quality options (draw distance, RT, etc.) and a target framerate (60/120 or whatever), have the game probe my native display resolution, and have it determine the best way to upscale or downscale to it from the highest render resolution it can manage while meeting that framerate. Basically DRS, but with downscaling as well (provided there's enough frametime budget), and with an arsenal of algorithms to pick from.
 
I understand why your ask seems to make sense, however even without all the extra driver-applied features, framerate in any game is very fluid. The complexity is this: you need to "guess" the draw time of the frame currently being rasterized. If you expect to have lots of free time, you can use a stronger AA / downscale method; but if the draw time is expected to run long, a weaker AA / upscale method needs to be employed. However, the downscaling/upscaling side of that decision presumes a change in render resolution, which may already be underway as far as the persistent buffers used by other screen-space effects are concerned.

This can result in pathological cases where quick movements end up looking like bad video motion-compression artifacting. Imagine a scenario where, if you stand still and look at a thing in-game, it's gorgeous and clear. But then you rapidly turn around, or an event occurs in front of you, and the draw time spikes as asset loads and/or shader complexity briefly but significantly increase; the screen has to rapidly shift to a lower resolution + upscale, and can only "work back" to a higher resolution + downscale after the rendering engine has caught up to a steady state again.

These are very complex things, and without a lot of hand-tuning there are a lot of potentially nasty pitfalls.
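
To make that concrete, here's a completely hypothetical toy version of just the decision part (made-up thresholds and method names, and it conveniently ignores the persistent-buffer problem entirely):

```python
# Hypothetical sketch: before each frame, guess the GPU time from recent history
# and pick a render scale + scaling method. Not how any shipping DRS actually works.
from collections import deque

class ScalePolicy:
    def __init__(self, target_ms=16.7, history=30):
        self.samples = deque(maxlen=history)
        self.target_ms = target_ms

    def record(self, gpu_ms):
        # call after each frame with the measured GPU time
        self.samples.append(gpu_ms)

    def next_frame(self):
        if not self.samples:
            return 1.00, "native + TAA"              # no data yet
        # crude predictor: blend the recent average with the recent worst case
        avg = sum(self.samples) / len(self.samples)
        predicted = 0.5 * avg + 0.5 * max(self.samples)
        headroom = self.target_ms / predicted
        if headroom > 1.5:
            return 1.50, "downscale (DSR/DLDSR-like)"   # spare budget: supersample
        if headroom > 1.0:
            return 1.00, "native + TAA"                 # roughly on budget
        return 0.67, "upscale (DLSS/FSR-like)"          # over budget: drop internal res
```

Even this toy version reacts a frame late by definition, and the real engine can't freely flip resolution between frames once history buffers are involved, which is exactly where the ugliness comes from.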
 
I wonder if there's enough information in the scene + present state + recent user inputs to predict how heavy the next frame is going to be. Of course the prediction algorithm's latency would have to be < 16ms. But precisely how early do you need the answer? I'm not thinking about the latency of the upscaling algorithm itself; that's a known quantity. I'm asking how early you need to know the resolution of the render target, relative to the start of the 16ms window. The prediction has to arrive before that deadline.

Present-day DRS systems would be subject to that deadline too, but they don't have any other constraints because, as you said, the upscaling itself is near-zero cost.

If the dynamic approach is theoretically impossible, I think a static option may make sense, i.e., based on prior statistical gameplay analysis, conservatively set the render resolution + upscaling algorithm to hit the fps target (maybe at the ~99th percentile). Somewhat like GFE, I suppose.
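
A toy version of what I mean by the static option, assuming (very roughly) that GPU frame time scales linearly with pixel count, which is only approximately true:

```python
# Hypothetical: derive one fixed render scale from a frame-time capture taken at native res,
# so that ~99% of the captured frames would have fit the target budget.
def pick_static_scale(frame_times_ms, target_ms=16.7, percentile=0.99):
    samples = sorted(frame_times_ms)
    worst = samples[int(percentile * (len(samples) - 1))]
    pixel_budget = min(1.0, target_ms / worst)   # fraction of native pixels we can afford
    return pixel_budget ** 0.5                   # per-axis render scale (pixels ~ scale^2)

# e.g. a capture whose 99th-percentile frame time is 25 ms against a 16.7 ms budget
# suggests rendering at about 82% of native width/height and upscaling the rest.
```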
 