Nvidia DLSS 1 and 2 antialiasing discussion *spawn*


Oh wow, just as we are discussing it :p Obviously, that was bound to happen: dynamic resolution scaling, so reconstruction only kicks in when needed.

Yeah 1.0 only supported fixed resolutions but 2.0 should work with any resolution.

DLSS 1.0 wasn't all that exciting, but they made huge improvements in all aspects with 2.0, and they seem to update it for the better with every revision. That DSR/DLSS combo is a gamechanger imo.
 
Oh wow, just as we are discussing it :p Obviously, that was bound to happen: dynamic resolution scaling, so reconstruction only kicks in when needed.
I don't think it necessarily means DLSS will turn off when not needed, just that it supports the internal resolution dynamically changing.
 
Oh wow, just as we are discussing it :p Obviously, that was bound to happen: dynamic resolution scaling, so reconstruction only kicks in when needed.

It's a feature of DLSS 2.1, so it's been available for a few weeks now. Seems no one has implemented it yet though.

I don't think it necessarily means DLSS will turn off when not needed, just that it supports the internal resolution dynamically changing.

Yeah, they haven't explained in detail how it works, but I'd assume DLSS only kicks in when the engine needs to drop below native resolution to hit the target FPS. In theory it's a pretty slick combo, but consider that DLSS Quality (66% of native res on each axis) can sometimes produce superior results to native. Then what happens when you get to 75%, 80%, 90%? Surely DLSS is going to start looking better than native at that point, in which case you're going to lose image quality when you actually hit native resolution and it turns off! Unless they keep the SS element of DLSS even then and just turn off the up-res aspect. Damn, that could look seriously good!

Come to think of it, I wonder why that isn't an option already?
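
Just to make the idea concrete, here's a minimal sketch of how an engine-side controller like that could behave. The sqrt-based scaling, the clamp range and the idea of keeping the AA/SS pass running at native instead of switching the upscaler off entirely are all my own assumptions, nothing Nvidia has documented:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>

// Hypothetical controller: picks the internal render scale from measured GPU
// frame time, and decides whether the upscaler runs in "up-res" mode or only
// as a native-resolution AA/SS pass (speculation, not documented behaviour).
struct FrameDecision {
    float renderScale;   // fraction of native resolution per axis
    bool  upscaleActive; // reconstruct to native from a lower internal res
    bool  aaOnlyActive;  // keep the temporal SS pass even at 100% scale
};

FrameDecision decideFrame(float gpuFrameMs, float targetFrameMs) {
    FrameDecision d{};
    // Rough headroom estimate: >1 means we are missing the frame-rate target.
    float load = gpuFrameMs / targetFrameMs;
    // Scale the axis resolution by ~sqrt of the deficit (pixel count ~ scale^2),
    // clamped to the usual DLSS quality range (50%..100% per axis).
    d.renderScale   = std::clamp(1.0f / std::sqrt(std::max(load, 1.0f)), 0.5f, 1.0f);
    d.upscaleActive = d.renderScale < 1.0f;
    d.aaOnlyActive  = !d.upscaleActive; // speculative "AA-only" mode at native
    return d;
}

int main() {
    for (float ms : {12.0f, 16.6f, 22.0f, 30.0f}) {
        FrameDecision d = decideFrame(ms, 16.6f); // 60 fps target
        std::printf("%5.1f ms -> scale %.2f, upscale=%d, aaOnly=%d\n",
                    ms, d.renderScale, d.upscaleActive, d.aaOnlyActive);
    }
}
```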
 
Yeah, they haven't explained in detail how it works, but I'd assume DLSS only kicks in when the engine needs to drop below native resolution to hit the target FPS. In theory it's a pretty slick combo, but consider that DLSS Quality (66% of native res on each axis) can sometimes produce superior results to native. Then what happens when you get to 75%, 80%, 90%? Surely DLSS is going to start looking better than native at that point, in which case you're going to lose image quality when you actually hit native resolution and it turns off! Unless they keep the SS element of DLSS even then and just turn off the up-res aspect. Damn, that could look seriously good!
Yeah, given how much DLSS can change the resulting image (usually for the better), having it turn off suddenly when you hit native resolution would be a very bad idea.
 
And DSR already works with DLSS!
Why has no one done that kind of image comparison? Setting the internal resolution higher than native and then applying DLSS sounds like a great idea when you have fps to spare.
 
A combination of Dynamic Resolution Scaling, DLSS and VRS will give a huuuuuuuuuuuuge jump in performance and framerate stability. And for regular people who are not aware of any of these techniques, DLSS+VRS+DRS could just be auto-on and target the display's refresh rate as the framerate. Boom, simple as that.

I'm actually a bit surprised that isn't an option in Cyberpunk now, as it does run on the DX12U API and also has an Auto DLSS setting (sadly it's just fixed modes: DLSS Performance for 4K, DLSS Balanced for 1440p, and DLSS Quality for 1080p. Pretty lame).
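
For what it's worth, that fixed Auto mapping is simple enough to write out directly. The per-mode scale factors below are the commonly cited ones (Quality ~67%, Balanced ~58%, Performance 50% per axis) and the function itself is just illustrative:

```cpp
#include <cstdio>

// Illustrative version of a fixed "Auto" mapping like the one described for
// Cyberpunk: pick a DLSS mode purely from the output resolution.
struct DlssMode { const char* name; float scalePerAxis; };

DlssMode autoModeForOutput(int outputHeight) {
    if (outputHeight >= 2160) return {"Performance", 0.50f}; // 4K
    if (outputHeight >= 1440) return {"Balanced",    0.58f}; // 1440p
    return                           {"Quality",     0.67f}; // 1080p and below
}

int main() {
    for (int h : {1080, 1440, 2160}) {
        DlssMode m = autoModeForOutput(h);
        std::printf("%4dp -> DLSS %-11s (internal ~%dp)\n",
                    h, m.name, static_cast<int>(h * m.scalePerAxis));
    }
}
```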
 
But with VRR, what is the display's refresh rate?
 
Obviously the higher the better. But for academic purposes of the discussion...

If a display supports a variable refresh rate range of 40-120 Hz, what should the game target? If you were to design an auto-tune system, do you stick to 60 Hz or 120 Hz and scale everything else around hitting those rates, or, since the display can handle anywhere from 40-120 Hz, do you keep the DLSS+VRS+DRS settings consistent and let the frame rate drop a tiny bit? Would 57 Hz be enough (it's only a 5% variance) for the taxing scenes?
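
To make the trade-off concrete, a toy auto-tune policy could accept any rate inside the VRR window as long as it stays within some tolerance of the preferred target; the 5% tolerance here is just an example value, not a recommendation:

```cpp
#include <cstdio>

// Toy policy: prefer a "clean" target (e.g. 60 or 120 Hz), but accept anything
// inside the VRR window as long as it stays within a chosen tolerance of the
// preferred rate. The 5% tolerance is an arbitrary example.
bool acceptableRate(float achievedHz, float preferredHz,
                    float vrrMinHz, float vrrMaxHz, float tolerance = 0.05f) {
    bool insideWindow = achievedHz >= vrrMinHz && achievedHz <= vrrMaxHz;
    bool closeEnough  = achievedHz >= preferredHz * (1.0f - tolerance);
    return insideWindow && closeEnough;
}

int main() {
    // 40-120 Hz VRR panel, preferring 60 Hz: 57 Hz is exactly a 5% miss.
    std::printf("57 Hz ok? %d\n", acceptableRate(57.0f, 60.0f, 40.0f, 120.0f));
    std::printf("52 Hz ok? %d\n", acceptableRate(52.0f, 60.0f, 40.0f, 120.0f));
}
```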
 
A combination of Dynamic Resolution Scaling, DLSS and VRS will give a huuuuuuuuuuuuge jump in performance and framerate stability.

DRS and DLSS aren't additive though. Not in the current implementation anyway. In the context of DRS, the point of DLSS wouldn't be to increase frame rates, but rather to stabilize image quality.

So let's say the frame rate target is 60 fps. DRS would handle achieving that target while DLSS would mitigate the image quality loss of the lower resolution by maintaining a stable output resolution. Consoles do the same thing now, but using standard upscaling rather than DLSS.

I guess another way of looking at it is to think of it as dynamically adjusting the DLSS quality factor based on a fixed frame rate.
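
Roughly, in code form (the controller gain and clamp values are made up, and real engines filter this over many frames), DRS owns the frame-time target and DLSS just reconstructs whatever internal resolution DRS picked up to a fixed output:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>

// Sketch: DRS adjusts the internal resolution to hold a frame-time target,
// while the output resolution (what DLSS reconstructs to) never changes.
struct DrsState { float scale = 1.0f; }; // per-axis fraction of output res

void stepDrs(DrsState& s, float gpuFrameMs, float targetMs) {
    // Pixel count scales with scale^2, so correct by sqrt of the error ratio.
    float correction = std::sqrt(targetMs / gpuFrameMs);
    // Dampen the adjustment and clamp to a typical DLSS input range.
    s.scale = std::clamp(s.scale * (0.8f + 0.2f * correction), 0.5f, 1.0f);
}

int main() {
    const int outW = 3840, outH = 2160;     // fixed DLSS output resolution
    DrsState drs;
    for (float ms : {14.0f, 19.0f, 25.0f, 18.0f}) {
        stepDrs(drs, ms, 16.6f);            // 60 fps target
        std::printf("frame %5.1f ms -> internal %4dx%4d, output %dx%d\n",
                    ms, int(outW * drs.scale), int(outH * drs.scale), outW, outH);
    }
}
```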
 
I don't have Cyberpunk to test it, but my friend tried this out and says it's an improvement. Basically, use Nvidia Profile Inspector to change the mip bias to -3. It seems Nvidia recommends games do this when implementing DLSS, but Cyberpunk may not be making that change.

https://wccftech.com/heres-how-to-improve-cyberpunk-2077-texture-sharpness-when-using-nvidia-dlss/

Edit: Funny, I followed the reddit link, and it points to @OlegSH posting in this thread lol. The answer was within us all along.

https://forum.beyond3d.com/threads/...g-discussion-spawn.60896/page-62#post-2178980
 
I don't have Cyberpunk to test it, but my friend tried this out and says it's an improvement. Basically, use Nvidia Profile Inspector to change the mip bias to -3. It seems Nvidia recommends games do this when implementing DLSS, but Cyberpunk may not be making that change.

https://wccftech.com/heres-how-to-improve-cyberpunk-2077-texture-sharpness-when-using-nvidia-dlss/

Edit: Funny, I followed the reddit link, and it points to @OlegSH posting in this thread lol. The answer was within us all along.

https://forum.beyond3d.com/threads/...g-discussion-spawn.60896/page-62#post-2178980

I hadn't realised that DLSS does nothing for texture resolution.

The mip bias change seems like a pretty effective one although the optimal setting would likely be different depending on your output res and quality factor. -3 may not be best for all scenarios.

Also it'd be interesting to see how this impacts performance, which may drop a little, and especially VRAM usage, which should definitely go up. Well worth doing if you can afford it though.
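
If I'm reading Nvidia's guidance right, the recommended texture LOD bias follows from the render-to-output resolution ratio, roughly log2(renderWidth / outputWidth), so the "right" value depends on the quality mode rather than being a flat -3. Treat the exact formula as my interpretation rather than gospel:

```cpp
#include <cmath>
#include <cstdio>

// Mip/LOD bias suggested by the render-to-output resolution ratio:
// bias = log2(renderWidth / outputWidth). A lower internal resolution gives a
// more negative bias, keeping texture sampling closer to what native would pick.
float mipBiasForScale(float renderScalePerAxis) {
    return std::log2(renderScalePerAxis);
}

int main() {
    struct { const char* mode; float scale; } modes[] = {
        {"Quality",     0.67f},
        {"Balanced",    0.58f},
        {"Performance", 0.50f},
    };
    for (auto& m : modes)
        std::printf("%-11s (%.0f%% per axis): bias ~ %+.2f\n",
                    m.mode, m.scale * 100.0f, mipBiasForScale(m.scale));
    // Compare with the flat -3 override mentioned above, which is far more
    // aggressive than this ratio alone would suggest.
}
```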
 
Rendering at a lower resolution usually affects the selected MIP levels and object LOD. That's one of the uphill battles DLSS has to fight.

I'm not clear, though, on whether DLSS takes the resolution selected in-game and overrides it, or whether it's integrated deeply enough into the pipeline to run after LOD and MIP levels are selected.
Judging from most image comparisons I've seen, I think DLSS takes effect too late in the rendering pipeline to influence those selections.

It might also be intentionally designed this way, for two reasons. Earlier implementations in particular had a fairly high cost of entry, and with lower LOD you shave more off the overall frametime, not just the pixel-bound part. Also, Nvidia's engineers might have had enough confidence in their AI's ability to "reconstruct" (better: to guesstimate, since there's nothing actually there to re-construct from in the first place) texture and other details. In the meantime, this seems to work pretty well with details such as power or phone lines, but is limited on actual high-frequency content other than on-screen text (letters and numbers).
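
For reference, here's my understanding of the usual integration point, written out as a frame outline. The pass ordering is an assumption based on public descriptions of DLSS 2 (upscale after the main render, before post-processing and UI), which is also why mip/LOD selection has already happened at the lower resolution by the time it runs:

```cpp
#include <cstdio>

// Outline of where the upscaler is commonly described as sitting in a frame.
// Everything here is a placeholder; no real engine or NGX calls are used.
void renderFrame(int renderW, int renderH, int outputW, int outputH) {
    std::printf("1. G-buffer + lighting at %dx%d (mip/LOD picked from these derivatives)\n",
                renderW, renderH);
    std::printf("2. Temporal upscale/reconstruction: %dx%d -> %dx%d\n",
                renderW, renderH, outputW, outputH);
    std::printf("3. Post-processing (bloom, grain, tonemap) at %dx%d\n", outputW, outputH);
    std::printf("4. UI/HUD composited at %dx%d\n", outputW, outputH);
}

int main() {
    renderFrame(2560, 1440, 3840, 2160); // e.g. 4K output with a Quality-style input
}
```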
 
I believe DLSS currently breaks most particle-based effects, like in Death Stranding. Maybe they could render those effects separately as a final layer at full resolution, as an ultra-ultra setting maybe (even though the cost shouldn't be that high).
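
Something like this, with purely hypothetical compositing order and buffer names, is what that separate full-resolution layer could look like:

```cpp
#include <cstdio>

// Hypothetical: render fragile effects (e.g. some particles) into their own
// full-resolution layer and composite them after reconstruction, so they never
// pass through the upscaler at all.
struct Image { int w, h; const char* name; };

Image upscale(const Image& in, int outW, int outH) {
    std::printf("reconstruct %s %dx%d -> %dx%d\n", in.name, in.w, in.h, outW, outH);
    return {outW, outH, "scene_upscaled"};
}

Image composite(const Image& base, const Image& layer) {
    std::printf("composite %s over %s at %dx%d\n", layer.name, base.name, base.w, base.h);
    return base;
}

int main() {
    Image scene     = {2560, 1440, "scene_low_res"};   // main pass at internal res
    Image particles = {3840, 2160, "particles_full"};  // fragile effects at output res
    Image out = composite(upscale(scene, 3840, 2160), particles);
    std::printf("present %dx%d\n", out.w, out.h);
}
```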
 