Nvidia DLSS 1 and 2 antialiasing discussion *spawn*

2x2 then 3x3.

How would you do the other scaling, 2x3 or 3x2? I wouldn't think they'd want to go for fractional upscaling.

But they already do fractional upscaling, right?

For 4K output:

Quality Mode is 1440p -> 2160p
Performance Mode is 1080p -> 2160p
Ultra Performance Mode is 720p -> 2160p

So Quality is 1.5x scaling per dimension.
Maybe 3x is just the next logical whole-number step down in their minds. Or it met a performance target. Or maybe a fractional scaling factor had more image quality problems or something. My guess is that 2.5x per dimension wouldn't have been enough of a speed boost over the 2x of Performance mode to be worthwhile.
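To make the factors concrete, here's a quick Python sketch. The internal resolutions are the standard DLSS 2.x ones for 4K output quoted above; the 2.5x row is purely my hypothetical what-if, not a real mode:

```python
# Per-axis scale factor and rendered-pixel savings for each DLSS mode at 4K.
# The "2.5x (what-if)" row is hypothetical, to show why it may not be worth it.
OUTPUT_W, OUTPUT_H = 3840, 2160

modes = {
    "Quality":           (2560, 1440),
    "Performance":       (1920, 1080),
    "2.5x (what-if)":    (1536,  864),
    "Ultra Performance": (1280,  720),
}

for name, (w, h) in modes.items():
    scale = OUTPUT_W / w                      # per-axis upscale factor
    pixels = (OUTPUT_W * OUTPUT_H) / (w * h)  # how many fewer pixels get rendered
    print(f"{name:18} {w}x{h}: {scale:.1f}x per axis, {pixels:.2f}x fewer pixels")
```

This prints 2.25x fewer pixels for Quality, 4x for Performance, 6.25x for the hypothetical 2.5x mode and 9x for Ultra Performance, which lines up with the guess above: 2.5x is a modest pixel-count jump over Performance, while 3x is a big one.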
 
It is not integrated into the main UE4 branch. You have to request access to a separate branch from Nvidia, and they'll grant it if you're approved.
If you are talking about github.com/NvRTX/UnrealEngine, which has the DLSS (and caustics) branch, then all I had to do to get access was link my Epic account with my GitHub account, if memory serves. Definitely no manual request to Nvidia was required, just a bit of clicking.

https://developer.nvidia.com/unrealengine

Btw, the DLSS "Ultra quality" mode mentioned on GitHub isn't actually active yet.

And fwiw, Ultra Performance mode can be used with any output resolution in older UE4 DLSS 2.0 titles like Deliver Us the Moon by replacing the original DLSS 2.0 nvngx_dlss.dll with a newer DLSS 2.1 file and using the console command r.NGX.DLSS.Quality 3. This did not work with Wolfenstein, btw.
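If it helps, the internal resolution that forces is easy to work out, since Ultra Performance is a fixed 3x factor per axis. A minimal sketch, assuming the runtime just divides by three and doesn't round or clamp differently:

```python
# Hypothetical: Ultra Performance renders at roughly output / 3 on each axis.
def ultra_performance_internal(out_w: int, out_h: int) -> tuple[int, int]:
    return out_w // 3, out_h // 3

for out_w, out_h in [(3840, 2160), (2560, 1440), (1920, 1080)]:
    print((out_w, out_h), "->", ultra_performance_internal(out_w, out_h))
# (3840, 2160) -> (1280, 720)
# (2560, 1440) -> (853, 480)
# (1920, 1080) -> (640, 360)
```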
 
I wonder what drives the currently available ms window for DLSS? i.e. is this the best possible algorithm they can make and it simply takes this long? Or could they produce significantly higher quality given more time to do the upscale, and they limit it to the current window to make it practical on slower GPUs like the 2060?

If it's the latter, then that opens up an interesting possibility. What about a DLSS 3.0 mode that's exclusive to Ampere and mandated to work in async mode? In that case you could perhaps allocate, say, 8ms for the 3070 (which would use about 1.5ms with DLSS 2.0 at 4K), which by virtue of running asynchronously to shaders and RT would be basically free up to 120fps. It would be free at even higher framerates on faster GPUs.

Would 8ms be enough to produce a high quality Ultra Performance mode output? Or go even further? I'm sure it wouldn't be done this way, but just as an example, 8ms would still be well over double the time that you'd need to upscale a 1080p image to 4K, and then upscale that image again to 8K.
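To put rough numbers on that, here's a sketch using only the figures quoted in this post; the assumption that each upscale pass costs about the same ~1.5ms is mine, not a measured figure:

```python
# Back-of-envelope for the async DLSS 3.0 idea above.
BUDGET_MS = 8.0   # hypothetical async budget for a 3070
PASS_MS   = 1.5   # rough DLSS 2.0 cost at 4K on a 3070 (quoted above)

# Running fully async to shading/RT, the pass is "free" as long as the rest
# of the frame takes at least as long as the DLSS pass itself:
print(f"{BUDGET_MS} ms hides behind any frame up to {1000 / BUDGET_MS:.0f} fps")
# -> 8.0 ms hides behind any frame up to 125 fps

# The two-step example: 1080p -> 4K, then 4K -> 8K, assuming each pass costs
# roughly the quoted ~1.5 ms (my assumption):
two_pass_ms = 2 * PASS_MS
print(f"two passes ~{two_pass_ms} ms vs the {BUDGET_MS} ms budget "
      f"({BUDGET_MS / two_pass_ms:.1f}x headroom)")
# -> two passes ~3.0 ms vs the 8.0 ms budget (2.7x headroom)
```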
 
Probably, but with that quality I'd say it's not really doing a suitable job for 8k anyway.
Doesn't the quality depend on having enough 8k samples when you do the training? I would imagine the more generic 8k images used for training the more accurate the up-scaled image will be.
 
Doesn't the quality depend on having enough 8k samples when you do the training? I would imagine the more generic 8k images used for training the more accurate the up-scaled image will be.

Maybe they can improve it over time, but as it stands now the ultra performance output is a LONG way off native resolution with TAA.
 
Nvidia has announced more games getting DLSS in the coming months:
Mount & Blade II: Bannerlord
Mortal Shell
Xuan-Yuan Sword VII
Edge of Eternity
Ready or Not

Enlisted
Pumpkin Jack

More here: https://www.nvidia.com/en-us/geforce/news/october-2020-rtx-blockbuster-and-indie-game-update/

Marvel's Avengers was updated last week. That's ~20 games with DLSS 2.x within one year.

The uptake does seem fairly decent, and with some big names supporting it in the first tranche of "next gen" games like CP2077 and Watch Dogs Legion, it seems to bode well. I'm still hoping they find some magic way to make it game agnostic though.
 
DLSS actually improves image quality, and quite a lot too. I think DF mentioned this in one of their videos before. Impressive, since you're gaining so much performance.
Usually that just means the TAA implementation is from the worst end of the spectrum.
In this case, I can't understand why anyone would prefer the oversharpened mess of halos that's DLSS in that video. You know it has to be bad when oversharpening is obvious even in YouTube quality.
 
Usually that just means the TAA implementation is from the worst end of the spectrum.
In this case, I can't understand why anyone would prefer the oversharpened mess of halos that's DLSS in that video. You know it has to be bad when oversharpening is obvious even in YouTube quality.

I really don't see what you're talking about. All I see is an apparently higher resolution image. Digital Foundry and pretty much every other review I've read of DLSS all seem to agree. As long as you're not in Ultra performance mode, the image quality is as good or better than native with TAA.
 
Should we all prefer the oversharpened mess of TAA over the oversharpened mess of DLSS because... wait, I don't know why.
No, but I'd imagine most people would prefer settings which don't take sharpening too far regardless of what methods are used.
I really don't see what you're talking about. All I see is an apparently higher resolution image. Digital Foundry and pretty much every other review I've read of DLSS all seem to agree. As long as you're not in Ultra performance mode, the image quality is as good or better than native with TAA.
Sure, it can be better than bad TAA, but you still need to ignore all the artifacts it causes (and yes, it does cause them whether they jump out at you or not).
 
No, but I'd imagine most people would prefer settings which don't take sharpening too far regardless of what methods are used.
Good news then - most modern DLSS games allow the user to control the amount of sharpening applied to the DLSS'd image.
In fact, I'd even say that a higher percentage of DLSS-enabled games have this option compared to games which use TAA.
 