Nvidia DLSS 1 and 2 antialiasing discussion *spawn*

The Baldur's Gate 3 patch released today includes DLSS 2.3.
October 14, 2021
 
Deathloop adds DLSS support with Update 1

Seems to be the first implementation of v2.3.0.0 as well.



I remember injecting SMAA with ReShade because the game's own implementation was rather poor, so you're likely right.

- DLSS works at native res with Adaptive AA, effectively DLAA (amazing)

I absolutely second this; it really is awesome. I remember musing in here about what would happen at native or near-native res when DLSS runs alongside DRS, and whether DLSS would turn off at native res, resulting in a sudden drop in image quality. Clearly the answer is no: image quality just keeps getting better as internal res goes up, as you would expect with DRS. The only difference here is that you still get close-to-native 4K image quality all the way down to 1440p or lower. DLSS is the gift that just keeps on giving!
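For what it's worth, here's a minimal sketch of the behaviour being described (function names are made up, not the actual NGX API): the DRS controller only decides how many pixels to render, and the DLSS pass is never bypassed, so at 100% scale it simply degenerates into native-res temporal AA, i.e. DLAA.

```python
# Hypothetical sketch, not the real API: a DRS loop feeding an upscale pass
# that stays active even at 100% scale (effectively DLAA at native res).

TARGET_FRAME_MS = 16.7  # 60 fps budget

def pick_internal_scale(gpu_frame_ms: float, scale: float,
                        lo: float = 0.5, hi: float = 1.0) -> float:
    """Nudge the internal resolution scale based on last frame's GPU time."""
    if gpu_frame_ms > TARGET_FRAME_MS * 1.05:
        scale -= 0.05   # over budget: render fewer pixels
    elif gpu_frame_ms < TARGET_FRAME_MS * 0.90:
        scale += 0.05   # headroom: render more pixels
    return max(lo, min(hi, scale))

def upscale(color, internal_res, output_res):
    """Stand-in for the DLSS evaluate call. The point: it runs even when
    internal_res == output_res, acting as a pure temporal AA pass (DLAA)."""
    return color  # placeholder

# Toy frame loop with fake GPU timings:
scale = 1.0
for gpu_ms in [18.0, 17.5, 16.9, 15.0, 14.0]:
    scale = pick_internal_scale(gpu_ms, scale)
    internal = (int(3840 * scale), int(2160 * scale))
    _ = upscale(object(), internal, (3840, 2160))
    print(f"gpu={gpu_ms}ms -> scale={scale:.2f}, internal={internal}")
```

Since the upscale pass is always in the chain, there's no quality cliff when DRS reaches native; the image just keeps improving with internal res, which matches what people are seeing.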
 
Deathloop DLSS / FSR
I can understand why many sites and techtubers etc. would continue comparing DLSS and FSR. What I don't understand is why it is still compared here. It was understood well before FSR was released that they are different technologies, with DLSS delivering obviously superior end results thanks to AI-trained reconstruction techniques refined over the last few years, versus a relatively simple (compared to DLSS) upscaler.
 
It was understood well before FSR was released that they are different technologies, with DLSS delivering obviously superior end results

No idea if there are still users here who think the opposite. If not, does that mean that once one tech is clearly superior to the other, no one will share comparisons anymore?
It's possible; some mobile reviewers stopped comparing Apple A-series SoCs to Qualcomm/Exynos etc. due to the differences.
 
I can understand why many sites and techtubers etc. would continue comparing DLSS and FSR. What I don't understand is why it is still compared here. It was understood well before FSR was released that they are different technologies, with DLSS delivering obviously superior end results thanks to AI-trained reconstruction techniques refined over the last few years, versus a relatively simple (compared to DLSS) upscaler.
I think it's more that some AMD die-hards here refuse to acknowledge that the two are different, or that DLSS offers a superior visual result. I'm not sure, but FSR's pedigree might also fan the flames, as might DLSS not being open source.
 
I can understand why many sites and techtubers etc. would continue comparing DLSS and FSR. What I don't understand is why it is still compared here. It was understood well before FSR was released that they are different technologies, with DLSS delivering obviously superior end results thanks to AI-trained reconstruction techniques refined over the last few years, versus a relatively simple (compared to DLSS) upscaler.
They do the same thing and aim at the same result. The fact that the technologies are different doesn't mean anything, and doesn't stop AMD from blocking games they sponsor from getting DLSS.
 
Judging from their quality comparison shot on page 3, DLSS Quality in 4K shows its reconstructive strength in the chain-link fence in the background. But at the same time it introduces quite a bit of oversharpening (tracks, grates, shrubs).
[attached image: upload_2021-10-16_20-24-53.png]

Do they provide the original screenshots somewhere? On the page I could only find 800×450 snippets, and I wouldn't want to do DLSS an injustice over what could also be a compression artifact in their website's images.
 
That FSR Balanced vs DLSS Performance comparison is crazy.
Temporal upscaling will produce essentially a native image if the scene is completely static, even if the actual rendering res is very low.
These higher performance tiers should, in the case of DLSS, be (somehow) compared in motion, and attention should be paid to things which aren't easily reconstructed this way, like RT resolution and post-processing effects.
But otherwise, yes, Deathloop's DLSS implementation is easily up there with Death Stranding and Control, borderline "better than native".
 
Temporal upscaling will produce essentially a native image if the scene is completely static, even if the actual rendering res is very low.
I wish this were true, but unfortunately it is not. Even with no motion whatsoever, camera jitter alone can make undersampled regions of the image sparkle like crazy, and it is not a given that they can be correctly reconstructed, even if nothing else changes.
In other words, the image can change entirely from one frame to the next in specific regions, and telling whether to reject information from the past or retain it can be very hard.
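A toy 1D illustration of the point, with all the numbers made up: a subpixel "wire" under per-frame jitter makes a pixel's raw input flip between 0 and 1 every frame with a perfectly static camera, and the accumulator must decide whether each flip is signal to integrate or noise to reject.

```python
import numpy as np

rng = np.random.default_rng(0)

def shade(pixel_center, jitter, wire_pos=10.37, wire_width=0.2):
    """1 if the jittered sample lands on a thin bright 'wire', else 0."""
    x = pixel_center + jitter
    return 1.0 if abs(x - wire_pos) < wire_width / 2 else 0.0

history = 0.0
alpha = 0.1  # exponential accumulation weight for the new sample
for frame in range(32):
    jitter = rng.uniform(-0.5, 0.5)   # per-frame subpixel camera jitter
    sample = shade(10.0, jitter)      # raw input flips 0 <-> 1 frame to frame
    history = (1 - alpha) * history + alpha * sample
    # With enough retained history this converges towards the wire's true
    # coverage (~0.2); any aggressive history rejection re-introduces the
    # raw 0/1 flip as visible sparkle.
    print(f"frame {frame:2d}: sample={sample:.0f} accumulated={history:.3f}")
```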
 
Even with no motion whatsoever, camera jitter alone can make undersampled regions of the image sparkle like crazy and it is not a given to be able to correctly reconstruct them, even if nothing else changes.
DLSS certainly doesn't have any trouble reconstructing undersampled wires, fences and other subpixel stuff, at the very least with medium-speed motion when motion vectors are present. To my surprise, it always does a better job on fast-moving tree branches in comparison with native TAA, and that is already insane.
You know, the main issue with classic TAA is that neighborhood color clipping is very lossy by design, and that's the main reason why undersampled regions do not converge with TAA and can flicker like crazy. I didn't notice the same with DLSS, which doesn't use color clipping (though messed-up jittering, stochastic sampling, low-res post-processing and texture LODs can easily break DLSS).
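For anyone unfamiliar, here's roughly what that lossy step looks like (a generic textbook-style sketch in Python, not any particular game's implementation): history gets clamped to the min/max of the current frame's 3×3 neighborhood before blending.

```python
import numpy as np

def taa_resolve(history, current, alpha=0.1):
    """history/current: 2D luma arrays; returns the blended output."""
    h, w = current.shape
    out = np.empty_like(current)
    for y in range(h):
        for x in range(w):
            nb = current[max(0, y - 1):y + 2, max(0, x - 1):x + 2]
            # The lossy-by-design step: any accumulated detail outside the
            # current neighborhood's range is simply thrown away.
            clamped = np.clip(history[y, x], nb.min(), nb.max())
            out[y, x] = (1 - alpha) * clamped + alpha * current[y, x]
    return out

# A subpixel wire accumulated to ~0.2 coverage in history, but this frame's
# jittered samples all missed it, so the neighborhood is flat black...
history = np.full((3, 3), 0.2)
current = np.zeros((3, 3))
print(taa_resolve(history, current)[1, 1])  # 0.0 -- coverage clamped away
```

That's the flicker mechanism in a nutshell: the wire reappears next frame, partially accumulates, then gets clamped away again. The appeal of a learned heuristic is that it can choose to retain that history instead.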
 
I’ve seen a few complaints of bad ghosting and artifacts with DLSS in Deathloop. Nothing from media outlets though.
 
It's obviously impossible to reconstruct an arbitrarily undersampled signal; it doesn't matter how good this or that algorithm is, eventually it stops working. If that weren't the case we could reconstruct a whole image from one sample, which, I can't deny, would be rather sweet...
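A tiny demonstration of why this is fundamental rather than an algorithm limitation: two different "ground truth" signals can produce identical samples at a low sampling rate, so no reconstruction method can tell them apart from those samples alone.

```python
import numpy as np

x = np.arange(8)                      # 8 integer sample positions
a = np.sin(2 * np.pi * (1 / 8) * x)   # 1 cycle across the samples
b = np.sin(2 * np.pi * (9 / 8) * x)   # 9 cycles: aliases onto the same values
print(np.allclose(a, b))              # True -- the samples are identical
```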
 
Nobody argues with that for a moving camera (a still image should converge if everything is adjusted for temporal sampling); there will be losses in occluded areas unless we account for the undersampling there with higher-density shading or with clever tricks. I wonder whether current VRS (too coarse?) or primary RT rays could be performant enough to sample those areas at higher resolution (integration would be a nightmare, I can imagine, so likely a no-go).
Resampling with temporal accumulation adds to the losses in comparison with some ideal spatially supersampled image, and obviously, the lower the sampling rate, the less information can be reconstructed precisely, so more advanced resampling filters and accumulation strategies are needed for reconstruction from lower resolutions.
So the difference in image quality between DLSS and TAA ultimately comes down to which one has the less lossy resampling, accumulation, etc., not just to the sheer number of input pixels; that's why DLSS often trades blows with a higher-res image using TAA.
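To put a rough number on that (back-of-envelope only; the blend weight is an assumed, typical-ish value and the formula holds only for a static, well-aligned signal): exponential accumulation with blend weight α has the same steady-state noise as averaging N_eff = (2 − α)/α independent samples.

```python
alpha = 0.1
n_eff = (2 - alpha) / alpha                     # ~19 effective samples
spatial_ratio = (3840 * 2160) / (2560 * 1440)   # native 4K vs 1440p: 2.25x pixels
print(f"N_eff = {n_eff:.0f}, vs spatial ratio = {spatial_ratio:.2f}")
# A 1440p input accumulating ~19 effective samples holds far more total
# samples per output pixel than one native 4K frame -- provided the
# resampling/rejection logic doesn't throw that history away.
```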
 