Nvidia DLSS 1 and 2 antialiasing discussion *spawn*

I agree that we need a more advanced AA method than TAA, but DLAA (like DLSS) is locked to NVIDIA... we need an IHV- and platform-agnostic method.

So I'm not excited at all, because NVIDIA will use DLAA just for marketing purposes :(
 
Because the point of DLSS is to provide performance benefits at close-to-native image quality. Such a mode would provide AA, but at a performance cost. It possibly wouldn't be anything special over your typical TAA in either case.

And it wouldn’t help with adoption of RT.
 
Because the point of DLSS is to provide performance benefits at close-to-native image quality. Such a mode would provide AA, but at a performance cost. It possibly wouldn't be anything special over your typical TAA in either case.
I think it should provide a nice benefit over normal TAA, as it will not over-blur or ghost as much. Its cost will depend on the GPU in question, ranging from very cheap on a 3090 to not so cheap on a 2060.
 
DLAA is out today on ESO test servers.

https://www.nvidia.com/en-us/geforce/news/windows-11-game-ready-driver/

NVIDIA DLAA (Deep Learning Anti-Aliasing) is a new AI-based anti-aliasing mode for users who have spare GPU headroom and want higher levels of image quality. DLAA uses the same technology developed for DLSS, but works on a native resolution image to maximize image quality instead of boosting performance. The first implementation is available today on the Elder Scrolls Online test servers.

I hope this comes to UE soon.
 
Content creation with jittered FOV and compressed motion vectors could be challenging.
No need for both. Video compression relies on key frames rather than camera jittering, and there is optical flow instead of motion vectors.
Jittering is a real-time graphics thing, because rendering every second, third, or fourth frame at full resolution, as in video compression, would unavoidably cause stuttering and make a game unplayable.
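To make the jittering point concrete: the camera's projection is shifted by a different sub-pixel offset each frame so that, over time, the renderer samples different positions within every pixel. A minimal Python sketch, assuming the commonly used Halton (2,3) low-discrepancy sequence (function names are illustrative, not from any particular engine):

```python
# Toy sketch of TAA/DLSS-style sub-pixel jitter generation.
# Real engines feed these offsets into the projection matrix each frame.

def halton(index: int, base: int) -> float:
    """Radical inverse of `index` in the given base, in [0, 1)."""
    result, f = 0.0, 1.0
    while index > 0:
        f /= base
        result += f * (index % base)
        index //= base
    return result

def jitter_offset(frame: int, cycle: int = 8) -> tuple:
    """Sub-pixel offset in [-0.5, 0.5) for a frame, repeating every `cycle` frames."""
    i = (frame % cycle) + 1  # Halton is conventionally started at index 1
    return (halton(i, 2) - 0.5, halton(i, 3) - 0.5)

offsets = [jitter_offset(f) for f in range(8)]
```

Because consecutive offsets cover different sub-pixel positions, accumulating several jittered frames recovers detail that a single frame at the same resolution cannot contain.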

There are other methods which do allow some nice super-resolution.
Temporal incoherence will be a hard thing to overcome with GANs, though there is progress in this field too - https://nvlabs.github.io/alias-free-gan/
 
NVIDIA has declared DLSS to be available in more than 100 games, courtesy of the DLSS plugins for the Unreal and Unity engines; dozens of indie games have since enabled DLSS through game updates or at launch.

People can no longer easily track the count of DLSS games, as games are implementing DLSS without mentioning it in press releases or developer announcements.

https://www.nvidia.com/en-gb/geforce/news/september-2021-rtx-dlss-game-updates/
 
I'm not sure i understand.
Video codecs use temporal coherency to store only a small fraction of frames at high resolution, so temporal reconstruction is already in play for video.
DLSS-style reconstruction is all about combining pixels from multiple frames, so it would not work for video-type content, because the same principles are already used for compression; it's like applying DLSS on top of another DLSS.
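For a rough intuition of what "combining pixels from multiple frames" means, here is a minimal Python sketch of the exponential history blend that temporal reconstruction builds on. Reprojection, history clamping, and DLSS's learned components are all omitted, and the alpha value is an assumption for illustration:

```python
# Minimal temporal accumulation: each new frame is blended into a running
# history buffer, so the final image combines samples from many frames.
import numpy as np

def accumulate(history: np.ndarray, current: np.ndarray, alpha: float = 0.1) -> np.ndarray:
    """Exponential moving average: keep (1 - alpha) of history, take alpha of the new frame."""
    return (1.0 - alpha) * history + alpha * current

# Feeding a constant "ground truth" frame converges the history toward it.
truth = np.full((4, 4), 1.0)
history = np.zeros((4, 4))
for _ in range(50):
    history = accumulate(history, truth)
```

After 50 frames the history sits at 1 - 0.9^50 of the target, which is why TAA-family techniques need many stable frames (and good reprojection under motion) to sharpen up.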

I'm talking about low-res video like older DVD stuff, or TV shows like Babylon 5.
Temporal reconstruction won't work without jittering (or higher-res keyframes), and GANs (spatial upscaling via generator networks) have problems with temporal coherency; the details they hallucinate will never match the reference higher-resolution details.
That's a fundamentally ill-posed problem, though it might look OK with temporal coherence. It's also like colorizing old films: the result will never match the originally filmed content and will likely look highly unnatural.
 
Video codecs use temporal coherency to store only a small fraction of frames at high resolution, so temporal reconstruction is already in play for video.
DLSS-style reconstruction is all about combining pixels from multiple frames, so it would not work for video-type content, because the same principles are already used for compression; it's like applying DLSS on top of another DLSS.


Temporal reconstruction won't work without jittering (or higher-res keyframes), and GANs (spatial upscaling via generator networks) have problems with temporal coherency; the details they hallucinate will never match the reference higher-resolution details.
That's a fundamentally ill-posed problem, though it might look OK with temporal coherence. It's also like colorizing old films: the result will never match the originally filmed content and will likely look highly unnatural.
I see now what you mean, but I don't agree.

Video compression relies in part on highly compressed B-frames, less compressed P-frames, and barely compressed I-frames, the "ground truth" of video, so to speak. Higher-quality video often uses I-frames more frequently, or applies less destructive compression to them. The highest-quality video (apart from uncompressed, of course) would use all I-frames.
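The I/P/B layout described above can be illustrated with a toy Python sketch; real encoders pick frame types adaptively, so the fixed pattern and parameter names here are purely illustrative:

```python
# Toy group-of-pictures (GOP) generator: one I-frame anchor, P-frames at a
# regular spacing predicting forward, and B-frames squeezed between anchors.

def gop_pattern(gop_size: int = 12, anchor_spacing: int = 3) -> str:
    """Frame types for one GOP, e.g. 'IBBPBBPBBPBB' for size 12, spacing 3."""
    out = []
    for i in range(gop_size):
        if i == 0:
            out.append("I")          # intra-coded anchor, barely compressed
        elif i % anchor_spacing == 0:
            out.append("P")          # predicted from earlier frames
        else:
            out.append("B")          # bi-directionally predicted, most compressed
    return "".join(out)
```

Shrinking the GOP size or the anchor spacing raises quality (more I/P anchors) at the cost of bitrate, which mirrors the quality trade-off the post describes.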

Video compression's main goal is to save storage space while still being decompressible by dedicated logic. A lot of trade-offs are made, sacrificing quality.

What I was thinking is: a neural-net-based decompressor could, in theory, use the best material inside a compressed video stream (the I-frames) to extrapolate detail, rather than the P- and B-frames. P- and B-frames could still be used to check whether the neural net's guesses are on the right track for a particular sequence.

And since we're talking about massively higher processing power than in smartphone chips, for example, it could use much larger data sets (frame counts) to check and refine its findings.

Video compressors can use information from the past and also the future. We can’t do that with real-time image reconstruction in interactive applications.
Wouldn't it be much easier for reconstruction to know in advance what's coming: where the motion is heading, and what the next ground-truth image in a particular video stream looks like? Sure, it's not a 1:1 port of DLSS, but to my layman's mind it sounds like a much easier target to achieve.
 
The ESO PTS is out, and I made a video comparing three different AA options: TAA, TAA+DSR (4x), and DLAA.
I uploaded the video on YouTube here:


Unfortunately the video quality doesn't seem to be very good (not sure if it's going to be better later), so I put the video file (~188MB) in my Google drive here:


Personally, I think DSR 4x still produces the best result (DSR 4x + DLAA is probably a bit better than DSR 4x + TAA), but for people who don't want to lose performance to DSR 4x, DLAA is a bit better than TAA.
 
The ESO PTS is out, and I made a video comparing three different AA options: TAA, TAA+DSR (4x), and DLAA.
I uploaded the video on YouTube here:


Unfortunately the video quality doesn't seem to be very good (not sure if it's going to be better later), so I put the video file (~188MB) in my Google drive here:


Personally, I think DSR 4x still produces the best result (DSR 4x + DLAA is probably a bit better than DSR 4x + TAA), but for people who don't want to lose performance to DSR 4x, DLAA is a bit better than TAA.
May I spread the word & file link?
 
Fairly certain a more in-depth DLAA analysis will appear on review sites shortly.
 