Nvidia DLSS 1 and 2 antialiasing discussion *spawn*

lol I like how they focus on "increasing the resolution" rather than mention that it's decreased initially.

So when DLSS is disabled due to frame times being so low, does a game fall back to TAA?
 
However bad the whole DLSS thing is, I think the idea behind the tech is nice: letting a supercomputer take on tasks like that. Perhaps it can be applied to other features? NV will probably keep refining the tech for super sampling, though.
 
DLSS Explained (super dumbed down explanation with a pinch of PR Boogaloo)
https://www.nvidia.com/en-us/geforce/news/nvidia-dlss-your-questions-answered/
There we have it.
Exactly what I imagined they were doing. No "on-the-fly" training to speak of, nor sophisticated use of other deferred rendering buffers. They probably only use motion vectors to improve temporal stability. Other than that, it's just AI hallucinating detail based on final color alone.
While the more outrageous ideas here sure are interesting, they are the kind of thing each dev would need to consider on their own, for each game. Nvidia wanted a drop-in solution, and that's how they marketed this from the beginning. You can't make a quickly implementable plug-in that relies on too many specifics of how the frame is rendered.
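To make that concrete, here's a toy sketch (not Nvidia's actual network; the inputs, layer sizes and scale factor are all invented for illustration) of what a drop-in post-process upscaler that only sees the final colour plus motion vectors might look like:

```python
# Toy drop-in upscaler: final colour + motion vectors in, higher-res colour out.
# Purely illustrative; nothing here reflects DLSS's real architecture.
import torch
import torch.nn as nn

class DropInUpscaler(nn.Module):
    def __init__(self, scale=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(5, 32, 3, padding=1), nn.ReLU(),   # 3 colour + 2 motion-vector channels in
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3 * scale * scale, 3, padding=1),
        )
        self.to_highres = nn.PixelShuffle(scale)         # rearranges channels into a scale-x larger image

    def forward(self, color, motion):
        # color: (N, 3, H, W) final frame; motion: (N, 2, H, W) screen-space motion vectors
        return self.to_highres(self.net(torch.cat([color, motion], dim=1)))

# e.g. upscale a 960x540 internal render to 1080p
color = torch.rand(1, 3, 540, 960)
motion = torch.zeros(1, 2, 540, 960)
print(DropInUpscaler(scale=2)(color, motion).shape)      # torch.Size([1, 3, 1080, 1920])
```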
 
I like KimB's answer. And as per Milk's commentary, I've suspected that Nvidia's DLSS is exactly that. I expect we will get something better when DML is released and developers have control over how the ML pipeline affects the rendering pipeline, as opposed to just a package applied after the frame is complete.
 
The best upscaling may well lie with hybrid ML and reconstruction techniques. That could simplify the ML's job to working out which bits to render and which to reconstruct, and allow more creative reconstruction. This is an area that game devs will have to explore themselves, as offline imaging has zero need or ability to use rendering data. It contrasts with raytracing developments, where the underlying tech is fundamentally the same across domains (though of course hybrid rendering will be game-dev led).
 
A critical review of DLSS in BFV


tl;dw
  • DLSS provides a very blurry experience
  • Quality is far below simple render scaling + TAA at the same performance level
  • The implementation is far too restrictive, tied to DXR and specific resolution/GPU combinations
  • Unlike raytracing, it brings nothing new to game rendering
  • Goes as far as recommending it be removed from BFV entirely, as gamers will enable it based on Nvidia's marketing and end up with a terrible experience compared to normal TAA
 
  • Goes as far as recommending it be removed from BFV entirely, as gamers will enable it based on Nvidia's marketing and end up with a terrible experience compared to normal TAA
That's a bit overboard. Options are always good, and players have the option to turn it off. If you don't want gamers being 'duped', make it off by default and have them enable it in settings.

The video itself talks about points raised with other RTX features. They didn't want to compare FFXV's TAA solution because they felt it was poor versus other games, similar to Metro's non-RT lighting being poor versus other games. There is always going to be a conflict of interests when an IHV wanting to sell a new proprietary tech works with devs on an engine to implement that tech. We need data from independent parties who nVidia isn't supporting directly.

Another useful point is the time cost of DLSS. It's fixed per frame, and at higher framerates it costs more than rendering, which is why DLSS is disabled in BFV on a 2080 at 1080p - it's faster to render the frame natively than to upscale it. That suggests to me that DLSS is relatively slow - it's not a couple of ms.
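As a rough sketch of that break-even logic (the milliseconds below are placeholders, not measured numbers):

```python
# Placeholder numbers, purely to illustrate the trade-off: with a fixed
# upscale cost, DLSS stops paying off once native rendering is already fast.
def frame_time_ms(fps):
    return 1000.0 / fps

native_ms = frame_time_ms(120)  # hypothetical: the card already does 120 fps natively
lowres_ms = 4.5                 # hypothetical: cost of the lower internal-resolution render
dlss_ms = 5.0                   # hypothetical: fixed per-frame DLSS cost

print("DLSS path:", lowres_ms + dlss_ms, "ms vs native:", round(native_ms, 1), "ms")
print("worth enabling:", lowres_ms + dlss_ms < native_ms)   # False here, so it would be disabled
```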

These images show DLSS really struggles...

Image1.jpg
Image2.jpg

1685p gets you the same framerate at significantly higher quality.
 
Seeing the softer textures in the DLSS analysis cited in the post above made me wonder about the methodology of DLSS.

It looks like they are running the full scene at the lower resolution with the corresponding low-res mipmaps. A neat thing Epic wrote about with TAA is that it can actually sample the mipmaps at the post-upscale target resolution (e.g. 4K mipmaps instead of 1885p), which is often why the TAA scenes look more crisp. Normally, running the scene with this negative LOD bias would result in texture shimmer, because the lower initial render resolution would be undersampling the mipmaps and creating aliasing artifacts. However, the pseudo-samples from the previous frame and the noise reduction from the TAA filter let you 'get away with' using the higher-detail mipmaps without shimmer.
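For reference, a minimal sketch of that negative LOD bias idea, assuming the usual convention of biasing by roughly log2 of the resolution ratio (the resolutions below are just examples):

```python
# Negative texture LOD bias when rendering below the output resolution:
# roughly log2(render_height / target_height). Example resolutions only.
import math

def mip_lod_bias(render_height, target_height):
    # negative => sharper (higher-resolution) mips are sampled
    return math.log2(render_height / target_height)

print(mip_lod_bias(1885, 2160))  # ~ -0.20
print(mip_lod_bias(1080, 2160))  # -1.0: one full mip level sharper
```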

nVidia's recent statement seems to indicate that they are not using any temporal information to compute DLSS, so they can't gain information via sample reuse and don't benefit from the temporal jitter/shimmer reduction, which should limit how much sharpening they would want to do.

I suspect that under the hood DLSS works a lot like MLAA/FXAA, except instead of just looking for pre-determined contrast edge shapes, the machine learning is used to find the characteristics of areas of the image which contain aliasing artifacts, then find the blend instructions which would best lower the error values of those artifacts. I wonder how large a search area they are using around each pixel. The main issue with this method is that it is impossible to construct more information than exists in the original image (which is rendered at the lower resolution). This works OK at 4K, but I assume one of the reasons 1080p DLSS doesn't seem to be widely available is that it would become readily apparent the image contains too little information when it is constructed from an upsampled 720p-900p image.
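A toy numpy illustration of that information limit (this is not DLSS, just a demonstration that a frame rendered at half resolution simply contains no content above its own Nyquist band, so anything an upscaler adds there is invented rather than recovered):

```python
# Toy illustration of the information limit: the lower-resolution render has
# no frequency content above its own Nyquist band, so that detail is gone.
import numpy as np

rng = np.random.default_rng(0)
hi = rng.standard_normal((256, 256))               # stand-in for a detailed full-res frame
lo = hi.reshape(128, 2, 128, 2).mean(axis=(1, 3))  # 2x box downsample = lower render resolution

def upsample_bandlimited(img, factor=2):
    # Ideal (sinc) upsample: zero-pad the spectrum. The new high-frequency
    # bins are exactly zero because the low-res image never contained them.
    h, w = img.shape
    spec = np.fft.fftshift(np.fft.fft2(img))
    out = np.zeros((h * factor, w * factor), dtype=complex)
    out[(h * factor - h) // 2:(h * factor + h) // 2,
        (w * factor - w) // 2:(w * factor + w) // 2] = spec
    return np.real(np.fft.ifft2(np.fft.ifftshift(out))) * factor ** 2

def high_freq_energy(img):
    # energy outside the central band, i.e. above the low-res Nyquist limit
    spec = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    h, w = img.shape
    mask = np.ones_like(spec, dtype=bool)
    mask[h // 4: 3 * h // 4, w // 4: 3 * w // 4] = False
    return spec[mask].sum()

up = upsample_bandlimited(lo)
print(high_freq_energy(hi))  # plenty of genuine high-frequency detail
print(high_freq_energy(up))  # ~0: that detail cannot be brought back from the low-res frame
```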
 
Seeing the softer textures in the DLSS analysis cited in the post above made me wonder about the methodology of DLSS.

It looks like they are running the full scene at the lower resolution with the corresponding low-res mipmaps. [...]
I've been thinking about the negative mipmap bias as well; part of the blurrier result may indeed come from training DLSS with the target-resolution mipmap range instead of what would be appropriate for 64xSSAA (and thus losing what would be the proper look of 64xSSAA for each pixel).
DLSS seems to be very good at reducing dithering and such, so it could work decently with a somewhat grainy image as a source as well.
 
The video itself talks about points raised with other RTX features. They didn't want to compare FFXV's TAA solution because they felt it was poor versus other games, similar to Metro's non-RT lighting being poor versus other games. There is always going to be a conflict of interests when an IHV wanting to sell a new proprietary tech works with devs on an engine to implement that tech. We need data from independent parties who nVidia isn't supporting directly.

But we can compare it when the solution is worse than traditional console features?
FF15 and Metro are designed for the current console generation. Maybe 2013 tech isn't up to date anymore...
 
Makes me think that they're using the wrong metric to train DLSS. If they're using absolute error from the ground truth high resolution image, a blur filter is best because if you try to reconstruct fine details, they'll be wrong at the pixel level, even if they perceptually have the same texture. Think what happens to per pixel error if you take a high frequency texture, and shift it over by one pixel, for instance.

What they need is a filter that "guesses" and tries to reconstruct texture, even if the result doesn't match the ground truth pixel for pixel. Maybe perform a Fourier transform or something and measure against that? It needs to be correct in spectral space more than in absolute pixel space.

Oh, and the reconstruction needs to be consistent frame to frame or it'll shimmer like crazy. That's another metric the training has to keep track of.
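A hedged sketch of those metrics (illustrative only, with made-up weights; this is not Nvidia's actual training loss):

```python
# Illustrative only: a per-pixel error (which favours blur), a spectral error
# (which tolerates detail that is plausible but not pixel-exact), and a crude
# frame-to-frame consistency term as a stand-in for a shimmer penalty.
import numpy as np

def pixel_loss(pred, target):
    return np.mean(np.abs(pred - target))

def spectral_loss(pred, target):
    # compare magnitude spectra: a texture shifted by a pixel barely changes these
    return np.mean(np.abs(np.abs(np.fft.fft2(pred)) - np.abs(np.fft.fft2(target))))

def temporal_loss(pred_t, prev_pred):
    # ideally prev_pred would be reprojected with motion vectors first
    return np.mean(np.abs(pred_t - prev_pred))

def combined_loss(pred_t, prev_pred, target, w_pix=1.0, w_spec=0.1, w_temp=0.5):
    return (w_pix * pixel_loss(pred_t, target)
            + w_spec * spectral_loss(pred_t, target)
            + w_temp * temporal_loss(pred_t, prev_pred))

# The example from the post: the same high-frequency texture shifted one pixel
rng = np.random.default_rng(0)
tgt = rng.random((64, 64))
shifted = np.roll(tgt, 1, axis=1)
print(pixel_loss(shifted, tgt))     # large, even though the texture is perceptually identical
print(spectral_loss(shifted, tgt))  # ~0 for a pure shift
```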
 
That's why I thought DLSS, or some form of it, could be a thing for consoles. Doing reconstruction with compute, RT with compute, and next-gen graphics, maybe AI too. Everything in compute and we'd have current-gen graphics with new features at at least 30fps.
Things like reconstruction being handled on an external supercomputer is an idea, at least.
 
That's why I thought DLSS, or some form of it, could be a thing for consoles.
One can't determine that until it's proven itself.
Doing reconstruction with compute...
Reconstruction on a 1.8TF PS4 to Spider-Man/HZD quality takes a few ms. DLSS to inferior quality standards on a 2080 takes longer than rendering a 1080p frame, which is something like 140fps with DXR off, so ~7ms. Much longer with RTX enabled. There's no reason (evidence) to think DLSS is better than reconstruction; you should wait before jumping to conclusions about what are good ideas for consoles. ;)
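The arithmetic behind that, for reference:

```python
# ~140 fps at 1080p with DXR off works out to roughly 7 ms per frame,
# so a fixed DLSS cost larger than that can't win at this resolution.
print(1000 / 140)  # ≈ 7.1 ms
```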

Everything in compute and we'd have current-gen graphics with new features at at least 30fps.
Well they won't be current-gen graphics if they have raytraced lighting. However, this thread is about DLSS and determining how well it performs both in time and quality, and seeing how that changes over time if the algorithm/training advances. DLSS is a very interesting tech and a whole new paradigm being applied to realtime graphics. As I mentioned before, perhaps ML assisted reconstruction is the ideal? It'd also be nice to hear whether PS4's ID buffer helps at all. The issue at the moment seems to be the ML not creating any surface detail, yet the source material isn't that blurry. Worst case, DLSS should look like 1080p upscaled; that with crisp edges (which are being recovered okay) would be a notable improvement.
 
Perhaps the interesting part is whether DICE and 4A Games are hands-off at this point. I think they likely are, given that this is a driver-based solution. It will be on Nvidia to find a way to train it so the results are better than TAA.
 
As I mentioned before, perhaps ML assisted reconstruction is the ideal?

DLSS is ML assisted reconstruction. That's exactly what it does under the hood. That's also why the Tensor Cores are so important to it. Evaluating neural networks in vanilla compute is too slow to hit the performance target they need for it to actually be useful.
 
DLSS is ML assisted reconstruction. That's exactly what it does under the hood. That's also why the Tensor Cores are so important to it. Evaluating neural networks in vanilla compute is too slow to hit the performance target they need for it to actually be useful.
I think he's referring to how it's being reconstructed. With DirectML, developers have full control over what the ML does in the pipeline and when. With Nvidia's DLSS, the solution could be entirely black-boxed from developers, and we're seeing a post-processing reconstruction using ML.
 
DLSS is ML based construction, not ML assisted. ML assisted would be combining ML alongside 'checkerboard' reconstruction, using both techniques to regenerate the missing data as appropriate. For example, off the top of my head, ML could combine with actual texture data, or maybe be applied across temporal samples, where it might well do a better job than conventional algorithms at extrapolating or predicting data.
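As a purely hypothetical sketch of that 'ML assisted' idea (the fill function below is a dumb neighbour average standing in for wherever a trained model would slot in, and border handling is ignored):

```python
# Hypothetical: render only half the pixels each frame in a checkerboard and
# let a model fill the gaps; here a 4-neighbour average stands in for the ML.
import numpy as np

def checkerboard_mask(h, w, phase=0):
    yy, xx = np.mgrid[0:h, 0:w]
    return (yy + xx) % 2 == phase          # True where this frame actually rendered a pixel

def fill_missing(frame, mask):
    # A trained network would go here, possibly also fed the previous frame,
    # motion vectors, or texture data, as suggested above.
    padded = np.pad(frame, 1, mode="edge")
    neighbour_avg = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                     padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
    out = frame.copy()
    out[~mask] = neighbour_avg[~mask]      # only the unrendered pixels are reconstructed
    return out

rng = np.random.default_rng(1)
full = rng.random((8, 8))                  # pretend this is the "true" frame
mask = checkerboard_mask(8, 8)
rendered = np.where(mask, full, 0.0)       # only half the pixels were actually shaded
print(np.abs(fill_missing(rendered, mask) - full).mean())
```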
 
Perhaps the interesting part is whether DICE and 4A Games are hands-off at this point. I think they likely are, given that this is a driver-based solution. It will be on Nvidia to find a way to train it so the results are better than TAA.
That's assuming DICE/4A has provided all the source screens already. Keep in mind that ultrawide resolutions aren't even an option yet since DLSS needs to be trained on every single resolution separately and ultrawide hasn't been done. Whether it's in a queue at Nvidia or DICE hasn't provided non-standard resolutions yet, who knows.

And what happens when there's a significant change in rendering, enough to make the existing DLSS training obsolete, and they need to provide ALL the resolution sources to Nvidia again for re-training?
 