Nvidia DLSS 1 and 2 antialiasing discussion *spawn*

Are there numbers on the costs of running the NN? PS4 Pro reconstruction seems to be about 2 ms on a 3.6 TF console. Actually, I can't find ms numbers for the upscale process, but anyway, PS4 Pro can upscale from half res to 4K in a few ms. The same upscale will take less time on a faster GPU, so it should be all of 1 ms on a fast GPU. I'd want an AI solution to be as fast as that, or significantly beneficial in some other way if it's slower.
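Back-of-the-envelope, if we assume the reconstruction pass is roughly compute-bound so its cost scales inversely with FP32 throughput (a simplification that ignores bandwidth and fixed overheads), the figures quoted above scale like this:

```python
# Rough scaling of a compute-bound upscale pass with GPU throughput.
# The 3.6 TF / ~2 ms figures are the ballpark numbers from the post above;
# the other TFLOPS values are illustrative stand-ins for faster GPUs.

BASE_TFLOPS = 3.6
BASE_UPSCALE_MS = 2.0

def estimated_upscale_ms(gpu_tflops: float) -> float:
    """Scale the baseline upscale cost to another GPU by raw FP32 rate."""
    return BASE_UPSCALE_MS * BASE_TFLOPS / gpu_tflops

for tflops in (3.6, 7.5, 13.4):
    print(f"{tflops:5.1f} TF -> ~{estimated_upscale_ms(tflops):.2f} ms")
```

Even allowing for fixed overheads that don't scale, that's where the ~1 ms expectation for a fast GPU comes from.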
It has to be faster and better for it to be worth it.
Well at first glance, I thought that way.
But I suspect the cost of implementing upscaling/AA solutions may not be so trivial labour-wise.
 
One advantage of traditional methods is that they are easily tweakable: programmers can make small changes and see the effect immediately.
DLSS apparently needs to be retrained when the game has changed enough, and training takes time.
I don't think so. Training seems to occur on the fly.

Games may benefit, but it probably isn't needed.

Also, there hasn't been a single DLSS game which uses dynamic source resolution, something that is quite easy to do with temporal upsampling/injection.
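For what it's worth, the dynamic-source-resolution half of that is just a feedback loop on frame time, something like this sketch (the thresholds and step size are made-up illustrative values, not from any shipping engine):

```python
# Toy dynamic-source-resolution controller: choose next frame's render scale
# from how close the last frame came to its time budget, while the temporal
# upsampler always outputs the fixed target resolution (e.g. 4K).

TARGET_MS = 16.6                 # 60 fps budget
MIN_SCALE, MAX_SCALE = 0.5, 1.0  # clamp on the internal resolution scale

def next_render_scale(scale: float, last_frame_ms: float) -> float:
    """Lower the internal resolution when over budget, raise it when under."""
    if last_frame_ms > TARGET_MS * 0.95:    # running hot: drop resolution
        scale -= 0.05
    elif last_frame_ms < TARGET_MS * 0.80:  # plenty of headroom: raise it
        scale += 0.05
    return max(MIN_SCALE, min(MAX_SCALE, scale))
```

The temporal upsampler just takes whatever resolution it's handed each frame and reconstructs the fixed output resolution, which is presumably why this is easy for temporal upsampling and awkward for a network trained at one fixed scale factor.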
What do you mean by dynamic source resolution?
 
So for training they most likely use velocity, color, normals?, specularity?, etc., and the target is 64x SSAA color (perhaps plus the color of the previous frame?).
Target is most definitely not the color buffer itself. Just the number of samples to take.
 
I don't think so. Training seems to occur on the fly.
Sorry, but I don't think you understand how DLSS works. Training is a significant stage between the developers and Nvidia, run on their massive image-processing ML systems prior to actual implementation.
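To make that stage concrete: conceptually it's offline supervised training against very high-quality reference frames, roughly along these lines. This is a toy sketch only; the actual inputs, architecture and loss Nvidia uses aren't public, so every name here is a stand-in (the inputs/target follow the earlier guess of color + motion vectors against a supersampled reference):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyUpscaler(nn.Module):
    """Stand-in for an upscaling network: low-res color + motion vectors in,
    2x-resolution color out. The real DLSS architecture is not public."""
    def __init__(self, scale: int = 2):
        super().__init__()
        self.scale = scale
        self.body = nn.Sequential(
            nn.Conv2d(3 + 2, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3 * scale * scale, 3, padding=1),
        )

    def forward(self, color, motion):
        x = torch.cat([color, motion], dim=1)           # color + velocity
        return F.pixel_shuffle(self.body(x), self.scale)

net = ToyUpscaler()
opt = torch.optim.Adam(net.parameters(), lr=1e-4)

# Random tensors stand in for per-game captures of low-res inputs paired
# with heavily supersampled reference frames (the offline dataset).
dataset = [(torch.rand(1, 3, 64, 64),    # low-res color
            torch.rand(1, 2, 64, 64),    # motion vectors
            torch.rand(1, 3, 128, 128))  # supersampled reference
           for _ in range(4)]

for color_lr, motion_lr, reference_hr in dataset:
    pred = net(color_lr, motion_lr)
    loss = F.l1_loss(pred, reference_hr)   # stand-in reconstruction loss
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Only inference runs on the player's GPU at frame time; the training happens up front, which is why a big enough content change can mean another round of training on Nvidia's side, as described above.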
 
It has to be faster and better for it to be worth it.
Well at first glance, I thought that way.
But I suspect the cost of implementing upscaling/AA solutions may not be so trivial labour-wise.
They're not trivial, but DLSS isn't trivial either and has a tiny audience. Putting it another way, what would your arguments in favour of including DLSS in a AAA game be if you were asked in a studio? Or would you either suggest no reconstruction, or working on a non-DLSS reconstruction method with a view to integrating that into future games as a matter of course?
 
They're not trivial, but DLSS isn't trivial either and has a tiny audience. Putting it another way, what would your arguments in favour of including DLSS in a AAA game be if you were asked in a studio? Or would you either suggest no reconstruction, or working on a non-DLSS reconstruction method with a view to integrating that into future games as a matter of course?
I don't have enough information to make the call. You're talking about integration of a new type of supersampling/AA method vs the cutting edge of what the industry has been moving towards for over a decade. Just from a business perspective, I think it makes sense to skip Nvidia DLSS unless you're paid a large sum of money to do it.

I do not know what the limitations of an AI up-rez pipeline are, since I suspect there could be many ways to do it.
 
That's debatable. It's certainly blurrier in the examples you've provided. But it also certainly doesn't have missing information like the 75% raw scaling has. Check out the powerlines in the back.
If we were to approach 'worse' empirically, we'd measure percentage deviation from ground truth. A quick eyeballing shows DLSS is faring far worse there. It's losing all the high-frequency detail. It would appear the AI can make sense of what information is needed, but cannot restore fine details. Metro shows notable blurring. Battlefield V shows corrupted detailing and softness. It's interesting that the two respond differently, though, which shows the AI approach requires training. Maybe some day it'll get good?
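Measuring that is straightforward when a ground-truth native render is available; a minimal sketch of the kind of per-pixel comparison meant here (PSNR included as a common alternative; the frame names in the comment are placeholders):

```python
import numpy as np

def percent_deviation(test: np.ndarray, truth: np.ndarray) -> float:
    """Mean absolute per-pixel deviation from ground truth, as a percentage
    of the full value range. Arrays are HxWx3 floats in [0, 1]."""
    return float(np.abs(test - truth).mean() * 100.0)

def psnr(test: np.ndarray, truth: np.ndarray) -> float:
    """Peak signal-to-noise ratio in dB (higher means closer to ground truth)."""
    mse = float(np.mean((test - truth) ** 2))
    return float("inf") if mse == 0 else 10.0 * np.log10(1.0 / mse)

# e.g. compare a DLSS frame and a raw 75%-scale frame against a native capture:
# percent_deviation(dlss_frame, native_frame) vs percent_deviation(scaled_frame, native_frame)
```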
 
If we were to approach 'worse' empirically, we'd measure percentage deviation from ground truth. A quick eyeballing shows DLSS is faring far worse there. It's losing all the high-frequency detail. It would appear the AI can make sense of what information is needed, but cannot restore fine details. Metro shows notable blurring. Battlefield V shows corrupted detailing and softness. It's interesting that the two respond differently, though, which shows the AI approach requires training. Maybe some day it'll get good?

The better it gets, the more training it requires and the more hardware time it costs at runtime, because the neural net has to get larger.

Neural net image reconstruction for games has interesting applications. The idea of reducing motion blur artifacts is neat, if perhaps not the most noticeable thing in the world.

Denoising raytracing is also interesting, but there temporal reconstruction might also win. More interesting is temporal reconstruction + deep learning, for both the primary image and raytracing. But the error in the image might compound and create weird artefacts, just like variable rate shading does (see the Wolfenstein examples for weird crawling textures).
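To illustrate why the error can compound: temporal reconstruction is basically an exponential blend of the current frame with a motion-reprojected history buffer, so anything wrong in the history lingers across many frames. A simplified sketch, leaving out the neighbourhood clamping/rejection real implementations rely on:

```python
import numpy as np

def temporal_accumulate(current, history, motion, alpha=0.1):
    """Blend the current frame with the motion-reprojected history.
    current, history: HxWx3 float images; motion: HxWx2 pixel offsets.
    With alpha = 0.1, a bad history sample still carries ~0.9**n of its
    weight after n frames, which is how artefacts smear or crawl if
    nothing rejects stale samples."""
    h, w = current.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip((ys + motion[..., 1]).round().astype(int), 0, h - 1)
    src_x = np.clip((xs + motion[..., 0]).round().astype(int), 0, w - 1)
    reprojected = history[src_y, src_x]
    return alpha * current + (1.0 - alpha) * reprojected
```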

Ultimately deep learning could be very neat for video games, though perhaps in areas other than image reconstruction like DLSS. Controlling and generating animation in real time is a great application, as building animation systems today is an absolute pain, especially getting them to work right across multiple scenarios. That's one of the biggest runtime uses of deep learning that devs are actually interested in today. There's also, of course, using deep learning for, you know, game AI. But that can be hard to get to "do the thing you want" today; maybe some day, though.
 
If we were to approach 'worse' empirically, we'd measure percentage deviation from ground truth. A quick eyeballing shows DLSS is faring far worse there. It's losing all the high-frequency detail. It would appear the AI can make sense of what information is needed, but cannot restore fine details. Metro shows notable blurring. Battlefield V shows corrupted detailing and softness. It's interesting that the two respond differently, though, which shows the AI approach requires training. Maybe some day it'll get good?
Yes, it certainly can be much improved. The question is how it's been trained and how it's being integrated into the pipeline. Those are some fairly big unknowns, which is one of the issues of having a black-box ML solution. The other issue is figuring out where it's doing well and where it's doing poorly; we've certainly identified where it's doing poorly, but we haven't looked around to see where it's doing great (and I'm sure there will be some areas where it is; that's just intrinsic behaviour for ML applications). And there could be a number of ways to approach that which may not be covered by that black-box solution.
 
Metro will get a day one patch which will improve DLSS over the review code:
  • Tuned DLSS sharpness to improve image quality
  • Updated learned data for DLSS to improve image quality with DLSS on
https://www.metrothegame.com/news/patch-notes-summary/

I'm still surprised about the bad quality in Metro and Battlefield. The implementation in both games is a long way from FF15, which looks ultra sharp and crisp with DLSS...
 
Metro will get a day one patch which will improve DLSS over the review code:

https://www.metrothegame.com/news/patch-notes-summary/

I'm still surprised about the bad quality in Metro and Battlefield. The implementation in both games is a long way from FF15, which looks ultra sharp and crisp with DLSS...

Big surprise: a game with a quasi-infinite variety of viewpoints and a variety of different maps is harder to hallucinate sharply than a demo on rails.
(and FF15 had plenty of undersampling artefacts, as discussed before)
DLSS is not an algorithm you write once with a deterministic outcome; it's highly data-dependent, and the more data it needs to predict, the worse it gets on average.
 