Nvidia DLSS 1 and 2 antialiasing discussion *spawn*

DLSS requires the engine to support motion vectors and a UI rendering pipeline separate from the game resolution. So it's not a post-processing solution at all.
Temporal AA also requires motion vectors and (if done properly) is applied before the UI is composited onto the 3D scene. It's still a post-process either way: it comes after rasterization and shading.
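To make that concrete, here's a minimal sketch of a temporal AA resolve — not any particular engine's implementation, just the core idea of motion-vector reprojection plus an exponential blend, run after shading but before UI compositing:

```python
import numpy as np

def taa_resolve(current, history, motion, alpha=0.1):
    """Minimal temporal AA resolve sketch (illustrative, not a real engine's).

    current : (H, W, 3) shaded frame for this tick
    history : (H, W, 3) accumulated result from previous frames
    motion  : (H, W, 2) per-pixel motion vectors in pixel units
    alpha   : weight of the new frame in the exponential blend
    """
    h, w, _ = current.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Reproject: fetch where each pixel was last frame (nearest-neighbour here;
    # real implementations use bilinear taps plus neighbourhood clamping).
    py = np.clip((ys - motion[..., 1]).round().astype(int), 0, h - 1)
    px = np.clip((xs - motion[..., 0]).round().astype(int), 0, w - 1)
    reprojected = history[py, px]
    # Exponential moving average: most of the image comes from history,
    # which is how sub-pixel detail accumulates over time.
    return alpha * current + (1.0 - alpha) * reprojected
```

This is also why the UI needs its own path: blending the HUD into the history buffer would smear it under motion.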
 
You're aware I'm using an upscaled image to enquire about alternative reconstruction methods, right? Vipa889 made some assertions that DLSS was better, more important for consoles, and had more room to grow. I tried to engage Vipa in a discussion to explain his thinking behind that: why is DLSS better than other consoles' upscaled games? Given an example of an upscaled game, what are the issues with Insomniac's implementation that DLSS solves?

Because at present, it appears other upscaling systems are just as capable as DLSS, and it's unclear what advantages, if any, DLSS has over other reconstruction techniques.
One advantage of traditional methods is that they are easily tweakable: programmers can make small changes and see the effect immediately.
DLSS, by contrast, apparently needs to be retrained when the game has changed enough, and training takes time.

Also, there hasn't been a single DLSS game that uses dynamic source resolution, which is something that's quite easy to do with temporal upsampling/injection.
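Dynamic resolution is conceptually simple with a hand-written temporal upsampler: pick the render scale each frame from the last GPU time and feed it into the same resolve. A hypothetical controller might look like this (all the numbers here are assumptions for illustration):

```python
def choose_render_scale(gpu_ms, budget_ms=16.6, scale=1.0,
                        lo=0.5, hi=1.0, step=0.05):
    """Illustrative dynamic-resolution controller (numbers are assumptions).

    Nudges the render scale down when the last frame blew the budget and
    back up when there is headroom, clamped to [lo, hi].
    """
    if gpu_ms > budget_ms:           # over budget: drop resolution next frame
        scale -= step
    elif gpu_ms < 0.85 * budget_ms:  # comfortable headroom: creep back up
        scale += step
    return min(hi, max(lo, scale))
```

This works because an in-engine temporal resolve handles arbitrary input resolutions frame to frame, whereas a network trained at a fixed input resolution can't trivially do the same.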
 
From an io-tech user. According to him the performance increase isn't really that high in most places; shots were taken in spectator mode.
1440p TAA:
0IZhVSB.jpg

1440p DLSS:
AS0LJzJ.jpg

1440p TAA:
ml2e54k.jpg

1440p DLSS:
gytk4bK.jpg

Small details, as expected, are a fairly blurry mess compared to native + TAA. Also, DLSS is only available with DXR enabled.
 
That said, it has its fair share of faults, for those claiming DLSS is superior to other methods. Not too bad, but there's some nasty noise in places, and a horizontal line across the bottom.

Image1.png

What's it like in motion?
 
From an io-tech user. According to him the performance increase isn't really that high in most places; shots were taken in spectator mode.
Small details, as expected, are a fairly blurry mess compared to native + TAA. Also, DLSS is only available with DXR enabled.
Did the user mention which drivers they used? GeForce 418.91 WHQL is mandatory for DLSS to work with BFV.
 
Agreed on the call for motion. How it looks in motion is going to matter a lot. Temporal is going to look pretty close to native if the scene is static.
60 (DLSS) vs 39 (TAA)
71 (DLSS) vs 53 (TAA)

That is also, imo, very significant. It's basically the difference between a console being locked at 60 vs locked at 30.
The biggest takeaway for me is seeing DXR running alongside DLSS and still gaining performance. This is what I had predicted, and I'm glad to see it happening. I can only hope to see more performance improvement over time.

Getting DLSS/equivalent up to 4K is the next milestone I need to see.
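For context, working out the two quoted samples gives roughly a 34-54% frame-rate uplift:

```python
# Speedup implied by the two quoted samples (DLSS fps vs TAA fps).
samples = [(60, 39), (71, 53)]
for dlss, taa in samples:
    print(f"{dlss} vs {taa}: {100 * (dlss / taa - 1):.0f}% faster")
# 60 vs 39: 54% faster
# 71 vs 53: 34% faster
```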
 
Why the separation of resolutions based on the card models?
And why the hell is RT required, as if it's related at all? See FFXV and Metro:

Metro Exodus NVIDIA DLSS

  • DXR Ray tracing is NOT required to turn on DLSS and you can enable/disable DXR Ray Tracing and DLSS independently in some settings.
  • DLSS increases performance up to 30%.
  • BUG: The Metro Exodus benchmark does not currently support DLSS
  • DLSS is a continuously improving feature as NVIDIA will continue to train the network and deploy new software updates.

Metro Exodus DLSS Support Matrix

              1080p             1440p             4K
RTX 2060      DLSS with RT On   DLSS with RT On   No DLSS
RTX 2070      DLSS with RT On   DLSS with RT On   DLSS with RT On or Off
RTX 2080      No DLSS           DLSS with RT On   DLSS with RT On or Off
RTX 2080 Ti   No DLSS           DLSS with RT On   DLSS with RT On or Off
 
I think that is a studio decision, since FF XV had only DLSS. I could be wrong, but I also read that in BFV there's no ability to turn off TAA, so DLSS is applied on top of TAA.
 
Agreed on the call for motion. How it looks in motion is going to matter a lot. Temporal is going to look pretty close to native if the scene is static.
60 (DLSS) vs 39 (TAA)
71 (DLSS) vs 53 (TAA)

That is also, imo, very significant.
Sure, but in comparison to other upscaling, I'm not sure it's anything special. Considering how well HZD et al. are upscaled on paltry <2 TF GPUs, upscaling on large GPUs using more cycles could be notably better. Which is important, because it's not dependent on a hardware feature but can be included in any game engine. Here's Intel exploring checkerboarding (August 2018). I'm struggling to see an argument in favour of DLSS as opposed to games handling upscaling themselves. It's not a simple drop-in solution, meaning DLSS is a platform-exclusive solution devs have to integrate. If they're going to integrate reconstruction, they may as well do so from the beginning in their own engine and have it work on every GPU from every manufacturer.
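To make "games handling upscaling themselves" concrete, here's a bare-bones checkerboard sketch: render half the pixels each frame in an alternating pattern and fill the holes from the previous reconstructed frame. (Real implementations also reproject the history with motion vectors and resolve occlusions; this omits all of that.)

```python
import numpy as np

def checkerboard_merge(current_half, previous_full, frame_index):
    """Illustrative checkerboard reconstruction (no motion reprojection).

    current_half  : (H, W, 3) frame where only one checker phase was rendered
    previous_full : (H, W, 3) last reconstructed frame
    frame_index   : parity selects which checker phase is fresh this frame
    """
    h, w, _ = current_half.shape
    ys, xs = np.mgrid[0:h, 0:w]
    fresh = ((ys + xs) % 2) == (frame_index % 2)  # pixels rendered this frame
    out = previous_full.copy()
    out[fresh] = current_half[fresh]              # stale pixels come from history
    return out
```

Each output frame shades only half the pixels, which is where the performance comes from — and it runs on any GPU with no special hardware.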
 
Sure, but in comparison to other upscaling, I'm not sure it's anything special. Considering how well HZD et al. are upscaled on paltry <2 TF GPUs, upscaling on large GPUs using more cycles could be notably better. Which is important, because it's not dependent on a hardware feature but can be included in any game engine. Here's Intel exploring checkerboarding (August 2018). I'm struggling to see an argument in favour of DLSS as opposed to games handling upscaling themselves. It's not a simple drop-in solution, meaning DLSS is a platform-exclusive solution devs have to integrate. If they're going to integrate reconstruction, they may as well do so from the beginning in their own engine and have it work on every GPU from every manufacturer.
There are pros and cons to every type of solution. If it were so easy to do Spiderman's TAA or HZD's TAA, we'd see it in all games as well. It's not like Rockstar doesn't know how to implement checkerboard/TAA-type solutions, but look at how that turned out for them. At this point in time there is no single magic solution that bests all, it would appear, though I still think it's early days for ML-type solutions.

In particular, DLSS is locked to nvidia; it's their specific variant of the idea. There will be open variants that undergo similar advancement under DirectML, which gives developers the chance to develop their own form of deep-learning super sampling that could net better results customized to their game.

As for why not leverage more GPU cycles: once again, I guess the point is to put those resources elsewhere. In the case of doing ray tracing you need all the available GPU you have, so DLSS is an acceptable solution, as it's rendering at 1080p and upscaling to 1440p, whereas 1440p TAA renders a full 1440p.
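The pixel-count arithmetic behind that trade-off: rendering at 1080p and reconstructing to 1440p shades only about 56% of the pixels that native 1440p TAA does, and that headroom is what gets handed to ray tracing.

```python
# Shaded-pixel ratio between the DLSS input resolution and native 1440p.
px_1080p = 1920 * 1080   # 2,073,600 pixels
px_1440p = 2560 * 1440   # 3,686,400 pixels
print(f"1080p shades {px_1080p / px_1440p:.0%} of 1440p's pixels")
# 1080p shades 56% of 1440p's pixels
```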
 
There are pros and cons to every type of solution. If it were so easy to do Spiderman's TAA or HZD's TAA, we'd see it in all games as well. It's not like Rockstar doesn't know how to implement checkerboard/TAA-type solutions, but look at how that turned out for them. At this point in time there is no single magic solution that bests all, it would appear, though I still think it's early days for ML-type solutions.
No-one said there was. The concern here is that a solution tied to nVidia RTX GPUs doesn't sound particularly worth devs investing in versus working on their own upscaling systems. DXR raytracing methods that'll be platform agnostic are fine. DML-based upscaling using their own solutions would be fine. But platform-specific, especially with limited returns, doesn't seem a great investment to me.

For devs, it's an obvious dead-end. Why bother working on DLSS support for a tiny subset of the market, one that'll never grow to be a major part of it? Use checkerboarding, and open ML solutions when they're possible. For gamers, DLSS was a selling point of the hardware, but if it gains little use and is superseded by in-game reconstruction, it was a poor feature to be sold on. It's, again, a pointer for me to a solution looking for a problem. RTX cards were going to have AI cores for their AI markets. nVidia then went looking for a way to use them to sell the cards to gamers. They'd have been better off creating reconstruction methods in compute as part of their libraries (you hint it should be quite straightforward and undemanding to implement ML models on compute anyway).

Had DLSS literally been a simple post-effect applied on top of the game, it'd make sense, but what we've seen so far doesn't justify its marketing prominence in my mind.
 
No-one said there was. The concern here is that a solution tied to nVidia RTX GPUs doesn't sound particularly worth devs investing in versus working on their own upscaling systems. DXR raytracing methods that'll be platform agnostic are fine. DML-based upscaling using their own solutions would be fine. But platform-specific, especially with limited returns, doesn't seem a great investment to me.

For devs, it's an obvious dead-end. Why bother working on DLSS support for a tiny subset of the market, one that'll never grow to be a major part of it? Use checkerboarding, and open ML solutions when they're possible. For gamers, DLSS was a selling point of the hardware, but if it gains little use and is superseded by in-game reconstruction, it was a poor feature to be sold on. It's, again, a pointer for me to a solution looking for a problem. RTX cards were going to have AI cores for their AI markets. nVidia then went looking for a way to use them to sell the cards to gamers. They'd have been better off creating reconstruction methods in compute as part of their libraries (you hint it should be quite straightforward and undemanding to implement ML models on compute anyway).

Had DLSS literally been a simple post-effect applied on top of the game, it'd make sense, but what we've seen so far doesn't justify its marketing prominence in my mind.
I agree. I use DLSS as a catch-all term; nvidia shouldn't be able to trademark Deep Learning Super Sampling, as it's not a name, it's an actual function that is happening. I do agree that DLSS will be a dead end, though, compared to the DML solutions arriving later. However, nvidia supports DML, so it's not that they're out of the picture; it's just that they're running ahead to sell the solution (their cards) while the industry works its way toward supporting a platform-agnostic solution.
 
DLSS makes the game faster only when the frame rate is low. If the frame rate is high, the cost of running the neural network, even with tensor cores, will dominate the rendering time, meaning you won’t see a performance improvement.
But I thought this was being sold as a quality improvement as well? All the marketing for the tech shows enhanced detail using DLSS instead of TAA. 2080 Ti users running high-framerate 1440p displays aren't allowed to use it? Even though "4K" DLSS is rendering at 1440p and upscaling, since it's output at 4K that option isn't available?

Especially given the performance impact of raytracing, I would have thought a 1440p display on a 2080 Ti with RTX and DLSS would be a great combination for maintaining high framerates.
 
It seems not many games aside from HZD and Spiderman have this good a reconstruction method? 20 titles total?
Because it's new tech. You can only iterate once per title, and every title takes a few years to make, so there's a long learning cycle. Guerrilla's first attempt was Killzone Shadow Fall, 2013. The second attempt was HZD, 2017, so a four-year iteration on what they learnt. Like raytracing, not every game will have it, even for years to come. We can say the same of dynamic resolution and a host of effects/techniques, including future mesh shaders and compute-based rendering.

However, being platform agnostic, reconstruction — whether performed via compute or machine learning — is going to end up everywhere, including on all nVidia's former GPUs. Research into reconstruction will be shareable, and the art will improve with the whole industry's progress. Reconstruction will find its way into Unreal and Unity alongside Decima et al., and games using these engines will benefit, just as they will from whatever RT developments get included. Games made 2-3 years from now using reconstruction on compute will be playable with reconstruction and better framerates on 1080s etc.

DLSS is going to remain incredibly niche, to the point I think it'll die off for the reasons stated. I see no incentive for a dev to include it beyond nVidia doing it for them. If the performance were astronomically better, or it were a simple drop-in bonus feature for RTX cards, it'd have some justification, but that's not the case at all at the moment.

Perhaps some good will come of this with nVidia eventually releasing their DLSS research at a time when GPUs can use DirectML?
 
DLSS makes the game faster only when the frame rate is low. If the frame rate is high, the cost of running the neural network, even with tensor cores, will dominate the rendering time, meaning you won’t see a performance improvement.
Are there numbers on the cost of running the NN? PS4 Pro reconstruction seems to be about 2 ms on a 3.6 TF console. Actually, I can't find ms numbers for the upscale process, but anyway, PS4 Pro can upscale from half res to 4K in a few ms. The same upscale will take less time on a faster GPU, so it should be all of 1 ms on a fast GPU. I'd want an AI solution to be as fast as that, or significantly beneficial in some other way if slower.
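The fixed-cost argument can be made concrete with a toy model: rendering at a lower internal resolution saves time proportional to the frame, while the reconstruction pass costs a roughly constant number of ms per frame. At long frame times the saving dominates; at short frame times the fixed cost can erase the gain entirely. All numbers below are illustrative assumptions, not measurements:

```python
def fps_with_upscale(native_ms, render_fraction, fixed_cost_ms):
    """Toy model: render at reduced resolution, pay a fixed reconstruction cost.

    native_ms       : frame time at the native output resolution
    render_fraction : relative cost of the lower internal resolution (e.g. 0.56)
    fixed_cost_ms   : constant per-frame cost of the reconstruction pass
    """
    return 1000.0 / (native_ms * render_fraction + fixed_cost_ms)

# Assumed numbers: 56% render cost (1080p -> 1440p), 3 ms NN pass.
print(fps_with_upscale(30.0, 0.56, 3.0))  # ~51 fps vs ~33 fps native: big win
print(fps_with_upscale(6.0, 0.56, 3.0))   # ~157 fps vs ~167 fps native: a loss
```

Under these assumptions the break-even sits where the resolution saving equals the fixed cost, which is exactly why a 1-2 ms compute-based resolve is such a demanding bar for an NN pass to match.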
 