Nvidia DLSS 1 and 2 antialiasing discussion *spawn*

DLSS performance:

[Charts: Infiltrator average FPS and Final Fantasy XV Benchmark average FPS]

https://techreport.com/review/34105/nvidia-geforce-rtx-2080-ti-graphics-card-reviewed/13
 
Digital Foundry's assessment of DLSS:

We'll be posting screenshot comparisons in due course, but myself and colleagues John Linneman and Alex Battaglia are agreed that in the Final Fantasy 15 demo at least, DLSS is not only providing these big improvements to performance, but it's delivering more detail and fewer artefacts than the game's standard TAA. It's not quite so clear-cut with Epic's Infiltrator demo - UE4 features one of the best temporal anti-aliasing solutions around - but the fact that it's so competitive is testament to DLSS's quality.

https://www.eurogamer.net/articles/digitalfoundry-2018-9-19-geforce-rtx-2080-2080-ti-review?page=2
 
So DLSS is essentially rendering at a lower resolution, then using the tensor units to "imagine" a native-res image based on a model trained offline at Nvidia? If it really works as well as reviewers claim, this is a killer use of die area.
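Roughly how I picture the pipeline, as a toy sketch only: the 1440p internal resolution is my guess, and plain nearest-neighbour filtering stands in for the trained network, which is exactly the part Nvidia keeps to themselves.

```python
# Toy sketch of the DLSS idea as described in the reviews: render fewer
# pixels, then reconstruct a native-res frame with a model trained offline.
# Nothing here is Nvidia's actual pipeline; the "model" is just a
# nearest-neighbour resize used as a placeholder.
import numpy as np

def render_scene(width, height):
    # Stand-in for the game's renderer: a procedural HxWx3 float image.
    y, x = np.mgrid[0:height, 0:width]
    r = np.sin(x / 17.0) * 0.5 + 0.5
    g = np.cos(y / 23.0) * 0.5 + 0.5
    b = ((x + y) % 64) / 64.0
    return np.stack([r, g, b], axis=-1)

def reconstruct(low_res, out_w, out_h):
    # Placeholder for the trained reconstruction network (a tensor-core DNN
    # in the real thing); here it is just nearest-neighbour upsampling.
    in_h, in_w, _ = low_res.shape
    ys = np.arange(out_h) * in_h // out_h
    xs = np.arange(out_w) * in_w // out_w
    return low_res[ys][:, xs]

native_w, native_h = 3840, 2160       # resolution the user selected
internal_w, internal_h = 2560, 1440   # assumed internal render resolution

low = render_scene(internal_w, internal_h)    # ~44% of the native pixel work
frame = reconstruct(low, native_w, native_h)  # "imagined" native-res output
print(frame.shape)                            # (2160, 3840, 3)
```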
 

Some reviewers already found shortcomings:
"In the clip below, we make two observations. First, Noct’s textured shirt is affected by banding/shimmering due to DLSS. In the TAA version, his chest does not exhibit the same effect. "
 
Since the Star Wars demo is using DLSS, does that mean it's running at 1/2 of 2560x1440? Does that reduce the ray-tracing load? Also does that mean the tensor units are handling the upscaling and de-noising?
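Some napkin math on the ray-count question. Assumptions only: that primary rays scale with the number of pixels actually shaded, and that "1/2 of 2560x1440" means half per axis rather than half the pixel count.

```python
# If the demo really renders at a reduced internal resolution, the primary
# ray / shading work should drop with the pixel count. The 0.5 scale factor
# below is the hypothetical "1/2" from the question, applied per axis.
selected_w, selected_h = 2560, 1440
scale = 0.5  # hypothetical per-axis scale; not confirmed by Nvidia

native_pixels = selected_w * selected_h
internal_pixels = int(selected_w * scale) * int(selected_h * scale)

print(f"selected-resolution pixels: {native_pixels:,}")       # 3,686,400
print(f"internal pixels at 0.5x:    {internal_pixels:,}")     # 921,600
print(f"relative ray/shading load:  {internal_pixels / native_pixels:.0%}")  # 25%
# If "1/2" instead means half the pixel count, the load would be ~50%.
```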
 
So, is DLSS double-confirmed to be rendering at lower than the selected resolution? Did someone test it with DLSS vs. no AA and still get better results?
 
Yeah, someone ran it on the 1080Ti and 2080Ti: The 2080Ti was 6 to 7 times faster with 1440p DLSS.

https://vimeo.com/290465222

Also here:

[Chart: Star Wars demo at 1440p]


https://pclab.pl/art78828-20.html
(We know RTX is much faster at ray tracing. I'm interested in the card/technology, but I plan on waiting for a 7 nm card, as I expect a much higher performance delta in rasterization/ray tracing due to the large expected increase in the number of transistors.)
My questions were:
1) How did they manage to run that on a GTX?
2) Does the GTX use TAA or DLSS in that demo?
 

This review has a bunch of image comparisons between DLSS and TAA. At least from the second set of images (the boot one), the textures on the ground look to be lower resolution in DLSS than in TAA. So it could be rendered at a lower resolution.
 
It has to be; how else would you get a 50% boost over native? :/

Digital Foundry article says "At the most fundamental level, a lower resolution image is generated, and then a deep learning algorithm programmatically upscales the image based on 'learnt behaviour.'"

Kinda sneaky they are calling it supersampling.
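Quick arithmetic on where a boost like that could come from, assuming (my assumption, in line with the DF description above) that the 4K DLSS output is reconstructed from roughly a 1440p internal render:

```python
# Rough numbers only. Assumed internal resolution: 2560x1440 for a
# 3840x2160 (4K) DLSS output; neither figure is confirmed here.
native_pixels = 3840 * 2160     # 8,294,400
internal_pixels = 2560 * 1440   # 3,686,400

print(f"internal / native pixel count: {internal_pixels / native_pixels:.2f}")       # 0.44
print(f"ceiling speedup if purely pixel-bound: {native_pixels / internal_pixels:.2f}x")  # 2.25x
# A measured ~1.5x is plausible once you subtract per-frame work that does
# not scale with resolution and the cost of the DLSS pass itself.
```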
 
If that's really what it is, there's no way to make a valid comparison against anything else; it's just a nice option to have for people who don't care.
I was hoping they'd be using lower-res images only for the anti-aliasing portion of DLSS, not for the normal rendering process.

I understand with DLSS 2x, they are using native resolution?
 
Kinda disappointing that DLSS is included in the ray-tracing comparison.
Yes, and they must have hacked the driver, because the 411.63 WHQL at least does not support DXR on anything that is not Turing or Volta. But maybe the review drivers were different.
 

From ComputerBase's test: I haven't watched the whole video yet, but there's immediately a stark difference where DLSS's lower resolution becomes apparent; all the text (like the license plate) looks terrible compared to native UHD + TAA.
Also, perhaps more damning, is the fact that DLSS apparently breaks the depth-of-field blur. Notice how sharp those rocks, bushes, trees etc. are in the background with DLSS but blurry with TAA? They're also blurry without any AA (I tested the demo with TAA vs. no AA myself), so DLSS apparently either breaks the effect completely or just guesstimates that it should be sharp when the developer didn't actually mean it to be sharp.
 
I think DLSS looks pretty good in that video. It definitely looks sharper at 4:52 where he's driving a stake into the rock. The rope has much more definition with DLSS.
 
It looks good in many places, but terrible in others. And the fact that it completely loses the DoF effect at the very beginning raises the question of how many of the spots where it has "more definition" etc. are due to the same effect: the artists' intentions getting lost in translation (upscaling).
 
How could DLSS even work effectively in multiplayer games if it's already failing in several areas in repeatable benchmarks? Could it be like a variable quality thing depending on what you are doing?
 
It's possible the developer did not provide enough AI samples in the time given for DLSS to formulate a complete picture. If samples are not provided, then it must guesstimate based on what it has stored. Under normal circumstances this knowledge should accumulate over time.
 