Nvidia DLSS 1 and 2 antialiasing discussion *spawn*

How could DLSS even work effectively in multiplayer games if it's already failing in several areas in repeatable benchmarks? Could it be like a variable quality thing depending on what you are doing?
I think I would rather wait for DF comparisons; they use uncompressed images. I don't trust YouTube's compression, especially with two side-by-side videos.
 
It's possible the developer did not provide enough AI samples in the time given for DLSS to formulate a complete picture. If samples are not provided, then it must "guesstimate" based on what it has stored. Under normal circumstances this knowledge should accumulate over time.
Huh? No.
DLSS support means NVIDIA has crunched the 64xSSAA renders and the information is included with the drivers. The only way it could improve over time is if the dev and NVIDIA crunched more and more iterations and then delivered the updated information in new drivers, but this is a pre-baked benchmark, so there shouldn't be any need for that.
 
Huh? No.
DLSS support means NVIDIA has crunched the 64xSSAA renders and the information is included with the drivers. The only way it could improve over time is if the dev and NVIDIA crunched more and more iterations and then delivered the updated information in new drivers, but this is a pre-baked benchmark, so there shouldn't be any need for that.

Exactly, benchmarks with minimal variation between runs are the best-case scenario for DLSS.
 
I think I would rather wait for DF comparisons; they use uncompressed images. I don't trust YouTube's compression, especially with two side-by-side videos.

This video is running with vp9 compression in the 2160p60 preset:

[screenshot of the video's codec info showing vp9]


That's perfectly fine for comparisons until we get a better source.
 
Also, perhaps more damning, is the fact that DLSS apparently breaks the depth-of-field blur. Notice how sharp those rocks, bushes, trees etc. are in the background with DLSS but blurry with TAA? They're also blurry without any AA (I tested the demo with TAA vs. no AA myself), so DLSS apparently either breaks the effect completely or just guesstimates it should be sharp when the developer didn't mean it to be.
Does the demo have a no-AA option? The info I read is that settings are locked at highest in the demo; you can only switch between TAA and DLSS.
This video is running with vp9 compression in the 2160p60 preset:
Not perfect enough IMO.
 
Does the demo have a no-AA option? The info I read is that settings are locked at highest in the demo; you can only switch between TAA and DLSS.
The public version of the benchmark does have None, TAA and FXAA; the DLSS-comparison version might have different options, but the demo looks to be identical otherwise.
[screenshot of the benchmark's anti-aliasing options]
 
However, since the training data is 64x supersampling in this case, the 2K frame upscaled to 4K may look better than the natively rendered 4K frame. Training is a crucial factor that ultimately contributes to the best possible result: the more training, the better the algorithm.
...
A comparison is of course still difficult. We set the bitrate for the recording to the maximum, but YouTube serves the videos at a lower bitrate. From our own experience we can say that the output of TAA and DLSS hardly differs. Final Fantasy XV shows flickering grass when DLSS is active, but the TAA filter often blurs a few details. Unfortunately, the FFXV benchmark does not always run identically, so the timecodes drift further and further apart.

We would like to emphasize once again that only DLSS 1x is currently active. This is the variant that renders at a lower resolution and then upscales.

https://www.hardwareluxx.de/index.p...tx-2080-founders-edition-im-test.html?start=7
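For what it's worth, the "64x supersampling" ground truth is easy to make concrete: each target pixel is just the average of an 8x8 grid of samples. A toy numpy sketch under that reading (the tile size and the random array are stand-ins for illustration, not how NVIDIA's pipeline actually works):

```python
import numpy as np

# Toy sketch: build a "64x supersampled" ground-truth tile by averaging each
# 8x8 block of samples down to one pixel. The tile size is illustrative; the
# random array stands in for a frame rendered at 8x the target res per axis.
out_h, out_w, factor = 135, 240, 8            # 8 * 8 = 64 samples per pixel
hires = np.random.rand(out_h * factor, out_w * factor, 3).astype(np.float32)

# Give the 8x8 sample grid its own axes, then average over them
target = hires.reshape(out_h, factor, out_w, factor, 3).mean(axis=(1, 3))
print(target.shape)                           # (135, 240, 3)
```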
 
Have any reviewers mentioned DLSS reducing IQ at all in their reviews when using the cards and monitor?
Assuming you mean image quality: yes, reviewers have criticized it for reducing image quality at least in some portions, regardless of whether they think it's worth it or better overall.
ComputerBase is quite harsh, especially regarding the Infiltrator demo, and Tom's mentioned shimmering in FFXV in certain spots.
 
The public version of the benchmark does have None, TAA and FXAA; the DLSS-comparison version might have different options, but the demo looks to be identical otherwise.
Then this is not a valid comparison. Your point will only become valid once you compare against no AA on this version of the demo.
 
Then this is not a valid comparison. Your point will only become valid once you compare against no AA on this version of the demo.
True enough, but since such a version of the demo apparently doesn't exist, that comparison can't be had. I would still argue it's extremely unlikely that a new build would change the depth-of-field effect radically yet end up looking the same with TAA.
 
Assuming you mean image quality: yes, reviewers have criticized it for reducing image quality at least in some portions, regardless of whether they think it's worth it or better overall.
ComputerBase is quite harsh, especially regarding the Infiltrator demo, and Tom's mentioned shimmering in FFXV in certain spots.
At this point I think all we can do is wait for games to come out. In the same review Tom's also mentioned ghosting artifacts in TAA, which are avoided with DLSS.
 
At this point I think all we can do is wait for games to come out. In the same review Tom's also mentioned ghosting artifacts in TAA, which are avoided with DLSS.

DLSS is going to have artifacts; it's kind of worse than TAA in a fundamental way, really. Can't wait to see how much popping previously-subpixel geo with a big color delta vs. the surrounding pixels is going to cause. Not to mention things like alpha and reflections. In short, it's just more Nvidia PR overhype. Deep-learning AA will probably be great one day when combined properly with TAA and used in a smart manner, but Nvidia's implementation isn't that. The terrifying "ghosting" is just an artifact of older TAA that's been largely taken care of in newer, better TAA implementations.
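To make the ghosting point concrete: naive TAA blends each frame into an exponential history buffer, so anything that moves leaves a decaying trail behind it. A toy 1-D sketch (the blend factor is an arbitrary illustrative value):

```python
import numpy as np

# Toy 1-D sketch of naive temporal accumulation (TAA-style history blend).
# A bright "object" one pixel wide moves right; blending each frame into an
# exponential history buffer leaves a decaying trail (the "ghost") behind it.
width, frames, alpha = 16, 6, 0.2   # alpha = weight given to the current frame

history = np.zeros(width)
for t in range(frames):
    current = np.zeros(width)
    current[t] = 1.0                 # object at position t this frame
    history = alpha * current + (1 - alpha) * history

print(np.round(history, 3))
# The object now sits at index 5, but indices 0..4 still hold fading copies:
# the ghost trail that newer TAA suppresses with history rejection/clamping.
```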

Regardless, good job by Anandtech as usual. The reviews show up the biggest problem RTX has: it isn't designed around the fact that transistors haven't gotten cheaper for years now. The design uses way too much silicon that can just sit there totally unused depending on the title, and even future titles with more raytracing and deep learning than today's will still probably leave silicon sitting unused at times. Considering the cost of that unused silicon is charged directly to consumers, it's probably a smarter play to build a GPU with flexible compute units, even at the cost of some efficiency, than a ton of specialized units that add to the cost without adding performance when not in use (at least outside mobile).
 
DLSS is going to have artifacts; it's kind of worse than TAA in a fundamental way, really. Can't wait to see how much popping previously-subpixel geo with a big color delta vs. the surrounding pixels is going to cause. Not to mention things like alpha and reflections. In short, it's just more Nvidia PR overhype. Deep-learning AA will probably be great one day when combined properly with TAA and used in a smart manner, but Nvidia's implementation isn't that. The terrifying "ghosting" is just an artifact of older TAA that's been largely taken care of in newer, better TAA implementations.

The true measure of how successful DLSS is will be the number of games that adopt it, and it currently looks like that's already on the way to success. Time will tell, but for now it doesn't seem to be stopping people from buying RTX cards and the tech they bring. :smile2:
 
If that's really what it is, there's no way to make a valid comparison against anything else; it's just a nice option to have for people who don't care.
I was hoping they'd be using lower-res images only for the anti-aliasing portion of DLSS, not for the normal rendering process.

I understand that with DLSS 2x they are using native resolution?

There is a whole bunch of deceptive naming going on IMHO.

DLSS is basically rendering at a lower resolution than target resolution
"DLSS allows faster rendering at a lower input sample count, and then infers a result that at target resolution is similar quality to the TAA result, but with roughly half the shading work." (quote from white paper)
That implies DLSS very likely uses something like checkerboard rendering, shading only half of the output pixels.
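Quick sanity check on that reading (the input resolution here is inferred from "roughly half the shading work", not stated anywhere): half the pixels works out to a per-axis scale of sqrt(0.5) ≈ 0.707, which for a 4K target lands near 1440p:

```python
import math

# "Roughly half the shading work" => about half the pixels get shaded,
# i.e. a per-axis scale factor of sqrt(0.5) ~= 0.707.
target_w, target_h = 3840, 2160
scale = math.sqrt(0.5)

input_w, input_h = round(target_w * scale), round(target_h * scale)
print(input_w, input_h)                             # 2715 x 1527, near 2560x1440
print((input_w * input_h) / (target_w * target_h))  # ~0.5 of the target pixels
```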

As for DLSS 2x, this would be the more honest comparison relative to TAA, as here there is no undersampling:
"We provide a second mode, called DLSS 2X. In this case, DLSS input is rendered at the final target resolution"
 
Nonsense. Traditional post-AA works by lowering the effective resolution (significantly reducing entropy), i.e. blurring the image.

DLSS uses deep learning to reconstruct additional entropy from what starts as a lower resolution.

In reality, it's superior in every important way.

The only parts of a scene that are "higher" resolution with traditional post-AA compared to DLSS are the areas the post-AA doesn't work properly on and therefore weren't properly "blurred" in the first place (and this shows as aliasing and other obvious artifacts).
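The "reducing entropy" part can even be measured: low-pass filtering an image demonstrably lowers the Shannon entropy of its intensity distribution. A toy numpy check, with a 3x3 box blur standing in for the blurring character of a post-process AA:

```python
import numpy as np

def entropy(img, bins=64):
    # Shannon entropy of the image's intensity histogram, in bits
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

rng = np.random.default_rng(0)
img = rng.random((256, 256))          # noisy, high-frequency test image

# 3x3 box blur built from shifted copies, standing in for a post-AA low-pass
pad = np.pad(img, 1, mode="edge")
blurred = sum(pad[i:i+256, j:j+256] for i in range(3) for j in range(3)) / 9.0

print(entropy(img), entropy(blurred))  # the blurred image has lower entropy
```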
 
Nonsense. Traditional post-AA works by lowering the effective resolution (significantly reducing entropy), i.e. blurring the image.

DLSS uses deep learning to reconstruct additional entropy from what starts as a lower resolution.

In reality, it's superior in every important way.

The only parts of a scene that are "higher" resolution with traditional post-AA compared to DLSS are the areas the post-AA doesn't work properly on and therefore weren't properly "blurred" in the first place (and this shows as aliasing and other obvious artifacts).

You probably missed the DLSS/TAA screenshot comparison post in this thread; it's not difficult to spot the blur DLSS causes relative to TAA. Undersampling high-frequency image data is never a good idea, as Nyquist taught us a long time ago.
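The Nyquist point is easy to demonstrate: sample a signal above half the sampling rate and what you record is indistinguishable from a spurious low frequency, which no amount of post-processing can undo. A minimal numpy illustration:

```python
import numpy as np

# Undersampling a high-frequency signal: a 9 Hz sine sampled at 10 Hz
# (below the 18 Hz Nyquist requirement) aliases down to a 1 Hz sine.
fs, f_signal = 10.0, 9.0
t = np.arange(0, 2, 1 / fs)                      # 2 seconds of samples at 10 Hz
samples = np.sin(2 * np.pi * f_signal * t)

alias = np.sin(2 * np.pi * (fs - f_signal) * t)  # the 1 Hz imposter
print(np.allclose(samples, -alias))              # True: indistinguishable
```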
 
You probably missed the DLSS/TAA screenshot comparison post in this thread; it's not difficult to spot the blur DLSS causes relative to TAA. Undersampling high-frequency image data is never a good idea, as Nyquist taught us a long time ago.
Doesn't it seem the fence is more blurred with TAA?
Edit: Granted, the grass is more blurred with DLSS in that screenshot, but it does not appear that way throughout the video.
 