Nvidia DLSS 1 and 2 antialiasing discussion *spawn*

Here's an image posted by Hilbert @ Guru3D, who asks people to compare.
comparison_ff_guru3d.png
 
Obviously an image without motion...
With checkerboard rendering (which DLSS possibly uses), alternating the black/white sample pattern over two frames makes it easy to recover the full resolution.
Images in motion are a very different story.
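A minimal NumPy sketch of that static-image case, assuming a simple per-pixel checkerboard split (the masks and merge are illustrative, not NVIDIA's actual scheme):

```python
import numpy as np

def checkerboard_masks(h, w):
    """Alternating sample masks: frame 0 shades the 'black' squares,
    frame 1 shades the 'white' squares of a per-pixel checkerboard."""
    yy, xx = np.mgrid[0:h, 0:w]
    black = (yy + xx) % 2 == 0
    return black, ~black

def merge_two_frames(frame0, frame1):
    """For a static scene, two half-resolution checkerboard passes cover
    every pixel exactly once, so merging them recovers full resolution."""
    black, white = checkerboard_masks(*frame0.shape)
    out = np.empty_like(frame0)
    out[black] = frame0[black]
    out[white] = frame1[white]
    return out

# Toy example: a static 'ground truth' image sampled over two frames.
truth = np.random.rand(4, 4)
assert np.allclose(merge_two_frames(truth, truth), truth)
# With motion, frame 1's samples no longer match frame 0's scene state,
# and the simple merge breaks down.
```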
BTW the bottom image has noticeably more aliasing in the rear car window at the top; judging by the framerate, this seems to be the DLSS one.
 
Yes, apparently one of the few artifacts. In the TAA image there is more blurring in the bushes on the cliff and in the lettering on the sign; the sign frame and wheel rims also look less defined.

Edit: Hilbert's point is that no AA technique is perfect. There are trade-offs, and each person has to decide which is best based on their gaming needs.
 
Note: this assumes that the public version and the DLSS version of the FFXV benchmark are supposed to look the same.

In the public version the bushes, rocks etc. are blurred even without AA, which suggests they're supposed to be blurred (DoF), and DLSS either completely breaks that or guesses the amount of blurring completely wrong.

--
Also, Digital Foundry's DLSS video is up.
 
Their tests also reveal that the 4K output is rendered at 1440p, so no checkerboarding, but still only about half the pixels.
Instead of supersampling, it's actually subsampling, which first makes aliasing worse.
DLSS then upsamples again, trying to fill in the missing pixels.
The really bad cases prone to aliasing, like long near-vertical or near-horizontal edges, it fails to get right, and aliasing actually gets worse there.
Other high-frequency image data gets lost during the subsampling and becomes a blur during reconstruction.
Some lower-frequency image details it can sharpen to the point where they look OK.
Knowing what it does, I'd rather switch the screen resolution from 4K to 1440p, or buy a card that can really do 4K :)
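To put numbers on the pixel budget DF describes (my arithmetic, not theirs), plus the dumbest baseline any reconstruction, DLSS included, has to beat, here's a NumPy sketch:

```python
import numpy as np

# Digital Foundry's finding: the "4K" DLSS output is shaded at 2560x1440.
native_4k = 3840 * 2160          # 8,294,400 pixels
internal  = 2560 * 1440          # 3,686,400 pixels
print(f"fraction of native 4K actually shaded: {internal / native_4k:.2%}")  # ~44%

def nearest_upscale(img, out_h, out_w):
    """Trivial nearest-neighbour upscale from the internal resolution:
    the naive baseline that any reconstruction has to improve on."""
    h, w = img.shape[:2]
    ys = np.arange(out_h) * h // out_h   # map each output row to a source row
    xs = np.arange(out_w) * w // out_w   # map each output column to a source column
    return img[ys][:, xs]

lowres = np.random.rand(1440, 2560)               # stand-in for a 1440p frame
print(nearest_upscale(lowres, 2160, 3840).shape)  # (2160, 3840)
```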
 
So DLSS handles transparency much better, is overall sharper, and avoids ghosting artifacts. It's also far better than checkerboarding...
But it's also sometimes sharper when it shouldn't be (still assuming they didn't completely change the DoF in the DLSS build of the benchmark, since TAA looks the same in the public and DLSS builds).
 
I agree, but one of the problems is that it only works at UHD. Other resolutions could be added later, but the AI has to be retrained for each specific resolution. The Star Wars raytracing demo is an exception, with DLSS at 1440p.
 
I'm reminded of two techniques Sony uses for some of their hardware. "Mastered in 4K" 1080p Blu-ray discs include some data (metadata?) that helps Sony Blu-ray players better upscale the image for 4K sets. Sony engineers would scan the 4K master and, by applying the formula they came up with, derive a set of data that eliminated the guesswork in upscaling the 1080p Blu-rays made from that master. IIRC reviewers were reasonably impressed by the results.

The other technique really reminds me of the library that DLSS builds up of a game. My Sony TV includes a library of data that suggests how low resolution images should get upconverted. Newer/more expensive sets get bigger and better libraries that Sony keeps improving on.

This makes sense to me, as nVidia/gaming in general is making further incursions into emulating the world of cinematography, so I'm not surprised if there's some seeming overlap.
 
Convolutional networks, unlike fully connected ones, can typically be run at arbitrary resolutions and don't necessarily need re-training.
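A minimal PyTorch sketch of that point, assuming a purely convolutional model (which DLSS may or may not be): the same weights run unchanged on different input sizes, because convolutions don't bake the spatial resolution into the parameters. The toy sizes below stand in for 1080p/1440p/2160p.

```python
import torch
import torch.nn as nn

# Toy fully convolutional "reconstruction" net: no fully connected layers,
# so parameter shapes are independent of the input resolution.
net = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(32, 3, kernel_size=3, padding=1),
)

for h, w in [(108, 192), (144, 256), (216, 384)]:   # scaled-down stand-ins
    x = torch.randn(1, 3, h, w)
    print(net(x).shape)   # same weights, output spatial size follows the input
```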

As for what supersampling means, for instance TAA is a temporally amortized supersampling technique (see Brian Karis' original presentation about UE4 TAA), even if/when we render 1 sample per pixel or fewer.

In a game engine samples can be shaded, re-used (spatially and/or temporally) and even hallucinated (e.g. MLAA, FXAA). What really matters for good image quality is that a high number of (possibly good :eek:) samples is effectively integrated per pixel. This is what our brain does all the time, every single waking moment of our lives.
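As a toy illustration of that temporal integration (just the accumulation step, with an illustrative blend factor; real TAA adds reprojection, history clamping and so on):

```python
import numpy as np

def accumulate(history, new_frame, alpha=0.1):
    """Exponential moving average: each pixel effectively integrates
    roughly 1/alpha recent samples over time (the amortized part)."""
    return (1.0 - alpha) * history + alpha * new_frame

# One noisy (jittered) sample per pixel per frame converges toward the
# many-samples-per-pixel result without ever paying for it in one frame.
rng = np.random.default_rng(0)
truth = rng.random((4, 4))
history = truth + rng.normal(0.0, 0.2, truth.shape)      # noisy first frame
for _ in range(64):
    sample = truth + rng.normal(0.0, 0.2, truth.shape)   # new jittered sample
    history = accumulate(history, sample)
print(np.abs(history - truth).mean())   # error shrinks as samples integrate
```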
 
Other resolutions could be added later, but the AI has to be retrained for each specific resolution.
Are you 100% sure about this?

It makes it a bit useless to anyone without a 4K monitor/TV. Unless it works with downsampling, I guess.
 
But would, for example, 1440p DLSS'd to 4K and then downsampled to 1080p or 1440p even be better than native 1440p, or 1440p scaled to 1080p? Especially considering the chance of artifacts and whatnot.
 
At the very least, 1440p DLSS'd to 4K then downsampled to 1080/1440p should behave as a form of antialiasing that's noticeably cheaper than regular supersampling.
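Rough numbers on why that path would be cheap for a 1440p display (my back-of-the-envelope, with native-4K downsampling and 4x SSAA as the comparison points):

```python
# Shaded pixels per frame for a 1440p display under different AA paths.
target_1440p  = 2560 * 1440
dlss_path     = 2560 * 1440          # shade 1440p, DLSS to 4K, downsample
downsample_4k = 3840 * 2160          # render native 4K, then downsample
ssaa_4x       = 4 * target_1440p     # classic 4x supersampling

print(f"DLSS path vs 4K downsample: {dlss_path / downsample_4k:.2f}x the shading work")  # 0.44x
print(f"DLSS path vs 4x SSAA:       {dlss_path / ssaa_4x:.2f}x the shading work")        # 0.25x
```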

I see it needs training on a game-by-game basis.
How likely is it for nvidia to put the training data behind the Geforce Experience subscription, like they already do with the regular automatic game settings profiles, video recording, game streaming, game-ready drivers, etc.?
 
They actually backed down on the game-ready drivers part; they're available just as quickly and as often from their website.
 
How likely is it for nvidia to put the training data behind the Geforce Experience subscription, like they already do with the regular automatic game settings profiles, video recording, game streaming, game-ready drivers, etc.?
The quickest method to distribute any training data or updates would be to store that data on the cloud. I imagine having a Geforce Experience ID would facilitate access to that data.

Edit: In addition, ongoing cloud training and receiving updates when required would definitely be an effective and efficient way of fine-tuning the algorithm on a game-by-game basis.
 
This is not RT but I think it will make RT more palatable sooner.


Nvidia DLSS appears to be similar to techniques sebbbi has linked to before for upscaling an image and then supersampling.

This seems to bring near-4K results with 1440p shading, which makes RT at high resolutions seem more plausible, as shown by the Star Wars demo.

If it "doubles" the resolution but only uses 10% (total guess) of the silicon space then they seems a very good use of die space for a console.
 
DLSS is just another reconstruction technique, like checkerboarding. Insomniac already has a stellar reconstruction technique, though we don't know the specifics of it, running on a 1.8 TF PS4 in compute, not even relying on the PS4 Pro's ID buffer, which can improve things. DLSS isn't bringing anything new to the possibilities on offer for lower-than-native ray-tracing.
 
So basically Nvidia are getting all the plaudits and slaps on the back for all these great new things that have been in real use on consoles/PC for a while then?
 
DLSS is a new way of reconstructing images that uses machine learning. It may be better than other algorithms. It may not. On PC, advanced upscaling isn't commonly used, and DLSS brings it, sort of, at the hardware level, except games have to be written to use it. I asked in the comments of that DF article for comparisons to other techniques, because at the moment it's being presented as something new rather than the something different that it is. Conceptually, AMD or whoever could stick something like Insomniac's temporal injection in the drivers with suitable API plugins and have games upscaling on the GPU shaders just as well.

TBH I think the latest architectures are solutions looking for problems. I'm certain nVidia put the Tensor cores on them for its professional and automotive markets, and is now looking for some application and justification in the gaming space.

But ultimately, kinda, nVidia are getting a lot more credit than they deserve. That's good PR for you. PowerVR had way more impressive raytracing years ago, and there's already fabulous upscaling running on far lower hardware.
 
So basically Nvidia are getting all the plaudits and slaps on the back for all these great new things that have been in real use on consoles/PC for a while then?
DLSS is a byproduct of the AI/ML-based OptiX denoising for offline rendering, and NVidia wants to pimp its Tensor core chops to differentiate its Turing GPUs. DLSS is one of the many ways to hype its overpriced consumer RTX boards and justify the fact that they are not that much faster than Pascal GPUs at conventional rasterization (though this is partly because they are still stuck on the 12nm node). We have yet to see DLSS used in real gameplay situations, because all they have shown publicly so far are on-rails benchmarks (which are the best-case scenarios for machine learning), and even in those benchmarks it exhibits heavy blurring in some non-static situations... As Shifty said, there are already really great reconstruction techniques right now that work on any low/mid/high-end GPU.
 