Nvidia DLSS 1 and 2 antialiasing discussion *spawn*

So basically Nvidia are getting all the plaudits and slaps on the back for all these great new things that have been in real use on consoles/PC for a while then?

Not exactly. This is the first iteration using inferencing for reconstruction in a consumer product at least. It's new.

I think you can think of rendering itself as a reconstruction technique. Fundamentally, you're just recreating an artificial image through algorithms. I'm a big fan of reconstruction/checkerboarding as done on consoles. I think the image quality is excellent and it's not worth spending the resources on "native" resolution. DLSS, I think, can further improve quality and performance.

For me, neural network approaches have an extremely high ceiling for image quality. Think about this: suppose you were given an image that was rendered at 720p and told to upscale it in Photoshop, but you had to do the anti-aliasing by hand with a paintbrush. You could make that image look better than what a native render would be (and I typed "better looking", not "equal to native"). Why? Because you know where the jaggies are, what ideal edges are, what textures should look like, etc. It's all based on experience. That's what a neural network will do as well, provided it's trained well enough and is big enough to handle all the varieties of images thrown at it. This might not happen in the first go at it with DLSS, but I think it will eventually.

The great thing about neural networks for inferencing is that they don't need a lot of precision, so you can do your ops with the byte, nibble, or even lower-precision operations that the AI cores provide. You can save a lot of compute power and divert it elsewhere.
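
To make that concrete, here's a minimal sketch of the idea (my own toy example, assuming NumPy and a single fully-connected layer, nothing DLSS-specific): quantise the weights and activations to int8, do the multiply-accumulates in integers the way tensor-core-style hardware does, and rescale once at the end. The answer lands close to the fp32 result at a fraction of the arithmetic cost.

Code:
import numpy as np

# Toy fp32 layer: activations x and trained weights w.
rng = np.random.default_rng(0)
x = rng.standard_normal((1, 256)).astype(np.float32)
w = rng.standard_normal((256, 256)).astype(np.float32)

def quantise_int8(a):
    """Symmetric per-tensor quantisation: int8 values plus the scale needed to undo it."""
    scale = np.abs(a).max() / 127.0
    return np.clip(np.round(a / scale), -127, 127).astype(np.int8), scale

xq, sx = quantise_int8(x)
wq, sw = quantise_int8(w)

# The heavy multiply-accumulate work happens in integers (accumulating in int32),
# then a single rescale brings the result back to float.
y_int = xq.astype(np.int32) @ wq.astype(np.int32)
y_approx = y_int.astype(np.float32) * (sx * sw)

y_ref = x @ w
print("max abs error vs fp32:", float(np.abs(y_approx - y_ref).max()))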
 
Not exactly. This is the first iteration using inferencing for reconstruction in a consumer product at least. It's new.

I think you can think of rendering itself as a reconstruction technique. Fundamentally, you're just recreating an artificial image through algorithms.
Nah. Rendering is construction. You have the exact information available to generate the image. Reconstruction uses inbetweening for missing data.

I guess you have aspects of reconstruction like texture interpolation, but there's a clear distinction between rendering a pixel based on the exact polygons that occupy it and their exact shader values (or whatever other maths you use, like raytraced CSGs), versus rendering a pixel from data inferred from its neighbours.
 
The great thing about neural networks for inferencing is that they don't need a lot of precision, so you can do your ops with the byte, nibble, or even lower-precision operations that the AI cores provide. You can save a lot of compute power and divert it elsewhere.

This was where I was thinking it seems a good bet. You can do it via regular compute, but if you ultimately leave a lot of performance on the table due to precision overhead, dedicated hardware could be a better use of die space. Consoles specifically, where things are so controlled and optimising for singular hardware is the norm.

I think their PR numbers are something like 10x the normal compute FLOPS for these lower-precision operations. That's a lot more bang for your hardware per sq mm; the caveat, as Shifty points out, is whether you actually use it (which on PC was possibly not going to happen, so they needed this).

Edit:
Sony had the ID buffer and went for checkerboarding in a big way; I don't see why something like this might not appeal to them. Again, my thought was simply that the on-screen result for the die space used is very attractive, not that it's revolutionary, etc.
 
DLSS is just another reconstruction technique, like checkerboarding. Insomniac already has a stellar reconstruction technique, though we don't know the specifics of it, running in compute on a 1.8 TF PS4, not even relying on PS4 Pro's ID buffer, which can improve things. DLSS isn't bringing anything new to the possibilities on offer for lower-than-native ray-tracing.
Comparisons with Insomniac's approach make Nvidia's look really expensive considering the amount of chip space the Tensor cores take up.
 
Comparisons with Insomniac's approach make Nvidia's look really expensive considering the amount of chip space the Tensor cores take up.
Has anyone done an in-depth analysis of R&C/Spiderman to reveal the weaknesses of Temporal Injection? All I hear is complaint-free raving about the image quality. ;)
 
Has anyone done an in-depth analysis of R&C/Spiderman to reveal the weaknesses of Temporal Injection? All I hear is complaint-free raving about the image quality. ;)
Perhaps quality isn't so much the issue for a comparison as it is the implementation, where Insomniac folks can work around the issues specifically per title vs a T800's 12-gauge autoloader approach? :p
 
How much of the die space do the Tensor cores occupy? Do we know this?

Not that I found. Also, given that this seems to be value-add and possibly not the real reason for their inclusion, we have no idea how many of them are required to deliver the DLSS results we are seeing.
 
Not exactly. This is the first iteration using inferencing for reconstruction in a consumer product at least. It's new.

I think you can think of rendering itself as a reconstruction technique. Fundamentally, you're just recreating an artificial image through algorithms. I'm a big fan of reconstruction/checkerboarding as done on consoles. I think the image quality is excellent and it's not worth spending the resources on "native" resolution. DLSS, I think, can further improve quality and performance.

For me, neural network approaches have an extremely high ceiling for image quality. Think about this: suppose you were given an image that was rendered at 720p and told to upscale it in Photoshop, but you had to do the anti-aliasing by hand with a paintbrush. You could make that image look better than what a native render would be (and I typed "better looking", not "equal to native"). Why? Because you know where the jaggies are, what ideal edges are, what textures should look like, etc. It's all based on experience. That's what a neural network will do as well, provided it's trained well enough and is big enough to handle all the varieties of images thrown at it. This might not happen in the first go at it with DLSS, but I think it will eventually.

The great thing about neural networks for inferencing is that they don't need a lot of precision, so you can do your ops with the byte, nibble, or even lower-precision operations that the AI cores provide. You can save a lot of compute power and divert it elsewhere.

Unfortunately, the dimensionality of reconstructing a 3D moving image from a neural-net standpoint just explodes in terms of work. Fundamentally, trying to reconstruct the entire image is just not a good use of neural nets; the possibility space of taking into account just three dimensions (which is to say, not including shading reconstruction) in a temporally coherent manner is way too vast for a neural net.

I'd love to see it used in conjunction with temporal AA to try to clean up artifacts. Fundamentally, DLSS just can't produce the same quality as TAA; few people notice the trade-off in the time dimension over the usual exponential decay TAA uses, as it's just not something people look for. But there are problems with things like noise, missing information, or blur that neural nets could be great at fixing up.

As for DLSS itself, after trying to find actual quality comparisons, it seems Nvidia itself let slip that it's not very good. The PSNR gain over simple bicubic upscaling, a rather bad upscaling algorithm, is just 1-2 dB. For reference, that's less than what 2x MSAA is able to offer, and far less than good temporal anti-aliasing or upscaling. For those that don't know, PSNR is a standard scale for measuring how closely an image matches a reference; i.e. you take your reference image (what you want your image to look like, say 4x supersampling for a game), then you compute the PSNR between the reference image and your approximated image to see how well your approximation holds up.
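
For anyone who wants to see what that measurement actually is, here's a minimal sketch (my own illustration, assuming NumPy and images stored as floats in [0, 1]; nothing to do with Nvidia's methodology):

Code:
import numpy as np

def psnr(reference, test, peak=1.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)

# Toy usage: a "ground truth" frame vs. a slightly noisy approximation of it.
rng = np.random.default_rng(1)
reference = rng.random((1080, 1920, 3))
approx = np.clip(reference + rng.normal(0.0, 0.02, reference.shape), 0.0, 1.0)
print(f"PSNR: {psnr(reference, approx):.2f} dB")

Each additional dB corresponds to roughly a 20% reduction in mean-squared error versus the reference.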

Frankly, at 1-2 dB DLSS doesn't hold up very well, and to make it "better" you'd need exponentially more training time to produce a deployable net that would take more power, and thus more heat, and thus potentially less performance from the card. If a game has TAA or other clever upscaling then DLSS is useless, as it doesn't appear to be able to run in concert with such. I guess it could be useful for games that don't have good upscaling or TAA, though.
 
Unfortunately, the dimensionality of reconstructing a 3D moving image from a neural-net standpoint just explodes in terms of work. Fundamentally, trying to reconstruct the entire image is just not a good use of neural nets; the possibility space of taking into account just three dimensions (which is to say, not including shading reconstruction) in a temporally coherent manner is way too vast for a neural net.

I'd love to see it used in conjunction with temporal AA to try to clean up artifacts. Fundamentally, DLSS just can't produce the same quality as TAA; few people notice the trade-off in the time dimension over the usual exponential decay TAA uses, as it's just not something people look for. But there are problems with things like noise, missing information, or blur that neural nets could be great at fixing up.

As for DLSS itself, after trying to find actual quality comparisons, it seems Nvidia itself let slip that it's not very good. The PSNR gain over simple bicubic upscaling, a rather bad upscaling algorithm, is just 1-2 dB. For reference, that's less than what 2x MSAA is able to offer, and far less than good temporal anti-aliasing or upscaling. For those that don't know, PSNR is a standard scale for measuring how closely an image matches a reference; i.e. you take your reference image (what you want your image to look like, say 4x supersampling for a game), then you compute the PSNR between the reference image and your approximated image to see how well your approximation holds up.

Frankly, at 1-2 dB DLSS doesn't hold up very well, and to make it "better" you'd need exponentially more training time to produce a deployable net that would take more power, and thus more heat, and thus potentially less performance from the card. If a game has TAA or other clever upscaling then DLSS is useless, as it doesn't appear to be able to run in concert with such. I guess it could be useful for games that don't have good upscaling or TAA, though.

1-2 dB over bicubic is about AI super-resolution, not DLSS. And given that its purpose is not so much to recreate the original picture as it is to just create a higher-resolution picture that looks better, it is not a surprise to me that the PSNR isn't much higher. It probably over-compensates in many ways, like making the upscaled image crisper than the original and such, lowering the PSNR but looking good regardless.
 
Nvidia clarifies DLSS and how it works
Rick Napier, Senior Technical Product Manager at NVIDIA, told us that at its core, DLSS is a post-processing technique that improves performance over traditional anti-aliasing (AA) methods in two main ways. First of all, it simply takes fewer samples per pixel than current AA methods – meaning demand on the GPU is lessened. Secondly, Rick also emphasised the fact that DLSS is executed on the Tensor Cores within the Turing GPU, rather than CUDA cores. In Rick’s words, this is “critical” as it means DLSS is “freeing up the shaders to focus on rendering and not applying an AA technique.”

In sum, DLSS can increase game performance because traditional GPU shaders are not being leveraged for AA, while DLSS is also using fewer samples per pixel.

So that’s what DLSS does, but how does it work under the hood? In essence, DLSS uses a neural network that has been trained to take input frames (from a game) and output them with higher overall quality. It can do this because the neural network has been trained to optimise image quality. As Andrew Edelsten, Director of Developer Technologies at NVIDIA, puts it, the “DLSS model is fed thousands of aliased input frames and its output is judged against the ‘perfect’ accumulated frames. This has the effect of teaching the model how to infer a 64 sample per pixel supersampled image from a 1 sample per pixel input frame.”

So when you’re gaming with DLSS on, it uses an encoder to extract “multidimensional features from each frame to determine what are edges and shapes and what should and should not be adjusted.” In other words, it knows what areas of your frame should and shouldn’t get the ‘DLSS treatment’ as it were. Once it knows that, the high-quality frames from the DLSS neural network can be combined to provide a final image.

https://www.kitguru.net/components/graphic-cards/dominic-moass/nvidia-clarifies-dlss-and-how-it-works/
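
Taken at face value, the training setup Edelsten describes is a standard supervised image-to-image objective: feed the net an aliased 1 spp frame, compare its output against the accumulated 64 spp frame, repeat. A minimal sketch of that idea (my own assumption-heavy illustration in PyTorch, with a placeholder network, a stand-in L1 loss and random tensors in place of real frame pairs; Nvidia's actual model and loss are unpublished):

Code:
import torch
import torch.nn as nn

# Placeholder network: a few conv layers standing in for whatever DLSS actually uses.
model = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()  # stand-in; the real training loss is not public

# One training step. Random tensors stand in for an aliased 1 spp input frame
# and its "perfect" accumulated 64 spp target.
aliased_1spp = torch.rand(4, 3, 128, 128)
target_64spp = torch.rand(4, 3, 128, 128)

optimizer.zero_grad()
prediction = model(aliased_1spp)
loss = loss_fn(prediction, target_64spp)  # judge the output against the target frame
loss.backward()
optimizer.step()
print(f"training-step loss: {loss.item():.4f}")

At runtime only the forward pass is needed per frame, which is the part the article says gets scheduled on the Tensor cores rather than the shaders.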
 
1-2 dB over bicubic is about AI super-resolution, not DLSS. And given that its purpose is not so much to recreate the original picture as it is to just create a higher-resolution picture that looks better, it is not a surprise to me that the PSNR isn't much higher. It probably over-compensates in many ways, like making the upscaled image crisper than the original and such, lowering the PSNR but looking good regardless.
Exactly. See http://arxiv-export-lb.library.cornell.edu/abs/1809.07517 for a state-of-the-art discussion of that.
 
In sum, DLSS can increase game performance because traditional GPU shaders are not being leveraged for AA
I take this to support my view that this isn't intended for the games market and is nVidia looking for something to do with the silicon. That same die area given over to the Tensor cores could be given over to more CUs, processing the AI on the shaders and having them available for other things too. If the die area for the Tensor cores were insignificant, nVidia would add that to their PR. So they built a die, had some silicon sat idle (put on for cars and raytracing for productivity), and said, "what are we going to use this for in gaming?" They found upscaling was their best option after seeing what other companies were doing with reconstruction and ML.
 
Nothing in that^ link is based on anything from industry. It just illustrates what Shifty Geezer was saying: it's just marketing hype about what else they did to make use of that non-gaming die space. (4K AA?)

Honestly, for games like Battlefield to get better, do they really NEED Tensor cores (either from AMD or nVidia)? Or do they need more hardware ROPs?
Ergo: most players don't need a more visual Battlefield experience, they want a better one (i.e. 140 frames @ 4K), with better physics and ballistics, more players, etc. Reflections in a water puddle are for strokers, not legit gamers.


So I do not know how well Nvidia can market their proprietary technology when there are open standards such as Microsoft's DXR, and when a 4K monitor doesn't really need anti-aliasing, it needs frames per second. Ask most gamers: AA at 4K is not all that important. Panel speed, color and dynamic range are more important to their games than AA. I do not think these chips were made specifically with gaming in mind, and I think the public is going to react by waiting for AMD's 7nm cards coming in late 2018. For the die space, AMD's cards might have more of what gamers want at 4K.

Given the latest reviews, many among the public (and here) do not find Turing's DLSS all that inspiring for gamers. It is just another form of AA, but does DLSS give us the best, highest-quality AA experience, or just another, cheaper way of doing things?

Proprietary AA methods are not the way to move forward.
 
Given the latest reviews, many among the public (and here) do not find Turing's DLSS all that inspiring for gamers. It is just another form of AA, but does DLSS give us the best, highest-quality AA experience, or just another, cheaper way of doing things?
From what we've seen so far it's a hit-and-miss TAA equivalent with very little rendering cost, which isn't very inspiring.
 
I realize that it appears none of the current games in development which are implementing ray tracing are using the Tensor cores for denoising, but isn't that one of the primary functions of those cores? Considering the resolution and shader performance hit when using ray tracing, does it not make sense to have those cores, which are suited to it, not only denoise the ray-traced effects but also handle high-quality image reconstruction from lower base resolutions at the same time? I'm sure the possibilities with the Tensor cores and the neural networks they accelerate are only beginning to be explored. I'm sure Nvidia knows stuff we don't at this point.

As for DLSS specifically, it seems to me that in cases where the TAA implementation in a given game isn't of the highest quality (which is often), besides the obvious performance improvement, DLSS seems to show its worth in visual quality alone. In FF15 the TAA solution was mediocre at best: the hair was often stippled and jagged-looking, a problem with their implementation due to transparencies, such as the windows in the Regalia, which had an awful ghosting effect that is largely corrected with the DLSS implementation. On the other hand, the Infiltrator demo, which is really post-process heavy and utilizes a high-quality TAA solution, still looks a bit better. The DLSS side had more shimmering and a bit of loss in pixel detail, but it's still pretty impressive given the sometimes 30 fps delta between the two.
 