Nvidia DLSS 1 and 2 antialiasing discussion *spawn*

since according to them DLSS can only be used in situations where it yields performance gains. And DLSS2X can never offer that.
That could be true of DLSS2X in its current state, but given the improvements in DLSS I wouldn't be surprised to see similar enhancements in DLSS2X, or future additional modes. New concepts are never set in stone, and I expect the DLSS2X feature to evolve with the same commitment Nvidia has shown with DLSS.
 
That could be true of DLSS2X in its current state, but given the improvements in DLSS I wouldn't be surprised to see similar enhancements in DLSS2X, or future additional modes. New concepts are never set in stone, and I expect the DLSS2X feature to evolve with the same commitment Nvidia has shown with DLSS.
Nothing can change the fact that DLSS2X will always lower performance, because it's rendered at native resolution, and DLSS regardless of version is not free.
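To put rough numbers on that (entirely made-up frame times, just to illustrate the trade-off): regular DLSS can win back the cost of its pass by rendering fewer pixels, while DLSS2X pays for the full native render plus the pass, so it can only end up slower than native.

```python
# Illustration only: the frame times below are invented, not measured.
def fps(frame_time_ms):
    return 1000.0 / frame_time_ms

native_ms = 16.7      # hypothetical native-resolution frame time
dlss_pass_ms = 1.5    # hypothetical cost of the DLSS pass itself

# Regular DLSS: render fewer pixels (say ~60% of the native cost), then upscale.
dlss_ms = 0.6 * native_ms + dlss_pass_ms
# DLSS2X: render at full native resolution, then still pay for the pass.
dlss2x_ms = native_ms + dlss_pass_ms

print(f"native : {fps(native_ms):5.1f} fps")
print(f"DLSS   : {fps(dlss_ms):5.1f} fps")    # faster than native
print(f"DLSS2X : {fps(dlss2x_ms):5.1f} fps")  # always slower than native
```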
 
Excuse me for asking, but I couldn’t help but notice a detail in 3kliksphilip’s video: “Wolfenstein Young Blood – DLSS Analysis”. (I can’t post links with under 10 posts, so I will give titles and timestamps instead).

At 4:27 in the video, you can observe that DLSS removes sparks/embers that are otherwise visible without it. If you look closely (there and in other videos of people running the same benchmark with DLSS) you can see that the sparks become blurry and leave ghosting trails before suddenly disappearing. I thought it might be some weird 720p interaction, but higher resolutions appear to have the same problem (though at 4K most sparks remain visible).

On the 30th of August, NVIDIA published the post “NVIDIA DLSS: Control and Beyond”, where they talk about the image processing algorithm they implemented in Control in place of the AI version of DLSS used earlier, one that doesn’t really utilize the tensor cores. The reason given was that they have “work to do to optimize the model's performance before bringing it to a shipping game”. They also explained that their AI research model was superior in every single way apart from performance: “The neural networks integrate incomplete information from lower resolution frames to create a smooth, sharp video, without ringing, or temporal artifacts like twinkling and ghosting.”

They do show an example of how much better it is by using a video with lots of embers/sparks. “Notice how the image processing algorithm blurs the movement of flickering flames and discards most flying embers. In contrast, you’ll notice that our AI research model captures the fine details of these moving objects.”

They finish by saying they want to run the image processing algorithm and have the AI model clean up what it missed, or maybe that’s just how I’m interpreting it: “With further optimization, we believe AI will clean up the remaining artifacts in the image processing algorithm while keeping FPS high.” They then close by noting that they have 110 tensor TFLOPs and will therefore optimize the AI research model to make it run faster.

Back to Wolfenstein. Now I’m no deep learning expert in any way, but the embers/sparks do have all of the characteristics of the image processing algorithm. So it appears that they still use that algorithm, or some improved version of it.

So now I have a few questions: Is there anything which proves that they’re using the tensor cores (in Wolfenstein)? And if they are using the image processing algorithm to upscale, and then using the AI research model to fill in the details, has it now become some sort of AI “denoiser” rather than an upscaler?
 
NVIDIA RTX DLSS Check In – It Is Definitely Learning
February 2, 2020
Wolfenstein Youngblood was announced early on to have RTX support and even became part of the game bundle that NVIDIA was offering for quite some time. Other than the lackluster reviews for the game itself, it took quite some time for RTX features to be implemented into the game. Perhaps that was because it's not using DX12 but Vulkan and needed a bit more work for that API, or it was delayed to make use of the newer DLSS SDK, but either way the results were much better here than in the past.

One of the major criticisms that DLSS caught was when NVIDIA added Image Sharpening to their Freestyle Filters and later moved it into a global setting in the NVIDIA Control Panel. Many people found that reducing your render resolution to around 80% and applying Image Sharpening could result in a near-native image and give you a solid performance boost. This is something we will be looking at in the image and performance comparisons just below.
...
Do ray traced reflections kick performance pretty hard? Yeah, on the surface they do. But even at 1080p on the RTX 2060 FE we see DLSS claw back a ton of that performance, ranging between a 30-50% performance benefit over not using it. Most surprising to me were the benefits over even using what people have been calling the preferred method of just reducing the render scale and pairing it with Image Sharpening. That method did increase performance over native 1080p by around 10% and does look good enough, but not nearly enough to convince me not to use DLSS in this title. If this is what we can expect out of DLSS going forward, I feel the stories of its death were greatly exaggerated.
https://wccftech.com/nvidia-rtx-dlss-check-in-it-is-definitely-learning/
 
Is there anything which proves that they’re using the tensor cores (in Wolfenstein)?
Without a profiler of some sort, no.
You can determine whether they are running a neural network, however, because its execution time should be very consistent regardless of how much or how little work it needs to do.
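For example (a rough sketch with made-up pass timings rather than real captures): if you log the duration of the suspected DLSS pass over many frames, a near-constant time across very different scenes is consistent with a fixed-size network pass, while a time that tracks scene complexity looks more like ordinary shader work.

```python
# Sketch: judge how consistent the suspected DLSS pass duration is.
# The sample timings are hypothetical, e.g. exported from a profiler capture.
import statistics

pass_times_ms = [1.52, 1.49, 1.51, 1.50, 1.53, 1.48, 1.51]  # made-up samples

mean = statistics.mean(pass_times_ms)
stdev = statistics.stdev(pass_times_ms)
cv = stdev / mean  # coefficient of variation

print(f"mean {mean:.2f} ms, stdev {stdev:.3f} ms, CV {cv:.1%}")
if cv < 0.05:
    print("Very consistent execution time - consistent with a fixed-cost network pass")
else:
    print("Execution time varies with content - looks more like regular shading work")
```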
 
Nothing can change the fact that DLSS2X will always lower performance, because it's rendered at native resolution, and DLSS regardless of version is not free.

That's because you're not in marketing. If DLSS2X ever becomes a thing, Nvidia will compare it to whatever they determine to be its IQ equivalent, i.e. performance-sucking SSAA.
 
That's because you're not in marketing. If DLSS2X ever becomes a thing, Nvidia will compare it to whatever they determine to be its IQ equivalent, i.e. performance-sucking SSAA.
Of course, but what PR can market it as wasn't the point at all.
 
Without a profiler of some sort, no.
You can determine whether they are running a neural network, however, because its execution time should be very consistent regardless of how much or how little work it needs to do.

I found someone who had done such profiling once on the Star Wars DLSS demo, and once on Control. This was in Nsight (for example, search YouTube for "Nsight Graphics 2019.4" and go to 1:04 to see what it looks like).

Star Wars: SM FP16 + Tensor Pipe Throughput 65.5%
Add this at the end of imgur HTTji3j

Control: SM FP16 + Tensor Pipe Throughput 5.9%
imgur 1cvtPRv

This was only from the final upscaling pass. In the Star Wars demo they were definitely using deep learning; by Control they had cut their tensor throughput by roughly 11x (65.5% vs 5.9%) and started to call it an "image processing approach".

Now a request for those who have an RTX card and a copy of Wolfenstein Youngblood: could you use Nsight to capture a frame in Wolfenstein (preferably at the burning car in the benchmark) with DLSS enabled and share the results, as well as the unit throughput graph?
Unfortunately, I don't have an RTX card and I don't plan on getting one. If I had one I'd do it myself. I'll be happy no matter what the results look like.
 
I found someone who had done such profiling once on the Star Wars DLSS demo, and once on Control. This was in Nsight (for example, search YouTube for "Nsight Graphics 2019.4" and go to 1:04 to see what it looks like).

Star Wars: SM FP16 + Tensor Pipe Throughput 65.5%
Add this at the end of imgur HTTji3j

Control: SM FP16 + Tensor Pipe Throughput 5.9%
imgur 1cvtPRv

This was only from the final upscaling pass. In the Star Wars demo they were definitely using deep learning; by Control they had cut their tensor throughput by roughly 11x (65.5% vs 5.9%) and started to call it an "image processing approach".

Now a request for those who have an RTX card and a copy of Wolfenstein Youngblood: could you use Nsight to capture a frame in Wolfenstein (preferably at the burning car in the benchmark) with DLSS enabled and share the results, as well as the unit throughput graph?
Unfortunately, I don't have an RTX card and I don't plan on getting one. If I had one I'd do it myself. I'll be happy no matter what the results look like.
Hmm... I really should grab an RTX card myself to be honest, considering the area I work in. Just too lazy to do it. Also waiting to see Ampere before I pull the trigger.
Yup, Nsight would probably do it. I'm not sure how you can just run Nsight on a release build though. Usually these profilers are connected to a debug build. Hopefully someone else who has done this can offer some insight.

I'll have to read the documentation to learn more.

edit: yes, you launch Nsight and it intercepts the API stack. So yes, it's designed to look at release code.
 
Hmm... I really should grab an RTX card myself to be honest, considering the area I work in. Just too lazy to do it. Also waiting to see Ampere before I pull the trigger.
Yup, Nsight would probably do it. I'm not sure how you can just run Nsight on a release build though. Usually these profilers are connected to a debug build. Hopefully someone else who has done this can offer some insight.

I'll have to read the documentation to learn more.

edit: yes, you launch Nsight and it intercepts the API stack. So yes, it's designed to look at release code.

I always thought it wouldn't work with a release build ... maybe I'll take a look at a couple of my games that are poor performers.
 
Now a request for those who have an RTX card and a copy of Wolfenstein Youngblood: could you use Nsight to capture a frame in Wolfenstein (preferably at the burning car in the benchmark) with DLSS enabled and share the results, as well as the unit throughput graph?
I tried this with Nsight 2020.1 and the latest GeForce driver 442.19, but GPU Trace won't attach to the game: "No data source is available - make sure the application is using a supported API". Same issue with Q2RTX (which also uses Vulkan).

I was able to run the Star Wars demo and Control though, and saw pretty much the same results as the ones you posted.
 
Success!

I replaced Control's nvngx_dlss.dll file with Youngblood's file. Used 720p base, 1080p output resolution.

Youngblood's nvngx_dlss.dll
imgur a2n7KIu

Original nvngx_dlss.dll
imgur xTfIaed
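
For anyone who wants to reproduce the swap, here's a rough sketch of it as a script. The paths are placeholders (your install locations will differ), and it keeps a backup so the original Control DLL can be restored afterwards.

```python
# Sketch of the DLL swap described above. Paths are hypothetical placeholders.
import shutil
from pathlib import Path

control_dll = Path(r"C:\Games\Control\nvngx_dlss.dll")        # placeholder path
youngblood_dll = Path(r"C:\Games\Youngblood\nvngx_dlss.dll")  # placeholder path
backup = control_dll.with_name(control_dll.name + ".bak")

if not backup.exists():
    shutil.copy2(control_dll, backup)      # keep the original Control DLL
shutil.copy2(youngblood_dll, control_dll)  # drop in Youngblood's DLL

print(f"Swapped; restore by copying {backup} back over {control_dll}")
```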
Hahaha awesome
 
Success!

I replaced Control's nvngx_dlss.dll file with Youngblood's file. Used 720p base, 1080p output resolution.

Youngblood's nvngx_dlss.dll
imgur a2n7KIu

Original nvngx_dlss.dll
imgur xTfIaed

Isn't the whole purpose of DLSS that the training is game-specific? Won't running one game's version on another game result in more errors?
 
Is there DLSS for Minecraft? If so, someone please run YoungBlood with Minecraft's dll.
 