Nvidia DLSS 1 and 2 antialiasing discussion *spawn*

Success!
I replaced Control's nvngx_dlss.dll file with Youngblood's file. Used 720p base, 1080p output resolution.

Youngblood's nvngx_dlss.dll
imgur a2n7KIu

Original nvngx_dlss.dll
imgur xTfIaed

It looks bad, low res. I don't think the algorithm is actually applied to the image at all.

Less than 10 posts, can't post links.

First of all, your 10th post is talking about having less than 10 posts. I just felt like pointing that out. :D

Second of all, well done! It does indeed use the tensor cores, but for short bursts (I think I can count 5 or 6?). But there’s a part before the first tensor burst that only uses a small amount of FP16 from ~10.95ms-~11.15ms. I think it looks a bit like the part from ~11.27ms-~11.88ms in the old picture. So I believe some parts of the image processing approach are still in play with less iterations, and they use AI to fill in the gaps. As stated earlier:

“With further optimization, we believe AI will clean up the remaining artifacts in the image processing algorithm while keeping FPS high.”

It would also explain why the issues with the embers look so similar to those seen with the image processing approach. I’m not sure how they would appear in a purely AI-based approach, but then again I’m no expert.

When it comes to image quality on the frame you have on the right side of the image, the only thing I think looks better is the letter ahead of you. In the original version it looks more like an X, while in the new one it looks more like a Y.

As for why it may look low res, I got a message from someone that attended a briefing with NVIDIA:

What we got to hear from NVIDIA at CES is that it’s not the same DLSS in Wolfenstein as in, for example BF V, but they have updated it quite a lot. However, the developers themselves have to choose to actively update DLSS in the game to gain access to these improvements, and it’s not all developers who are so eager on doing it for a game that they are already “finished with” (according to NVIDIA)

So if all they needed to do was replace a file they received from NVIDIA, they would update it in 5 seconds and everything would be fine. There’s probably more work involved than that, given that they haven’t bothered to do it.

You can embed media though?

Can you put links in code-tags? I seem to remember that working somewhere. ;)

Dorf's third post contains a full link... so I guess the warning is just there to scare away spam bots? Not that I've heard about polite spam bots that read warning messages. ¯\_(ツ)_/¯
 
So if all they needed to do was replace a file they received from NVIDIA, they would update it in 5 seconds and everything would be fine. There’s probably more work involved than that, given that they haven’t bothered to do it.
Metro Exodus is getting a new DLC next week. If they don't update DLSS to the current version, it will be quite telling of how difficult it is to do so imo... and disappointing.

Dorf's third post contains a full link... so I guess the warning is just there to scare away spam bots? Not that I've heard about polite spam bots that read warning messages. ¯\_(ツ)_/¯
Oh yea whoops, had completely forgotten about that. Just followed your example with the imgur linking. :mrgreen:

In case people missed it, here's the video @Radolov mentioned earlier ( https://forum.beyond3d.com/threads/...g-discussion-spawn.60896/page-34#post-2102182 ):
 
So it seems like it might be possible to keep a library of nvngx_dlss.dll files per game to do comparisons as the game is updated.
 
So it seems like it might be possible to keep a library of nvngx_dlss.dll files per game to do comparisons as the game is updated.
Possibly, but the older versions would quite likely stop working as intended after updates. If they'd work at all.

For example, Youngblood will crash on launch with Control's nvngx_dlss.dll (version 1.3.8.0). Youngblood's nvngx_dlss.dll (version 2.0.0.0) also doesn't work with Star Wars (version 1.0.0.0): it runs, but there's no image.

So if Metro (DLSS version 1.0.x) got updated to 2.0.0.0, it might just crash upon launch with the old .dll file.

I've been able to run Youngblood's .dll in Control and Control's in Star Wars (btw, in this latter scenario peak tensor usage drops from ~70% to ~8% as expected, and image quality is again poor).
 
Possibly, but the older versions would quite likely stop working as intended after updates. If they'd work at all.

For example, Youngblood will crash on launch with Control's nvngx_dlss.dll (version 1.3.8.0). Youngblood's nvngx_dlss.dll (version 2.0.0.0) also doesn't work with Star Wars (version 1.0.0.0): it runs, but there's no image.

So if Metro (DLSS version 1.0.x) got updated to 2.0.0.0, it might just crash upon launch with the old .dll file.

I've been able to run Youngblood's .dll in Control and Control's in Star Wars (btw, in this latter scenario peak tensor usage drops from ~70% to ~8% as expected, and image quality is again poor).


I don't necessarily mean to run across different games. But say Metro comes out with dlss and then six months later they update it. It looks like it could be possible to swap the file to compare the two versions in Metro. Just as an example.
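Swapping the file for such comparisons could be scripted. Here's a minimal Python sketch of mine; the function names and the `.bak` backup scheme are my own invention, not anything NVIDIA ships, and the paths are placeholders:

```python
import shutil
from pathlib import Path

def swap_dlss_dll(game_dir: str, replacement_dll: str) -> Path:
    """Back up the game's nvngx_dlss.dll (first swap only) and replace it."""
    target = Path(game_dir) / "nvngx_dlss.dll"
    backup = target.with_name("nvngx_dlss.dll.bak")
    if not backup.exists():          # keep the original version safe
        shutil.copy2(target, backup)
    shutil.copy2(replacement_dll, target)
    return backup

def restore_dlss_dll(game_dir: str) -> None:
    """Put the original DLL back from the .bak copy."""
    target = Path(game_dir) / "nvngx_dlss.dll"
    shutil.copy2(target.with_name("nvngx_dlss.dll.bak"), target)
```

Something like `swap_dlss_dll(r"C:\Games\Control", r"dll_library\youngblood\nvngx_dlss.dll")` would then let you A/B the two versions and `restore_dlss_dll` undoes it, though as noted above there's no guarantee the game accepts the swapped version.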
 
What Is AI Upscaling?
February 4, 2020
Traditional upscaling starts with a low-resolution image and tries to improve its visual quality at higher resolutions. AI upscaling takes a different approach: Given a low-resolution image, a deep learning model predicts a high-resolution image that would downscale to look like the original, low-resolution image.

To predict the upscaled images with high accuracy, a neural network model must be trained on countless images. The deployed AI model can then take low-resolution video and produce incredible sharpness and enhanced details no traditional scaler can recreate. Edges look sharper, hair looks scruffier and landscapes pop with striking clarity.
https://blogs.nvidia.com/blog/2020/02/03/what-is-ai-upscaling/
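The constraint the quote describes ("a high-resolution image that would downscale to look like the original") can be written down as a simple consistency check. This is a toy NumPy sketch of my own, using plain box averaging as a stand-in for whatever downscaling operator a real training pipeline would use:

```python
import numpy as np

def box_downscale(img: np.ndarray, factor: int = 2) -> np.ndarray:
    """Average each factor x factor block -- a crude stand-in for downscaling."""
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def downscale_consistency(lr: np.ndarray, hr_prediction: np.ndarray) -> float:
    """Mean absolute error between the LR input and the downscaled HR prediction.

    An AI upscaler is trained so this error stays small while the HR
    prediction also looks sharp -- the "predicts a high-resolution image
    that would downscale to look like the original" part of the blog post.
    """
    return float(np.abs(box_downscale(hr_prediction) - lr).mean())
```

Note that even a trivial pixel-duplication "prediction" scores a perfect 0 on this check, which is why consistency alone isn't enough; the training data is what pushes the network toward the sharp image among all the consistent ones.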
 
So this sounds like a service for devs where they submit their source and NV returns a dlss .dll for that game. As they do not support all available resolutions, I would imagine they charge per resolution? Will they also charge for updates/patches? I would imagine so.
 
So this sounds like a service for devs where they submit their source and NV returns a dlss .dll for that game. As they do not support all available resolutions, I would imagine they charge per resolution? Will they also charge for updates/patches? I would imagine so.
DLSS training is free. As far as patches/updates I would imagine it's treated probably like any other game patch.
 
nVidia don't charge at this point as it's a USP for selling hardware. If they did charge, the concept would probably die completely - no dev anywhere is going to pay to have an optional AA mode enabled on a tiny subset of their market.

Also, their PR on that page is classic marketing horse-shit.

But typically, media players use basic upscaling algorithms that are unable to significantly improve high-definition content for 4K TVs.

What Is Basic Upscaling?

Basic upscaling is the simplest way of stretching a lower resolution image onto a larger display. Pixels from the lower resolution image are copied and repeated to fill out all the pixels of the higher resolution display.

Filtering is applied to smooth the image and round out unwanted jagged edges that may become visible due to the stretching. The result is an image that fits on a 4K display, but can often appear muted or blurry.
No-one does this. Upscaling is via more complex algorithms giving far better results than pixel duplication or bilinear filtering. I wish companies believed in their products enough to not have to lie about the alternatives. Compare a decent quality algorithmic upscale to your AI upscale and see if it really is better.
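For reference, the "basic upscaling" the quote describes (pixel duplication plus a smoothing filter) really is only a few lines. This NumPy sketch is my own illustration, not code from the blog:

```python
import numpy as np

def nearest_upscale(img: np.ndarray, factor: int = 2) -> np.ndarray:
    """Copy each pixel into a factor x factor block (the 'repeat pixels' step)."""
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

def box_smooth(img: np.ndarray) -> np.ndarray:
    """Crude smoothing pass: average each pixel with its 4 neighbours (edges clamped)."""
    p = np.pad(img, 1, mode="edge")
    return (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] + p[1:-1, 1:-1]) / 5.0
```

Actual media players and TVs use far better kernels (bicubic, Lanczos, or proprietary scalers), which is exactly the objection being raised above.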

The interesting bit in this blog though is that this is running on Shield TV, so not a Tensor-core device. I thought DLSS on RTX cards was explained as enabled by the Tensor cores (?). This shows machine-learnt solutions don't need anything but compute to implement and can be used on other cards if it proves a viable large-scale solution (which also makes you wonder what Tensor is actually enabling in their consumer cards?).
 
The interesting bit in this blog though is that this is running on Shield TV, so not a Tensor-core device. I thought DLSS on RTX cards was explained as enabled by the Tensor cores (?). This shows machine-learnt solutions don't need anything but compute to implement and can be used on other cards if it proves a viable large-scale solution (which also makes you wonder what Tensor is actually enabling in their consumer cards?).
Anything like this can be run on compute, but it's seemingly faster on Tensor cores as long as the round-trip cost of the Tensors is less than the overall cost via compute. The DLSS version in Control (?) was on compute, but that newer version of their training algorithm was later used on Tensors in more recent titles. Tough luck for older versions though; that requires the devs going back to Nvidia, training all over again for each resolution, and releasing a patch.
 
ML can run on CPUs and we do this a lot.
ML can run on GPUs and we do this a lot.
Tensor Cores support mainly deep learning algorithms and we use this a lot for image processing.

Tensor cores are an order of magnitude faster than GPUs, which are an order of magnitude faster than CPUs.

upscaling video is much easier than upscaling a real time game.

Just to note.

One you can buffer and the other you cannot.
 
No-one does this. Upscaling is via more complex algorithms giving far better results than pixel duplication or bilinear filtering.
The blog post in question talks about video upscaling done in a typical video player.
The interesting bit in this blog though is that this is running on Shield TV, so not a Tensor-core device. I thought DLSS on RTX cards was explained as enabled by the Tensor cores (?).
Again, this comes back to the blog being specifically about upscaling videos not upscaling games.
 
ML can run on CPUs and we do this a lot.
ML can run on GPUs and we do this a lot.
Tensor Cores support mainly deep learning algorithms and we use this a lot for image processing.

Tensor cores are an order of magnitude faster than GPUs, which are an order of magnitude faster than CPUs.

upscaling video is much easier than upscaling a real time game.

Just to note.

One you can buffer and the other you cannot.

Yes, DLSS does a great job offloading the CPU/GPU. I can imagine something like the tensor cores doing great in next-gen consoles, with 4K being the standard as opposed to 1080p. 4K is and will remain rather taxing.
 
Yes, DLSS does a great job offloading the CPU/GPU. I can imagine something like the tensor cores doing great in next-gen consoles, with 4K being the standard as opposed to 1080p. 4K is and will remain rather taxing.
It could also be significantly slower as well.

AI is just an algorithm, and there are different things it is capable of. The challenge of AI isn't quality; the quality can certainly outmatch any current upscaling algorithm.

These are entirely AI-generated photos:
https://www.thispersondoesnotexist.com

The challenge is real time while keeping that high level of fidelity. The more layers the neural network has, the longer it takes to complete its job. More layers will definitely lead to better quality, but with real-time graphics you're under a serious constraint to bring the evaluation time down to a reasonable level so that games can still operate in the 16ms range.
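That real-time constraint is just budget arithmetic. A toy sketch with entirely made-up numbers (the function and the per-layer timings are illustrative, not measured from any game):

```python
def fits_frame_budget(layer_times_ms, budget_ms=16.0, other_work_ms=12.0):
    """Return True if network inference plus the rest of the frame fits the budget.

    layer_times_ms: hypothetical per-layer inference times for the network.
    other_work_ms:  everything else the frame has to do (also hypothetical).
    Every extra layer eats directly into the remaining headroom.
    """
    return sum(layer_times_ms) + other_work_ms <= budget_ms
```

With 12 ms of other work, six 0.5 ms layers fit a 60 fps frame but ten do not; a deeper (higher-quality) network forces either a bigger budget (lower FPS) or a faster evaluator, which is where the tensor cores come in.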
 
The blog post in question talks about video upscaling done in a typical video player.
The context is about playing content on a TV box. I assume everyone feeds 1080p video to their 4K TV to upscale with its sophisticated upscaler.

Again, this comes back to the blog being specifically about upscaling videos not upscaling games.
Fair point. Is there any data on core utilisation for DLSS? Can we see how computationally demanding it is through game profiling or something?
 