Excuse me for asking, but I couldn’t help but notice a detail in 3kliksphilip’s video: “Wolfenstein Young Blood – DLSS Analysis”. (I can’t post links with under 10 posts, so I will give titles and timestamps instead).
At 4:27 in the video, you can see that DLSS removes sparks/embers that are otherwise visible without it. If you look closely (there, and in other videos of people running the same benchmark with DLSS) you can see that the sparks become blurry and leave ghosting trails before suddenly disappearing. I thought it might be some weird 720p interaction, but higher resolutions appear to have the same problem (although at 4K most sparks remain visible).
On the 30th of August, NVIDIA published the post “NVIDIA DLSS: Control and Beyond”, in which they describe the image processing algorithm (one that doesn’t really use the tensor cores) that they implemented in Control instead of the AI version of DLSS used earlier, the stated reason being that they have “work to do to optimize the model's performance before bringing it to a shipping game”. They also explained that their AI research model was superior in every way except performance: “The neural networks integrate incomplete information from lower resolution frames to create a smooth, sharp video, without ringing, or temporal artifacts like twinkling and ghosting.”
They demonstrate the difference with a video containing lots of embers/sparks: “Notice how the image processing algorithm blurs the movement of flickering flames and discards most flying embers. In contrast, you’ll notice that our AI research model captures the fine details of these moving objects.”
They close by suggesting that they want to run the image processing algorithm and have the AI model clean up what it misses, or at least that’s how I interpret it: “With further optimization, we believe AI will clean up the remaining artifacts in the image processing algorithm while keeping FPS high.” They then point out that they have 110 tensor TFLOPs at their disposal, and that they will therefore optimize the AI research model to make it run faster.
Back to Wolfenstein. I’m no deep learning expert in any way, but the embers/sparks show all of the characteristics of the image processing algorithm, so it appears that they still use that algorithm, or some improved version of it.
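To show what I mean by “characteristics”, here is a toy 1-D model of a generic TAA-style temporal accumulator with neighbourhood clamping. This is purely my assumption about the family of techniques such an image processing algorithm resembles, not anything NVIDIA has published and not code from the game:

```python
# Toy 1-D sketch of a generic temporal-accumulation upscaler (my assumption
# of the class of technique, NOT NVIDIA's actual algorithm) showing why a
# 1-pixel "spark" moving fast either ghosts or gets mostly discarded.
import numpy as np

W, FRAMES, ALPHA = 16, 6, 0.1      # screen width, frame count, new-frame weight
naive = np.zeros(W)                # accumulation without history rejection
clamped = np.zeros(W)              # accumulation with neighbourhood clamping

for f in range(FRAMES):
    current = np.zeros(W)
    pos = f * 3                    # spark moves 3 px per frame
    current[pos] = 1.0

    # Plain exponential accumulation: old spark positions linger as a trail.
    naive = ALPHA * current + (1 - ALPHA) * naive

    # Neighbourhood clamp: only keep history that the current frame supports.
    lo = np.minimum.reduce([np.roll(current, -1), current, np.roll(current, 1)])
    hi = np.maximum.reduce([np.roll(current, -1), current, np.roll(current, 1)])
    clamped = np.clip(ALPHA * current + (1 - ALPHA) * clamped, lo, hi)

print("naive  :", np.round(naive, 2))    # ghost trail at every old spark position
print("clamped:", np.round(clamped, 2))  # trail gone, but spark stuck at ~0.1 of full brightness
```

Depending on how aggressively the history is rejected, you get either the ghost trail (no rejection) or a spark that never reaches full brightness and effectively gets discarded (full rejection); with partial rejection you get a bit of both, which is what the Wolfenstein footage looks like to me.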
So now I have a few questions: is there anything that proves they’re using the tensor cores (in Wolfenstein)? And if they are using the image processing algorithm to upscale and then the AI research model to fill in the details, has it effectively become some sort of AI “denoiser” rather than an upscaler?
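Just to make that second question concrete, this is the two-stage arrangement I’m imagining (the function names and their contents are placeholders of mine, nothing NVIDIA has documented):

```python
# Minimal sketch of the hypothetical two-stage pipeline I'm speculating about.
import numpy as np

def image_processing_upscale(frame, scale=2):
    # Placeholder for the hand-tuned, non-tensor-core upscale pass
    # (here just a nearest-neighbour blow-up).
    return np.repeat(np.repeat(frame, scale, axis=0), scale, axis=1)

def ai_cleanup(frame):
    # Placeholder for the tensor-core network that would remove the
    # remaining artifacts; here it simply passes the image through.
    return frame

low_res = np.random.rand(540, 960, 3)                  # e.g. a 960x540 render target
output = ai_cleanup(image_processing_upscale(low_res))
print(output.shape)                                    # (1080, 1920, 3): already at target resolution
```

If that’s roughly the shape of it, the network never sees the raw low-resolution frame; it only sees an already-upscaled image and removes artifacts from it, which is why “denoiser” seems like a better word than “upscaler” to me.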