Nvidia DLSS 1 and 2 antialiasing discussion *spawn*

This ^^
The naysayers are criticizing DLSS while totally forgetting to mention that it runs circles around other solutions that produce far worse results. The truth is that all RTX owners choose DLSS over TAA any day, any time, on any game. That's a hard fact, and that's why DLSS is such a big deal. It provides better quality than any other available solution AND it boosts performance by 30-60%. What's not to like?

EDIT: DLSS 3.0 is coming for Ampere launch (or very soon after), I can confirm that
How, may I ask, can you confirm that?
 
EDIT: DLSS 3.0 is coming for Ampere launch (or very soon after), I can confirm that
I don't think Turing or Volta hardware can take advantage of any of Ampere's tensor core advances (INT8 tensor core operations with sparsity), so it is highly plausible to expect DLSS 3.0 with feature improvements.
 
I don't think Turing or Volta hardware can take advantage of any of Ampere's tensor core advances (INT8 tensor core operations with sparsity), so it is highly plausible to expect DLSS 3.0 with feature improvements.
DLSS uses FP16 only AFAIK, and the only change in Ampere here is more performance. And since DLSS is pure compute, whatever Ampere can run anything else can run too; the only question is at what speed.
 
DLSS uses FP16 only AFAIK, and the only change in Ampere here is more performance. And since DLSS is pure compute, whatever Ampere can run anything else can run too; the only question is at what speed.
Just found the link claiming that DLSS uses INT8. I assume the source is either Nvidia or the Death Stranding developers.
Microsoft has confirmed support for accelerated INT4/INT8 processing for Xbox Series X (for the record, DLSS uses INT8) but Sony has not confirmed ML support for PlayStation 5 nor a clutch of other RDNA 2 features that are present for the next generation Xbox and in PC via DirectX 12 Ultimate support on upcoming AMD

Edit: Clarity.
https://www.eurogamer.net/articles/digitalfoundry-2020-image-reconstruction-death-stranding-face-off
 
Strange that there's a debate over whether DLSS uses FP16 or INT8. Maybe DLSS originally looked like arse because it was INT8?
 
I'm pretty sure that this is wrong. Most modern games use FP16 frame buffers; you can't process them in INT8 without noticeable quality loss.

Hopefully someone knowledgeable will chime in, but until then I'll try my best as a layman. IIRC, what is computed in the network are the weights of the neural nodes, whose "additions" lead to the final solution, not (in this case) the image itself.
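To make that a bit more concrete, here's a minimal, purely illustrative sketch (generic Python, nothing DLSS-specific) of what a single node does: the weights and bias are what the network stores, and that's what the INT8/FP16 question is about; the image data just flows through them.

# Purely illustrative single neural-network node: a weighted sum of its
# inputs plus a bias, passed through an activation. The weights/bias are the
# stored parameters (what INT8/FP16 refers to); inputs and outputs flow through.
def node(inputs, weights, bias):
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return max(0.0, total)  # ReLU activation

print(node([0.2, 0.8, 0.5], [1.5, -0.7, 0.3], 0.2))  # example forward pass for one node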
 
I'm pretty sure that this is wrong. Most modern games use FP16 frame buffers; you can't process them in INT8 without noticeable quality loss.
INT8 or INT4 refers to the size of the weights of the neural nodes. Depending on what we are trying to accomplish with the task at hand, scientists will choose anywhere from INT4 to FP64.
By choosing smaller weight sizes we reduce the size of the model and cut processing time (should the hardware allow it).
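As a rough back-of-the-envelope illustration of what smaller weights buy you (the parameter count below is made up purely to show the arithmetic):

# Hypothetical parameter count, just to show how weight precision scales the
# memory footprint of a model. Not real DLSS numbers.
params = 10_000_000  # made-up number of weights

bytes_per_weight = {"FP64": 8, "FP32": 4, "FP16": 2, "INT8": 1, "INT4": 0.5}
for fmt, size in bytes_per_weight.items():
    print(f"{fmt}: {params * size / 1e6:.1f} MB")

Halving the weight size halves the model's memory traffic, which is a big part of why the smaller formats can be faster when the hardware supports them natively.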

There are cases where more precision matters, and floating point can sometimes even be more performant than integer, but in the game of real-time upscaling every millisecond counts, so some precision will be traded away (thus possibly resulting in artifacts) in order to speed the neural network up.

Essentially the largest factor is this: if you're making a custom neural network and your values exceed the arithmetic range and overflow, your network becomes broken (something that is positive becomes negative, for instance), and that's why we keep increasing the precision when we need it. You could find a way to scale the weights back into the -128 to 127 range, but eventually those added steps may be significant enough that it would have been just faster to run FP16 weights.
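For what it's worth, here's a minimal sketch of the kind of rescaling being described, using textbook symmetric INT8 quantization (a generic technique, not a claim about how DLSS is built):

# Symmetric INT8 quantization sketch: scale the FP weights so the largest
# magnitude maps near the INT8 limit, round, clamp to [-128, 127], then
# dequantize to see how much precision was lost.
def quantize_int8(weights):
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

weights = [0.93, -1.71, 0.004, 2.56, -0.35]
q, scale = quantize_int8(weights)
print(q)                     # integer weights in [-128, 127]
print(dequantize(q, scale))  # close to the originals, but not exact

Those scale factors are exactly the "added steps" mentioned above: if they eat the time you saved, you might as well have stayed at FP16.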

So you really need to know what you're doing if you're making an INT8 NN. Those of us who just need an accurate result, and for whom speed is not a concern, stick with FP, sometimes FP32 because Pascal consumer hardware is completely borked on FP16; I don't need to think about my weight ranges, I know they'll be covered. Once you're in the business of performance and you're building a custom NN for performance, INT4 and INT8 are, I suspect, often the aim.
 
The sparsity acceleration works on any of the tensor formats anyway, so isn't it irrelevant what format DLSS uses with regard to any possible performance increase in Ampere?
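For reference, the sparsity feature in question is 2:4 structured sparsity (two of every four values zeroed), and as far as I know it does work the same way whatever the element format is. A rough, illustrative sketch of pruning weights to that pattern:

# Rough sketch of 2:4 structured sparsity: in every group of four weights,
# zero the two smallest-magnitude values. This is the pattern Ampere's sparse
# tensor cores can skip, independent of the element format (FP16, INT8, ...).
def prune_2_of_4(weights):
    pruned = []
    for i in range(0, len(weights), 4):
        group = list(weights[i:i + 4])
        for idx in sorted(range(len(group)), key=lambda j: abs(group[j]))[:2]:
            group[idx] = 0.0
        pruned.extend(group)
    return pruned

print(prune_2_of_4([0.9, -0.1, 0.05, -1.2, 0.3, 0.7, -0.02, 0.4]))
# -> [0.9, 0.0, 0.0, -1.2, 0.0, 0.7, 0.0, 0.4]

Whether a given network's weights can actually be pruned that way without hurting quality is a separate question from the data format.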
 
Agreed, it's hilarious that people are *still* doing quality comparisons between techniques that are 1.5-2x apart in performance. That's just rubbish science. You need to normalize one of the two (perf/quality), and since quality is subjective you need to do an iso-performance comparison by dropping resolution and/or other quality options for TAA/native/whatever.
 
F1 2020 Adds NVIDIA DLSS For Increased Performance
August 24, 2020
Virtual racing has never been more popular, or as close to the real thing as it is now. Today, Codemasters’ F1® 2020 gets an AI performance boost with NVIDIA DLSS, which enables gamers to maximize their graphics settings as well as play at even higher resolutions using NVIDIA GeForce RTX GPUs.

“We wanted F1 2020 to be the most authentic and immersive F1 game to date,” said Lee Mather, F1 Franchise Game Director at Codemasters. “This required a laser focus on all aspects, from the My Team feature through to every pixel on the screen. NVIDIA DLSS gives users the performance headroom to maximise visual settings, resulting in realistic, immersive graphics.”


https://www.nvidia.com/en-us/geforce/news/f1-2020-nvidia-dlss-update/
 
Interesting how 4K comes in at under 3x the performance cost of 1080p despite having 4x the number of pixels. With DLSS that cuts to 2x.

But that only applies to the wider GPUs, so they're getting much more mileage out of the additional SMs at higher resolutions.
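For the record, the raw pixel math behind that observation (the frame rates below are made up, purely to show how such a ratio is read off a benchmark chart):

# 4K renders exactly 4x the pixels of 1080p.
pixels_1080p = 1920 * 1080   # 2,073,600
pixels_4k = 3840 * 2160      # 8,294,400
print(pixels_4k / pixels_1080p)  # 4.0

# Hypothetical frame rates (NOT the F1 2020 figures), just to show the arithmetic:
fps_1080p, fps_4k = 120.0, 45.0
print(fps_1080p / fps_4k)  # ~2.7x frame-time cost despite 4x the pixels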
CPU limitations?
 
Do we know what the workload is to implement DLSS in a game? How much is on Nvidia, and how much is on the devs?
 