Nvidia DLSS 1 and 2 antialiasing discussion *spawn*

You mean RIS, aka Radeon Image Sharpening. CAS is a separate thing implemented by game devs; RIS is something you can add yourself, like NVIDIA Image Sharpening.

They're the same thing. Radeon Image Sharpening is implemented at the driver level, so it'll sharpen the entire image. CAS is the same type of sharpening but implemented at the game level, so they can do things like sharpening the image before overlaying the HUD.
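To make the pipeline-placement point concrete, here's a toy sketch of the idea, nothing vendor-specific and not AMD's actual CAS shader: the same kind of adaptive sharpen, applied either to the 3D frame before the HUD is composited (game-level) or to the final frame including the HUD (driver-level).

```python
# Toy sketch (not AMD's actual CAS shader) of why pipeline placement matters:
# a game-integrated sharpen can run on the 3D frame before the HUD is
# composited, while a driver-level sharpen (RIS) only sees the final frame.
import numpy as np

def adaptive_sharpen(img, strength=0.4):
    """Unsharp-mask style sharpen; 'img' is HxWx3 float in [0, 1]."""
    blur = (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
            np.roll(img, 1, 1) + np.roll(img, -1, 1)) / 4.0
    detail = img - blur
    # Scale the effect down where local contrast is already high (the "adaptive" part).
    local_contrast = np.abs(detail).mean(axis=-1, keepdims=True)
    amount = strength * (1.0 - np.clip(local_contrast * 4.0, 0.0, 1.0))
    return np.clip(img + amount * detail, 0.0, 1.0)

def composite_hud(scene, hud_rgba):
    """Alpha-blend a HUD layer (HxWx4) over the scene (HxWx3)."""
    a = hud_rgba[..., 3:4]
    return hud_rgba[..., :3] * a + scene * (1.0 - a)

scene = np.random.rand(720, 1280, 3)                  # stand-in for the rendered 3D frame
hud = np.zeros((720, 1280, 4)); hud[:40, :, :] = 1.0  # stand-in for a HUD bar

# Game-level (CAS-style): sharpen first, HUD stays untouched.
game_level = composite_hud(adaptive_sharpen(scene), hud)

# Driver-level (RIS-style): the HUD gets sharpened along with everything else.
driver_level = adaptive_sharpen(composite_hud(scene, hud))
```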
 
They're the same thing. Radeon Image Sharpening is implemented at the driver level, so it'll sharpen the entire image. CAS is the same type of sharpening but implemented at the game level, so they can do things like sharpening the image before overlaying the HUD.
Double checked and you're apparently right, I remembered RIS being simpler sharpening
 
I don't think it does anything particularly interesting for upscaling. It's just a trivial upscale method with sharpening applied. Nvidia offers GPU upscaling before sharpening in their drivers, but they explicitly state that some TVs/displays will have higher-tap upscale filters than Nvidia provides on their GPUs.
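Just to be concrete about what "higher tap" means: each output sample is a weighted sum of N nearby input samples, and more taps generally preserve detail better before any sharpening is layered on top. A rough sketch of a plain 1D Lanczos-3 upscale (6 taps per output sample, versus 2 for bilinear); nothing here is specific to any driver:

```python
# Minimal sketch of a multi-tap upscale filter: each output sample is a
# weighted sum of 2*a nearby input samples (6 taps for Lanczos-3).
import numpy as np

def lanczos_kernel(x, a=3):
    x = np.asarray(x, dtype=float)
    out = np.sinc(x) * np.sinc(x / a)
    return np.where(np.abs(x) < a, out, 0.0)

def resample_1d(signal, out_len, a=3):
    """Upscale a 1D signal with an a-lobed Lanczos filter (2*a taps per sample)."""
    in_len = len(signal)
    scale = in_len / out_len
    out = np.zeros(out_len)
    for i in range(out_len):
        src = (i + 0.5) * scale - 0.5                # continuous source coordinate
        left = int(np.floor(src)) - a + 1
        taps = np.arange(left, left + 2 * a)
        weights = lanczos_kernel(src - taps, a)
        taps = np.clip(taps, 0, in_len - 1)          # clamp at the borders
        out[i] = np.dot(weights, signal[taps]) / weights.sum()
    return out

signal = np.sin(np.linspace(0, 8 * np.pi, 64))       # 64-sample "low-res" scanline
upscaled = resample_1d(signal, 256)                  # 4x upscale, 6 taps per output sample
```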

Never said it does anything special, only that it does allow for upscaling, while RIS doesn't have a similar option IIRC.
 
No TAA, and if you get significant aliasing at 4K nowadays, it means the game was programmed with the arse.

Well this assertion is surely nonsense.

I had the same question as Scott, what exactly are we calling native? Clearly there is no artifact free “perfect” baseline to which we’re comparing DLSS.
 
Well this assertion is surely nonsense.

I had the same question as Scott, what exactly are we calling native? Clearly there is no artifact free “perfect” baseline to which we’re comparing DLSS.

No more nonsense than claiming that upscaling a downsampled signal is better than a non-downsampled one.
If someone wants to bring up the Death Stranding case again: that is a game programmed with the arse, because it applies unhealthy levels of temporal AA to everything, blurring the image (but hey, it does not introduce motion artifacts the way DLSS does...).
In the case of the CoD images I posted, we can see that the game does not apply an unreasonable amount of IQ-reducing TAA; the result is that you can perceive the IQ loss when applying DLSS, as per the images posted. That can be perfectly fine for many (most?) people given the performance increase, while others may feel differently.
Personally, I find it useful, as I'm using a mobile RTX 2070 and Control is a very unoptimized game, even at 1080p.
 
Well this assertion is surely nonsense.

I had the same question as Scott, what exactly are we calling native? Clearly there is no artifact free “perfect” baseline to which we’re comparing DLSS.
64x SSAA? Isn't that what DLSS is trained on as the source of truth?
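For anyone wondering what that kind of reference means in practice, here's a toy sketch of 64x supersampling: each final pixel is the average of an 8x8 grid of subsamples, i.e. the scene rendered at 8x the resolution per axis and box-filtered down. The scene function is just a stand-in, not anything from an actual renderer:

```python
# Rough sketch of a 64x SSAA "ground truth" image: 8x8 subsamples per pixel,
# box-filtered down to the output resolution.
import numpy as np

def scene(x, y):
    """Stand-in for the renderer: a cheap procedural pattern with hard edges."""
    return ((np.sin(x * 40.0) * np.cos(y * 40.0)) > 0.0).astype(float)

def render_ssaa(width, height, samples_per_axis=8):
    hi_w, hi_h = width * samples_per_axis, height * samples_per_axis
    xs = (np.arange(hi_w) + 0.5) / hi_w
    ys = (np.arange(hi_h) + 0.5) / hi_h
    hi_res = scene(xs[None, :], ys[:, None])
    # Average each 8x8 block down to one pixel -> 64 samples per pixel.
    return hi_res.reshape(height, samples_per_axis,
                          width, samples_per_axis).mean(axis=(1, 3))

ground_truth = render_ssaa(320, 180)   # the kind of reference a 1-sample render gets compared to
```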
 
I would think it's because of a lack of time. There have been a ton of GPUs and CPUs released lately. I fully expect there to be a lot of comparisons once AMD gets their newly promised super resolution solution integrated into games.
Except now they are putting out videos of console games getting patched. So you have the launch performance, then the patched performance, then the sponsored performance, and so on.
 
Some reviews absolutely do say it can look better than native. Not in every game, but certainly in some. And in those where it isn't, it's clearly close.

The only one where they say it may look better than native is Death Stranding, which has the horrible issue of exaggerated TAA applied everywhere. Even FidelityFX CAS looks better than that. For the other titles, they cannot say that. E.g. Control, which I have and play: I used DLSS and you can clearly see there is an IQ loss. Low, but perceivable. Watch Dogs: Legion and CoD are the most recent examples of the same behavior. Is it close? Yes, it's close. Never said otherwise.
 
No more nonsense than claiming that upscaling a downsampled signal is better than a non-downsampled one.

That’s not what’s being compared. DLSS isn’t just upscaling. It’s also antialiasing. You seem to be arguing a theoretical point that’s not relevant to how modern games actually work.

In some games DLSS looks better than native and in some games it doesn’t. The reason for that is that native has its own share of artifacts.

Your position only makes sense if “native” represented ideal IQ. It doesn’t.
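To make the "it's also antialiasing" point concrete, here's a toy sketch of generic jittered temporal accumulation, not Nvidia's actual algorithm: sampling the scene on a low-res grid with a different subpixel offset each frame and accumulating the results both fills in the missing resolution and averages out aliasing, which is why the upscaling and the AA come from the same mechanism.

```python
# Toy illustration (generic jittered accumulation, not Nvidia's network) of why
# a temporal upscaler is also an antialiaser: each frame is a low-res render
# with a different subpixel jitter, and accumulating them recovers resolution
# while averaging out aliasing.
import numpy as np

def scene(x, y):
    """Stand-in for the renderer: hard-edged procedural pattern over [0,1]^2."""
    return ((np.sin(x * 40.0) * np.cos(y * 40.0)) > 0.0).astype(float)

def accumulate(frames, low_w=160, low_h=90, scale=2, seed=0):
    rng = np.random.default_rng(seed)
    hi_w, hi_h = low_w * scale, low_h * scale
    accum = np.zeros((hi_h, hi_w))
    weight = np.zeros((hi_h, hi_w))
    for _ in range(frames):
        jx, jy = rng.random(2)                       # per-frame subpixel jitter
        xs = (np.arange(low_w) + jx) / low_w
        ys = (np.arange(low_h) + jy) / low_h
        samples = scene(xs[None, :], ys[:, None])    # one low-res, aliased frame
        # Splat each low-res sample into the high-res pixel it landed in.
        px = np.minimum((xs * hi_w).astype(int), hi_w - 1)
        py = np.minimum((ys * hi_h).astype(int), hi_h - 1)
        accum[np.ix_(py, px)] += samples
        weight[np.ix_(py, px)] += 1.0
    return accum / np.maximum(weight, 1.0)

one_frame = accumulate(1)    # a single jittered frame only covers 1/4 of the high-res pixels
converged = accumulate(64)   # approaches a supersampled (antialiased) result
```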
 
Yeah the problem here is that the "native" comparison tends to be against the native resolution with TAA, which is rather horrible to begin with. Unfortunately many games nowadays don't come with any other AA solutions and don't offer above-native internal rendering options.
 
That’s not what’s being compared. DLSS isn’t just upscaling. It’s also antialiasing. You seem to be arguing a theoretical point that’s not relevant to how modern games actually work.

In some games DLSS looks better than native and in some games it doesn’t. The reason for that is that native has its own share of artifacts.

Your position only makes sense if “native” represented ideal IQ. It doesn’t.

The only game where this can be said (DS) takes the image produced by the game engine and applies exaggerated post-processing that lowers some parts of the IQ. If DLSS used that same "native" image at a lower resolution as input, it would result in worse IQ. This does not happen because in that case DLSS does not use the lower-IQ "native" image; it takes the unprocessed version from the pipeline. In practically all other games, and I gave plenty of examples above, the "native" IQ is not devastated by absurd post-processing, and thus the DLSS image shows some quality loss, as it should. Take that as you want.
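A hedged sketch of the ordering the post is describing; every function and resolution here is a hypothetical stand-in (strings instead of actual frames), not an engine or DLSS API. The only point it shows is that the upscaler consumes the frame before the heavy post-processing step, so effects like Death Stranding's aggressive TAA blur are not part of its input:

```python
# Hypothetical stand-ins only; illustrates pipeline ordering, not a real API.
def render(resolution):
    return f"frame@{resolution}"

def temporal_upscale(frame, target):
    return f"upscaled({frame})->{target}"

def post_process(frame):
    return f"post({frame})"        # TAA / motion blur / film grain / etc.

def compose_ui(frame):
    return f"ui_over({frame})"

DISPLAY, RENDER = "3840x2160", "2560x1440"

# "Native": full-res render, then post-processing, then UI.
native_output = compose_ui(post_process(render(DISPLAY)))

# Upscaled path: lower-res render goes straight into the upscaler,
# post-processing and UI run afterwards at output resolution.
dlss_style_output = compose_ui(post_process(temporal_upscale(render(RENDER), DISPLAY)))

print(native_output)
print(dlss_style_output)
```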
 
The only game where this can be said (DS) takes the image produced by the game engine and applies exaggerated post-processing that lowers some parts of the IQ. If DLSS used that same "native" image at a lower resolution as input, it would result in worse IQ. This does not happen because in that case DLSS does not use the lower-IQ "native" image; it takes the unprocessed version from the pipeline. In practically all other games, and I gave plenty of examples above, the "native" IQ is not devastated by absurd post-processing, and thus the DLSS image shows some quality loss, as it should. Take that as you want.

Isn’t Control also considered a showcase for DLSS IQ? Either way it seems you’re contradicting your own point. In games where native sucks then it is accurate to say that DLSS is better than native.

Your argument is that DLSS can never beat a theoretical ideal native render and nobody disagrees with that.
 
Isn’t Control also considered a showcase for DLSS IQ? Either way it seems you’re contradicting your own point. In games where native sucks then it is accurate to say that DLSS is better than native.

Your argument is that DLSS can never beat a theoretical ideal native render and nobody disagrees with that.

Control is a case where the IQ loss is very minimal. But there is an IQ loss, and it's quite easy to see. Of course, in the middle of the action it's difficult to pay attention to it. Literally the only case where you can say DLSS is better than "native" is Death Stranding, period (and even then only in static images, because it seems there are some issues in motion every now and then, and in cutscenes, as reported by Ars Technica). And there are quite specific reasons for that. In all other cases, Control included, this is simply not true, and thus it is not representative.
 
Maybe it's time to chill. It's not like anyone here is going to change their opinion if they haven't already, at least not unless some concrete new evidence is presented. Cyberpunk 2077 and Bloodlines, those are the DLSS titles I'm looking forward to seeing in action. I'm hoping my 3070 can run those games without DLSS at 1440p maxed. If not, then DLSS 2.0 becomes a choice I might have to make, depending on where the best quality compromise for me is found.

Blast from the past

[attached images]
 