AMD FSR antialiasing discussion

I asked because in the previous image DLSS was on the left and FSR on the right. Also, the review at PurePC shows that DLSS has IQ issues - it leaves a lot of jagged edges, e.g.:

https://www.purepc.pl/misc/img_comp...pl/files/Image/red_temp/1652594278_16_ncb.jpg

I bet they are only capturing static scenes. That is the best case for FSR and DLSS. I used Afterburner to capture a picture from the framebuffer while in motion.
Here is an example at 60FPS without FSR Sharpener: Imgsli

I find FSR 2.0 Performance not usable at 60fps. The artefacts are more distracting than at >100fps. Here is an example with a moving enemy: Imgsli

/edit: Captured the difference for moving objects at 60 fps between DLSS and FSR Performance in 4K:
Imgsli
 
Last edited:
Also, the review at PurePC shows that DLSS has IQ issues
Gradients usually vanish in bright areas of the image when AA operates in HDR space before tonemapping; the gradients are then destroyed when color is quantized to LDR space. That might be the case here.
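To make that concrete, here is a minimal sketch (not from the thread; a simple Reinhard curve stands in for the game's actual tonemapper and the sample values are made up) of why a gradient can disappear when the AA resolve happens in HDR space before tonemapping:

```python
# Hypothetical illustration: resolve-then-tonemap vs tonemap-then-resolve.
# A simple Reinhard operator stands in for the game's tonemapper (assumption).

def tonemap(x):
    return x / (1.0 + x)  # maps HDR radiance to [0, 1) LDR

# Two samples covering an edge pixel: a very bright light and a dark background.
samples = [20.0, 0.05]

# AA resolve in HDR space, tonemap afterwards (the case described above):
hdr_resolve = tonemap(sum(samples) / len(samples))            # ~0.91, nearly full brightness

# Tonemap each sample first, then resolve in LDR space:
ldr_resolve = sum(tonemap(s) for s in samples) / len(samples)  # ~0.50, a visible gradient step

print(hdr_resolve, ldr_resolve)
```

With the HDR-first resolve the edge pixel stays almost as bright as the light itself, so the intermediate shades that would smooth the edge are largely gone after quantization to LDR.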
 
Gradients usually vanish in bright areas of the image when AA operates in HDR space before tonemapping; the gradients are then destroyed when color is quantized to LDR space. That might be the case here.
The cables of the hanging lamps are also affected, so it doesn't seem to be related only to bright areas.
 
The cables of the hanging lamps are also affected, so it doesn't seem to be related only to bright areas.
Still looks like the HDR resolve issue to me. Accumulating details in LDR space has other drawbacks.

BTW, I just noticed weird texture posterization near the camera in PurePC's FSR 2.0 Balanced and Performance screenshots.
 
I'd expect an HDR-related issue to affect all edges similarly (here some are fine and some are aliased), and that the result wouldn't depend on the DLSS quality setting. The cables look much better if DLSS is set to Quality (and worse if set to Performance).
 
I'd expect an HDR-related issue to affect all edges similarly (here some are fine and some are aliased), and that the result wouldn't depend on the DLSS quality setting.
That seems only logical to me: DLSS Quality should have more samples to work with (if the jittering sequence has the same length), so it can produce more gradients, and those gradients have a better chance of surviving tonemapping. There is also more information loss in high-contrast areas, such as the black wires in front of the bright window; that has always been the case with HDR AA resolve before tonemapping to LDR.
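As a rough back-of-the-envelope check (the internal resolutions below are the usual DLSS scaling ratios, not figures stated in the thread), the per-pixel sample budget does grow with the quality preset:

```python
# Input samples contributing to each 4K output pixel per frame, assuming the
# standard DLSS internal resolutions (Quality = 0.67x per axis, Performance = 0.5x).
output_pixels = 3840 * 2160

quality_pixels = 2560 * 1440
performance_pixels = 1920 * 1080

print(quality_pixels / output_pixels)      # ~0.44 samples per output pixel per frame
print(performance_pixels / output_pixels)  # 0.25 samples per output pixel per frame
```

Over a jitter sequence of the same length, Quality therefore accumulates roughly 1.8x as many unique samples per output pixel as Performance, which fits the observation that the wires hold up better at Quality.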
 
Last edited:
Those are the wrong numbers to use, since the 1440p scene there is very likely CPU bound; you need a GPU utilization counter to make sure the scene is not CPU bound, which is available on the next page - https://www.techpowerup.com/review/amd-fidelity-fx-fsr-20/3.html
GPU utilization there is 97% at 1440p, which indicates a GPU-bound scene.
DLSS Quality on a 3080 takes 1.01 ms for 4K reconstruction vs no AA at 1440p.
FSR 2.0 Quality on a 3080 takes 1.66 ms for 4K reconstruction vs no AA at 1440p.
You can search for the DLSS integration guide, where execution times are listed for different RTX models; the calculations above align with the DLSS execution time numbers in that guide.
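For reference, here is a sketch of how that kind of per-frame cost can be derived from a review's FPS numbers (the FPS values in the example are placeholders, not TPU's actual figures); it only holds when the lower-resolution run is GPU bound, which is why the utilization counter matters:

```python
# Estimate the upscaler's execution time from two GPU-bound frame rates:
# the same content rendered natively at 1440p with no AA, and reconstructed to 4K.
def upscaler_cost_ms(fps_native_1440p, fps_reconstructed_4k):
    frame_time_native = 1000.0 / fps_native_1440p
    frame_time_reconstructed = 1000.0 / fps_reconstructed_4k
    return frame_time_reconstructed - frame_time_native

# Placeholder numbers for illustration only:
print(upscaler_cost_ms(140.0, 121.5))  # ~1.09 ms of reconstruction overhead
```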

I actually didn't realize there was a second page of TPU data to use. I was just offering some data interpretation based on what I thought we had to work with.

I remember the DLSS dev guide, but I thought the overhead in it really only covered the actual DLSS operations (I believe it was discussed in this thread to that extent), while not including the other elements that add overhead compared to running at the actual resolution it's being scaled from (e.g. higher LODs).

Alex tested a 1060 vs a 580. Despite the 580 having almost 50% more FP32 compute and 800% more FP16 compute, the 1060 runs the algorithm faster.

I didn't realize at the time that @Dictator at DF had already posted their content, which also means my question to him is answered. So we do have more data; when I was posting that, I thought only the HUB data was available so far.
 

These guys are doing some funny testing there. I installed the game to have a look at their bizarre results, used their very high settings at 1440p, and went to the same area where they got this:

https://www.purepc.pl/misc/img_comp...pl/files/Image/red_temp/1652594404_16_ncb.jpg

What i got was this:

https://imgsli.com/MTA4MTEw

plus extra flickering on the lines with FSR

Then I noticed the lines break when you look up, but are solid if you look down.



The same is happening with FSR, but more severely:



So you could adjust the results by doing this:

https://imgsli.com/MTA4MTEy

And what's up with their absurd aliasing with DLSS on the circular light in the centre of the room? Why is it missing entirely in my pics?

https://imgsli.com/MTA4MTE0
 
And what's up with their absurd aliasing with DLSS on the circular light in the centre of the room? Why is it missing entirely in my pics?
Is there an HDR setting in the game? They probably tested the game in HDR mode.
There are also weird posterization artifacts on the floor with FSR 2.0 Balanced and Performance modes in their review, which I've not seen in other reviews or on your screens.
 
I'm not seeing an HDR option in the menu. But seeing as the images change depending on your position when you render with DLSS/FSR, and none of their comparison pics are exactly the same, it makes you wonder if it's intentional on their part. How hard is it to just not move when testing two simple options?
 
Picked up Deathloop on the Epic sale. My system: i5 12400, RTX 3060, playing on a 55" 4K. Actually decently impressed with FSR so far, and I'm using dynamic mode with a target of 60fps at 4K, so it's likely dipping down into Performance mode territory at points. Equal to DLSS? Nah. DLSS is definitely more stable, and with added sharpening it matches the sharpness of the FSR mode's textures, but it still has fewer edge aliasing artifacts and keeps thin lines connected.

However, in choosing dynamic res with reconstruction, my very unscientific, quick-and-dirty test was basically to see if it could meet my minimum standard, that being: is FSR 2.0 at least better than, say, checkerboarding or interlaced 4K? And at this point, I'd say overall - very likely.

Part of the reason for this positive impression may be that Deathloop just has a good DLSS implementation all around. I haven't run into any 'gotchas' like I have with other games where DLSS 2.0+ was patched in later and depth of field or motion blur can cause a burst of shimmering/low-res textures in spots. It's pretty consistent in my tests, and even though FSR has more shimmering overall, I haven't run into an effect - yet - where it really 'announces itself' like DLSS has in some games that got patched later, as opposed to being designed with it in mind with plenty of time before release.

So I think having another option that requires the same focus on potential problem spots will help DLSS going forward as well; more attention to reconstruction from the outset is a 'good thing' for everyone.

So in summary (on this one game I've played for 30 mins :)), based on what I've seen so far I wouldn't like it if DLSS were excluded from a future game because of FSR 2.0. But if the only choice for a game's reconstruction method were FSR 2.0 implemented at AMD's behest versus, say, an Interlaced mode like in the Resident Evil series, I'd say just having FSR would be a pretty noticeable step up, provided it's given the attention it requires. I wouldn't say FSR 2.0 is better than something like Insomniac's excellent TAA reconstruction - it may be inferior in spots - but it certainly appears to be at least better than many others in my limited time with it, and for an open standard on the PC I'm decently happy with that. Sure, low expectations and all, but I'm happy enough with another option.

Also, praise (again, it's very early, so we'll see if it holds up) for Deathloop's dynamic res with DLSS/FSR. I hope this becomes commonplace; I appreciate dynamic res in general when it's not too obvious, and I'm disappointed when a PC version of a game doesn't have it, or just has a poor implementation that doesn't scale well. With DLSS/FSR it works exceptionally well here.
 
Last edited:
based on what I've seen so far I wouldn't like it if DLSS were excluded from a future game because of FSR 2.0
This didn't happen with the previous "DLSS killer", FSR1, and I will be surprised if it happens with FSR2, since adding DLSS or FSR2 to a game which already supports one of them should be a very easy task.
 
I'm fairly sure the only people that called FSR1 a "DLSS killer" were a few AMD fanboys and "journalists" with clickbait headlines. No one with a brain thought that an image scaler would come close to DLSS.
 