AMD FSR antialiasing discussion

Dave ain't no nVidiot, he's just posting up facts. He ain't saying FSR 2.0 sucks, just that nVidia's solution is faster, which it is. (Gonna call me an nVidia fanboy? :p )

EDITED BITS: Does anyone else find it amusing that the unreadable text looks clearer through DLSS 2.0 even though it's still illegible?

It's perplexing, as their platform won't even see widespread usage of FSR 2 or DLSS. They already have very good custom solutions.
 
Well, DF's video certainly puts things into a somewhat different perspective compared to HUB's. There's some pretty egregious artifacting, especially in performance mode compared to DLSS - and those are quite relevant to Radeon owners, considering the higher performance cost it currently has on AMD GPUs, surprisingly.


Yup. Points 3 to 7 are the typical areas with issues.

A competent showing regardless. It's just going to vary from player to player which of these issues they can live with to get playable performance on older hardware. But it's probably not going to matter for people with something like a 1060 or 580, since lowering details is pretty common at this point with that level of hardware.
 
Pretty comprehensive, IMO.


So DLSS is slightly better at 4K Quality and obviously better at any other resolution or quality level.

That said, both DLSS 2.x and FSR 2.0 are not up to my personal quality standards in this game. Basically, I'd reduce quality settings for more performance in this game rather than introduce temporal accumulation artifacts from both DLSS and FSR.

Regards,
SB
 
So, frame-time-wise, DLSS 2 is up to twice as fast as FSR 2 on NVIDIA GPUs.
In Quality mode, DLSS 2 handles ghosting, animation, thin lines, hair, and particles and transparencies in general better than FSR 2.
In Performance mode, DLSS 2 delivers significantly better upscaling across every aspect of the image - it's in a completely different league compared to FSR 2, which quickly breaks up at this resolution.

And some people still have the audacity to claim AI upscaling is worthless, based on some half-assed and rushed comparisons! Based on this one sample, FSR 2 is still NOT equal to DLSS 2. Period.
 
DLSS looks better than native+TAA in Deathloop IMO. I'd expect FSR2 to be similar with more artifacting and breakup.
Probably overall, but I find from using DLSS that native TAA is just a little more consistent in motion in several titles; there's rarely any chance of hitting an effect/surface where the illusion breaks up, sometimes rather harshly, as can happen with DLSS (some reflective surfaces in Wolfenstein: Youngblood, foliage behind smoke/fog in HZD, some lights on buildings leaving trails in Death Stranding, for example).

I still use DLSS for those titles as overall they provide better image quality in many aspects, but that can also make those incidents where it breaks down even more evident. It's a very subjective thing of course and it's going to bother some people more than others, and obviously depends on the game.
 
That said, both DLSS 2.x and FSR 2.0 are not up to my personal quality standards in this game. Basically, I'd reduce quality settings for more performance in this game rather than introduce temporal accumulation artifacts from both DLSS and FSR.

Regards,
SB
Nice to know I'm not the only one
 
Probably overall, but I find from using DLSS that native TAA is just a little more consistent in motion in several titles; there's rarely any chance of hitting an effect/surface where the illusion breaks up, sometimes rather harshly, as can happen with DLSS (some reflective surfaces in Wolfenstein: Youngblood, foliage behind smoke/fog in HZD, some lights on buildings leaving trails in Death Stranding, for example).

I still use DLSS for those titles as overall they provide better image quality in many aspects, but that can also make those incidents where it breaks down even more evident. It's a very subjective thing of course and it's going to bother some people more than others, and obviously depends on the game.
My experience is the opposite, really. TAA tends to produce lots and lots of artifacts, while DLSS cleans them up most of the time, leaving just a few which are well known by now. It does depend on the game and how well the integration was done, though.
 
My experience is the opposite, really. TAA tends to produce lots and lots of artifacts, while DLSS cleans them up most of the time, leaving just a few which are well known by now. It does depend on the game and how well the integration was done, though.
You really need to specify which version of TAA(U) you mean, since DLSS is one too. In the case of Quality vs. Quality in Deathloop, according to TPU, DLSS has more artifacts (except ghosting, which FSR has more of).
 
You really need to specify which version of TAA(U) you mean, since DLSS is one too. In the case of Quality vs. Quality in Deathloop, according to TPU, DLSS has more artifacts (except ghosting, which FSR has more of).
I mean TAA, as in AA at native resolution. There's generally only one "version" of that in each game.
 
Alex doesn't mention this in his video but there's a very noticeable dis-occlusion reconstruction breakup artifact with FSR2 here:

[screenshot: Screenshot2022051401.png]


Looks VERY similar to what DLSS 1.9 was showing in Control in a similar situation with a rotating fan and alpha transparency.
 
Dave ain't no nVidiot, he's just posting up facts. He ain't saying FSR 2.0 sucks, just that nVidia's solution is faster, which it is. (Gonna call me an nVidia fanboy? :p )

EDITED BITS: Does anyone else find it amusing that the unreadable text looks clearer through DLSS 2.0 even though it's still illegible?

DLSS has a weird ability to clean up text.

Someone tested Control at super low resolutions; everything looked blurry except the text.

Dunno if that's by design or due to the ML model.
 
The video in the middle also has some glaring grainy artifacts trailing the antenna.
Typical thing with TAA -- that grainy stuff is the latest frame without any sample accumulation (1 spp, no AA) in the disocclusion region (those samples were occluded in the previous frame).
I've read somewhere that FSR 2.0 should blur the image in such areas, as devs usually do for TAA, but it seems FSR's blur fails in this case.
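
For illustration, here's a toy sketch (not FSR 2.0's actual resolve - just the generic TAA idea, with a hypothetical box-blur fallback standing in for the blur devs apply) of why disoccluded regions come out grainy:

```python
import numpy as np

def taa_resolve(current, history, history_valid, blend=0.1):
    # current:       this frame's raw samples (1 spp, noisy), 2D array
    # history:       accumulated colour from prior frames, already reprojected
    # history_valid: bool mask, False where reprojection failed (disocclusion)
    # Where history is usable, exponential accumulation averages the noise out.
    accumulated = blend * current + (1.0 - blend) * history

    # Disoccluded pixels have no history, so the output is just the raw
    # 1 spp frame -- that's the grainy trail. Blur it a little as a mitigation.
    h, w = current.shape
    pad = np.pad(current, 1, mode="edge")
    blurred = sum(pad[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

    return np.where(history_valid, accumulated, blurred)
```

If that fallback blur is too weak (or skipped), you get exactly the 1 spp grain trailing moving objects like the antenna.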
 

Unfortunately, it seems only one GPU was tested at both 4K and 1440p: the RTX 2080. So this is the only sample with which we can compare the scaling overhead against native.

RTX 2080 avg Frame times in milliseconds -

1440p Native - 13 (77 fps)
FSR 2.0 Quality 4K - 18.5
DLSS Quality 4K - 16.7

This means the avg frame time cost for FSR 2.0 is 5.5ms, DLSS is 3.7ms.

RTX 2080 1% Frame times in milliseconds -

1440p Native - 18.2 (55 fps)
FSR 2.0 Quality 4K - 23.3
DLSS Quality 4K - 21.7

This means the 1% frame time cost for FSR 2.0 is 5.1ms, DLSS is 3.5ms.

So, at least for the RTX 2080 in this test case, the overhead for FSR 2.0 Quality 4K is about 50% higher than for DLSS Quality 4K, which means DLSS is 50% faster or FSR 2.0 is 33% slower, depending on perspective.
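
Spelled out as a quick sketch (same RTX 2080 numbers as above, and assuming both Quality modes render internally at 1440p, which is what subtracting the native 1440p frame time presumes):

```python
# Upscaler cost = 4K-upscaled frame time minus native 1440p frame time,
# on the assumption that both paths render internally at 1440p.
native = {"avg": 13.0, "1%": 18.2}   # ms, RTX 2080
fsr2   = {"avg": 18.5, "1%": 23.3}
dlss   = {"avg": 16.7, "1%": 21.7}

for m in ("avg", "1%"):
    fsr_cost, dlss_cost = fsr2[m] - native[m], dlss[m] - native[m]
    print(f"{m}: FSR 2.0 {fsr_cost:.1f} ms vs DLSS {dlss_cost:.1f} ms "
          f"-> ratio {fsr_cost / dlss_cost:.2f}")
# avg: 5.5 vs 3.7 ms -> 1.49; 1%: 5.1 vs 3.5 ms -> 1.46
```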

However, at least my rough gauge of the overall data gives me the impression that FSR 2.0 might be most dependent on the amount of FP32 resources available relative to everything else. This would mean that Turing, and by extension the RTX 2080, might be "weaker" (well, we don't really have a baseline point) with respect to FSR 2.0 performance. However, this would also mean the interesting irony that FSR 2.0 performs better on Ampere than on RDNA2, which at least the data for the 6800 XT and RTX 3080 in this test would corroborate.
 