AMD FSR antialiasing discussion

The objective mathematical error metric has been known and used forever: MSE.
Human perception isn't objective though, so people have been looking for an objective "subjective" metric for quite some time. That this isn't possible should be evident. If the game's pre-processing (tonemapping), the environment's mid-processing (viewing conditions) and the brain's post-processing of an image all happened to be "standardized", you could move MSE into that post-processing space (basically brain-space, before meaning is attributed). Unfortunately that's a folly.
If you can accept an objective perceptor (e.g. a computer), and accept that it is a valid representation of the averaged subjective perceptions (e.g. that it sits at the center of the normal distribution of divergent perceptions), MSE is as good as it gets.
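For reference, a minimal sketch of plain MSE between a native frame and a reconstructed one (the file names are placeholders, not from any particular test):

```python
import numpy as np
from PIL import Image

def mse(reference: np.ndarray, test: np.ndarray) -> float:
    """Mean squared error over all pixels and channels."""
    return float(np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2))

# Hypothetical file names, purely for illustration.
native = np.asarray(Image.open("native_4k.png").convert("RGB"))
reconstructed = np.asarray(Image.open("fsr2_quality_4k.png").convert("RGB"))
print(f"MSE: {mse(native, reconstructed):.2f}")
```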

I want to push back on the latter part of your argument, because it doesn't consider the results of decades of research that have gone into image and video compression. Lossy compression techniques optimize image quality using objective metrics that were chosen based on an understanding of human perception. JPEG, for instance, exploits the fact that people are much more sensitive to changes in brightness than to changes in color. JPEG would produce worse results if it only considered MSE! This is something that could be applied to image quality measurement today: measure the dynamic range of the luminance and chrominance channels and compare the reconstructed and native images. That result would be an objective measure of image quality based on human perception.
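As a rough illustration of that kind of luma/chroma split (my own sketch, not the post's exact proposal or any published metric), one could compare the channels of both frames separately and weight the luma error more heavily; the weights and file names below are arbitrary placeholders:

```python
import numpy as np
from PIL import Image

def per_channel_mse(reference_path: str, test_path: str) -> dict:
    """MSE per YCbCr channel plus a luma-weighted aggregate (weights are arbitrary)."""
    ref = np.asarray(Image.open(reference_path).convert("YCbCr"), dtype=np.float64)
    tst = np.asarray(Image.open(test_path).convert("YCbCr"), dtype=np.float64)
    scores = {name: float(np.mean((ref[..., i] - tst[..., i]) ** 2))
              for i, name in enumerate(("Y", "Cb", "Cr"))}
    # Emphasize luma, mirroring the observation that we are more sensitive
    # to brightness changes than to color changes.
    scores["weighted"] = 0.8 * scores["Y"] + 0.1 * scores["Cb"] + 0.1 * scores["Cr"]
    return scores

print(per_channel_mse("native_4k.png", "fsr2_quality_4k.png"))
```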

The "problem" with image quality measurement is not that overall image quality is subjective, it is that we haven't figured out what aspects of image quality are important for understanding reconstruction results. "What GPU is best?" is also a subjective question, but tech enthusiasts have developed a set of objective measures that can help answer that question for many different people. Each individual measure helps any given person weigh what is most important to them. The same approach can (and should!) be used for image quality assessment. Image quality can be objectively measured with contrast, sharpness, blur, noise, dynamic range, total luminance, color accuracy, and ghosting and aliasing as mentioned above. Maybe some of these measures are useless, and maybe some are only as useful as the "AVG FPS only" era of GPU assessment, but eventually we can find the equivalent of the "1% lows" and the "frametime graphs" as we start to better understand what is important to answer our specific question. We don't need a single number (just like we don't use a single number to evaluate GPUs) of "Image Quality" to be able to make meaningful and quantifiable comparisons.

(Hi all, first comment. I've been a sporadic lurker for more than a decade, but I really enjoy reconstruction tech, so I figured today was as good a day as any for my first comment.)
 
Okay I now understand what you were saying. I disagree though. If you look at the neural IQA work it's not as simple as (transform-space -> apply MSE). It's possible that the network is implicitly doing that, but if so it seems to have found a more effective latent space that leads to the MSE yielding better correlation with subjective perception.

I'm not sure what that algorithm does. If it does try to simulate humans, it sure isn't objective. If it focuses on a particular subset of perceivable differences, but otherwise is a straight algorithm, it's not subjective, just scoped.

Or it's found a different metric altogether. In any case I'm a little surprised by the force of your absolutist claims that this is an unsolvable problem. There seems to be active ongoing research on the topic.

Alchemists did a lot of experiments and research; quantity or motivation is not an indication that something is achievable, promising or correct. It's better if we look at the mechanism(s) and argue about those. My argument is purely semantic/philosophical, and it's absolutely fair and possible to conclude something absolute (which isn't necessarily true of course, or even provable; see Gödel).
I've discussed the topic of metrics for a long time with a lot of researchers, including people directly involved in this work. It's really hard for me to condense these discussions, or to act as a representative of their thoughts and arguments. But my personal perception is that there's a lot of awareness and recognition of the problems/ambiguities of the definition and of the limits of its practicality.

I suggested that "universal and objective" is incompatible with whatever you want to wrap under "subjective". An objective metric maintains a static definition that's invariant over conditions and time. A subjective metric must by definition disregard some aspects in favour of others, with regard to a constrained set of observers, often specifically targeting "black box" perception (the figurative kind, not the literal biological one).
People's perception in particular is temporally volatile, for social, educational and biological reasons. Often enough they look at a picture multiple times and see different problems each time. Or, vice versa, they look at a picture multiple times and each time find more problems, because their sensitivity goes up.
A map from condition context to simulated human perception/evaluation (like SSIM) is very problematic. It changes over time, with age and so on. How do you want to weight these individuals' contributions?

Additionally, there's a problem in the math when we reduce, for example, a 2D error map to a scalar. The reduction contains a lot of undecidability problems. Is a shallow difference over all values worse than absolutely no difference anywhere, except for a complete erasure of content in the bottom-right corner? For a human?
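To make that reduction problem concrete, here is a toy construction of my own (not from anyone in this thread): two degradations of the same reference with identical MSE, one a faint uniform offset over the whole image, the other a completely erased bottom-right corner. SSIM, and presumably any human, would rank them very differently:

```python
import numpy as np
from skimage.metrics import structural_similarity

rng = np.random.default_rng(0)
reference = rng.random((256, 256))  # stand-in "image" with values in [0, 1]

# Case B: content in the bottom-right corner is completely erased
# (replaced by its mean); the rest of the image is untouched.
erased = reference.copy()
erased[192:, 192:] = reference[192:, 192:].mean()
mse_erased = np.mean((reference - erased) ** 2)

# Case A: a shallow uniform offset everywhere, scaled to give the exact same MSE.
shallow = reference + np.sqrt(mse_erased)
mse_shallow = np.mean((reference - shallow) ** 2)

for name, img, err in (("shallow offset", shallow, mse_shallow),
                       ("erased corner", erased, mse_erased)):
    ssim = structural_similarity(reference, img, data_range=1.0)
    print(f"{name}: MSE={err:.4f}  SSIM={ssim:.3f}")
```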
 
A map from condition context to simulated human perception/evaluation (like SSIM) is very problematic.
Unsurprising since human-written algorithms have been shown to be stunningly inept at cognitive tasks. Even a puny toy network like AlexNet massacred the ImageNet 2012 contest. Modern CNNs and Transformers are in a completely different league.

Additionally, there's a problem in the math when we reduce, for example, a 2D error map to a scalar. The reduction contains a lot of undecidability problems. Is a shallow difference over all values worse than absolutely no difference anywhere, except for a complete erasure of content in the bottom-right corner? For a human?
This is a very basic question and you know as well as I do that there isn't an easy answer. There are way more (and more important) variables than the ones you mentioned. This is exactly why human-written heuristics are worthless for these tasks. We innately know what looks good but are hopeless at transcribing those intuitions into code. NNs attempt to reproduce those intuitions and, based on what we're seeing, are damn good at doing so for certain classes of problems. I don't have enough experience to conclusively claim whether our specific problem falls into these classes, but the discriminator networks in GANs are effectively solving similar problems already.
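One concrete, publicly available example of a learned perceptual metric along these lines is LPIPS, which scores the distance between deep-network features of two images rather than raw pixel differences. A minimal usage sketch, assuming the lpips PyTorch package is installed; the tensors are random placeholders standing in for a native and a reconstructed frame:

```python
import lpips
import torch

# AlexNet-backed LPIPS; 'vgg' and 'squeeze' backbones are also provided.
loss_fn = lpips.LPIPS(net='alex')

# Placeholder tensors standing in for real frames: shape (1, 3, H, W), RGB in [-1, 1].
native = torch.rand(1, 3, 256, 256) * 2 - 1
reconstructed = torch.rand(1, 3, 256, 256) * 2 - 1

distance = loss_fn(native, reconstructed)  # lower = more perceptually similar
print(distance.item())
```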
 
This discussion is getting head-thumpingly hard to follow at times for me, so thanks. I feel like I'm learning a lot just trying to understand it all. :D
 
Tried FSR 2.0 in Farm Simulator 2022 and the implementation is really bad. It shows every problem that was seen in Deathloop and amplifies them:
ghosting, artefacts, overlapping geometry problems, etc.
Some examples with FSR and DLSS performance:
Ghosting: Imgsli
Artefacts with overlapping geometry: Imgsli
Combination of both: Imgsli

This game doesn't seem to be upscaling-ready, but FSR 2.0 is like a magnifying glass for these engine problems...
 
Tried FSR 2.0 in Farm Simulator 2022 and the implementation is really bad. [...]
You able to test DLSS?
 
Interview: Nic Thibieroz on FSR 2.0 - (pcgameshardware.de)
May 29, 2022
We also received an invitation to interview Nicolas Thibieroz from AMD. As Director of Game Engineering, Nic is not only responsible for the development of FSR and the entire FidelityFX program, but also leads AMD's developer relations team, which is the link between AMD's hardware and effects developers and the game developers who want to use AMD's hardware and effects. But let's let Nic have his say himself. This is the basic English transcript we used.
...
It is definitely higher cost compared to FSR 1.0, but we spent a lot of time optimizing that algorithm - like a lot! And we got to the point where we ended up doing multiple permutations based on different hardware architectures, not only across different vendors, but even across different generations. So there are multiple quick paths in the algorithm to make sure you get the highest performance.
 
DLSS Quality is 25% faster than FSR 2 Quality on a 3060Ti @4K.


[Chart: Deathloop, 3840 x 2160, DX12 max settings, DXR on (3060 Ti)]


On a 3080Ti, DLSS Quality is only 10% faster than FSR Quality though.

[Chart: Deathloop, 3840 x 2160, DX12 max settings, DXR on (3080 Ti)]


All while retaining higher IQ than FSR 2.

Now to the last and probably decisive question: Is FSR 2.0 a DLSS killer? From my point of view, there is a very clear – NO! And for two reasons. First of all, especially in lower resolutions like 1080p or 1440p performance, DLSS 2.3 is simply even better.
https://www.igorslab.de/en/amd-fsr-2-0-and-nvidia-dlss-2-3-in-direct-practice-comparison/9/
 
Maybe it's something to do with memory bandwidth, given that the 3060 Ti drops off a cliff going from 1440p->4K (overheads below are frametime deltas; see the sketch after these numbers):
1080p/147fps
1440p/100fps (+3.2ms 1080p->1440p)
1080p+dlssP_4k/68fps (+7.9ms overhead)
1080p+fsr2P_4k/63fps (+9ms)
1254p+dlssB_4k/60fps (+8.4ms, interpolated)
1440p+dlssQ_4k/59fps (+6.7ms)
1270p+fsr2B_4k/56fps (+9.5ms)
1440p+fsr2Q_4k/47fps (+11.3ms)
4k/37fps (+17ms 1440p->4k)
Both fsr2 P&B_4k are ~1.1ms slower than dlss P&B_4k, but the quality mode is a huge outlier in that the DLSS overhead is unexpectedly low and the difference is 4x as high as in the other modes. At 2K, fsr2 P/B/Q is 0.43ms/0.47ms/0.52ms slower than DLSS.

The 6700 XT has nearly the same native output at 1080p/1440p, but does much better with upscaling:
1080p/153fps
1440p/98fps (+3.6ms 1080p->1440p)
1080p+fsr2P_4k/98fps (+3.6ms, less than half that of the 3060 Ti)
1270p+fsr2B_4k/83fps (+3.9ms)
1440p+fsr2Q_4k/72fps (+3.7ms)
4k/47fps (+11ms 1440p->4k)
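For anyone who wants to double-check those overhead figures: they are just frametime deltas, i.e. 1000/fps, with the upscaling cost taken against the native frametime at the internal render resolution. A small sketch using the 3060 Ti Quality-mode numbers above (tiny deviations from the quoted values come from the rounded fps figures):

```python
def frametime_ms(fps: float) -> float:
    """Convert an average FPS figure to an average frametime in milliseconds."""
    return 1000.0 / fps

# 3060 Ti figures quoted above (average fps).
native_1440p = 100        # native 1440p
dlss_quality_4k = 59      # 1440p internal -> 4K output
fsr2_quality_4k = 47      # 1440p internal -> 4K output

# Upscaling overhead = frametime with upscaling enabled minus the native
# frametime at the resolution the upscaler actually renders from.
dlss_q_overhead = frametime_ms(dlss_quality_4k) - frametime_ms(native_1440p)
fsr2_q_overhead = frametime_ms(fsr2_quality_4k) - frametime_ms(native_1440p)

print(f"DLSS Quality overhead:  {dlss_q_overhead:.1f} ms")
print(f"FSR 2 Quality overhead: {fsr2_q_overhead:.1f} ms")
```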
 
Seems worth it to me, as the work should integrate right into the new GOW on both the PS5 and PC releases.
I think it's more of an indication that these early FSR 2.0 releases are something of a real-world beta test of 2.0.
We'll likely see these "3 days" only after the source code release.

Checking GoW out and it seems that they've actually "upgraded" FSR from 1.0 to 2.0:

[Screenshot: in-game graphics settings showing FSR 2.0]


No FSR 1.0 option anywhere I can find.

So on static scenes FSR2 Q is very close to native, only a tad softer. P, on the other hand, is noticeably blurrier, likely too much to be countered with a sharpening post-pass.
DLSS is noticeably sharper than both native and FSR2; P is leagues better than the respective FSR2 mode by default. Q is a matter of preference; its sharpness can probably be equalized with a sharpening tweak.
The scene I've tested (the very first one after you start a new game) gives me on a 3080:
With DLSS P actually being usable here while FSR2 P is rather blurry.

Attached PNGs to numbers above.

Another interesting point: FSR2 modes consume ~0.5GB more VRAM than respective DLSS modes here. All in all it's still an easy choice on an RTX GPU.
 