AMD FSR antialiasing discussion

Meaning Nvidia needs to provide developers with an optimized build of FSR? Fat chance of that happening. I’ll be shocked if they lift a finger.

Given that it's open source, hopefully we get the lowdown soon on how FSR improves on other spatial upscaling algorithms.

Raja's comment is interesting. He refers to Xe's DL capabilities; is there a DL element to FSR?

You're not interpreting his tweet correctly.
He's saying their upscaling will use DL and be open source, not that it's their implementation of FSR.
So he thinks their approach will lead to better quality and performance.
 
You're missing context from the post Raja was replying to. He was asked specifically if we'd see AMD FSR on Intel hardware.

 
He was asked, answered that they're looking at it, and then after a dash talked about their own approach in comparison, saying it will also align with the open approach that FSR is taking.
That's how I read it anyway.
Especially based on what we know of FSR and ML etc.
 
I'm not sure why FSR wouldn't appear on future Intel cards - or any Intel iGPU since Skylake. If FSR is capable of working on Pascal/Polaris GPUs why wouldn't it work on Intel GPUs with similar feature sets?

The original question seems odd to me.
 
Considering Intel is a completely new player in the market (i740 doesn't really count), it's 100% in Raja's interests to support an upscaling method that is IHV-agnostic.
Taking away Nvidia's hold on image upscaler mindshare (developers and gamers alike) is probably even more important to Intel than it is for AMD.

Besides, Raja said they're "definitely looking at it" when asked directly by a former colleague, and then proceeded to confirm they're willing to align with open standards.
This leaves very little to interpretation.


It's fantastic how this guy wrote an entire op-ed based on the premise that FSR isn't using a neural network, which is information he definitely doesn't have and which contradicts the only credible data we have on FSR, the patent described in the very first post of this thread:

[attached image]
 
Besides, Raja said they're "definitely looking at it" when asked directly by a former colleague, and then proceeded to confirm they're willing to align with open standards.
This leaves very little to interpretation.
Whereas I think it's pretty clear he's talking about their own solution with "the DL capabilities of Xe HPG architecture do lend to approaches that achieve better quality and performance."
Better approaches than FSR, which is what he was asked about.

So until it's been clarified I'll believe this interpretation. And the more open solutions, the better, in my books.
I agree Intel will want a solution that can work on anything, for the same reason AMD does.
 
I ran a little experiment on the high-res press-kit image for the 1060 comparison that AnandTech shared. Weirdly, even though it was supposed to be a 1440p render, the image size was a very high 8333x4687. So I bicubic-resized it down to 1440p. This is that resized image, which is the baseline for our experiment:



Next, I did the following:
  1. Snipped out the left (native 1440p) half
  2. Nearest-neighbor resized it to 1080p to emulate a native 1080p render. Note that a bicubic resize would be cheating since it would effectively be performing SSAA. Also note that nearest-neighbor resampling from 1440p to 1080p is likely worse than what a native 1080p render would have been. That's fine -- I'm trying to be conservative.
  3. Bicubic resized it back to 1440p to emulate a dumb monitor/GPU scaling.
  4. Re-pasted the edited half into the comparison shot (in place of the native 1440p).
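For reference, this is roughly the process in Python with Pillow (a sketch only; the file names and the exact 2560x1440 target are my placeholders, not the actual press-kit assets):

Code:
from PIL import Image

# Baseline: bicubic-resize the oversized press-kit shot down to 1440p.
base = Image.open("fsr_presskit_8333x4687.png").resize(
    (2560, 1440), Image.Resampling.BICUBIC)

# Steps 1-2: snip the left (native 1440p) half and nearest-neighbor it down
# to 1080p to emulate a native 1080p render (bicubic here would amount to SSAA).
left = base.crop((0, 0, 1280, 1440))
left_1080 = left.resize((960, 1080), Image.Resampling.NEAREST)

# Step 3: bicubic back up to 1440p to emulate dumb monitor/GPU scaling.
left_up = left_1080.resize((1280, 1440), Image.Resampling.BICUBIC)

# Step 4: re-paste the edited half over the native-1440p half of the comparison.
base.paste(left_up, (0, 0))
base.save("bicubic_vs_fsr.png")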
Here is the result:



Is my crude bicubic upsample from pseudo-1080p better or worse than FSR? I think it trades blows. Clearly it has more aliasing, but it's sharper and has more texture detail (look at the floor tiles). I think the aliasing would be better in a real 1080p sample (instead of my crude attempt), and I'm sure what remains could be cleaned up somewhat by a filter, bringing it closer to the FSR image.

Bottom line is, based on this experiment and this specific image, FSR doesn't seem to be doing a great job at reconstructing anything.

Maybe FSR is upsampling from an even lower resolution than 1080p? In that case the performance uplift should be a lot higher -- based on a cursory search, a 1060 should see a ~60% uplift on Godfall rendering at 1080p instead of 1440p at Ultra quality (yeah, I know this is Epic, but the uplift mentioned here is only 41%, so there's enough headroom for the conclusion to still stand).
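A quick sanity check on that headroom argument, using only the rough figures quoted above (the ~60% from a cursory search and the ~41% from the demo; nothing measured by me):

Code:
# Normalized frame rates; only the two uplift percentages above are inputs.
native_1440 = 100.0                 # native 1440p, normalized
plain_1080  = native_1440 * 1.60    # ~60% uplift rendering natively at 1080p
fsr_on_1060 = native_1440 * 1.41    # ~41% uplift shown with FSR on the 1060

# FSR's gain falls well short of a plain 1080p render, so even if its internal
# resolution were below 1080p there is still headroom for the conclusion.
print(plain_1080 - fsr_on_1060)     # -> 19.0 (normalized fps of headroom)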
 
Considering the performance gains of an RDNA2 card, sub-1080p rendering for the example "Quality" mode is nigh guaranteed, probably almost as low as 900p. But FSR uses deep learning, and while RDNA2 can use INT8, the 1060 there will be stuck with FP32 and thus quite probably take 4x longer for the upscaler to run.

"Showing it off" on a 1060 was just a tiny PR stunt to rub Nvidia's nose in it a bit, rather than anything totally practical. Of course that makes FSR's practicality limited for AMD as well, with Ultra Quality the easy choice for most applications, but they're not going to say any of that.
 
1. Pascal has fast INT8.
2. Using INT8 to spatially upscale an FP16 image seems like a weird idea.
 
"Showing it off" on a 1060 was just a tiny PR stunt to rub Nvidia's nose in it a bit, rather than anything totally practical. Of course that makes FSR's practicality limited for AMD as well, and Ultra quality the easy choice for most applications, but of course they're not going to say any of that.
Though the specific case of a 1060 was a bit odd. The lack of 2xFP16 throughput in Pascal is probably giving those chips a lower performance boost than if they used a low-end Turing (1660, 1650).
 
Maybe, as they said, it's the most common card on the Steam survey, plus the following:
it shows it works reasonably well, and it's up to Nvidia to optimize further if they choose to.
But it also gives AMD a nice performance lead, which is what they want.
 
RDNA2 also does better at lower resolutions so upscaling plays right into that advantage. It’s a double whammy. Render fewer pixels and also be faster when rendering fewer pixels vs the competition.

The DLSS vs FSR comparisons at the same internal resolution will be really interesting.
 
AMD FidelityFX Super Resolution in Ultra Quality shows visible quality loss (guru3d.com)
June 3, 2021
All the curiosity led a Reddit user to grab the stills that AMD showcased, where the FSR performance advantages were shown at 4K resolution, capturing a few frames of the technology in 'Ultra Quality' (the mode with the least loss of graphical quality) as BMP files so that they don't lose quality. First, the different modes of FSR technology as shown by AMD:
  • 4K Native Resolution - 49 FPS
  • FSR Ultra Mode - 78 FPS
  • FSR Quality Mode - 99 FPS
  • FSR Balanced Mode - 120 FPS
  • FSR Performance Mode - 150 FPS
The problem: the least aggressive option of FidelityFX Super Resolution, the 'Ultra Quality' mode, possibly shows a noticeable reduction in the visual quality of the game. We can't call the comparison fair just yet as the tech is not finished, and hey, we need actual hands-on testing on our side.

AMD FSR en Calidad Ultra - ECI - Imgur
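For reference, simple arithmetic on the frame rates listed above gives each mode's uplift over native 4K:

Code:
# Uplift of each FSR mode over native 4K, from the frame rates quoted above.
native = 49
modes = {"Ultra Quality": 78, "Quality": 99, "Balanced": 120, "Performance": 150}
for name, fps in modes.items():
    print(f"{name}: +{(fps / native - 1) * 100:.0f}%")
# -> Ultra Quality: +59%, Quality: +102%, Balanced: +145%, Performance: +206%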
 
Not sure if this has already been discussed, but presumably FSR only does upscaling and not anti-aliasing, and it's essentially a post-processing step after TAA is applied. Any comparison would need to take this into account, i.e. TAA+FSR vs DLSS.

Something like this (borrowed from UE4 upscaling docs):

[UE4 docs figure: Spatial-Upscale.jpg]


For comparison, DLSS happens much earlier in the pipeline (same spot as TAAU). The downside is that post-processing steps (blur, bloom, tonemap) are done at a higher resolution, hence lower performance. The upside is potentially better IQ.

[UE4 docs figure: Spatial-And-Temporal-Upsample.jpg]
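A minimal sketch of the ordering difference, assuming the UE4 docs above reflect where FSR will sit (placeholder step names only, not actual FSR or DLSS code):

Code:
# Placeholder step lists; the point is only where the upscale happens.
SPATIAL_UPSCALE_FRAME = [                       # presumed FSR-style placement
    "render @ 1080p",
    "TAA",
    "post-processing (blur/bloom/tonemap) @ 1080p",   # cheap: still low res
    "spatial upscale -> 1440p",                        # last step before output
]

TEMPORAL_UPSAMPLE_FRAME = [                     # DLSS / TAAU-style placement
    "render @ 1080p",
    "temporal upsample -> 1440p",                      # replaces TAA, runs early
    "post-processing (blur/bloom/tonemap) @ 1440p",    # costlier, better IQ
]

for name, steps in (("spatial", SPATIAL_UPSCALE_FRAME),
                    ("temporal", TEMPORAL_UPSAMPLE_FRAME)):
    print(name + ": " + " -> ".join(steps))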
 
Yeah, so it's not a DLSS2 competitor to begin with.
 
This is getting ridiculous. I didn't realize FSR was out and tested yet.

Did I actually see someone here post their own Photoshop upsampling and compare it to FSR? And get approval? And speculative articles based on opinions, as if they prove what FSR is and does?

This just demonstrates the uphill battle any feature from AMD has to climb before it's even tested. Common sense is out the window, and I have never seen such a concentrated effort to kill something that has yet to be reviewed.
 