AMD FSR antialiasing discussion

Nvidia's marketing didn't have to work very hard to sell that story. After all, we've had GPUs that could do spatial post-processing for decades now. So it's reasonable to expect that some special sauce has been missing.
They worked pretty hard to make people forget that temporal reconstruction upscalers predate DLSS by more than a couple of years, with pretty decent results.



Are you sure you’re not referring to the skepticism about it being spatial only? That doesn’t have anything to do with AI.

Yes, I'm sure. The wccftech article I specifically mentioned is mostly groveling over DLSS using "AI", and they present it as the reason why FSR "can't come close".

The hint is in the name "DL"SS, Deep Learning Super Sampling
FSR does not use any machine learning or inference, and while it is an amazing tool to have in the absence of a DL system, it is not comparable in any way to an AI-powered image upscaling system.
The former will always have a quality cost associated with it while the latter can actually get to a point where it would be impossible to see differences between native and AI-upscaled images. With the non-DL implementation AMD has rolled out with FSR, you are looking at quality that is worse than DLSS 1.0 on the highest preset. Performance presets should impact quality even more.
 
Yes, I'm sure. The wccftech article I specifically mentioned is mostly groveling over DLSS using "AI", and they present it as the reason why FSR "can't come close".

I think this assumption can be put to bed when someone creates a non-DL-based upscaling solution that matches the quality-to-performance ratio of DLSS 2.1. Until that time, it seems like a reasonable assumption to make.
 
We actually have productized examples of every point in the Cartesian product of the { NonDL, DL } * { SpatialOnly, Temporal } sets.

NonDL, SpatialOnly = Monitor/GPU upscaling (and the upcoming FSR)
DL, SpatialOnly = DLSS1
NonDL, Temporal = Checkerboarding, DLSS1.9 (and the upcoming Unreal TAAU)
DL, Temporal = DLSS2

Maybe I missed some. I'm ignoring CAS in this taxonomy because it's a sharpening filter that can be applied on top of any upscaling algorithm.
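
To make the taxonomy concrete, here is a minimal, purely illustrative Python sketch laying those same examples over the Cartesian product; the grouping is just this post's reading of public info, not anything official.
Code:
from itertools import product

# Illustrative mapping of the taxonomy above; the examples mirror this post.
EXAMPLES = {
    ("NonDL", "SpatialOnly"): ["Monitor/GPU upscaling", "FSR (upcoming)"],
    ("DL", "SpatialOnly"): ["DLSS 1"],
    ("NonDL", "Temporal"): ["Checkerboarding", "DLSS 1.9", "Unreal TAAU (upcoming)"],
    ("DL", "Temporal"): ["DLSS 2"],
}

# Walk every point of { NonDL, DL } x { SpatialOnly, Temporal } and print the examples.
for combo in product(("NonDL", "DL"), ("SpatialOnly", "Temporal")):
    print(combo, "->", ", ".join(EXAMPLES[combo]))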
 
The only info we have is from Ryan’s tweet that AMD told him that it’s spatial only. We don’t have any info about whether it uses DL or not.
Here's Ryan's statement:
In our pre-briefing with AMD, the company did confirm that FSR is going to be a purely spatial upscaling technology; it will operate on a frame-by-frame basis, without taking into account motion data (motion vectors) from the game itself.
 
Pretty disappointing to be honest. I'm sure it won't be any worse than what Nvidia had with DLSS1 in the beginning... but the problem is that back then, there was no competition and nobody knew how good things were going to get. Now, we have DLSS2.1 and the bar is set... so even if AMD has something that's passable until they can get a "true" DL/ML-based solution out there, and possibly hardware-accelerated in the future... it's going to constantly be compared to DLSS.

It's an uphill battle... but I was one of the people who believed in, and defended, DLSS1 and Nvidia's claim that it would get better over time... and I'll afford AMD the same courtesy.
 
Pretty disappointing to be honest. I'm sure it won't be any worse than what Nvidia had with DLSS1 in the beginning... but the problem is that back then, there was no competition and nobody knew how good things were going to get. Now, we have DLSS2.1 and the bar is set... so even if AMD has something that's passable until they can get a "true" DL/ML-based solution out there, and possibly hardware-accelerated in the future... it's going to constantly be compared to DLSS.

It's an uphill battle... but I was one of the people who believed in, and defended, DLSS1 and Nvidia's claim that it would get better over time... and I'll afford AMD the same courtesy.

It's the same with the ray tracing tech; both will improve a lot with their next-generation GPUs. It's their first iteration for now, just like NV had 3 years ago in 2018.
 
Alright, so looking at normal 6800XT performance numbers with Godfall, I think I can reconstruct the upscale settings to something like 2:1 (might be even a bit bigger), 3:1, 4:1, and 5:1, and FSR performance looks really quite good, but only on RDNA2 so far (the performance deficit versus those normal resolutions is small).

Thus the screenshot example for the 1060 would have to be upscaled from about 810p. Which, well, no wonder it's so blurry. And as with DLSS, it'll get less and less effective the lower the final resolution is. Maths! Scene complexity is fixed; when you already have a good set of samples of that complexity, reconstructing a better version is relatively easy. Once you start dropping important parts of the scene, it gets progressively harder to guess what they were, and your effectiveness goes down. I.e. reconstructing to 4K will give better results than reconstructing to 1440p at the same quality settings.
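
For anyone who wants to check that arithmetic, here's a small sketch: the per-axis scale factor is the square root of the pixel-count ratio, and the 2:1 through 5:1 ratios are just the guesses above, not confirmed FSR presets.
Code:
import math

def internal_resolution(out_w, out_h, pixel_ratio):
    """Internal render size if the output has pixel_ratio times as many pixels.
    The per-axis scale factor is the square root of the pixel-count ratio."""
    scale = math.sqrt(pixel_ratio)
    return round(out_w / scale), round(out_h / scale)

# The 2:1..5:1 pixel ratios are this thread's guesses, not confirmed FSR presets.
for ratio in (2, 3, 4, 5):
    print(f"4K output at {ratio}:1 ->", internal_resolution(3840, 2160, ratio))
# A 4:1 ratio, for instance, halves each axis: 3840x2160 -> 1920x1080.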

If "Ultra" really is fairly close to native quality for 4k then I can see it becoming pretty popular with small to medium studios as a PC option, and especially for use on the new consoles. I would guess DLSS 2.0 has better image quality results, but also just takes more performance to run, limiting the comparative benefit. Glancing at benchmarks on say, a 2080ti and balanced settings, gaining a 35% performance boost on Watchdogs Legion for a hit to image quality with DLSS may not be a whole lot different from gaining say a 50% performance boost and a slightly worse hit to image quality with FSR. Of course, this assumes Ultra quality 4k really is at least a decent approximation of native 4k.
 
Pretty disappointing to be honest. I'm sure it won't be any worse than what Nvidia had with DLSS1 in the beginning... but the problem is that back then, there was no competition and nobody knew how good things were going to get. Now, we have DLSS2.1 and the bar is set... so even if AMD has something that's passable until they can get a "true" DL/ML-based solution out there, and possibly hardware-accelerated in the future... it's going to constantly be compared to DLSS.

It's an uphill battle... but I was one of the people who believed in, and defended, DLSS1 and Nvidia's claim that it would get better over time... and I'll afford AMD the same courtesy.

It had competition like checkerboarding and "RIS" itself, which looked better. If FSR can be better than RIS and DLSS 1.0, that is a big win for me. It would be easier to apply in games and would work on a lot of hardware, including consoles, something that DLSS would never be able to offer.
 
It would be easier to apply in games and would work on a lot of hardware, including consoles, something that DLSS would never be able to offer.
Current thinking is that it requires a bit of work to apply to games, likely the reason why Nvidia/Intel would have to dedicate resources to studios to optimize FSR. So it would definitely take more effort than DLSS (which can be done in a few days).
At this point AMD is not disclosing which games will support the technology, but the messaging right now is that developers will need to take some kind of an active role in implementing the tech. Which is to say that it's not sounding like it can simply be applied in a fully post-processing fashion on existing games a la AMD's contrast adaptive sharpening tech.
 
It had competition like checkerboarding and "RIS" itself, which looked better. If FSR can be better than RIS and DLSS 1.0, that is a big win for me. It would be easier to apply in games and would work on a lot of hardware, including consoles, something that DLSS would never be able to offer.
RIS wasn't competition at all... let's be honest.

The reason why FSR exists... is because RIS was not competition, it was a sharpening filter, not an upscaler.
 
Current thinking is that it requires a bit of work to apply to games, likely the reason why Nvidia/Intel would have to dedicate resources to studios to optimize FSR. So it would definitely take more effort than DLSS (which can be done in a few days).
A bit of work doesn't mean it's more than DLSS 2.0 (unless you're using an engine with built-in support). It also fits all engines, not just those with support for TAA and thus motion vectors.
RIS wasn't competition at all... let's be honest.

The reason why FSR exists... is because RIS was not competition, it was a sharpening filter, not an upscaler.
Yes, RIS is just a sharpening filter, yet with naive scaling it provided better quality than DLSS 1.0 did at the time.
 
Yes, RIS is just a sharpening filter, yet with naive scaling it provided better quality than DLSS 1.0 did at the time.
I wonder whether you anticipate FSR winning over naive scaling from a higher resolution with RIS on top of that? Because that's not going to happen. Higher resolutions simply have more texture detail unless there is temporal reconstruction in play. So the real question is whether FSR is fast enough to not lag behind naive scaling from a higher resolution.
 
I wonder whether you anticipate FSR winning over naive scaling from a higher resolution with RIS on top of that? Because that's not going to happen. Higher resolutions simply have more texture detail unless there is temporal reconstruction in play. So the real question is whether FSR is fast enough to not lag behind naive scaling from a higher resolution.
I don't know if it will be better than naive+RIS or not, and I don't really care either. I expect it, just like every other scaling method to date, to degrade the IQ in ways I'm not comfortable with, and I will continue to turn down quality knobs to get the performance I want before touching any of them.
 
I don't know if it will be better than naive+RIS or not, and I don't really care either. I expect it, just like every other scaling method to date, to degrade the IQ in ways I'm not comfortable with, and I will continue to turn down quality knobs to get the performance I want before touching any of them.
Your stance is consistent, and I respect that.

But I don't know how long you'll be able to maintain that stance. Moore's Law is done. The appetite for better computer graphics isn't done, not by a long shot. And although we are reaching diminishing returns, there is still a little room for display pixel density to grow, at least in terms of adoption. Reconstruction brings massive relief to this squeeze. We're not talking 10% or 20%; we're talking 50-100%. That's multiple traditional GPU generations. Future games are going to assume everyone is using some kind of reconstruction when tweaking their target budgets. This means that if you clench your fists and refuse to enable reconstruction, you'll have to drop resolution to get a playable framerate, which is effectively a form of image scaling anyway (arguably the worst kind).
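
To put rough numbers on that squeeze (back-of-the-envelope only, assuming cost scales roughly linearly with shaded pixels):
Code:
# Back-of-the-envelope arithmetic; assumes cost scales ~linearly with shaded pixels.
def pixels(w, h):
    return w * h

native_4k = pixels(3840, 2160)        # 8,294,400 pixels
internal_1440p = pixels(2560, 1440)   # 3,686,400 pixels; a common reconstruction input for 4K output

relief = native_4k / internal_1440p - 1
print(f"Native 4K shades ~{relief:.0%} more pixels than a 1440p internal resolution")
# -> ~125%, i.e. headroom you would otherwise wait multiple GPU generations for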
 
AMD to debut live FSR tech on June 22, with the release of the next-gen patches for XSX and PS5 consoles, both of which make use of AMD's FSR tech to run CP2077.
Showcasing 4K 60p ray-traced graphics on the latest consoles.

That's what I want to hear on the 22nd!
I mean I know I'm probably dreaming, but it would be nice.
 