AMD FSR antialiasing discussion

This DLSS to FSR2.0 mod stuff is pretty cool for AMD owners!

FSR 1.0 in Cyberpunk was pretty decent, huh... I'm surprised. The FSR 2.0 mod looks better and sharper, however. Still... pictures only tell half of the story, as we well know. Still-image comparisons are for the most part useless (unless you're showing large amounts of motion in them), as both techniques accumulate frames over time. Relatively still images will produce largely similar results, whereas the real meaningful testing comes from how they compare under stressful situations.
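Just to illustrate why relatively still shots converge: both upscalers keep blending new (jittered) samples into a history buffer, and for a static pixel that blend settles on the same answer no matter how clever the surrounding heuristics are. A toy sketch of that accumulation (the 0.1 blend weight and single-pixel setup are made up for illustration, not taken from either implementation):

```cpp
// Toy model of temporal accumulation for one static pixel: the new sample is
// blended into the history each frame, so the value converges after a few
// dozen frames regardless of which upscaler's heuristics surround it.
// The blend weight below is illustrative, not from DLSS or FSR 2.0.
#include <cstdio>

int main() {
    float history = 0.0f;        // accumulated color for one pixel
    const float alpha = 0.1f;    // per-frame blend weight (made up)
    for (int frame = 0; frame < 60; ++frame) {
        float currentSample = 1.0f;                            // static scene keeps producing the same value
        history = history + alpha * (currentSample - history); // lerp(history, sample, alpha)
    }
    std::printf("after 60 frames: %f\n", history);             // ~1.0, i.e. converged
    return 0;
}
```

The interesting differences only show up once motion, disocclusion, and shading changes force each technique to decide how much of that history to trust.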

FSR2.0 will undoubtedly improve from here, and that's pretty exciting.

Looking at still shots in God of War, I can pick out aspects of images where I like the FSR 2.0 resolve slightly more than DLSS 2.0... and other aspects where it's the opposite. However, in motion, DLSS is still clearly superior. It's all about that image stability... and if you play at high output resolutions, it's pretty incredible how stable DLSS looks. In a world where there are so many graphical techniques being utilized to improve performance, which can cause lots of shimmering, breakup, and dithering... technologies like DLSS are game changers.

It's not perfect of course... none of them are, but considering what we're asking of them, it's stupidly impressive.
 
I wouldn't compare TAAU with the second-gen reconstruction technologies, such as TSR, FSR 2.0, DLSS 2.0+ and XeSS, at all.
The second-gen techniques are built around minimizing losses by getting rid of the lossy color clipping heuristic, optimizing resampling, and preserving and resolving subpixel details.
FSR still relies upon color clipping, so some of TAAU's shortcomings are still here. The pixel locking heuristic is there to resolve thin details, which are otherwise removed by color clipping, but it relies upon the depth disocclusion mask and can introduce ghosting artifacts on the inner parts of geometry. Judging by DF's analysis of GOW, pixels can be locked on parts that are not affected by the disocclusion mask (such as Kratos' hands or on water), so there is nothing that can prevent ghosting from happening on those parts except for developer-provided masks.

Overall, that looks like a bunch of fragile hacks to me; these hacks can fix certain things but break others, so games have to be carefully tuned on a case-by-case basis. Instead of building 1000 heuristics to handle different scenes in games, you can train a neural net. Understanding what is going on in a frame, which parts of the image to blend, and which parts are better left untouched is the key to quality and ease of integration, and neural nets have proven to be the right tool for the job (vs handcrafted, expert-based algorithms).
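For reference, the color clipping being discussed here is usually some variant of clamping the reprojected history color to the bounding box of the current frame's neighborhood; a rough sketch of that heuristic (names and the 3x3 neighborhood are my own simplification, not actual FSR 2.0 or TAA code):

```cpp
// Sketch of a neighborhood clamp ("color clipping"): the reprojected history
// color is clamped to the min/max box of the current frame's 3x3 neighborhood.
// A thin/subpixel feature that survives in the history but is missing from the
// current low-res neighborhood falls outside that box and gets clipped away -
// which is exactly the loss that pixel locking tries to counteract.
// Illustrative only; not code from any shipping implementation.
#include <algorithm>

struct Color { float r, g, b; };

Color clampToNeighborhood(Color history, const Color neighborhood[9]) {
    Color lo = neighborhood[0], hi = neighborhood[0];
    for (int i = 1; i < 9; ++i) {
        lo.r = std::min(lo.r, neighborhood[i].r); hi.r = std::max(hi.r, neighborhood[i].r);
        lo.g = std::min(lo.g, neighborhood[i].g); hi.g = std::max(hi.g, neighborhood[i].g);
        lo.b = std::min(lo.b, neighborhood[i].b); hi.b = std::max(hi.b, neighborhood[i].b);
    }
    return { std::clamp(history.r, lo.r, hi.r),
             std::clamp(history.g, lo.g, hi.g),
             std::clamp(history.b, lo.b, hi.b) };
}
```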
I'm not saying they're not more advanced than the usual forms of engine/in-game TAAU, just that they're still inherently the same thing. Using or not using neural nets makes no difference to it.
 
just that they're still inherently the same thing
All temporal techniques are the same: they all accumulate pixels over time. The same goes for checkerboarding or interlaced rendering; just the input layouts are different (toy sketch at the end of this post).

Using or not using neural nets makes no difference to it.
It makes a huge difference to the quality that you can get out of the accumulation.
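To make the "input layouts" point concrete: the schemes mainly differ in which target pixels receive a fresh sample in a given frame, with everything else filled from reprojected history. A toy sketch (purely illustrative, not code from any shipping implementation):

```cpp
// Toy comparison of "input layouts" for temporal accumulation: each function
// answers whether a given target pixel gets a fresh sample this frame.
// Everything not freshly sampled is reprojected from previous frames.
#include <cstdint>

// TAA/TAAU style: every pixel gets a new sample, but at a sub-pixel jitter
// offset that changes per frame (the jitter itself is not shown here).
bool sampledThisFrame_Jittered(uint32_t, uint32_t, uint32_t) { return true; }

// Checkerboard rendering: half the pixels per frame, alternating parity.
bool sampledThisFrame_Checkerboard(uint32_t x, uint32_t y, uint32_t frame) {
    return ((x + y + frame) & 1u) == 0u;
}

// Interlaced rendering: every other row per frame, alternating fields.
bool sampledThisFrame_Interlaced(uint32_t, uint32_t y, uint32_t frame) {
    return (y & 1u) == (frame & 1u);
}
```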
 
All temporal techniques are the same: they all accumulate pixels over time. The same goes for checkerboarding or interlaced rendering; just the input layouts are different.


It makes a huge difference to the quality that you can get out of the accumulation.
It can make a difference in quality, but it doesn't turn DLSS into something other than yet another form of temporal upscaling (with AA). That's where I draw my line anyway: spatial upscalers are another basket, while I would see checkerboarding, for example, as another form of interlacing rather than anything else (even if there's accumulation over more than just the previous frame).
 
It can make a difference in quality, but it doesn't turn DLSS into something other than yet another form of temporal upscaling (with AA). That's where I draw my line anyway: spatial upscalers are another basket, while I would see checkerboarding, for example, as another form of interlacing rather than anything else (even if there's accumulation over more than just the previous frame).
"Interlacing" is also a temporal upscaling. If you consider any method in which a final frame is built from parts of a current and previous frame(s) then pretty much all of these are "the same".

But I feel that this is too generic of an approach the only purpose of which is to diminish the advantages of one temporal reconstruction technique over another.
 
Now FSR 2.0 has been released as open source. FSR 1.0 NEVER got any improvements after its source code release a year ago, but can we expect a different situation for FSR 2.0?
Will it improve over time thanks to collective intelligence?
 
Agree. The point is that all these techniques produce results of different quality, so there is no sense in fixating on the "temporal upscaling" semantics when comparing them.

Digital Foundry has explained it multiple times before: they are totally different technologies with different results and, in particular, different degrees of future-proofing. Intel and NV both seem to be going the same route while AMD isn't as of yet, but they suspect it will follow suit, for RT as well.
 
"Interlacing" is also a temporal upscaling. If you consider any method in which a final frame is built from parts of a current and previous frame(s) then pretty much all of these are "the same".

But I feel that this is too generic of an approach the only purpose of which is to diminish the advantages of one temporal reconstruction technique over another.

On the contrary I think recognizing that all of these techniques are attempting to solve the same problem makes it much easier to discuss the advantages.

It's only the results that matter, not the complexity of the implementation. Checkerboarding, spatial upscaling, and temporal upscaling achieve varying levels of success in approximating or improving on a "native" image.

DLSS isn’t better because it’s fancier. It’s only better because the results are better.
 
On the contrary I think recognizing that all of these techniques are attempting to solve the same problem makes it much easier to discuss the advantages.
That's not what is happening though.
Approaching it from the problem being solved is valid, since with such an approach it is theoretically possible that some purely spatial technique could produce better results than a state-of-the-art temporal one. It also allows us to compare all possible solutions without dividing them into "bad" and "good" groups from the start.
But the approach of bundling all upscaling techniques into "technology groups" is pointless, since there are no clear inherent benefits to any technology here. This applies not only to the "spatial" vs "temporal" distinction, btw, but also to "algorithmic" vs "ML/AI-based". Saying "oh, these are all the same TAAU" completely misses the point - and the actual reality, in which said techniques may in fact be mixed on a user-rated perception scale. A simple example here is FSR1, a really simple and purely spatial upscaling technique, which is generally considered to provide better (perf/qual) results than DLSS1 - a much more advanced technique with temporal and AI components.

There's also a little issue with calling DLSS2 "TAAU", btw - we don't really know what DLSS2 does to an image. The fact that it takes "TAAU-like" inputs doesn't mean that there is no purely spatial processing happening.
The same is true for FSR2, btw, which has RCAS in it - a purely spatial sharpener.
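For illustration, a purely spatial sharpening pass in the spirit of CAS/RCAS looks roughly like the sketch below - note that it only reads the current frame's pixels, with no history or motion vectors involved. This is my own crude simplification (the weighting in particular), not AMD's actual RCAS code:

```cpp
// Crude contrast-adaptive sharpener in the spirit of CAS/RCAS, shown only to
// illustrate that such a pass is purely spatial: it reads a pixel and its
// cross neighborhood from the *current* frame, nothing temporal.
// The weighting is a stand-in, not AMD's actual RCAS math.
#include <algorithm>

// Sharpen one (grayscale) pixel from its cross neighborhood; sharpness in [0,1].
float sharpenPixel(float c, float up, float down, float left, float right,
                   float sharpness) {
    float mn = std::min({c, up, down, left, right});
    float mx = std::max({c, up, down, left, right});
    float contrast = mx - mn;                           // local contrast, 0..1
    // Negative cross-tap weight, scaled down where contrast is already high
    // to reduce ringing on strong edges.
    float w = -0.125f * sharpness * (1.0f - contrast);
    float result = (c + w * (up + down + left + right)) / (1.0f + 4.0f * w);
    return std::clamp(result, mn, mx);                  // limit over/undershoot
}
```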
 
That's not what is happening though.
Approaching it from the problem being solved is valid, since with such an approach it is theoretically possible that some purely spatial technique could produce better results than a state-of-the-art temporal one. It also allows us to compare all possible solutions without dividing them into "bad" and "good" groups from the start.
But the approach of bundling all upscaling techniques into "technology groups" is pointless, since there are no clear inherent benefits to any technology here. This applies not only to the "spatial" vs "temporal" distinction, btw, but also to "algorithmic" vs "ML/AI-based". Saying "oh, these are all the same TAAU" completely misses the point - and the actual reality, in which said techniques may in fact be mixed on a user-rated perception scale. A simple example here is FSR1, a really simple and purely spatial upscaling technique, which is generally considered to provide better (perf/qual) results than DLSS1 - a much more advanced technique with temporal and AI components.

There's also a little issue with calling DLSS2 "TAAU", btw - we don't really know what DLSS2 does to an image. The fact that it takes "TAAU-like" inputs doesn't mean that there is no purely spatial processing happening.
The same is true for FSR2, btw, which has RCAS in it - a purely spatial sharpener.

I’m not really following. If we evaluate all of these techniques on their merits then all those other details are just noise.

The debate on which techniques are inherently superior is interesting but ultimately pointless.
 
I’m not really following. If we evaluate all of these techniques on their merits then all those other details are just noise.

The debate on which techniques are inherently superior is interesting but ultimately pointless.
My point is that all upscaling techniques "compete" with each other in the same way as all antialiasing techniques compete with each other. There is zero reason to "bundle" them into "types" because this doesn't make anything clearer or better.
 
My point is that all upscaling techniques "compete" with each other in the same way as all antialiasing techniques compete with each other. There is zero reason to "bundle" them into "types" because this doesn't make anything clearer or better.

I agree.
 
FSR 2.0 for Metro Exodus, Death Stranding, Marvel's Guardians of the Galaxy:

FSR 2.0 for Red Dead Redemption 2:

 