AMD FSR antialiasing discussion

AMD to debut FSR live on June 22, with the release of the next-gen patches for the XSX and PS5 consoles, both of which make use of AMD's FSR tech to run CP2077,
showcasing 4K 60p ray-traced graphics on the latest consoles.

That's what I want to hear on the 22nd!
I mean I know I'm probably dreaming, but it would be nice.

You could get 4K 60 FPS with DXR in CP2077...the only question will be image quality...as the consoles are underpowered compared to a PC...like it or not.
And once again...no temporal data = not a DLSS competitor...even though some would have you think so.

I predict similar or worse performance, with lower image quality than DLSS 1.0...
 
RIS wasn't competition at all... let's be honest.

The reason FSR exists... is that RIS was not competition: it was a sharpening filter, not an upscaler.

Why not? It looked better than DLSS 1.0 and worked for a lot of games too. I had an RTX 2080 from day one, and 90% of the games I played didn't have DLSS, and the ones that did had the 1.0 version, which looked worse than simple upscaling + RIS; let's be honest.

FidelityFX CAS has an integrated upscaler.

I see a lot of NVIDIA users wanting FSR to fail.

I wonder whether you anticipate FSR winning over naive scaling from a higher resolution with RIS on top of it? Because that's not going to happen. Higher resolutions simply have more texture detail unless there is temporal reconstruction in play. So the real question is whether FSR is fast enough not to lag behind naive scaling from a higher resolution.

You can use both though.
 
You can use both though.
His point is that naive scaling up from a higher resolution will most likely win over FSR scaling up from a lower resolution - with or without RIS on top of it - when it comes to quality.
I think FSR will be really fast, likely not a lot slower than straight bilinear, so such comparisons won't make much sense.
 
You could get 4K 60 FPS with DXR in CP2077...the only question will be image quality...as the consoles are underpowered compared to a PC...like it or not.
And once again...no temporal data = not a DLSS competitor...even though some would have you think so.

I predict similar or worse performance, with lower image quality than DLSS 1.0...
Worse quality than DLSS 1.0 is highly unlikely since RIS/CAS upscaling already looked better than that. I doubt AMD would even bother releasing something that doesn't even surpass their earlier tech.
 
Do we know anything about FSR?

Is it a pure upscaler, or is there an AA component as well?
Everyone seems to compare it to upscaler+AA methods. (Which could very well still be the case.)

The demonstration had quite stable image quality, so it might have run after the TAA.
Just running a spatial upscaler/AA before any AA may give quite a lot of shimmering, especially if the content is designed with TAA in mind.
 
I doubt AMD would even bother releasing something that doesn't even surpass their earlier tech.
This all comes down to math. With spatial upscaling, texture quality depends only on the number of actually rendered pixels: the more you have, the better the texture quality, because GPUs keep a 1:1 pixel-to-texel ratio at distant mip levels, which usually sit right in the center of the screen and in your face :)
And this is exactly why simple sharpening of a higher resolution image managed to beat DLSS 1.0: it had more texture detail to begin with, not because of some silly sharpening applied on top of upscaling.
So whether or not AMD's shiny new spatial upscaling will be able to beat the more common bicubic, Gaussian or Lanczos upscaling + sharpening depends only on how fast this new upscaling is.
If it takes 2-3 ms, it won't be able to beat rendering at a higher resolution with basic bicubic upscaling + sharpening, for the very same reason DLSS 1.0 wasn't able to.
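For reference, the baseline being described here (a plain Lanczos upscale plus sharpening) is trivial to reproduce offline. A minimal Python/Pillow sketch, purely for illustration; the file names, target resolution and sharpening parameters are placeholder choices, and a real-time version would of course be a pixel shader rather than this:

from PIL import Image, ImageFilter

# Placeholder input: a frame rendered at a lower resolution (e.g. 1440p).
src = Image.open("frame_1440p.png")
# Spatial upscale to 4K with a Lanczos kernel.
up = src.resize((3840, 2160), resample=Image.Resampling.LANCZOS)
# Unsharp mask as the "sharpening" step; the strength values are arbitrary.
out = up.filter(ImageFilter.UnsharpMask(radius=2, percent=120, threshold=2))
out.save("frame_4k_lanczos_sharpened.png")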
 
With TAAU, and at least two console devs with their own engines (i.e. Insomniac and Bluepoint) having already rolled their own motion-compensated upscaling, it seems a bit silly to do anything purely spatial. If you don't have the mindshare to get developers to put anything more complex into their games, then just don't bother ... just put programmers on making a tweaked TAAU, specifically accelerated for Radeon, which developers can plug in. Maybe throw something out there for Unity for show, with no hope of it getting included, because it takes developer time and AMD isn't willing to pay for it or embed their own programmers.
 
With TAAU, and at least two console devs with their own engines (i.e. Insomniac and Bluepoint) having already rolled their own motion-compensated upscaling, it seems a bit silly to do anything purely spatial. If you don't have the mindshare to get developers to put anything more complex into their games, then just don't bother ... just put programmers on making a tweaked TAAU, specifically accelerated for Radeon, which developers can plug in. Maybe throw something out there for Unity for show, with no hope of it getting included, because it takes developer time and AMD isn't willing to pay for it or embed their own programmers.

Indeed, it may be a literal case of too little too late. But there’s still time for us to be surprised.
 
Indeed, it may be a literal case of too little too late. But there’s still time for us to be surprised.
I am being moderately optimistic here. While you can't reconstruct texture details with spatial upscaling for obvious reasons (you can't reliably generate them even with SOTA methods such as GANs, because the probability space is just too large), you can still attack the jagged geometry edges and transparent parts of textures even with the simplest methods, such as depth-aware upscaling: just render depth at a higher res and clamp color to it (this is actually a common technique for alpha blending, low-res post-processing, etc).

When you think about it, even morphological AA methods can be extended to up-res an image, since they decompose edges into simple shapes and revectorize them; you can certainly use the same method to create a higher resolution appearance of geometry edges. There are other ways to solve the edge scaling problem - with neural nets such as this one (which works only with an already anti-aliased image), or with a neural network simply generating an unsharp mask or something along those lines, as on the SHIELD TV.

In other words, there is a lot of room for research, and better spatial upscaling would certainly help smooth out corner cases for DLSS and other temporal super resolution techniques (since it's an inherent part of them), such as the aliased edges for TSR in UE5 when you rotate the camera around the character, for example.
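For what it's worth, one common formulation of that depth-aware idea is nearest-depth upsampling (as often used for low-res particles/volumetrics). A rough CPU-side sketch in numpy, with illustrative array names and a fixed 2x factor, just to show the principle; a real implementation would be a few lines in a pixel shader:

import numpy as np

def nearest_depth_upsample(lo_color, lo_depth, hi_depth):
    # Depth-aware 2x upsample: for each high-res pixel, pick, out of a 2x2
    # footprint of low-res texels around it, the one whose depth is closest
    # to the high-res depth.  Loop-based for clarity only.
    H, W = hi_depth.shape
    lh, lw = lo_depth.shape
    out = np.zeros((H, W, lo_color.shape[2]), dtype=lo_color.dtype)
    for y in range(H):
        for x in range(W):
            ly, lx = min(y // 2, lh - 2), min(x // 2, lw - 2)
            candidates = [(ly + dy, lx + dx) for dy in (0, 1) for dx in (0, 1)]
            best = min(candidates, key=lambda c: abs(lo_depth[c] - hi_depth[y, x]))
            out[y, x] = lo_color[best]
    return out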
 
I am being moderately optimistic here. While you can't reconstruct texture details with spatial upscaling for obvious reasons (you can't reliably generate them even with SOTA methods such as GANs, because the probability space is just too large), you can still attack the jagged geometry edges and transparent parts of textures even with the simplest methods, such as depth-aware upscaling: just render depth at a higher res and clamp color to it (this is actually a common technique for alpha blending, low-res post-processing, etc). When you think about it, even morphological AA methods can be extended to up-res an image, since they decompose edges into simple shapes and revectorize them; you can certainly use the same method to create a higher resolution appearance of geometry edges. There are other ways to solve the edge scaling problem - with neural nets such as this one (which works only with an already anti-aliased image), or with a neural network simply generating an unsharp mask or something along those lines, as on the SHIELD TV. In other words, there is a lot of room for research, and better spatial upscaling would certainly help smooth out corner cases for DLSS and other temporal super resolution techniques (since it's an inherent part of them), such as the aliased edges for TSR in UE5 when you rotate the camera around the character, for example.
The big problem with morphological AA methods is temporal stability. While AA'ed edges can look fantastic in still frames, they jump all over the place in motion. I don't think it's a solvable problem without either blurring the signal a lot or using a temporal filter, like TAA.

I wouldn't be surprised if this new spatial filter has been designed to work in tandem with a pre-existing TAA pass. Most games use TAA already anyway.
 
The big problem with morphological AA methods is temporal stability.
Sure, morphological AA is attached to the binary, non-AA'ed raster grid after all, but nobody suggests using it alone.
I'd love to see it applied before temporal accumulation kicks in for TSR, TAAU, DLSS, etc., so that when temporal accumulation fails you wouldn't see an aliased low-res pixel grid.
 
Sure, morphological AA is attached to the binary, non-AA'ed raster grid after all, but nobody suggests using it alone.
I'd love to see it applied before temporal accumulation kicks in for TSR, TAAU, DLSS, etc., so that when temporal accumulation fails you wouldn't see an aliased low-res pixel grid.
Edge AA with temporal methods tends to converge so rapidly that in practice one can almost never see it happening, i.e. morphological AA doesn't really buy you that much with temporal methods.

In fact it can be counterproductive, as it prevents the temporal integration from generating an unbiased & consistent result. For instance, in my own experience running FXAA right before TAA often looks worse than just applying TAA alone.
 
Edge AA with temporal methods tends to converge so rapidly that in practice one can almost never see it happening
I wish this were 100% true, but that's not always the case. Motion vectors are debugged to hell in some popular engines, such as UE4, so it's hard to find flaws (the places where temporal accumulation fails due to bad/non-existent motion vectors); unfortunately, that's not always the case elsewhere.
Even if all engines had flawless motion vectors, it would still be a nice idea to have a plan B for disoccluded parts of the image, corner cases, etc.

In fact it can be counterproductive, as it prevents the temporal integration from generating an unbiased & consistent result.
It depends; it can just as well accelerate convergence and prevent the overblurring caused by constant image resampling, if it is optimized for that.
If you compare something like SMAA 2x with other temporal AA, you will quickly spot that, unlike other TAA, SMAA 2x can converge to a temporally stable image in just 2 frames rather than the 8 or more frames typical of TAA, hence there are fewer ghosting issues with SMAA 2x and in general it looks superior to most TAA out there.
The only difference between your usual TAA and SMAA 2x is that SMAA 2x does morphological prefiltering before accumulating frames.

For instance, in my own experience running FXAA right before TAA often looks worse than just applying TAA alone.
Yes, running an overblurring FXAA on top of an even more overblurring TAA is a bad idea in general, because you would accumulate even more blur across the TAA window (usually 8 frames).
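To put a rough number on that window: with the usual exponential history blend (history = lerp(history, current, alpha)), the accumulated result only approaches the converged value asymptotically. A tiny sketch, where alpha = 0.125 is a purely illustrative blend weight, not any particular engine's value:

# Illustration of exponential TAA history accumulation.
alpha = 0.125                       # illustrative blend weight (~8-frame window)
target, history = 1.0, 0.0          # "fully resolved" value vs. accumulated history
for frame in range(1, 13):
    history = history + alpha * (target - history)   # history = lerp(history, target, alpha)
    print(f"frame {frame:2d}: {history:.3f} of converged value")
# After 8 frames the history has only reached about 1 - (1 - alpha)**8 ~= 0.66,
# which is why blur/ghosting spread over that window stays visible, whereas a
# 2-frame scheme like SMAA 2x fully resolves its 2-sample jitter in 2 frames.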
 
AMD FidelityFX Super Resolution coming to 7 games at launch

[Image: AMD FidelityFX Super Resolution - list of games supported at launch]
 
So I'm guessing AMD will champion Resident Evil Village and Far Cry 6 to promote FSR's release, which is not so bad IMO.

There's a nice selection of developers / engines in that list, which does promote the idea that it's easy to implement, but it's definitely lacking some bigger names.
Scoring a super-high-profile title like, e.g., Battlefield 6 would have been better, but I don't know whether DICE has their own in-engine upscaler this time, like TSR in UE5.


Regardless, we're less than a week away from the full reveal, so hopefully most questions about implementation, adoption, compatibility with 3rd-party temporal upscalers, etc. will be answered next Tuesday.
 