AMD FSR antialiasing discussion

Without any temporal filtering, the amount of flickering at lower resolutions, magnified by the spatial filter, is often severe. With TAA enabled to address these problems, ghosting will re-enter the equation. There’s no free lunch, as usual.

Never say never, but I have yet to see a temporally stable/consistent super-res filter that is spatial only, without the help of a temporal filter. That said, I am ready to be surprised :)

I realize that there is no free lunch, so I take the pessimistic approach instead. But what if we just accept smaller performance gains altogether so we can run a higher rendering resolution for the spatial-only filter? I'm curious how much higher the internal resolution would have to be for a spatial-only filter to get a comparable result against a spatial/temporal filter.

I could see trading away temporal data for more robustness and easier integration being just as rational an option ...
 
I realize that there is no free lunch, so I take the pessimistic approach instead. But what if we just accept smaller performance gains altogether so we can run a higher rendering resolution for the spatial-only filter? I'm curious how much higher the internal resolution would have to be for a spatial-only filter to get a comparable result against a spatial/temporal filter.

I could see trading away temporal data for more robustness and easier integration being just as rational an option ...
Games without TAA look rather bad due to excessive image instability. I don't want to go back to that noise.
 
In AMD's defense, there are valid reasons not to use temporal data, since it's a lower-quality source of information which introduces artifacts and requires more implementation work ...
That's simply wrong. First of all, temporal data is the only source of data which enables image reconstruction - a static image will simply converge to the ground-truth native res after a few frames of accumulation. The sum of pixels in high-res space is equal to the sum of pixels in low-res space over a certain number of frames, with camera jittering in high-res space.
Single-frame upscaling is no match for a higher-res image and will never converge to higher resolution, because texel density on distant texture MIP levels gets higher at higher resolutions. There is simply less texture detail in a lower-res image (texel-to-pixel density is 1:1 on distant textures in modern games), and you can't extract those texture details without rendering a higher-res image, either over time with temporal accumulation or over space with higher-resolution rendering.
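To make that concrete, here is a minimal numpy sketch of the accumulation argument (illustrative only, not any shipping upscaler; SCALE, render_low_res, etc. are made-up names):

```python
# Illustrative only: with a static scene, low-res frames rendered with
# sub-pixel camera jitter tile the high-res pixel grid exactly, so
# accumulating them reproduces the native-res image.
import numpy as np

SCALE = 2                                   # 2x upscale per axis

def render_low_res(scene, oy, ox):
    # Stand-in for a jittered low-res render: point-sample the scene
    # at one sub-pixel offset (oy, ox) per low-res pixel.
    return scene[oy::SCALE, ox::SCALE]

scene = np.random.default_rng(0).random((8, 8))  # "native res" ground truth

accum = np.zeros_like(scene)
for oy in range(SCALE):                     # one frame per jitter offset
    for ox in range(SCALE):
        accum[oy::SCALE, ox::SCALE] = render_low_res(scene, oy, ox)

assert np.allclose(accum, scene)            # converges to ground truth
```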

The second wrong assumption is that upscaling will work by itself. As people here have already noted, that is not the case.
Temporal AA or at least morphological AA is still pretty much mandatory for upscaling.
You will still get all the temporal artefacts from TAA, as well as detail erosion in lower-resolution space due to neighborhood clipping, or a very temporally unstable image with morphological AA, since spatial upscaling basically amplifies all underlying artifacts and instability in motion.
Also, spatial upscaling has no clue about aliasing, so if you try upscaling a non-AA image you will only get even more pronounced aliasing, so prefiltering is mandatory, full stop.
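And a trivial illustration of that last point, purely as a sketch: upscaling an already-aliased edge just magnifies the staircase, because the missing coverage information was never sampled.

```python
# Illustrative only: a hard-aliased diagonal edge, nearest-neighbour
# upscaled 2x. The staircase survives at twice the size; no spatial
# filter can recover coverage information that was never sampled.
import numpy as np

low = np.array([[1, 1, 0, 0],
                [1, 1, 1, 0],
                [1, 1, 1, 1]])

up = np.kron(low, np.ones((2, 2), dtype=int))   # nearest-neighbour 2x
print(up)                                        # same jaggies, twice as large
```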
 
That's simply wrong. First of all, temporal data is the only source of data which enables image reconstruction - a static image will simply converge to the ground-truth native res after a few frames of accumulation. The sum of pixels in high-res space is equal to the sum of pixels in low-res space over a certain number of frames, with camera jittering in high-res space.
Single-frame upscaling is no match for a higher-res image and will never converge to higher resolution, because texel density on distant texture MIP levels gets higher at higher resolutions. There is simply less texture detail in a lower-res image (texel-to-pixel density is 1:1 on distant textures in modern games), and you can't extract those texture details without rendering a higher-res image, either over time with temporal accumulation or over space with higher-resolution rendering.

Temporal data and image reconstruction are mutually exclusive concepts. The latter doesn't necessarily imply that we need the former. The bolded was my proposition to begin with ...

The second wrong assumption is that upscaling will work by itself. As people here have already noted, that is not the case.
Temporal AA or at least morphological AA is still pretty much mandatory for upscaling.
You will still get all the temporal artefacts from TAA, as well as detail erosion in lower-resolution space due to neighborhood clipping, or a very temporally unstable image with morphological AA, since spatial upscaling basically amplifies all underlying artifacts and instability in motion.
Also, spatial upscaling has no clue about aliasing, so if you try upscaling a non-AA image you will only get even more pronounced aliasing, so prefiltering is mandatory, full stop.

I don't think I've ever assumed that upscaling will "work by itself". Every filter out there has its own unique heuristics for that process.

Ultimately, neither spatial nor temporal upscaling truly addresses the underlying issue of aliasing. Temporal methods are really just clever hacks to accumulate more samples from previous frames, which isn't totally reliable, since revealing previously occluded objects in a scene will show aliasing, and subpixel triangle flickering is another failure case as well. Prefiltering methods are crazy expensive as well, so I don't see how they'd be "mandatory" in that sense over just rendering a higher-resolution image for the spatial filter ...
 
I think that's what they're trying to change with the pre-trained algorithms.

How? ML is just another heuristic or means of doing image reconstruction. Temporal data is just an input to our algorithms which doesn't necessarily have to be included for image reconstruction.
 
Temporal data and image reconstruction are mutually exclusive concepts.
Temporal data enables image reconstruction over time; since when are they mutually exclusive?
Does anyone working on path tracing, temporal filtering, etc. know that their attempts at gathering samples over time are all wrong?

The latter doesn't necessarily imply that we need the former
We are talking about a very specific topic here, and that's exactly why we need temporal data for image reconstruction.

I don't think I've ever assumed that upscaling will "work by itself"
You said "there are valid reasons not to use temporal data, since it's a lower-quality source of information which introduces artifacts and requires more implementation work ..."
It won't work without TAA or morphological AA. And it's not the temporal component which introduces artifacts; it's resampling and occlusion.

temporal upscaling truly addresses the underlying issue of aliasing
Temporal addresses the issue of aliasing by gathering more samples over time and by doing better sampling via pseudo-random distributions, such as Halton, which work better in comparison with a regular ordered pixel grid (the same reason why rotated-grid MSAA was better than ordered-grid on sloped lines).
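For reference, a minimal generator for the (2,3) Halton sequence that TAA-style jitter commonly uses (a sketch, not any particular engine's code):

```python
def halton(index, base):
    # Radical inverse of `index` in `base`; returns a value in [0, 1).
    result, f = 0.0, 1.0 / base
    while index > 0:
        result += f * (index % base)
        index //= base
        f /= base
    return result

# First 8 sub-pixel jitter offsets, centered on the pixel.
jitter = [(halton(i, 2) - 0.5, halton(i, 3) - 0.5) for i in range(1, 9)]
print(jitter)
```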

Prefiltering methods are crazy expensive as well, so I don't see how they'd be "mandatory" in that sense over just rendering a higher-resolution image for the spatial filter ...
By prefiltering I meant TAA or morphological AA, which should always be enabled before spatial upscaling if you don't accumulate samples in higher-res space over time.
 
I'm almost certain there will be multiple variations of FSR in the future: FSR 1.0 and FSR 2.0.

FSR 2.0 could use machine learning to boost image quality way beyond FSR 1.0, but it would only run on RDNA2 and up cards, as it would make use of INT calculations not present on RDNA1 and earlier cards.
 
I'm almost certain there will be multiple variations of FSR in the future: FSR 1.0 and FSR 2.0.

FSR 2.0 could use machine learning to boost image quality way beyond FSR 1.0, but it would only run on RDNA2 and up cards, as it would make use of INT calculations not present on RDNA1 and earlier cards.
Fast INT4 and INT8 are also present in Vega20 (Radeon VII on the consumer side) as well as RDNA "1.1"/Navi12 (Mac stuff on the consumer side).
 
Fast INT4 and INT8 are also present in Vega20 (Radeon VII on the consumer side) as well as RDNA "1.1"/Navi12 (Mac stuff on the consumer side).
More importantly, all GPUs since Vega 64 have fast FP16, which I think would be a lot more beneficial to a pure spatial upscaling approach than INTs.
It's also fairly likely that whatever can be run on RDNA2's fast INTs can be run without them on a card which supports the same shader feature set. It will be slower, but likely not so much as to become useless.
 
Temporal data enables image reconstruction over time; since when are they mutually exclusive?
Does anyone working on path tracing, temporal filtering, etc. know that their attempts at gathering samples over time are all wrong?

We are talking about a very specific topic here, and that's exactly why we need temporal data for image reconstruction.

You can do image reconstruction without temporal data!

You said "there are valid reasons not to use temporal data, since it's a lower-quality source of information which introduces artifacts and requires more implementation work ..."
It won't work without TAA or morphological AA. And it's not the temporal component which introduces artifacts; it's resampling and occlusion.

A spatial-only filter still has a lower barrier to entry compared to any temporal filter, if that's what you were referring to in my post, and that's not going to change ...

Temporal addresses the issue of aliasing by gathering more samples over time and by doing better sampling via pseudo-random distributions, such as Halton, which work better in comparison with a regular ordered pixel grid (the same reason why rotated-grid MSAA was better than ordered-grid on sloped lines).

Temporal has a side effect of helping with aliasing, but it doesn't let you sample the geometry/shading at higher rates or apply analytic methods. An undersampled image with temporal data is still an undersampled image ...
 
How? ML is just another heuristic or means of doing image reconstruction. Temporal data is just an input to our algorithms which doesn't necessarily have to be included for image reconstruction.
In layman's terms (since I am one): when your algorithm cannot decide with certainty whether a pixel (part of a telegraph line, for example, blinking in and out of rasterized screen space) has to be black (line) or blue (sky), it can look at another point in time (temporal) and see if the black line is there too, or if it's an artifact to be discarded. And even more laymannish: if your algorithm has learned that there is such a thing as "a black line over a blue background", comparisons with older frames can help decide whether this pixel is part of such a case.
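A rough sketch of how that "look at another point in time" decision is commonly made in TAA-style resolves (simplified and assumed, not any specific implementation): the history sample is clamped against the current frame's local neighbourhood, so history that no longer agrees with the new frame gets rejected; this is the "neighborhood clipping" mentioned earlier in the thread.

```python
# Simplified, illustrative TAA-style resolve for one pixel.
import numpy as np

def resolve_pixel(current, history, y, x, blend=0.9):
    # Min/max of the current frame's 3x3 neighbourhood around (y, x).
    nb = current[max(y - 1, 0):y + 2, max(x - 1, 0):x + 2]
    # Clamp the history sample into that range: stale history (e.g. a
    # line that has moved away) is pulled back toward the new frame.
    clamped = np.clip(history[y, x], nb.min(), nb.max())
    # Blend mostly-history for stability, a little current for responsiveness.
    return blend * clamped + (1.0 - blend) * current[y, x]
```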
 
You can do image reconstruction without temporal data!
Of course you can, by rendering higher-resolution depth, but this can only help with edges. You can also render other buffers in high resolution, but then there will be diminishing returns in performance.

A spatial-only filter still has a lower barrier to entry compared to any temporal filter
All I was saying is that nobody would ever use just the spatial-only filter alone; it won't work by itself without TAA being applied before it, and then your argument about "not using temporal data since it's a lower-quality source of information" falls apart, because the temporal data will still be there in the TAA.

Temporal has a side effect of helping with aliasing, but it doesn't let you sample the geometry/shading at higher rates or apply analytic methods.
It does exactly this: it samples the geometry/shading at higher rates, distributed over multiple frames.

An undersampled image with temporal data is still an undersampled image
If the camera doesn't move, this undersampled image will converge to higher res in a matter of a few frames. If the camera moves, there will be resolution losses in disoccluded parts (this can be mitigated to an extent by resampling a few history buffers), yet clever post-processing can easily hide these losses.
If that weren't the case, we would be watching 480p YouTube instead of being able to watch compressed 8K videos.
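As a toy illustration of that convergence (my own sketch with made-up numbers): an exponential history blend approaches the average of the jittered samples within a handful of frames, and dropping the history on disocclusion is exactly where the local resolution loss comes from.

```python
# Illustrative only: exponential accumulation of jittered samples.
import numpy as np

alpha = 0.9                                    # history weight per frame
samples = np.random.default_rng(1).random(32)  # jittered shading results

accum, valid = 0.0, False
for s in samples:
    # On a disocclusion you would set `valid = False` and restart here.
    accum = alpha * accum + (1.0 - alpha) * s if valid else s
    valid = True

print(accum, samples.mean())   # close after roughly 1/(1 - alpha) frames
```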

More importantly, all GPUs since Vega 64 have fast FP16, which I think would be a lot more beneficial to a pure spatial upscaling approach than INTs.
Sure, how would they upscale the image otherwise, without supporting HDR precision? :)
 
In layman's terms (since I am one): when your algorithm cannot decide with certainty whether a pixel (part of a telegraph line, for example, blinking in and out of rasterized screen space) has to be black (line) or blue (sky), it can look at another point in time (temporal) and see if the black line is there too, or if it's an artifact to be discarded. And even more laymannish: if your algorithm has learned that there is such a thing as "a black line over a blue background", comparisons with older frames can help decide whether this pixel is part of such a case.

You just described a process which isn't related to my question, so I'll just rephrase it ...

How is our input data (history/motion vectors) supposed to be comparable to our application (image reconstruction)? It doesn't really make any sense when you think about it ...
 
You just described a process which isn't related to my question, so I'll just rephrase it ...

How is our input data (history/motion vectors) supposed to be comparable to our application (image reconstruction)? It doesn't really make any sense when you think about it ...

You don't sample the same point every frame. E.g. when upscaling 1080p to 4K, you need to turn 1 pixel into 4. Instead of sampling the middle of the pixel, over 4 frames you sample the top left, top right, bottom right, and bottom left, where you would have sampled for 4K (though in reality I think they follow the same pattern as MSAA). You then combine these samples - if nothing's moving, you've perfectly reconstructed the 4K image. But things do move, which is where motion vectors and AI reconstruction come into play.
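To sketch the "things do move" part (simplified, with a hypothetical reproject helper, not how any particular upscaler is written): each pixel's history is fetched at its previous position via the motion vector, and the bilinear fetch there is the resampling that was blamed for artifacts earlier in the thread.

```python
# Illustrative only: motion-vector reprojection of a history buffer.
import numpy as np

def reproject(history, motion, y, x):
    # Where was this pixel last frame? Step back along its motion vector.
    sy, sx = y - motion[y, x, 0], x - motion[y, x, 1]
    y0, x0 = int(np.floor(sy)), int(np.floor(sx))
    fy, fx = sy - y0, sx - x0
    h, w = history.shape

    def tap(yy, xx):                       # clamped texel fetch
        return history[min(max(yy, 0), h - 1), min(max(xx, 0), w - 1)]

    # Bilinear filter of the four neighbouring history texels; this
    # resampling is one source of softening/artifacts under motion.
    top = (1 - fx) * tap(y0, x0) + fx * tap(y0, x0 + 1)
    bot = (1 - fx) * tap(y0 + 1, x0) + fx * tap(y0 + 1, x0 + 1)
    return (1 - fy) * top + fy * bot
```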
 
You just described a process which isn't related to my question, so I'll just rephrase it ...

How is our input data (history/motion vectors) supposed to be comparable to our application (image reconstruction)? It doesn't really make any sense when you think about it ...
Maybe I misinterpreted your "mutually exclusive" as "mutually excluding".

I wanted to point out that one can profit from the other. They can, of course, both work independently, one without the other.
 

Meaning Nvidia needs to provide developers with an optimized build of FSR? Fat chance of that happening. I’ll be shocked if they lift a finger.

Given that it’s open source, hopefully we’ll get the low-down soon on how FSR improves on other spatial upscaling algos.

Raja's comment is interesting. He refers to Xe's DL capabilities; is there a DL element to FSR?

 
Meaning Nvidia needs to provide developers with an optimized build of FSR? Fat chance of that happening. I’ll be shocked if they lift a finger.

Given that it’s open source, hopefully we’ll get the low-down soon on how FSR improves on other spatial upscaling algos.
I'm sure Nv will provide FSR optimization tips, or even an optimized version of the code, to those devs who ask them. Beyond that, though, I can't see them giving an f about FSR.
 
Meaning Nvidia needs to provide developers with an optimized build of FSR? Fat chance of that happening. I’ll be shocked if they lift a finger.

Given that it’s open source, hopefully we’ll get the low-down soon on how FSR improves on other spatial upscaling algos.

Raja's comment is interesting. He refers to Xe's DL capabilities; is there a DL element to FSR?

Fast math at lower precisions is considered "DL capabilities"; he could be referring to FSR taking advantage of that.
 