AMD FSR antialiasing discussion

  • Thread starter Deleted member 90741
Sorry, but that is plain stupid. The fact that the network is the one that adds the sharpening (or produces a certain sharpness level) means nothing. It is still additional sharpening of the image. If DLSS has a sharpening pass, it does the same thing FSR is doing.

The network doesn't add sharpening; it upscales the image with a certain level of sharpness. It has no idea how the TAA-modified native rendering looks.

The idea that you can improve the native image or another upscaling method and then compare it with the upscaler you dislike is new. Why not write some new motion vectors for UE5 TAAU, compare it with DLSS, and prove DLSS is not so great because it has more ghosting than your modified TAAU. :)

No, but every bilinear-upscaled image with RCAS applied will look better than an FSR-upscaled image without it.
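
For anyone who hasn't looked at how these sharpeners work, the core idea is simple enough to sketch. Below is a simplified numpy illustration of the concept behind CAS/RCAS (not AMD's actual FidelityFX shader; the real weighting and noise handling differ):

```python
import numpy as np

def adaptive_sharpen(img, strength=0.15):
    """Simplified contrast-adaptive sharpen (illustrative, not AMD's RCAS).

    img: 2D float array in [0, 1]. Sharpening is scaled down where the
    local neighbourhood is already near black or white, so edges don't
    ring or clip the way a fixed unsharp mask would.
    """
    p = np.pad(img, 1, mode="edge")
    up, down = p[:-2, 1:-1], p[2:, 1:-1]
    left, right = p[1:-1, :-2], p[1:-1, 2:]
    c = img

    mn = np.minimum.reduce([c, up, down, left, right])
    mx = np.maximum.reduce([c, up, down, left, right])

    # Headroom: distance of the local min/max from clipping at 0 or 1.
    # Near-clipped regions get little sharpening, mid-tones the most.
    headroom = np.minimum(mn, 1.0 - mx)
    amount = strength * np.clip(headroom / np.maximum(mx, 1e-5), 0.0, 1.0)

    # Unsharp-mask step: push the centre away from the cross average.
    cross_avg = (up + down + left + right) * 0.25
    return np.clip(c + amount * (c - cross_avg), 0.0, 1.0)
```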
 
Do you think Alex did this video in one day?

This is getting more and more ridiculous.

And all that "AMD recommends against using Performance" stuff is just hilarious.

Of course they do; it's a spatial upscaler, after all.

Have you tried reading the "A Survey of Temporal Antialiasing Techniques" paper?
I saw people have already suggested you read it.

It's really painful to see how you guys have no basic knowledge of image processing methods, yet accuse Alex of whatever stupid stuff you find appropriate.

You have to understand the difference between spatial upscaling and temporal accumulation, otherwise there is no point in talking with you.

That paper is written by Nvidia employees. Are you on the payroll too?

But seriously, Performance mode doesn't work well because upscaling in the spatial domain cannot create information that doesn't exist. It should be pretty simple for people to understand. Temporal data can fill in those gaps by using information from previous frames. Temporal algorithms will always win over purely spatial ones.

That said, I still think FSR is useful right now, because it can be added to games very easily and gives people performance options. It's just kind of a dead end to stay purely in the spatial domain, because of the drawbacks Alex was able to highlight. It's why AMD themselves hinted that their next step is to use temporal information. They KNOW spatial is not the future of upscaling.
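
You can see the hard cap with a one-dimensional toy example (numpy, purely illustrative): once a frequency is lost in the low-res render, no spatial filter can bring it back; jittering the sample positions across frames, like temporal methods do, is exactly what would recover it.

```python
import numpy as np

# Toy 1D 'scene': a coarse 8-cycle wave plus fine 96-cycle detail.
x_hi = np.linspace(0.0, 1.0, 256, endpoint=False)
signal = np.sin(2 * np.pi * 8 * x_hi) + 0.5 * np.sin(2 * np.pi * 96 * x_hi)

# 'Render' at quarter resolution: the 96-cycle component lands exactly on
# the zeros of the coarse grid, so the low-res image never sees it at all.
x_lo = x_hi[::4]
low = signal[::4]

# Best-effort spatial upscale back to full res (linear interpolation).
upscaled = np.interp(x_hi, x_lo, low)

print("error vs coarse part only:", np.abs(upscaled - np.sin(2 * np.pi * 8 * x_hi)).mean())
print("error vs full signal:     ", np.abs(upscaled - signal).mean())
# The coarse wave comes back fine; the fine detail is simply gone, and no
# amount of spatial filtering of 'low' can reconstruct it.
```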
 
Do you think Alex did this video in one day?

This is getting more and more ridiculous.

And all that "AMD recommends against using Performance" stuff is just hilarious.

Of course they do; it's a spatial upscaler, after all.

Have you tried reading the "A Survey of Temporal Antialiasing Techniques" paper?
I saw people have already suggested you read it.

It's really painful to see how you guys have no basic knowledge of image processing methods, yet accuse Alex of whatever stupid stuff you find appropriate.

You have to understand the difference between spatial upscaling and temporal accumulation, otherwise there is no point in talking with you.

I think I understood what the difference is a long time ago. In theory a spatial upscaler will not have the same IQ as a temporal upscaler, because it has less data to work with.
But that is in theory. Look, when they compared DLSS in the past, they always said: for 4K it is OK to use Quality and Balanced, for 1440p only Quality will do, and for 1080p even Quality is crap. That was before it became "better than native" even in Performance mode, as it is now.

Even if the spatial upscaler has no chance at lower resolutions, maybe it can catch up at higher ones, especially if there are some problems with the temporal upscaler. So why not try the Ultra Quality mode to see if it gives you similar IQ to TAAU and native (or at least gets as close as Control's DLSS 1.9 was to native :) )? Why dismiss it without testing? Plus, not all temporal upscalers are perfect, especially when they are not supported in a game, so how do you know one won't break at the higher quality levels compared with FSR, which we all know is perfect?
Why say that FSR will have the same bugs a temporal solution has, when these games didn't even have TAAU in them? How do you know it will have the same bugs?
 
But seriously, Performance mode doesn't work well because upscaling in the spatial domain cannot create information that doesn't exist. It should be pretty simple for people to understand. Temporal data can fill in those gaps by using information from previous frames. Temporal algorithms will always win over purely spatial ones.
How do you know that? A temporal upscaler needs motion vectors by default, for example. Yet these games do not support UE4's temporal upscaler. In theory it is better than spatial upscaling, but there are good temporal upscalers and there are bad ones.
 
A temporal upscaler has more samples available to upscale from. Consider each pixel one sample; then you have x samples per frame. A temporal upscaler can have 2x, 3x, 4x, 5x, etc. samples to pick from or combine. If you jitter your sample positions per frame, you can get a lot of non-identical yet viable samples from frame to frame. The problem real-time graphics has is massive under-sampling, which is why temporal data can be so useful. It does have drawbacks that need to be considered, and the paper that was linked above is very good at outlining them.
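
Here's a toy numpy sketch of that sample-counting argument (purely illustrative: no motion, no history rejection): each frame samples the scene once per pixel at a different sub-pixel jitter, and the accumulated history integrates detail no single frame captured.

```python
import numpy as np

def scene(u, v):
    """Toy continuous 'scene': fine stripes a coarse grid undersamples."""
    return 0.5 + 0.5 * np.sin(40.0 * u) * np.sin(40.0 * v)

def render_frame(res, jitter):
    """One sample per pixel, offset by this frame's sub-pixel jitter."""
    ys, xs = np.mgrid[0:res, 0:res].astype(float)
    return scene((xs + 0.5 + jitter[0]) / res, (ys + 0.5 + jitter[1]) / res)

res, n_frames = 32, 16
rng = np.random.default_rng(0)
history = np.zeros((res, res))

for i in range(n_frames):
    jitter = rng.uniform(-0.5, 0.5, size=2)      # per-frame sample offset
    frame = render_frame(res, jitter)
    alpha = 1.0 / (i + 1)                        # running average
    history = (1.0 - alpha) * history + alpha * frame

# 'history' now integrates 16 distinct sample positions per pixel; a purely
# spatial upscaler only ever sees the 1 sample per pixel in a single frame.
```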
 
I understand that, but a temporal upscaler needs code inside the game to work as intended, and even then it needs to be done well. You can use as many samples as you want; if something is broken in the code, everything will become a mess, even in static images, not to mention in motion, which is a different problem.
A temporal solution is not better than a spatial one by default. A well-done temporal solution is better than any spatial solution.
And again, about bugs: a spatial solution will have the same bugs only if it sits on top of the same TAA. But in these games the temporal upscaling was not even supported; it had to be forced in. So there is no explanation for what Alex did in his review. Yes, you can have temporal upscaling much better than FSR... probably much better than DLSS too, in some cases. But compared with some of his DLSS reviews, he didn't give FSR a fair treatment.
 
He didn't tell us whether he was briefed by Nvidia to insist on FSR's Performance mode, or at least whether Nvidia briefed him on how to test Control's DLSS 1.9 in order to reach the conclusion that it is better than native. :)
The story about Nvidia's briefing one day before the FSR launch is in The Full Nerd podcast at the 8:15 mark.
See Alex's explanation at 25:58:


The point is that TAAU will always have more information to work with, so it will always be able to "reconstruct" more in-texture detail. But the higher the base resolution of FSR, the less detail is lost in the first place, so the closer it will look to TAAU. Hence, if you want to demonstrate the benefits of TAAU as an algorithm, it makes more sense to compare at lower resolutions, where those benefits are more obviously visible. We should be clear that the perspective Alex was taking was not that of a gamer, but that of a developer asking whether they should include both TAAU and FSR, or whether FSR replaces the former. And Alex is arguing that, given the way TAAU and FSR collectively work, it makes sense to include both.
 
Yes, but for TAAU to be a thing it first needs to be supported by the game, otherwise you will see a lot of bugs once you start playing. And of course it makes sense to include both; it even makes sense to include TAAU and FSR in DLSS games, or DLSS in FSR games, if you are interested in selling your game.
 
I understand that, but a temporal upscaler needs code inside the game to work as intended, and even then it needs to be done well. You can use as many samples as you want; if something is broken in the code, everything will become a mess, even in static images, not to mention in motion, which is a different problem.
A temporal solution is not better than a spatial one by default. A well-done temporal solution is better than any spatial solution.
And again, about bugs: a spatial solution will have the same bugs only if it sits on top of the same TAA. But in these games the temporal upscaling was not even supported; it had to be forced in. So there is no explanation for what Alex did in his review. Yes, you can have temporal upscaling much better than FSR... probably much better than DLSS too, in some cases. But compared with some of his DLSS reviews, he didn't give FSR a fair treatment.

All Alex did was highlight the limitations of a purely spatial upscaler. He's not wrong. The lower the internal resolution of the game, the worse the output is going to look relative to other proven solutions like forms of TAAU. It's true that not all games can easily add TAA because they don't have motion vectors, etc. That's where FSR fits in well.

And sure, bad TAA can look really bad. That's not really the point. The point is that spatial upscaling has kind of a hard cap on what it can achieve without something like deep learning to basically "invent" data to fill the gaps. The other option is to use temporal information, which is what FSR "2.0" is likely going to do.
 
And FSR will amplify the worst parts of that TAA.
 
All Alex did was highlight the limitations of a purely spatial upscaler. He's not wrong. The lower the internal resolution of the game, the worse the output is going to look relative to other proven solutions like forms of TAAU. It's true that not all games can easily add TAA because they don't have motion vectors, etc. That's where FSR fits in well.

And sure, bad TAA can look really bad. That's not really the point. The point is that spatial upscaling has kind of a hard cap on what it can achieve without something like deep learning to basically "invent" data to fill the gaps. The other option is to use temporal information, which is what FSR "2.0" is likely going to do.
Well, I've seen some of you saying that Control's DLSS 1.9 did not use deep learning, and yet it was the best-looking version at that date. So I guess deep learning is not that great. But then we have DLSS 2.0 - 2.2, which we think use deep learning for reconstruction and look much better than Control's 1.9, so I guess we can think deep learning is still good.
Do you think most of the image reconstruction in DLSS comes from deep learning or from the temporal data? If I were someone passionate about image reconstruction, that is a subject I would investigate. How good is DLSS against a well-made TAAU? How important is the deep learning part?

Most of you don't understand why an open-source solution is based on spatial upscaling. With TAAU, once you make it open source you can get a lot of bad implementations, and many will think it is AMD's fault if their game looks bad upscaled.
For an AI solution it is even worse.
 
@TalEth I would say whatever Epic has come up with for UE5 looks very good, and it looks like it will be very competitive with DLSS. It's going to be back and forth. Someone will make an advancement in purely analytical or algorithmic (if that's a good thing to call it) upscaling, and someone else will come up with an advancement in neural-network-based upscaling. I think neural networks probably have the highest ceiling in terms of what they can achieve, because it's such a new field in computer graphics. There's probably more low-hanging fruit to address.

Edit: Also, to say DLSS 1.0 wasn't great is probably accurate. To say deep learning isn't great because DLSS 1.0 isn't great is just plain wrong. Deep learning is an immensely large field of research and very new in real-time graphics. Spatial upscaling, on the other hand, is much narrower and, honestly, old. That's not to say there aren't improvements to be had, but it gets harder and harder to improve something where so much research has already been spent. DLSS 1.0, from what I understand, was basically spatial upscaling improved by deep learning, and didn't use temporal information. I could be wrong.
 
No, you are probably right, but I told you there is a reason why an open-source solution is only spatial. Not because it is impossible to build a temporal solution or even AI upscaling, but because the nature of open source makes the implementations harder to control. In theory you can have a better-than-DLSS open-source solution and have the devs ruin it with every implementation. Or, as AMD, you can have a better-than-DLSS proprietary solution and have everyone refuse to implement it because of your low market share. AMD are not in their best position right now; maybe FSR was the only good move they could have made at this moment. (Even if I doubt that; they are still greedy idiots, and instead of selling a lot of cheap video cards they are trying to make tons of money with the 6900 XTs, while Nvidia starts pushing the 3060 to replace those 1060s and make the DLSS market share larger.)

In theory DLSS has a big advantage over any other temporal upscaler because it can also use inference for the parts of the image that are missing. But to be honest, it is disappointing to see the "AI" being unable to fix the temporal bugs in Death Stranding and other games for months and months. That is why I asked: do you think most of the image reconstruction is done through AI or through temporal data? To me, DLSS looks like a temporal upscaler with very little AI.
 
@TalEth I don't know how we could tell that it's a "temporal upscaler with very little AI." The fact that DLSS uses temporal data is inseparable from the fact that it uses a convolutional neural network autoencoder. Its goal is to make a low-res input look like a super-sampled output. It was trained by taking frames of low-resolution motion vectors and low-resolution colour data and creating an output that looks similar to a super-sampled reference image. Temporal data is the data that the NN uses. Temporal data is not contrary to the NN. The fact that it would look similar to a good TAAU implementation is not surprising, because they're both trying to produce an upscaled image that looks as close to super-sampled as possible.
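
As a very rough sketch of that setup (PyTorch-flavoured and heavily simplified: the real network, its history handling and its loss are far more involved, and every name below is made up for illustration), the training amounts to pushing low-res colour plus motion vectors through a conv net and penalising the distance to a super-sampled reference:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyUpscaler(nn.Module):
    """Toy stand-in for a convolutional autoencoder-style upscaler.

    Input: low-res colour (3ch) + motion vectors (2ch), bilinearly
    pre-upscaled to target resolution and concatenated. (A single frame
    here; the real thing also feeds accumulated history.)
    """
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(5, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

model = TinyUpscaler()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

# Fake batch; in training, 'target' would be a super-sampled reference render.
lowres = torch.rand(4, 3, 64, 64)
motion = torch.rand(4, 2, 64, 64)
target = torch.rand(4, 3, 256, 256)

up = lambda t: F.interpolate(t, size=(256, 256), mode="bilinear",
                             align_corners=False)
pred = model(torch.cat([up(lowres), up(motion)], dim=1))

loss = F.l1_loss(pred, target)   # train the output toward the reference
opt.zero_grad()
loss.backward()
opt.step()
```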
 
In theory DLSS has a big advantage over any other temporal upscaler because it can also use inference for the parts of the image that are missing. But to be honest, it is disappointing to see the "AI" being unable to fix the temporal bugs in Death Stranding and other games for months and months. That is why I asked: do you think most of the image reconstruction is done through AI or through temporal data? To me, DLSS looks like a temporal upscaler with very little AI.
There's an explanation of how it works here:

https://www.gdcvault.com/play/1026697/DLSS-Image-Reconstruction-for-Real

(See from 17:30 onwards.)

From what I understand, the AI is not used to infer "details" not present in the original data, but rather to more intelligently combine samples taken over multiple frames, by throwing away less data during the process of "clamping" neighboring pixels to reduce ghosting. So you can resolve more detail while still minimising ghosting.
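
For reference, the "clamping" mentioned there is the standard TAA history-rejection trick. A minimal single-channel numpy sketch of generic neighbourhood clamping (not DLSS's actual logic) looks like this:

```python
import numpy as np

def taa_resolve(current, history, clamp=True, alpha=0.1):
    """Minimal TAA-style resolve (illustrative only).

    current, history: 2D float arrays (single channel, history assumed
    already reprojected). Classic TAA clamps the history sample to the
    min/max of the current pixel's 3x3 neighbourhood: it suppresses
    ghosting, but also throws away legitimately accumulated detail.
    """
    if clamp:
        h, w = current.shape
        p = np.pad(current, 1, mode="edge")
        # Min/max over each pixel's 3x3 neighbourhood in the current frame
        neigh = np.stack([p[dy:dy + h, dx:dx + w]
                          for dy in range(3) for dx in range(3)])
        history = np.clip(history, neigh.min(axis=0), neigh.max(axis=0))
    # Exponential blend of (possibly clamped) history with the new frame
    return (1.0 - alpha) * history + alpha * current
```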
 
You know, if this is all the AI does, then Alex also needs to watch that presentation, because he thinks machine learning means the AI thinking: "oh, this is a wire, so it needs to be a full line..." :)
 