AMD FSR antialiasing discussion

It's in the quote I gave ... he quoted Intel from some document. I remember reading this as well in one of Intel's articles.
I think, without a direct quote from Intel, it might just be something lost in translation, referring to the "best version" rather than the best quality. I'm trying to browse through the XeSS slides from GDC and so far I haven't seen anything indicating a difference in quality.
 
I think, without a direct quote from Intel, it might just be something lost in translation, referring to the "best version" rather than the best quality. I'm trying to browse through the XeSS slides from GDC and so far I haven't seen anything indicating a difference in quality.

This??

we also came up with another innovation to enable XeSS on a broad set of hardware, including our competition, with a smart quality performance trade-off.
 
Thanks, though it still doesn't outright say the output would be any different on DP4a, as "quality performance trade-off" could refer to something as simple as the available modes/presets, due to the performance differences between the two. Sure, it's possible there's a quality difference in the output too, and thus two completely separate scaling modes under one name, but the GDC presentation, which covers the DP4a version as well, doesn't mention anything suggesting that (or my brain is getting too tired, 2 hours 'till the morning shift comes in to let me go home).
 
I would think DLSS could still provide some benefit on GTX GPUs, since they support DP4a acceleration. It's obviously not in Nvidia's interest to do this, though.
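For reference, DP4a is essentially a single-instruction dot product of four packed int8 values accumulated into an int32, which is what makes int8 network inference viable on Pascal-class and newer GPUs without dedicated matrix units. Below is a minimal scalar sketch of what the instruction computes; the packing convention and intrinsic names vary by API, so treat it as illustrative only.

```cpp
#include <cstdint>
#include <cstdio>

// Scalar sketch of a DP4a operation: a dot product of four packed signed
// 8-bit values, accumulated into a 32-bit integer. GPUs with DP4a execute
// this in a single instruction, which is why int8 inference can run at a
// useful speed even without tensor/XMX hardware.
int32_t dp4a(uint32_t a_packed, uint32_t b_packed, int32_t acc)
{
    for (int i = 0; i < 4; ++i)
    {
        int8_t a = static_cast<int8_t>((a_packed >> (8 * i)) & 0xFF);
        int8_t b = static_cast<int8_t>((b_packed >> (8 * i)) & 0xFF);
        acc += static_cast<int32_t>(a) * static_cast<int32_t>(b);
    }
    return acc;
}

int main()
{
    // Four int8 weights and four int8 activations packed into 32-bit words.
    uint32_t weights     = 0x01FF02FE; // bytes, low to high: -2, 2, -1, 1
    uint32_t activations = 0x04030201; // bytes, low to high:  1, 2, 3, 4
    printf("%d\n", (int)dp4a(weights, activations, 0)); // prints 3
    return 0;
}
```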
 
The only real way to be "vendor neutral" going forward would be if the game provides all three.

All three are UE plugins, so that's a fair chunk of future games that could have all as options.

From my naive view, since XeSS and FSR 2 both use a minimal TAA-style setup as input, won't most engines end up with at least one of them?
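For context, the per-frame inputs both upscalers ask for are roughly the data a TAA-ready renderer already produces. A rough sketch of that set is below; the struct and field names are hypothetical, not either SDK's actual API.

```cpp
#include <cstdint>

// Placeholder for an opaque GPU texture handle; hypothetical type.
using TextureHandle = void*;

// Hypothetical illustration of the kind of per-frame data a TAA-ready
// renderer already produces and a temporal upscaler consumes.
struct TemporalUpscalerInputs
{
    // Rendered at the lower internal resolution.
    TextureHandle colorBuffer;   // jittered, pre-upscale scene color
    TextureHandle depthBuffer;   // scene depth for reprojection/disocclusion
    TextureHandle motionVectors; // per-pixel screen-space velocity

    // Per-frame camera/jitter state.
    float jitterOffsetX;         // sub-pixel jitter applied to the projection
    float jitterOffsetY;
    float frameDeltaTime;        // used by some reactivity heuristics

    // Resolution the history is accumulated and output at.
    uint32_t outputWidth;
    uint32_t outputHeight;
};
```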
 
If TSR produces very similar or slightly superior results to FSR 2, is there any benefit in allocating development resources to both in an Unreal game's development budget?
 
AMD updated their GPUOpen article today with new uncompressed comparison screenshots

https://gpuopen.com/fidelityfx-superresolution-2/#comparison

Machine Learning (ML) is not a prerequisite to achieving good quality image upscaling. Often, ML-based real-time temporal upscalers use the model learned solely to decide how to combine previous history samples to generate the upscaled image: there is typically no actual generation of new features from recognizing shapes or objects in the scene. AMD engineers leveraged their world-class expertise to research, develop and optimize a set of advanced hand-coded algorithms that map such relationships from the source and its historical data to upscaled resolution.

The FidelityFX Super Resolution 2.0 analytical approach can provide advantages compared to ML solutions, such as more control to cater to a range of different scenarios, and a better ability to optimize. Above all, not requiring dedicated ML hardware means that more platforms can benefit, and more gamers will be able to experience FSR 2.0.
How much of this statement is reality and how much is PR speak? Does DLSS use ML just to decide how to combine previous history samples or is it deeper than that?
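To make the claim concrete: as described in this thread, the hand-coded and ML approaches share the same outer structure of reprojecting last frame's history and deciding per pixel how much to trust it; the disputed part is only how that decision is made. A toy sketch of where the learned part would slot in if the claim is accurate; the helper functions are hypothetical stand-ins, not real SDK code.

```cpp
#include <algorithm>
#include <cmath>

struct Pixel { float r, g, b; };

// Hand-coded heuristic (FSR 2-style, per AMD's description): trust history
// less when the new sample deviates strongly from it, e.g. after a
// disocclusion or lighting change.
float heuristicBlendWeight(const Pixel& history, const Pixel& current)
{
    float diff = std::fabs(history.r - current.r)
               + std::fabs(history.g - current.g)
               + std::fabs(history.b - current.b);
    return std::clamp(1.0f - diff, 0.1f, 0.9f); // hand-tuned bounds
}

// Learned variant (the claim about DLSS): a trained network predicts the same
// kind of weight from local features. Placeholder only; no real inference.
float learnedBlendWeight(const Pixel& history, const Pixel& current)
{
    // In a real ML upscaler this would be network output, evaluated on
    // tensor cores or via DP4a int8 math. Stand-in for illustration.
    return heuristicBlendWeight(history, current);
}

// Shared structure: blend the reprojected history with the current sample
// using whichever weight source the pipeline provides.
Pixel resolve(const Pixel& reprojectedHistory, const Pixel& current, bool useML)
{
    float w = useML ? learnedBlendWeight(reprojectedHistory, current)
                    : heuristicBlendWeight(reprojectedHistory, current);
    return { w * reprojectedHistory.r + (1.0f - w) * current.r,
             w * reprojectedHistory.g + (1.0f - w) * current.g,
             w * reprojectedHistory.b + (1.0f - w) * current.b };
}
```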
 
Does DLSS use ML just to decide how to combine previous history samples or is it deeper than that?
We don't know (yet) due to the proprietary nature of DLSS, but I've seen a few comments from devs stating similar things on Twitter previously. So I assume there is no magic: ML is used just like that, and the tensor cores provide some kind of acceleration.
 
We don't know (yet) due to the proprietary nature of DLSS, but I've seen a few comments from devs stating similar things on Twitter previously. So I assume there is no magic: ML is used just like that, and the tensor cores provide some kind of acceleration.

When making claims like these, it's a good idea to back them up with some solid evidence.
 
It's not like there's solid evidence for the opposite either, so why wouldn't claims of AI magic need that same solid evidence?

I understand where you're coming from, but that also opens the door to basically disqualifying any other technology as well. NV could be lying, and so could Sony or anyone else.
 
I understand where you're coming from, but that also opens the door to basically disqualifying any other technology as well. NV could be lying, and so could Sony or anyone else.
Not really. There's literally nothing saying there's AI magic. NVIDIA for sure hasn't specified that it would be some magic rather than what AMD claims; they only say they're using "AI", and in their terminology that's correct either way.
 
Yeah, in the sense that DLSS is a sample-trained algorithm, with inference used to determine changes to frames for the final image, it is an "ML" process. However much AMD wants to distinguish its method from DLSS, I hardly think it can make the claims it is making.
 
Yeah, in the sense that DLSS is a sample-trained algorithm, with inference used to determine changes to frames for the final image, it is an "ML" process. However much AMD wants to distinguish its method from DLSS, I hardly think it can make the claims it is making.
Nvidia's own presentation indicates that ML is just used to more intelligently reject samples from previous frames.

https://www.gdcvault.com/play/1026697/DLSS-Image-Reconstruction-for-Real

(See from 37:20)
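For anyone unfamiliar, the classic hand-coded way to reject stale history in TAA-style pipelines is neighborhood color clamping; the presentation's point, as read here, is that DLSS replaces this kind of fixed rule with a trained network's decision. A simplified sketch of the heuristic is below, using hypothetical helper types rather than code from any actual SDK.

```cpp
#include <algorithm>

struct Color { float r, g, b; };

// Hand-coded history rejection used by many TAA/upscaling pipelines: clamp
// the reprojected history sample into the min/max color box of the current
// frame's 3x3 neighborhood, so stale history (disocclusion, lighting change)
// gets pulled back toward plausible values instead of ghosting.
Color clampToNeighborhood(const Color& history, const Color neighborhood[9])
{
    Color lo = neighborhood[0];
    Color hi = neighborhood[0];
    for (int i = 1; i < 9; ++i)
    {
        lo.r = std::min(lo.r, neighborhood[i].r);
        lo.g = std::min(lo.g, neighborhood[i].g);
        lo.b = std::min(lo.b, neighborhood[i].b);
        hi.r = std::max(hi.r, neighborhood[i].r);
        hi.g = std::max(hi.g, neighborhood[i].g);
        hi.b = std::max(hi.b, neighborhood[i].b);
    }
    return { std::clamp(history.r, lo.r, hi.r),
             std::clamp(history.g, lo.g, hi.g),
             std::clamp(history.b, lo.b, hi.b) };
}
```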
 
Yeah, in the sense that DLSS is a sample-trained algorithm, with inference used to determine changes to frames for the final image, it is an "ML" process. However much AMD wants to distinguish its method from DLSS, I hardly think it can make the claims it is making.
In the end, visual quality (both still and in motion) will be the judge. So far, FSR 2.0 seems a bit behind DLSS 2.3/2.4 in the limited 4K samples previously provided by AMD, but I haven't checked the latest samples though.
Only a few more weeks to wait before we put these claims to the test...
 