AMD FSR antialiasing discussion

Discussion in 'Architecture and Products' started by Deleted member 90741, May 20, 2021.

  1. Kaotik

    Kaotik Drunk Member Legend

    I think without a direct quote from Intel it might just be something lost in translation, referring to "best version" rather than best quality. I'm trying to browse through the XeSS slides from GDC and so far haven't seen anything indicating a difference in quality.
     
    BRiT likes this.
  2. TopSpoiler

    TopSpoiler Newcomer

    This??


     
  3. Kaotik

    Kaotik Drunk Member Legend

    Thanks, though it still doesn't outright say the output would be any different on DP4a, as "quality/performance tradeoff" could refer to something as simple as the available modes/presets due to performance differences between the two. Sure, it's possible there's a quality difference in the output too, and thus two completely separate scaling modes under one name, but the GDC presentation, which covers the DP4a version too, doesn't mention anything suggesting that (or my brain is getting too tired, 2 hours 'till the morning shift comes in to let me go home)
     
    BRiT likes this.
  4. arandomguy

    arandomguy Regular Newcomer

    Let's be honest, ultimately the reality is likely going to be that FSR 2.0 is best for AMD users, XeSS is best for Intel users, and DLSS is best for Nvidia users.

    The only real way to be "vendor neutral" going forward would be if the game provides all three.
     
  5. techuse

    techuse Veteran

    I would think DLSS could still provide some benefit on GTX GPUs, since they support DP4a acceleration. Obviously it's not in Nvidia's interest to do this.
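    For the curious: DP4a is a single instruction that computes a four-way int8 dot product into a 32-bit accumulator, which is why int8 inference maps onto it so well. A rough scalar model of it, purely illustrative (the real thing is e.g. the CUDA `__dp4a` intrinsic, and everything below is my own sketch):

    ```python
    def dp4a(a, b, c):
        """Scalar model of a DP4a-style instruction: dot product of two
        vectors of four signed 8-bit lanes, added to a 32-bit accumulator c."""
        assert len(a) == len(b) == 4
        for x in list(a) + list(b):
            assert -128 <= x <= 127, "operands are packed int8 lanes"
        return c + sum(x * y for x, y in zip(a, b))

    # One fused instruction replaces four multiplies and four adds.
    print(dp4a([1, 2, 3, 4], [5, 6, 7, 8], 10))  # 1*5 + 2*6 + 3*7 + 4*8 + 10 = 80
    ```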
     
  6. cheapchips

    cheapchips Veteran

    All three are UE plugins, so that's a fair chunk of future games that could have all as options.

    From my naive view, since XeSS and FSR2 both use a minimal TAA-style setup as input, won't most engines end up with at least one of them?
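    Roughly, the shared input set looks like this. The field names below are my own sketch, not either SDK's actual API; the point is that any engine with a TAA pass already produces all of it:

    ```python
    from dataclasses import dataclass
    from typing import Any, Tuple

    # Illustrative only: the per-frame data a TAA-style upscaler
    # (FSR 2 and XeSS alike) typically consumes.
    @dataclass
    class UpscalerFrameInputs:
        color: Any                    # current low-res lit frame
        depth: Any                    # for disocclusion / depth rejection
        motion_vectors: Any           # per-pixel reprojection into the last frame
        jitter: Tuple[float, float]   # sub-pixel camera offset this frame
        render_size: Tuple[int, int]  # e.g. (1280, 720)
        output_size: Tuple[int, int]  # e.g. (2560, 1440)
    ```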
     
  7. pharma

    pharma Veteran

    If TSR produces very similar or slightly superior results to FSR 2, is there any benefit in allocating development resources to both in an Unreal game's development budget?
     
    PSman1700 likes this.
  8. troyan

    troyan Regular

    Not for UE4 and UE5 games. And TAAU will be faster on older GPUs like Pascal; there isn't a need for an expensive compute pass running in FP32.
     
    PSman1700 likes this.
  9. Krteq

    Krteq Newcomer

    digitalwanderer and BRiT like this.
  10. digitalwanderer

    digitalwanderer Dangerously Mirthful Legend

    How much of this statement is reality and how much is PR speak? Does DLSS use ML just to decide how to combine previous history samples or is it deeper than that?
     
  11. Krteq

    Krteq Newcomer

    We don't know (yet) due to the proprietary nature of DLSS, but I've previously seen a few comments from devs on Twitter stating similar things. So I assume there's no magic: ML is used just like that, and the tensor cores provide some kind of acceleration.
     
  12. PSman1700

    PSman1700 Legend

    When making claims like these, it's a good idea to back them up with some solid evidence.
     
    pharma likes this.
  13. Kaotik

    Kaotik Drunk Member Legend

    It's not like there's solid evidence for the opposite either. Why wouldn't claims of AI magic need solid evidence too?
     
    Krteq likes this.
  14. PSman1700

    PSman1700 Legend

    I understand where you're coming from, but that also opens the door to disqualifying basically any other technology as well. NV could be lying, and so could Sony or anyone else.
     
  15. Kaotik

    Kaotik Drunk Member Legend

    Not really. There's literally nothing saying there's AI magic. NVIDIA for sure hasn't specified it would be some magic beyond what AMD claims; they only say they're using "AI", and in their terminology that's correct either way.
     
  16. Malo

    Malo Yak Mechanicum Legend Subscriber

    Yeah, in the sense that DLSS is a sample-trained algorithm, with inference used to determine changes to frames for the final image, it is an "ML" process. However much AMD want to distinguish their method from DLSS, I hardly think they can make the claims they are.
     
    pharma likes this.
  17. Subtlesnake

    Subtlesnake Regular

    Nvidia's own presentation indicates that ML is just used to more intelligently reject samples from previous frames.

    https://www.gdcvault.com/play/1026697/DLSS-Image-Reconstruction-for-Real

    (See from 37:20)
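    Which lines up with how any temporal accumulator works. A toy per-pixel version (my own sketch; the blend math and the 0.9 cap are made-up numbers) where the only "learned" part would be the rejection weight:

    ```python
    def accumulate(current, history, rejection):
        """Toy temporal accumulation for one pixel (scalar luminance).
        `rejection` in [0, 1] says how much of the reprojected history to
        discard. In DLSS that weight would come from the network; in a
        hand-tuned TAA it comes from heuristics like neighborhood clamping."""
        assert 0.0 <= rejection <= 1.0
        # Low rejection -> keep most of the history (stable, smooth image).
        # High rejection -> keep mostly the current sample (no ghosting).
        history_weight = 0.9 * (1.0 - rejection)
        return history_weight * history + (1.0 - history_weight) * current

    # Stable pixel: history trusted, the new sample only nudges the result.
    print(accumulate(1.0, 0.0, rejection=0.0))  # 0.1
    # Disocclusion: history fully rejected, the current sample wins.
    print(accumulate(1.0, 0.0, rejection=1.0))  # 1.0
    ```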
     
    BRiT, Kaotik, Krteq and 1 other person like this.
  18. xpea

    xpea Regular

    At the end of the day, visual quality (both still and in motion) will be the judge. So far FSR 2.0 seems a bit behind DLSS 2.3/2.4 (in the limited 4K samples previously provided by AMD), though I haven't checked the latest samples.
    Only a few more weeks to wait before we put these claims to the test...
     
    pharma and PSman1700 like this.
  19. PSman1700

    PSman1700 Legend

  20. Kaotik

    Kaotik Drunk Member Legend

    Yes, however we have no clue whether it's actually necessary, or whether the process would be fast enough without it. XeSS might give us a clue.
     