AMD FSR antialiasing discussion

So, consider a hypothetical situation where FSR2 performs similarly to DLSS across many titles, with comparable image quality, motion stability, and performance. Why would a developer decide to implement both DLSS and FSR2 when the latter is an open solution that supports all GPUs across several generations?

My initial thought is that likely both could be implemented in parallel with relatively small dev cost to provide choice to the user.

You don't implement DLSS specifically anymore. Everyone is using the same input variables; you just call a different library. NVIDIA's Streamline is the solution to that problem.
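To make the "same input variables" point concrete, here's a minimal sketch of what an upscaler-agnostic integration boils down to. The struct and function names are hypothetical and not taken from Streamline, DLSS, FSR 2 or XeSS; it's just meant to show the shared inputs, not any vendor's actual API.

// Hypothetical sketch: all temporal upscalers consume roughly the same inputs,
// so a game can gather them once and dispatch to whichever backend the user picked.
// None of these names come from the Streamline, DLSS, FSR 2 or XeSS SDKs.
#include <cstdint>

struct UpscaleInputs {
    void*    colorLowRes;      // jittered, aliased colour target (render resolution)
    void*    depth;            // depth buffer
    void*    motionVectors;    // per-pixel motion vectors
    float    jitterX, jitterY; // sub-pixel jitter of the current frame
    uint32_t renderWidth, renderHeight;
    uint32_t outputWidth, outputHeight;
    bool     resetHistory;     // e.g. after a camera cut
};

enum class Upscaler { DLSS, FSR2, XeSS };

// The per-vendor SDK call would go inside each branch; the point is that the
// data the game has to produce is essentially the same for all of them.
void evaluateUpscaler(Upscaler which, const UpscaleInputs& in, void* outputHiRes) {
    switch (which) {
        case Upscaler::DLSS: /* call DLSS / Streamline here */ break;
        case Upscaler::FSR2: /* call FSR 2 dispatch here */    break;
        case Upscaler::XeSS: /* call XeSS execute here */      break;
    }
}

The per-vendor call differs, but the resources the engine has to produce (jittered colour, depth, motion vectors, jitter offsets) are largely shared, which is why adding a second upscaler on top of an existing one is comparatively cheap.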

I find the ghosting in FSR 2.0 more distracting than DLSS's, especially when the FSR sharpener is used. Here's an Afterburner capture taken while moving: Imgsli
 
They are used for de-noising.

Like techuse said, they aren't. They can't be used in any graphics pipeline using open APIs unless the API decides to add support for them. NVIDIA could potentially use them in open APIs through its own means, but would likely incur frame time costs.
 
Impressive that FSR 2.0 gets so close to DLSS in performance and IQ. Although I recently became an RTX GPU owner, I hope it sees widespread adoption. If it really takes 3 days to implement in DLSS games, I see no reason for it not to.

I wonder how the XeSS DP4A version will compare. If Deathloop is representative of FSR 2.0 in general, it won't be easy to surpass it noticeably and Intel has already admitted that the DP4A version of XeSS is worse compared to the XMX version in performance and quality. Plus, FSR 2.0 can run on a wider range of hardware.
 
It would be something if NV's and Intel's solutions were open source/available to all GPUs instead of AMD's route. I think the automatic/ML route is less time consuming and probably better for the future. Hand-coded/TAAU solutions have been around for a while now.
Intel's is/will be open source. Not sure how it's relevant how long something has been around.

My question is, why didn't AMD go the AI/ML route for FSR 2.0? If the AI route is just as good as hand-tuned, or close, why not go that route as well (but in a form that works across all GPUs)?
If it's not better either, why would they?
 
Like tempest core on PS5? Or the PS4 Pro's special ID buffer? We can scream this about other platforms and vendors too.
Yes, "we're using tensor cores to justify their existence better" is also a great excuse for Intel developing two versions of XeSS :rolleyes:
We will soon see whether DP4a is on par with the XMX version and DLSS. And remember, DP4a should speed up XeLP too!

The only difference is how you pick/weight your samples: a predefined algorithm or a pretrained neural network.
The difference is not just how you weight samples, but how you can detect content in frame and weight samples according to detected image features.
Autoencoders are capable of extracting way more image features in comparison with handcrafted algorithms, that's why DLSS doesn't need any pixel locking heuristics.
Also, autoencoders are verified against ground truth data rather than against a programmer's sense of what looks good, as often happens with handcrafted heuristics.
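As a rough illustration of that distinction (a minimal sketch, not any vendor's actual code), the temporal blend itself is the same either way; what changes is where the per-pixel weight and the history rectification come from:

// Illustration only: the shared core of temporal upscaling is a history/current blend.
struct Color { float r, g, b; };

Color lerp(Color a, Color b, float t) {
    return { a.r + (b.r - a.r) * t,
             a.g + (b.g - a.g) * t,
             a.b + (b.b - a.b) * t };
}

// Hand-crafted path: a fixed blend factor plus heuristics (neighbourhood clamping,
// pixel locking, ...) applied to the reprojected history before blending.
Color resolveHeuristic(Color current, Color clampedHistory) {
    const float blend = 0.1f;                     // hand-tuned constant
    return lerp(clampedHistory, current, blend);
}

// Learned path: a trained network looks at the same inputs and produces the
// per-pixel weight (and/or a corrected history), replacing those heuristics.
Color resolveLearned(Color current, Color history, float networkWeight) {
    return lerp(history, current, networkWeight); // weight inferred per pixel
}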
 
The fact that FSR 2 launched only in one game is proof enough that it is much more time consuming than DLSS 2 or FSR 1 to implement. AMD is also relying on existing DLSS 2 games to boost their number of FSR 2 titles.
 
To be clear, I'm not saying that ML is not the way to go, just that you may not require tensor cores to infer from the model. There is nothing mandating that a trained model be run on tensor cores. You can use tensor cores to train and non-tensor hardware to infer. At some point it becomes pointless for NVIDIA to add more and more tensor cores, and potentially they could ship none at all if generic cores are enough for the task.
 
Intel has already admitted that the DP4A version of XeSS is worse compared to the XMX version in performance and quality.
That's up for debate; they never said that, though some interpreted what they said that way. The only thing that's certain is the performance difference: on their graphs, DP4a seems to take about twice the time of the XMX version, but that's still just a fraction of the overall frame time (and the graph doesn't actually include any scales, so "twice the time" is just a guess based on bar length).
edit: The exact quote is "with smart quality performance trade off", which could just as well mean you need to compromise some quality to reach the same performance, not that you couldn't get the same quality, period.

None of these bozos are worth the time. They said FSR 1 Ultra Quality was comparable to DLSS Quality many times in the past, and now they claim FSR 2 is a HUGE improvement yet still only comparable to DLSS 2. Oh really?
Huge enough that it's FSR 2.0's Quality mode that's about the level of DLSS Quality, not Ultra Quality like they thought FSR 1.0 needed.
 
They are used for de-noising.
NVIDIA sold that as a use case when releasing Turing, and they're used by OptiX in many rendering applications for de-noising, but as stated, the only use tensor cores currently have in gaming is DLSS.
 
Huge enough that it's FSR 2.0's Quality mode that's about the level of DLSS Quality, not Ultra Quality like they thought FSR 1.0 needed.
Ultra Quality FSR 1 is just a gimmick; FSR 1's problems were not corrected by a simple resolution upgrade, they are deeply rooted in the simple spatial upscaling AMD is using.
 
The fact that FSR 2 launched only in one game is proof enough that it is much more time consuming than DLSS 2 or FSR 1 to implement. AMD is also relying on existing DLSS 2 games to boost their number of FSR 2 titles.
No, it's not "proof enough" about anything.
Setting aside the direct plugin options, FSR 1.0 is far easier to implement than DLSS or FSR 2.0 (and possibly even with the plugin versions; I'm not sure anyone has specified whether the DLSS plugin needs tweaking on the game side or is really a universal one-click solution).
As for FSR 2.0 vs DLSS, it's impossible to know (except for the direct plugin versions, but there should be ones coming for FSR 2.0 too for UE/Unity) without some dev spilling the beans. Both require work and support for specific things, and building support for one makes implementing the other easier.

ps. DLSS launched with just one game too. XeSS will most likely launch with just one game. Pretty much every new tech out there starts with just one game.
 
To be clear, I'm not saying that ML is not the way to go, just that you may not require tensor cores to infer from the model. There is nothing mandating that a trained model be run on tensor cores. You can use tensor cores to train and non-tensor hardware to infer. At some point it becomes pointless for NVIDIA to add more and more tensor cores, and potentially they could ship none at all if generic cores are enough for the task.
It's my understanding, and someone can correct me if I'm wrong, that training requires higher-precision computing that is generally performed on massive offline clusters. Tensor cores are good for inference only; that's their primary purpose: very fast, lower-precision matrix math.
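For illustration (this is not XeSS or DLSS code, just the general shape of the math), low-precision inference is mostly int8 dot products accumulated into int32. DP4a does four of these multiply-adds per instruction, XMX/tensor units do whole matrix tiles at once, and the same arithmetic can also run on plain shader ALUs, just more slowly:

// Illustration only: the inner loop of low-precision inference.
#include <cstdint>

// Four int8 multiply-adds accumulated into int32 — the operation that a single
// DP4a instruction performs, and that matrix units batch up across whole tiles.
int32_t dot4_int8(const int8_t a[4], const int8_t b[4], int32_t acc) {
    for (int i = 0; i < 4; ++i)
        acc += static_cast<int32_t>(a[i]) * static_cast<int32_t>(b[i]);
    return acc;
}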
So do those of you who disliked DLSS want to use FSR now?
People disliked DLSS 1 (understandably). I don't think anyone dislikes DLSS 2, apart from pointing out some imperfections, which is understandable. But overall, gamers as a whole consider DLSS 2 to be an excellent technology. Enough with the hyperbole posts.
 