AMD FSR antialiasing discussion

We should be in a position to determine whether the XeSS algorithm is more performant and/or better quality when running via either the DP4a or the XMX hardware path. Most likely the XMX hardware will win hands down.
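For context, DP4a is a packed 8-bit integer dot-product instruction that runs on ordinary shader ALUs, while XMX refers to dedicated matrix units. A minimal CUDA sketch of what the DP4a operation actually computes (illustrative only; the kernel name and values are made up, not anything from XeSS itself):

```cuda
#include <cstdio>

// __dp4a treats each 32-bit int as four packed signed 8-bit lanes,
// multiplies the lanes pairwise, and adds the four products to an
// accumulator. This packed INT8 dot product is the kind of operation a
// DP4a code path runs on regular ALUs instead of dedicated matrix units.
__global__ void dp4a_demo(int a, int b, int* out)
{
    *out = __dp4a(a, b, 0);  // requires compiling for sm_61 or newer
}

int main()
{
    int* out;
    cudaMallocManaged(&out, sizeof(int));
    // a packs {1, 2, 3, 4} and b packs {1, 1, 1, 1} (little-endian bytes).
    dp4a_demo<<<1, 1>>>(0x04030201, 0x01010101, out);
    cudaDeviceSynchronize();
    printf("dp4a = %d\n", *out);  // 1*1 + 2*1 + 3*1 + 4*1 = 10
    cudaFree(out);
    return 0;
}
```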
 
FSR2.0 is what has been used on console exclusives for ages; Digital Foundry has explained this.
Sure. But do you think that AMD wouldn't have made an ML-enhanced version if it were possible to run such a version without ML h/w?
The DP4a version of XeSS won't run on RDNA1/PS5 h/w (which may explain why AMD doesn't want to go there), and it remains to be seen how well it will run in general. Intel has been rather silent on the performance implications so far.
 
Can you please elaborate?

Digital Foundry has explained this in one of their videos; I think it was one of the DF Directs they do on Mondays. A supporter asked about FSR2.0 on consoles, and their answer was that consoles already have such solutions (like TAAU in UE5, or the solutions Insomniac has, etc.), and thus FSR2.0 is more of a PC-centric thing than a console one.

Sure. But do you think that AMD wouldn't have made an ML-enhanced version if it were possible to run such a version without ML h/w?
The DP4a version of XeSS won't run on RDNA1/PS5 h/w (which may explain why AMD doesn't want to go there), and it remains to be seen how well it will run in general. Intel has been rather silent on the performance implications so far.

They would if their h/w supported it; Intel and NV are doing it already. AI/ML is one of those things that is the future (look at smartphones, where the biggest gains are in neural h/w acceleration). What I wanted to explain is that FSR2.0 is something that was lacking in the PC space; the consoles already had their own custom solutions like that.
I am sure that XeSS/DLSS using hardware ML acceleration is always going to be able to stretch its legs more than AMD's solution, at least until RDNA3 or later; they're a bit behind, but they are certainly not sitting still. They have competed in rasterization since RDNA, and now RT and AI/ML are on the menu. It is the correct order, imo. GPUs are now finally starting to become available again and prices are sinking, so RDNA1/2 being so far behind wasn't really a large loss.
 
Digital Foundry has explained this in one of their videos; I think it was one of the DF Directs they do on Mondays. A supporter asked about FSR2.0 on consoles, and their answer was that consoles already have such solutions (like TAAU in UE5, or the solutions Insomniac has, etc.), and thus FSR2.0 is more of a PC-centric thing than a console one.
That's not what you said; you stated that console exclusives are already using FSR2.0. There's a big difference between using a specific AMD algorithm and using temporal upscaling solutions in general. The latter is already known and is also true of the PC space.
 
That's not what you said; you stated that console exclusives are already using FSR2.0. There's a big difference between using a specific AMD algorithm and using temporal upscaling solutions in general. The latter is already known and is also true of the PC space.

FSR2.0-like implementations; I was referring to DF's discussion of that. There are not many reasons why Insomniac, for example, would implement FSR2.0, since their own solution is similar enough if not better.
 
Like most things of this kind: is AMD lying? No.
But it's their perspective and their presentation of their approach.

Nvidia could easily say that the most important part of TAAU is the temporal frame reconstruction, as that's where you're able to pull out detail that's otherwise lost.
The rest is the edge detection and sharpening that they already do.
So maybe from their point of view that's where the magic needs to happen, and it does.
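To make "temporal frame reconstruction" concrete, here is a heavily simplified CUDA sketch of the accumulation step at the core of TAAU-style techniques. This is a hypothetical illustration, not Nvidia's or anyone else's actual kernel; real implementations add motion-vector reprojection, history rectification/clamping, and more on top of this.

```cuda
#include <cuda_runtime.h>

// Simplified temporal accumulation: blend the current (jittered, upsampled)
// frame into a reprojected history buffer. Accumulating samples across many
// frames is what recovers sub-pixel detail that any single low-resolution
// frame has lost. Launch with one thread per output pixel.
__global__ void temporal_accumulate(const float4* current,  // this frame
                                    const float4* history,  // reprojected previous output
                                    float4* output,
                                    int width, int height,
                                    float alpha)             // blend weight, e.g. 0.1
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;
    int i = y * width + x;

    float4 c = current[i];
    float4 h = history[i];  // real code clamps h against the local
                            // neighborhood of c to reject stale samples
                            // (ghosting)
    output[i] = make_float4(alpha * c.x + (1.0f - alpha) * h.x,
                            alpha * c.y + (1.0f - alpha) * h.y,
                            alpha * c.z + (1.0f - alpha) * h.z,
                            alpha * c.w + (1.0f - alpha) * h.w);
}
```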

Neither is lying, and neither is wrong in their approach.

What we can say is that since DLSS (2.x especially), Nvidia has had the best general engine reconstruction so far. Who knows where they'll improve it next.

All companies are currently approaching it based on their own circumstances. Competition and different approaches are good to push this forward.
 
The mere fact that FSR 2.0 doesn't use ML should give you a nice hint.
How so? Using ML is inherently worse than hand-tuned algorithms, as long as you have the time and engineering power to come up with the algorithm(s) producing identical results.
 
There must exist some algorithm that can do what DLSS can do, though, right? I mean, if we could ask God or some super-intelligent A.I.

It's just a question of whether it is human-findable and how closely we can approximate it.
Sure. But you will likely need 10x faster shading h/w to run this algorithm than what DLSS needs in terms of ML h/w transistor footprint. It is also likely to take close to infinite time to actually find this algorithm.
 
DLSS looks much better in the City Sample demo than TSR or TAA does. Do you think that Epic hasn't had "the time and engineering power to come up with the algorithm(s) producing identical results"?

As an engine maker, Epic are incentivized to spend more time attempting to create a generalized solution than a hand-tuned, customized solution for a single scene or implementation (demo or game). They would be doing a disservice to their various customers (other developers using the engine) if they were instead showing a hand-tuned, customized solution for their demo that wouldn't be representative of what another developer would see when using the generalized version that UE includes.

In that respect DLSS is certainly better as an ML-assisted generalized solution versus a non-ML-assisted generalized solution.

Regards,
SB
 
Sure. But you will likely need 10x faster shading h/w to run this algorithm than what DLSS needs in terms of ML h/w transistor footprint. It is also likely to take close to infinite time to actually find this algorithm.
Maybe you would need faster hardware for the algorithm, or maybe it would need far less; before such an algorithm is found, we don't know. Heck, nothing would prevent someone from developing an algorithm that utilizes matrix accelerators. You keep calling them "ML h/w", but ML is just one type of load that's suitable for matrix accelerators, not the only one.
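As an illustration of that point, here is a minimal CUDA sketch that drives a matrix accelerator (Nvidia's tensor cores via the public WMMA API, standing in for matrix units in general) with a plain 16x16 matrix multiply. Nothing in the API knows or cares whether the data is a neural network's weights or something else entirely.

```cuda
#include <cuda_fp16.h>
#include <mma.h>
using namespace nvcuda;

// One warp (32 threads) computes C = A * B for a single 16x16x16 tile on
// the matrix units. The hardware just multiplies matrices; whether those
// matrices hold neural-network weights or any other data is up to the
// caller. Requires compiling for sm_70 or newer.
__global__ void wmma_tile(const half* a, const half* b, float* c)
{
    wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a_frag;
    wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::col_major> b_frag;
    wmma::fragment<wmma::accumulator, 16, 16, 16, float> c_frag;

    wmma::fill_fragment(c_frag, 0.0f);
    wmma::load_matrix_sync(a_frag, a, 16);  // leading dimension = 16
    wmma::load_matrix_sync(b_frag, b, 16);
    wmma::mma_sync(c_frag, a_frag, b_frag, c_frag);
    wmma::store_matrix_sync(c, c_frag, 16, wmma::mem_row_major);
}
```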
 
As an engine maker, Epic are incentivized to spend more time attempting to create a generalized solution than a hand-tuned, customized solution for a single scene or implementation (demo or game). They would be doing a disservice to their various customers (other developers using the engine) if they were instead showing a hand-tuned, customized solution for their demo that wouldn't be representative of what another developer would see when using the generalized version that UE includes.

In that respect DLSS is certainly better as an ML-assisted generalized solution versus a non-ML-assisted generalized solution.

Regards,
SB

That is like claiming handwriting is better than letterpress because it would be unfair to the authors to do their work for them. Most developers don't care about upscaling; they want an all-round solution. So Epic has every incentive to provide best-in-class upscaling.
Only a paid publisher or developer will optimize outdated tech for their product. It's the same reason why raytracing exists: it democratizes best-in-class rendering and graphics.
 
That is like claiming handwriting is better than letterpress because it would be unfair to the authors to do their work for them. Most developers don't care about upscaling; they want an all-round solution. So Epic has every incentive to provide best-in-class upscaling.
Only a paid publisher or developer will optimize outdated tech for their product. It's the same reason why raytracing exists: it democratizes best-in-class rendering and graphics.
Yes, it is like claiming handwriting is better than letterpress, and both claims are true too: the highest-end stuff is handmade, while the good-enough stuff is mass produced.
The same applies here: the best solutions will be hand-tuned, but Epic has provided a good-enough mass solution for those who don't have the brains, the time, or the money for fancier stuff.
 