Yes, however we have no clue whether it's actually necessary or whether the process would be fast enough without it.
By NV's own account, it is for their implementation.
We should be in a position to determine whether the XeSS algorithm performs better and/or produces better quality when using DP4a versus XMX hardware capabilities. Most likely XMX hardware will win hands down.
> Yes, however we have no clue whether it's actually necessary or whether the process would be fast enough without it.

The sole fact that FSR 2.0 doesn't use ML should give you a nice hint.
Can you please elaborate?

Sure. But do you think that AMD wouldn't have made an ML-enhanced version if it were possible to run such a version without ML h/w?

FSR 2.0 is what has been used on console exclusives for ages; Digital Foundry has explained this.
The DP4a version of XeSS won't run on RDNA1/PS5 h/w (which may explain why AMD doesn't want to go there), and it remains to be seen how well it will run in general. Intel has been rather silent on the performance implications so far.
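For context on what the non-XMX path actually computes: DP4a is a single instruction that takes four int8 values packed into each of two 32-bit registers, multiplies them pairwise, and adds the result to a 32-bit accumulator (exposed as `__dp4a` in CUDA and `dot4add_i8packed` in HLSL Shader Model 6.4). A minimal scalar sketch, just to pin down the semantics:

```python
def as_i8(byte: int) -> int:
    """Reinterpret a byte value (0..255) as a signed 8-bit integer."""
    return byte - 256 if byte > 127 else byte

def dp4a(a: int, b: int, acc: int = 0) -> int:
    """Scalar emulation of a DP4a instruction: pairwise-multiply four
    int8 lanes packed into two 32-bit words and add to an accumulator.
    GPUs do this in one instruction; XMX/tensor units go further and
    perform whole small matrix multiplies per instruction."""
    for i in range(4):
        acc += as_i8((a >> (8 * i)) & 0xFF) * as_i8((b >> (8 * i)) & 0xFF)
    return acc

# lanes (1, 2, 3, 4) . (5, 6, 7, 8) = 5 + 12 + 21 + 32
print(dp4a(0x04030201, 0x08070605))  # → 70
```

An int8 dot product like this is the inner loop of quantized convolution, which is why a DP4a fallback can run the same network as the XMX path, just with far fewer multiply-accumulates per clock than a dedicated matrix unit.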
> FSR 2.0 is what has been used on console exclusives for ages; Digital Foundry has explained this in one of their videos. I think it was one of the DF Directs they do on Mondays: a supporter asked about FSR 2.0 on consoles, and their answer was that consoles already have such solutions (like TAAU in UE5, or the solutions Insomniac have, etc.), and thus FSR 2.0 is more of a PC-centric thing than a console one.

That's not what you said; you stated that console exclusives are already using FSR 2.0. There's a big difference between using a specific AMD algorithm and temporal upscaling solutions in general. The latter are already well known, and that's also true for the PC space.
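The shared idea behind all of these temporal solutions (TAAU, Insomniac's upscaler, FSR 2.0, the accumulation part of DLSS) can be shown in a toy 1D sketch: jitter the sample position each frame and accumulate, so detail smaller than a pixel is recovered over time. This is a deliberately minimal illustration, not any vendor's actual algorithm; real implementations also reproject the history buffer with motion vectors and clamp it to fight ghosting.

```python
def scene(x: float) -> float:
    """Hypothetical ground truth with sub-pixel detail: a stripe
    covering the first quarter of every pixel."""
    return 1.0 if (x % 1.0) < 0.25 else 0.0

def render_frame(num_pixels: int, jitter: float):
    """One sample per pixel, taken at a sub-pixel jittered offset."""
    return [scene(i + jitter) for i in range(num_pixels)]

def temporal_accumulate(history, current, n):
    """Blend the current jittered frame into the history buffer (here a
    plain running average; shipping upscalers use an exponential blend
    plus motion-vector reprojection and history clamping)."""
    if history is None:
        return current[:]
    return [h + (c - h) / (n + 1) for h, c in zip(history, current)]

history = None
for n, jitter in enumerate([0.0, 0.25, 0.5, 0.75]):
    history = temporal_accumulate(history, render_frame(8, jitter), n)

# Any single frame sees each pixel as all-stripe (1.0) or all-background
# (0.0); after four jittered frames every pixel converges to the true
# 0.25 coverage, i.e. a 4x supersampled result at 1x per-frame cost.
print(history[0])  # ≈ 0.25
```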
> The sole fact that FSR 2.0 doesn't use ML should give you a nice hint.

How so? Using ML is inherently worse than hand-tuned algorithms, as long as you have the time and engineering power to come up with the algorithm(s) producing identical results.

> How so? Using ML is inherently worse than hand-tuned algorithms, as long as you have the time and engineering power to come up with the algorithm(s) producing identical results.

DLSS proves that this is wrong.

> DLSS proves that this is wrong.

There must exist some algorithm that can do what DLSS does, though, right? I mean, if we ask God or some super-intelligent A.I. It's just a question of whether it is human-findable and how close we can approximate it.

> There must exist some algorithm that can do what DLSS does, though, right?

Sure. But you will likely need shading h/w some 10x faster to run this algorithm than what DLSS needs in terms of ML h/w transistor footprint. It will also likely take close to infinite time to actually find this algorithm.

> How so? Using ML is inherently worse than hand-tuned algorithms, as long as you have the time and engineering power to come up with the algorithm(s) producing identical results.
DLSS looks much better in the City Sample demo than TSR or TAA. Do you think that Epic hasn't had "the time and engineering power to come up with the algorithm(s) producing identical results"?
> Sure. But you will likely need shading h/w some 10x faster to run this algorithm than what DLSS needs in terms of ML h/w transistor footprint. It will also likely take close to infinite time to actually find this algorithm.

Maybe you would need faster hardware for the algorithm, maybe it would need far less; before such an algorithm is found, we don't know. Heck, nothing would prevent one from developing an algorithm that utilizes matrix accelerators. You keep calling them "ML h/w", but ML is just one type of load that's suited to matrix accelerators, not the only one.
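That last point can be made concrete: a matrix unit just executes small matrix multiplies, and plenty of non-ML image work is exactly that shape. For instance, running a 3-tap FIR filter over a block of pixels is a small matrix multiply. Below is a plain-Python stand-in for what an XMX/tensor-core tile instruction would execute; the filter taps and pixel values are made-up illustration data:

```python
def matmul(A, B):
    """Naive matrix multiply; on XMX/tensor-core h/w a tile of this
    work is a single instruction."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

# A 3-tap blur written as a banded 4x4 matrix, so filtering a column of
# 4 pixels becomes y = F @ x: a matrix-accelerator-friendly workload
# with no machine learning anywhere in sight.
taps = [0.25, 0.5, 0.25]
F = [[taps[j - i + 1] if 0 <= j - i + 1 < 3 else 0.0 for j in range(4)]
     for i in range(4)]
x = [[1.0], [3.0], [5.0], [7.0]]
y = matmul(F, x)
print([row[0] for row in y])  # → [1.25, 3.0, 5.0, 4.75]
```

The interior outputs (3.0, 5.0) are the blurred pixels; the edge values differ only because this tiny sketch doesn't pad the borders.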
As an engine maker, Epic is incentivized to spend more time attempting to create a generalized solution than a hand-tuned, customized solution for a single scene or implementation (demo or game). They would be doing a disservice to their customers (the other developers using the engine) if they instead showed a hand-tuned, customized solution in their demo that wouldn't be representative of what another developer would get from the generalized version that UE ships.
In that respect DLSS, as an ML-assisted generalized solution, is certainly better than a non-ML-assisted generalized solution.
Regards,
SB
> That is like claiming handwriting is better than letterpress because it would be unfair to the authors to do their work. Most developers don't care about upscaling. They want an all-round solution. So Epic has every incentive to provide best-in-class upscaling.

Yes, it is like claiming handwriting is better than letterpress. And both claims are true, too: the highest-end stuff is handmade, the good-enough stuff is mass-produced. Only a paid publisher or developer will optimize outdated tech for their product. It's the same reason ray tracing exists: it democratizes best-in-class rendering and graphics.