AMD FSR antialiasing discussion

Watched the video on a full monitor. I would say the TPU write-up is pretty accurate in this example. Should this quality level be maintained across the breadth of rendering scenarios, DLSS would no longer be necessary. It remains to be seen how this fares in tougher scenes, though. This game is going to be one of the easier test cases.
 
I think it's good enough for the masses, which is all it needs to be. If most people deem it good enough to use and they use it, then Nvidia's got a problem on their hands. It'll be just like G-Sync vs. FreeSync: G-Sync was better, but most people purchased FreeSync monitors because they were good enough and much cheaper.

It won't be the case here. Every Nvidia card from now on, including the cheapest, will support DLSS, and AMD cards are far from being much cheaper or of interest to people.

https://www.tomshardware.com/uk/news/newegg-best-sellers-dominated-by-nvidia

FSR 1, which was supposed to be this easy-to-implement feature that would run on anything, lost all steam quickly after launch, and the harder-to-implement DLSS is in something like three times more games.
 
So much for "AI making the impossible possible". Performance is practically equal without any need for matrix-crunching hardware.
I assume Nvidia knew how to do it without Tensor cores, but made the decision to charge fanboys extra bucks for the AI.
 
I assume Nvidia knew how to do it without Tensor cores, but made the decision to charge fanboys extra bucks for the AI.
Or maybe it was a Trojan horse to make ML stuff more entrenched in Nvidia's proprietary tech. OpenCL was getting traction and things were becoming less and less CUDA-exclusive; then Tensor cores came along and it's all Tensor nowadays.
 
If we look at it from an "information perspective", the reason the newly introduced temporal upscaling method is largely comparable to prior state-of-the-art solutions comes down solely to the fact that they use virtually the same amount of information!

There's no fundamental advantage in whether or not a given black box chooses to use neural networks when given the same amount of information ...
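To make that concrete, here is a deliberately simplified toy of the loop that DLSS, TAAU and FSR 2.0 all share. This is a sketch only; every name, the 2x factor and the clamp-based rectification are my own illustration, not anyone's SDK. The point is that the inputs (jittered low-res color, motion vectors, accumulated history) are the same in every case:

```python
import numpy as np

def temporal_resolve(color_lr, motion, history, blend=0.1):
    """One output frame of a generic 2x temporal upscaler (toy version).

    color_lr: (H/2, W/2) jittered low-res frame
    motion:   (H, W, 2)  per-pixel motion vectors in output-pixel units
    history:  (H, W)     previously accumulated high-res frame
    """
    h, w = history.shape
    out = np.empty_like(history)
    for y in range(h):
        for x in range(w):
            # 1. Reproject: where was this output pixel last frame?
            py = int(np.clip(y - motion[y, x, 0], 0, h - 1))
            px = int(np.clip(x - motion[y, x, 1], 0, w - 1))
            hist = history[py, px]
            # 2. Rectify the history against the current low-res
            #    neighborhood. This step is the black box in question:
            #    a neural network in DLSS, hand-written heuristics in
            #    FSR 2.0 / TAAU. A plain clamp stands in for it here.
            ly, lx = y // 2, x // 2
            nb = color_lr[max(ly - 1, 0):ly + 2, max(lx - 1, 0):lx + 2]
            hist = np.clip(hist, nb.min(), nb.max())
            # 3. Blend the new jittered sample into the history.
            out[y, x] = (1.0 - blend) * hist + blend * color_lr[ly, lx]
    return out
```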
 
If we look at it from an "information perspective", the reason the newly introduced temporal upscaling method is largely comparable to prior state-of-the-art solutions comes down solely to the fact that they use virtually the same amount of information!

There's no fundamental advantage in whether or not a given black box chooses to use neural networks when given the same amount of information ...
Maybe because this nonlinear black box can estimate better than a hand-written algorithm, if the algorithm is too simple?
 
I assume Nvidia knew how to do it without Tensor cores, but made the decision to charge fanboys extra bucks for the AI.

Nvidia isn't the only company with expertise in image processing. If it were "easy" to upscale images with high quality, everyone would be doing it. There's no reason to think Nvidia had some magic algorithm but kept it hidden because they're evil capitalists.
 
I assume Nvidia knew how to do it without Tensor cores, but made the decision to charge fanboys extra bucks for the AI.
Of course it knew how to do TAAU, and did so in Control in 2019, for example. The difference ML makes was explained back then in 2019 and highlighted once again in Control in 2020.
The innovation of FSR 2.0 is the heuristic that locks thin geometry features, such as wires. It doesn't work all the time even in the TechPowerUp video; you can clearly see how it breaks on the tractor's tracks in motion, which means the heuristic is more fragile than DLSS, which doesn't fail in this case.
I wonder how it would work in more complex scenes with particles or foliage, or a vent system such as the one in Nvidia's Control video. It took two years to come up with the thin-feature locking heuristic, and there is still lossy color clamping in use, which will likely erode particles or moving foliage the way TAA usually does.
And somebody is asking why AI is used. Because there is no sense in spending two years of programmers' time to partially solve a single issue, though a partially solved problem might certainly look OK to people who don't pay attention to details or don't know where to look.
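To illustrate the clamping point, here is a toy example (values invented) of how a naive neighborhood clamp erodes a thin feature that the current jittered frame happens to miss, which is exactly what a thin-feature "lock" heuristic tries to prevent:

```python
import numpy as np

# A one-pixel bright "wire" was resolved into the history, but this
# frame's jitter missed it, so the current 3x3 low-res neighborhood is
# all background. A naive clamp crushes the wire back to the background.
background, wire = 0.1, 1.0
history_sample = wire                       # wire survives in the history
neighborhood = np.full((3, 3), background)  # jitter missed it this frame

clamped = np.clip(history_sample, neighborhood.min(), neighborhood.max())
print(clamped)  # 0.1 -- the wire is gone
```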
 
Lol what? One upscaling algorithm in one game means humans can write code equivalent to ML inference? Please inform all the companies spending billions on ML research.

Indeed. The TechPowerUp article completely misses the point; it reads more like a rant (even on ray tracing) than a professional review/analysis. FSR 2.0 is kind of like TAAU, or the (better) custom solutions console developers have come up with, like Insomniac's for example. FSR 2.0 isn't a DLSS killer; it's merely complementing it in the PC space.
They are completely different technologies altogether: one is AI/ML, the other is not. Besides, FSR 2.0 isn't beating or even matching DLSS in AMD's own Deathloop to begin with.

The Digital Foundry analysis will be much more interesting down the line. Also of note: the anti-IHV and CEO attacks going on are quite childish. As a PC gamer, I think FSR 2.0 entering the platform is a very, very good thing. I assume that AMD will also provide its own (accelerated) AI/ML solution in the future; hand-coded stuff like FSR forever? Probably not.

Also, for those who were opposed to DLSS due to blur and all, why would FSR 2.0 be any good for them?
 
Obviously there needs to be a lot more significant testing and more titles to sample from, but I too expected more of a performance benefit for DLSS; instead the performance is practically the same. Initial results seem rather impressive for AMD.
 
In the review, were there any performance comparisons between AMD and Nvidia GPUs without using FSR/DLSS?
The game is likely optimized to favor one brand, and it should be interesting once we look at other game developers/engines.

Since FSR 2.0 isn't using ML, does the algorithm need to be manually optimized for the differences in each game/game engine?
It seems like it will take much longer to implement (vs. DLSS), despite AMD's statement of a minimum of 3 days and up to 4-5 weeks.
 
In the review, were there any performance comparisons between AMD and Nvidia GPUs without using FSR/DLSS?

The game is likely optimized to favor one brand, and it should be interesting once we look at other game developers/engines.
Why would there have been? There were no AMD cards involved in the review; it was an RTX 3060 running both DLSS and FSR 2.0.
Since FSR 2.0 isn't using ML, does the algorithm need to be manually optimized for the differences in each game/game engine?
I don't see why it would be needed any more than with ML. ML doesn't "optimize DLSS" for each game.
It seems like it will take much longer to implement (vs. DLSS), despite AMD's statement of a minimum of 3 days and up to 4-5 weeks.
How so?
 
There have been some interesting takes based on this single game.
Anyway, I fully expected any half-decent TAAU to do a good job going from 1440p to 4K.
So I'm happy that it's off to a good start. Competition is good and will only make everyone raise the bar quicker.

What I'm really interested in is seeing how it compares at lower base and output resolutions.
So the performance and balanced modes at 1080p and 1440p.
Also more performance and balanced examples at 4K.
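For reference, a quick sketch of the internal render resolutions implied by each mode, using AMD's published FSR 2.0 per-axis scale factors (the code itself is just illustrative). Note that quality mode at 4K output is exactly the 1440p-to-4K case in the reviews:

```python
# FSR 2.0 per-axis scale factors per AMD's documentation.
modes = {"Quality": 1.5, "Balanced": 1.7,
         "Performance": 2.0, "Ultra Performance": 3.0}
outputs = [(1920, 1080), (2560, 1440), (3840, 2160)]

for name, s in modes.items():
    for w, h in outputs:
        # Internal resolution the GPU actually renders before upscaling.
        print(f"{name:18s} {w}x{h} -> {round(w / s)}x{round(h / s)}")
```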
 
I don't see why it would be needed any more than with ML. ML doesn't "optimize DLSS" for each game.

What are all those different DLL versions about then?

Anyway, I fully expected any half-decent TAAU to do a good job going from 1440p to 4K.
So I'm happy that it's off to a good start. Competition is good and will only make everyone raise the bar quicker.

Right, if FSR 2.0 is basically TAAU, the results aren't surprising given the high quality we've seen from TAAU already. It remains to be seen how well it holds up across a variety of games and art styles.
 
What are all those different DLL versions about then?
Different versions of DLSS, not game-specific. You can actually switch to a different DLSS version in several 2.x games just by replacing the DLL with a different one, but a given version behaves the same no matter what game it ships with.
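For anyone who hasn't tried it, the swap looks roughly like this. The game path and downloaded file are made-up examples; nvngx_dlss.dll is the library name DLSS games ship with:

```python
import shutil
from pathlib import Path

# Hypothetical paths: point these at the real game folder and the
# DLSS 2.x build you want to try.
game_dll = Path(r"C:\Games\SomeGame\nvngx_dlss.dll")
newer_dll = Path(r"C:\Downloads\nvngx_dlss.dll")

shutil.copy2(game_dll, game_dll.with_name(game_dll.name + ".bak"))  # back up the original
shutil.copy2(newer_dll, game_dll)                                   # swap in the other version
```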
 
Different versions of DLSS, not game-specific. You can actually switch to a different DLSS version in several 2.x games just by replacing the DLL with a different one, but a given version behaves the same no matter what game it ships with.

Yes, but how do you know the trained ML model in each DLL wasn't optimized for a particular game?
 