AMD FSR antialiasing discussion

  • Thread starter Deleted member 90741
You know, if this is all the AI does, then Alex also needs to watch that presentation, because he thinks machine learning means the AI thinking: "oh, this is a wire, so it needs to be a full line..." :)
He was talking about TAAU, not DLSS, and the point is that it is able to reconstruct the line due to the jitter introduced by the sampling process, not because some "AI" is guessing what a line should look like.

Edit: You linked to an earlier video where he was talking about DLSS 1.0, but he explains his reasoning in the latest video, in relation to TAAU.
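The jitter point can be made concrete with a toy example: sample a scanline at half resolution with a different subpixel offset each frame, and the accumulated history recovers a one-pixel "wire" that a static low-resolution sample grid misses entirely. A minimal numpy sketch (purely illustrative, not how any shipping TAAU is implemented):

```python
import numpy as np

# Ground truth: a 16-pixel scanline with a one-pixel "wire" at index 5.
truth = np.zeros(16)
truth[5] = 1.0

def sample_half_res(offset):
    """Take 8 samples across the scanline, shifted by a subpixel jitter
    offset (nearest-neighbour lookup keeps the toy example simple)."""
    idx = np.clip(np.round(np.arange(0, 16, 2) + offset).astype(int), 0, 15)
    return idx, truth[idx]

# Without jitter the wire is simply missed: index 5 is never sampled.
_, static = sample_half_res(0.0)
print("static frame sees wire:", static.max() > 0)    # False

# With a cycling subpixel offset, accumulate into a full-res history buffer.
history = np.zeros(16)
weight = np.zeros(16)
for frame in range(4):
    idx, samples = sample_half_res(frame * 0.5)
    np.add.at(history, idx, samples)
    np.add.at(weight, idx, 1.0)
recon = history / np.maximum(weight, 1)
print("jittered history sees wire:", recon[5] > 0)    # True
```

No neural network is involved: the detail comes back purely because jitter makes the sample grid cover different positions over time.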
 
He was talking about TAAU, not DLSS, and the point is that it is able to reconstruct the line due to the jitter introduced by the sampling process, not because some "AI" is guessing what a line should look like.

Edit: You linked to an earlier part where he was talking about DLSS 1.0, but he explains his reasoning later, in relation to TAAU.
"DLSS 1.0 was using machine learning to get a better understanding of oh this is a wire so it needs to be a full line..."
What reasoning? Maybe that changed with DLSS 2.0 and the advances in AI. In DLSS 1.0 it was able to understand when a wire was displayed, but now it has got smarter, and all it does is "more intelligently combine samples taken over multiple frames, by throwing away less data...".
 
"DLSS 1.0 was using machine learning to get a better understanding of oh this is a wire so it needs to be a full line..."
What reasoning? Maybe that changed with DLSS 2.0 and the advances in AI. In DLSS 1.0 it was able to understand when a wire was displayed, but now it has got smarter, and all it does is "more intelligently combine samples taken over multiple frames, by throwing away less data...".
Listening to that clip again, he indeed seemed to be talking about DLSS 1.0's ability to "guess" missing detail, which isn't present in DLSS 2.0, as Nvidia makes clear in that presentation. As they describe, the problem with the former approach is that it can "hallucinate" detail that isn't there in the native image.
 
Listening to that clip again, he indeed seemed to be talking about DLSS 1.0's ability to "guess" missing detail, which isn't present in DLSS 2.0, as Nvidia makes clear in that presentation. As they describe, the problem with the former approach is that it can "hallucinate" detail that isn't there in the native image.

I was sure DLSS 2.0 was still an AI upscaler? I remember seeing a highly upscaled Control video using it, and it having a bunch of false noise detail. Or is it just using AI to guide history rejection? In which case it really isn't any different from TAAU except in how it utilizes temporal data.

Well it'd be nice if Nvidia specified at all what they're doing; but at least I guess they revealed that much.
 
I was sure DLSS 2.0 was still an AI upscaler? I remember seeing a highly upscaled Control video using it, and it having a bunch of false noise detail. Or is it just using AI to guide history rejection? In which case it really isn't any different from TAAU except in how it utilizes temporal data.

Well it'd be nice if Nvidia specified at all what they're doing; but at least I guess they revealed that much.

I believe DLSS 2.0 is essentially that: it is trained rather than relying on hand-tuned heuristics for upscaling, anti-aliasing and sharpening. The inference model takes in colour, motion vectors, depth and, I think, luminance. Maybe something else I'm forgetting.
 
You tell me, the bottom of that post you just quoted has comparison images.

Please point out with specifics which areas you think look better or worse and why.

Textures in FSR shots look sharper for reasons well documented in this thread. The brick wall in FSR at Quality/Ultra looks pretty good in that particular shot. If nothing else, FSR is a great advertisement for sprinkling a little sharpening, even at native resolution, in some games.

There are lots of other cases where sharpening just looks terrible because it emphasizes artifacts: specular aliasing, grainy reflections, hair, etc. Presumably developers will be able to tweak the amount of sharpening FSR applies in their specific game.
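The artifact-amplification point is easy to demonstrate with a plain unsharp mask (a crude stand-in for FSR's RCAS pass, which is adaptive and more careful): sharpening raises the contrast of whatever local differences it finds, including a lone noisy pixel, and adds overshoot around it.

```python
import numpy as np

def sharpen(signal, amount=1.0):
    """Unsharp mask: add back the difference from a 3-tap box blur.
    (FSR's real RCAS pass is adaptive; this shows only the core idea.)"""
    blurred = np.convolve(signal, np.ones(3) / 3, mode="same")
    return signal + amount * (signal - blurred)

# A single noisy speckle on a flat background.
noise = np.array([0.0, 0.0, 0.3, 0.0, 0.0, 0.0])

sharp = sharpen(noise)
print(noise.max(), "->", sharp.max())   # the speckle's peak grows
print("overshoot:", sharp.min() < 0)    # ringing appears around it
```

The same operator applied to a clean edge looks "crisper"; applied to grain or specular shimmer it just makes the noise louder, which is why a per-game sharpening amount matters.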
 
Well it'd be nice if Nvidia specified at all what they're doing; but at least I guess they revealed that much.
I guess they would prefer to keep the storyline about that special bit of AI/DL/ML magic, which fits their narrative of tensor cores being useful for gamers so well.
 
I was sure DLSS 2.0 was still an AI upscaler? I remember seeing a highly upscaled Control video using it, and it having a bunch of false noise detail. Or is it just using AI to guess history rejection? In which case it really isn't any different from TAAU except via the methods it's utilizing temporal data.

Well it'd be nice if Nvidia specified at all what they're doing; but at least I guess they revealed that much.
You are right, it sounds like an enhanced TAAU, and it makes you wonder whether there is any reason to "train" the neural network on all kinds of images. It would make more sense if it were based on inference, if it "knew" how a wire looks and was trying to reconstruct it from very little input data.
 
You are right, it sounds like an enhanced TAAU, and it makes you wonder whether there is any reason to "train" the neural network on all kinds of images. It would make more sense if it were based on inference, if it "knew" how a wire looks and was trying to reconstruct it from very little input data.

Training DLSS to identify specific higher-order objects like wires actually doesn’t make sense at all. It’s simply trying to guess the right color for each pixel, which is a much more fundamental problem that includes wires and any other shape you can imagine. It could very well be the case that there is a combination of weights and nodes in the DLSS network that implicitly corresponds to “this pixel is part of a line” given the history buffer, depth buffer and surrounding pixel neighborhood.

TAAU (and FSR) also tries to guess the right color for each pixel. The only difference is that it’s hand-coded and by definition limited in the practical number of scenarios it can handle optimally. If there were a truly generic and concise algorithm that worked for every pixel in every frame of every game, then hand-coding a bunch of if-then-else statements would be just fine.

The whole point of ML is to iteratively and automatically develop a solution where there is no such hand coded algorithm that works optimally for every pixel in every game (and is fast enough for real time).
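For reference, the kind of hand-coded heuristic being contrasted here is roughly a neighbourhood-clamp history blend. The sketch below is a simplified illustration of that idea, not any shipping TAA implementation:

```python
import numpy as np

def taa_resolve(current, history, alpha=0.1):
    """One hand-tuned TAA step: clamp the reprojected history to the
    current frame's local neighbourhood (a crude history-rejection
    heuristic), then blend. Real resolvers add motion vectors,
    variance clipping, etc.; this is only the if-then-else skeleton."""
    lo = np.minimum.reduce([np.roll(current, 1), current, np.roll(current, -1)])
    hi = np.maximum.reduce([np.roll(current, 1), current, np.roll(current, -1)])
    clamped = np.clip(history, lo, hi)           # reject out-of-range history
    return alpha * current + (1 - alpha) * clamped

current = np.array([0.0, 0.0, 1.0, 0.0, 0.0])    # feature at index 2
history = np.array([0.0, 0.0, 1.0, 0.0, 0.8])    # stale ghost at index 4
resolved = taa_resolve(current, history)
print(resolved)   # the ghost is clamped away; the stable feature survives
```

A learned resolver would replace the fixed clamp-and-blend rule with weights tuned over many images; the per-pixel inputs and the goal are the same.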
 
Somebody claimed to have used a precompiled FSR shader from another game to make an FSR mod for GTA V.

sources:
https://github.com/NarutoUA/gta5_fsr
Looks quite nice, just as expected.
I wonder what AA he used; there was still quite a bit of aliasing left, so perhaps not TXAA.

Will have to test someday; I might have GTA V installed.
If the slider goes really low, this might be very interesting:
use TXAA x4, the 25% scaler and a full 200% DSR.
That should give a decently temporally stable image, blurred, upscaled by FSR and then downsampled by DSR to get back to native resolution.

Might be a nice soft look. (Perhaps with a CRT shader on top.)
 
Looks quite nice, just as expected.
I wonder what AA he used; there was still quite a bit of aliasing left, so perhaps not TXAA.

Will have to test someday; I might have GTA V installed.
If the slider goes really low, this might be very interesting:
use TXAA x4, the 25% scaler and a full 200% DSR.
That should give a decently temporally stable image, blurred, upscaled by FSR and then downsampled by DSR to get back to native resolution.

Might be a nice soft look. (Perhaps with a CRT shader on top.)
TXAA in this game is broken. It looks nearly indistinguishable from MSAA+FXAA.
 

https://store.steampowered.com/news/app/700600/view/3021333905703833211

Were there any tests of FSR in Evil Genius 2 anywhere?
 
Can GTA mods inject shaders into the rendering path prior to post-processing and the HUD?
 
Somebody claimed to have used a precompiled FSR shader from another game to make an FSR mod for GTA V.

sources:
https://github.com/NarutoUA/gta5_fsr
That guy is a god! He also made the best possible comparison ever, by uploading the images to a proper comparison site (framerate counter included!).

Screengrabs of the comparisons he uploaded; it's the best visualisation system I've seen yet:

Original upscaler VS FSR upscaler: https://screenshotcomparison.com/comparison/15394
Native VS FSR #1: https://screenshotcomparison.com/comparison/15427
Native VS FSR #2: https://screenshotcomparison.com/comparison/15428
 