AMD FSR antialiasing discussion

I don't know if it's narratively that simple from a public perception standpoint, even if it's coming from what you would consider a vocal minority. There's actually quite a lot of sentiment hoping that DLSS does get displaced; you've probably even seen some major publications trying to allude to this via FreeSync (really DisplayPort Adaptive-Sync) and G-Sync analogies.

Hypothetically, for instance, I can see considerably less pushback with a game choosing to implement only FSR2 (or XeSS, but we still need to see) and not DLSS than with the reverse scenario, even though that scenario is actually not optimal for people who have RTX graphics cards.

Ideally for consumers going forward, the best scenario would be games supporting DLSS, FSR2 and XeSS. We'll have to see with XeSS, but I still feel the likely outcome is that each method favors the users of its respective vendor (aside from the non-RTX scenario for Nvidia users). The Nvidia Streamline announcement, with Intel agreeing to onboard, did not seem to make as much of a splash in the news cycle. Pushing AMD to also onboard might be the best scenario for everyone going forward.
 
Hypothetically, for instance, I can see considerably less pushback with a game choosing to implement only FSR2 (or XeSS, but we still need to see) and not DLSS than with the reverse scenario, even though that scenario is actually not optimal for people who have RTX graphics cards.

That would only be true for games without ray tracing. A game with ray tracing ignoring DLSS doesn't make any sense because >90% of the customer base on PC has Nvidia.
 
There is likely less pushback with an Unreal game choosing to implement TSR and not FSR2 since they are essentially the same technique.
 
There is likely less pushback with an Unreal game choosing to implement TSR and not FSR2 since they are essentially the same technique.
This one will be interesting to compare actually. Both are essentially similar in what they do but it is possible that performance and/or quality will be different.
 
There is likely less pushback with an Unreal game choosing to implement TSR and not FSR2 since they are essentially the same technique.
One could argue DLSS is just as much the same technique, and XeSS too. The only real difference is how you pick and weight your samples: a pretrained neural net versus a predefined algorithm.
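To put it concretely, here's a rough per-pixel sketch of that shared skeleton (reproject history, rectify it against the current neighbourhood, blend with this frame's jittered sample). The names and the clamp-based rejection are purely illustrative, not any vendor's actual code; the part the techniques genuinely differ on is how the confidence/blend weight gets computed - hand-tuned heuristics in TSR/FSR2 versus a trained network in DLSS/XeSS:

```python
import numpy as np

def temporal_upscale_pixel(current_sample, history_color,
                           neighborhood_min, neighborhood_max,
                           confidence_weight):
    """One pixel of a generic TAA-style temporal accumulator (illustrative only).

    current_sample    : this frame's jittered low-res sample, mapped to the output pixel
    history_color     : reprojected color from the accumulated history buffer
    neighborhood_*    : color bounds of the current frame's local neighborhood,
                        used here for simple clamp-based history rejection
    confidence_weight : 0..1 -- how much to trust history; this is where the
                        techniques actually differ (heuristics vs. a network)
    """
    # Rectify stale history against the current neighborhood to limit ghosting.
    clamped_history = np.clip(history_color, neighborhood_min, neighborhood_max)
    # Exponential accumulation: lerp between the new sample and the history.
    return (1.0 - confidence_weight) * current_sample + confidence_weight * clamped_history

# Toy usage with made-up RGB values:
out = temporal_upscale_pixel(np.array([0.9, 0.2, 0.2]),   # current sample
                             np.array([0.5, 0.5, 0.5]),   # reprojected history
                             np.array([0.4, 0.1, 0.1]),   # neighborhood min
                             np.array([1.0, 0.4, 0.4]),   # neighborhood max
                             confidence_weight=0.9)
print(out)
```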
 
This didn't happen with the previous "DLSS killer" in the form of FSR1, and I'll be surprised if it does with FSR2, as adding DLSS or FSR2 to a game which already supports one of them should be a very easy task.
Well, with FSR 1.0 the gap was far wider in actual output, but yeah, it's more of an "I think DLSS is still superior enough to warrant its inclusion" comment in response to some of the "FSR 2.0 now makes DLSS irrelevant" takes I've seen around. I'm just happy there's now some competition in this space that is at least comparable.
 
Also, praise (again, very early, so we'll see if it holds up) for Deathloop's dynamic res with DLSS/FSR, btw. I hope this becomes commonplace. I appreciate dynamic res in general when it's not too obvious, and as such I'm disappointed when a PC version of a game doesn't have it, or just has a poor implementation that doesn't scale well. With DLSS/FSR it works exceptionally well here.

Couldn't agree more with this. I wish every game had this option (as long as it works well).
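For what it's worth, the reason DRS and these upscalers combine so nicely is that the upscaler's output resolution never changes - DRS only varies the internal render resolution that gets fed into the reconstruction. A minimal sketch of that per-frame flow, with fake_render/fake_reconstruct as obvious stand-ins rather than any real engine or DLSS/FSR2 API:

```python
import numpy as np

def fake_render(width, height):
    # Stand-in for the game's renderer: produce an image at the internal resolution.
    return np.zeros((height, width, 3), dtype=np.float32)

def fake_reconstruct(image, out_width, out_height):
    # Stand-in for DLSS/FSR2: nearest-neighbour resize to the fixed output size.
    # (The real thing also consumes motion vectors, depth and a history buffer.)
    ys = np.arange(out_height) * image.shape[0] // out_height
    xs = np.arange(out_width) * image.shape[1] // out_width
    return image[ys][:, xs]

OUT_W, OUT_H = 3840, 2160                     # output resolution never changes
for render_scale in (1.0, 0.8, 0.6):          # DRS varies only the internal res
    frame = fake_render(int(OUT_W * render_scale), int(OUT_H * render_scale))
    final = fake_reconstruct(frame, OUT_W, OUT_H)
    print(frame.shape[:2], "->", final.shape[:2])
```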
 
The initial reactions asking “why do we need AI” were obviously off the mark. FSR 2.0 is a big improvement over FSR 1.0 but there are clear weak points especially in performance mode. And that’s just in one game.

The problem with using AI to improve image reconstruction is that it still won't help in benchmark comparisons. Spending hardware on a technique that has virtually no hope of ever being standardized isn't sound to most architects, and the proposition of better upscaling becomes less compelling over time as review standards keep changing to test against higher resolutions/settings whenever new hardware releases. Pretty soon, most upcoming graphics solutions outside of integrated will be well capable of consistently delivering good experiences at higher resolutions ...
 
The problem with using AI to improve image reconstruction is that it still won't help in benchmark comparisons. Spending hardware on a technique that has virtually no hope of ever being standardized isn't sound to most architects, and the proposition of better upscaling becomes less compelling over time as review standards keep changing to test against higher resolutions/settings whenever new hardware releases. Pretty soon, most upcoming graphics solutions outside of integrated will be well capable of consistently delivering good experiences at higher resolutions ...
Not sure that makes sense. It's software like anything else running on top of hardware; if your neural network performance improves on the software side, it runs better for all hardware as well. I'm not seeing any difference here over other non-ML-based techniques.
I don't see a need for a standard deep learning upscaling algorithm, but the cost to develop one is significant, so we won't see many. One thing is certain: unless neural networks change, they will run on tensors.
 
Is that in terms of you preferring the frame rate to vary rather than the resolution?
Mostly in the sense that on a 240 Hz display (as an example), hitting the fps limit in a modern game would be nearly impossible, which would in turn mean that DRS wouldn't know when to engage and would most likely just run at the lowest end of the range set by the developers.
In other words, I feel that DRS makes sense only when you lock to some framerate, which is probably not what you would want to do on a 120+ Hz display.
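To illustrate: a typical DRS heuristic nudges the render scale so that measured GPU frame time tracks the target frame time. Chasing an uncapped 240 Hz target (~4.2 ms) that a modern game can't reach, it just pins at the minimum scale. Toy sketch with made-up numbers, not any specific engine's controller:

```python
def drs_step(gpu_frame_time_ms, target_fps, scale,
             scale_min=0.5, scale_max=1.0, gain=0.1):
    """Toy dynamic-resolution controller: nudge the render scale so that GPU
    frame time tracks the target. Not any particular engine's implementation."""
    target_ms = 1000.0 / target_fps
    error = (target_ms - gpu_frame_time_ms) / target_ms   # > 0 means headroom
    return min(scale_max, max(scale_min, scale + gain * error))

# A game that takes ~10 ms/frame even when scaled down, chasing a 240 Hz target:
scale = 1.0
for _ in range(20):
    scale = drs_step(gpu_frame_time_ms=10.0, target_fps=240, scale=scale)
print(scale)   # 0.5 -- pinned at the developer-set floor, as described above
```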
 
The problem with using AI to improve image reconstruction is that it still won't help in benchmark comparisons. Spending hardware on a technique that has virtually no hope of ever being standardized isn't sound to most architects, and the proposition of better upscaling becomes less compelling over time as review standards keep changing to test against higher resolutions/settings whenever new hardware releases. Pretty soon, most upcoming graphics solutions outside of integrated will be well capable of consistently delivering good experiences at higher resolutions ...

That seems counterintuitive. Given the emergence of more taxing rendering methods (Lumen, Nanite, RT), upscaling should see more usage in practice, not less. I fully expect Ada and Navi 3x to rely on upscaling in upcoming games. The 6900 XT gets under 20 fps at 4K in Dying Light 2; I don't see how the 7900 XT will get comfortably above 60 fps without upscaling.

And yes reviewers won't (and shouldn't) treat IHV sponsored upscaling as a standard setting. There are just too many variables.
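Rough back-of-the-envelope on that last point, assuming GPU-bound cost scales with shaded pixel count (which ignores fixed per-frame costs, RT/BVH work and reconstruction overhead, so treat the numbers as purely illustrative):

```python
native_fps_4k = 18                       # illustrative "under 20 fps at native 4K"
target_fps = 60

speedup_native = target_fps / native_fps_4k
print(f"speedup needed at native 4K: {speedup_native:.1f}x")          # ~3.3x

# A "Quality"-style temporal upscale renders at 1.5x lower res per axis,
# i.e. roughly 1/2.25 of the pixels, before reconstruction overhead.
pixel_fraction = 1 / 1.5 ** 2
print(f"speedup needed with a 1440p internal res: "
      f"{speedup_native * pixel_fraction:.1f}x")                      # ~1.5x
```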
 
I'm fairly sure the only people that called FSR1 a "DLSS killer" were a few AMD fanboys and "journalists" with clickbait headlines. No one with a brain thought that an image scaler would come close to DLSS.
I've ventured into r/amd on occasion, sweeet jesus
 
That seems counterintuitive. Given the emergence of more taxing rendering methods (Lumen, Nanite, RT), upscaling should see more usage in practice, not less. I fully expect Ada and Navi 3x to rely on upscaling in upcoming games. The 6900 XT gets under 20 fps at 4K in Dying Light 2; I don't see how the 7900 XT will get comfortably above 60 fps without upscaling.

And yes reviewers won't (and shouldn't) treat IHV sponsored upscaling as a standard setting. There are just too many variables.

Yes, I don't get how "upscaling will be unnecessary" when one of the main reasons we have these technologies now is that, due to the cost of process shrinks, we're rapidly seeing diminishing returns with every new generation over the previous. This isn't 10 years ago, when the new card was 80-100% faster every 2 years. The semiconductor industry in general is moving towards more bespoke solutions to deal with this.
 
Mostly in the sense that on a 240 Hz display (as an example), hitting the fps limit in a modern game would be nearly impossible, which would in turn mean that DRS wouldn't know when to engage and would most likely just run at the lowest end of the range set by the developers.
In other words, I feel that DRS makes sense only when you lock to some framerate, which is probably not what you would want to do on a 120+ Hz display.

That's an interesting point of view. Personally I prefer to target a lower framerate - 60 fps being the ideal - and then maximise graphics at that framerate. So if I can use DRS to hit my target framerate (it's usually configurable) relatively consistently, then I'm all for it as long as the visual hit isn't significant, which with DLSS I'd expect to be barely noticeable. Then VRR can just clean up the occasional frame drops that DRS doesn't catch.
 
I tend to think that DRS is almost useless on PCs with adaptive sync displays.

Not everyone wants to use VRR. At least with my current 60 FPS setup, the random, jelly-like feel VRR gives to input-to-display latency is unpleasant for me in games. Variable latency in controls is extremely unpleasant outside of perhaps a 59-60 FPS range, or at the extreme maybe a 58-60 FPS range. At that point there's no reason not to just go for a locked 60. So for me, DRS is infinitely preferable to VRR, which makes VRR almost completely useless for me. However, to be fair, DRS is only usable for me if the transition between resolution levels is imperceptible.

That's where something like FSR 2.0 or DLSS 2.x can maybe come in handy. In those rare situations where it might need to kick in due to the IQ settings I choose I may not notice the artifacts that both FSR 2.0 and DLSS 2.x introduce into a game. Of course this would only work for me if I can set either FSR 2.0 or DLSS 2.x to kick in when resolution drops and NEVER at any other time.

Obviously not everyone is as affected by or as sensitive to variability in the input-to-display feedback loop, so VRR is fine for them in those conditions.

Once I can get hold of an HDMI 2.1 GPU, I'll retest to see if VRR is more palatable. Perhaps a 110-120 FPS range wouldn't feel too horrible in games. I'm rather doubtful, but I'll be trying it just to see.

Regards,
SB
 
Not everyone wants to use VRR. At least with my current 60 FPS setup, the random, jelly-like feel VRR gives to input-to-display latency is unpleasant for me in games. Variable latency in controls is extremely unpleasant outside of perhaps a 59-60 FPS range, or at the extreme maybe a 58-60 FPS range. At that point there's no reason not to just go for a locked 60. So for me, DRS is infinitely preferable to VRR, which makes VRR almost completely useless for me.

Obviously not everyone is as affected by or as sensitive to variability in the input-to-display feedback loop, so VRR is fine for them in those conditions.

Once I can get hold of an HDMI 2.1 GPU, I'll retest to see if VRR is more palatable. Perhaps a 110-120 FPS range wouldn't feel too horrible in games. I'm rather doubtful, but I'll be trying it just to see.

Regards,
SB

I agree with this. VRR is useful when your game is hovering between 50-ish and 60 fps; that's its intended use. But when you can maintain a locked 60 or 120 etc., which is possible in many titles, there's not much use for it. And indeed, a 110-120 fps range isn't as bad as a 50-60 fps swing.
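The frame-time math backs that up: the worst-case frame-to-frame latency swing across a VRR window shrinks dramatically at high refresh rates. Quick illustrative calculation, nothing more:

```python
def frame_time_swing_ms(fps_low, fps_high):
    """Worst-case frame-to-frame latency swing across a VRR range, in milliseconds."""
    return 1000.0 / fps_low - 1000.0 / fps_high

print(frame_time_swing_ms(50, 60))     # ~3.33 ms of swing across a 50-60 fps range
print(frame_time_swing_ms(110, 120))   # ~0.76 ms across 110-120 fps
print(frame_time_swing_ms(58, 60))     # ~0.57 ms -- roughly the range called tolerable above
```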
 