AMD FSR antialiasing discussion

Once I can get ahold of an HDMI 2.1 GPU, I'll retest to see if VRR is more palatable. Perhaps a 110-120 FPS range wouldn't feel too horrible in games. I'm rather doubtful, but I'll be trying it just to see.
Most people won't notice fps fluctuating above 90, so limiting a game to 120 seems like the wrong idea - unless you just need fully consistent input lag for some reason.
 
Most people won't notice fps fluctuating above 90, so limiting a game to 120 seems like the wrong idea - unless you just need fully consistent input lag for some reason.

Fluctuations of 1-2 FPS above 90 FPS likely wouldn't be noticeable for me. Fluctuations of 5-10 FPS above 90 FPS might be tolerable. Fluctuations of 10-30 FPS above 90 FPS would be absolutely horrible. That would feel basically the same as a 5-15 FPS fluctuation between 45 and 60 FPS.

I'd absolutely hate every single moment in a game that was doing that. It's the magnitude of the swings (say in a smallish 0.1 second window) combined with the overall range of the swings (say in a relatively larger 2 second window).

It's why I just stopped using VRR after using it for a couple of days at a variety of framerate ranges. I thought maybe I could get used to it, but I couldn't. It was such a relief when I just disabled it and the control feedback loop in games once again felt consistent and accurate. I didn't realize just how bad it was until I disabled it.

But as I said, I'll be doing that experiment again once I have an HDMI 2.1 GPU so I'm not limited to running at 60 FPS.

Regards,
SB
 
Fluctuations of 10-30 FPS above 90 FPS would be absolutely horrible.
I legitimately can't notice them on m+k in an FPS game. Don't know what's so horrible about them.

That would feel basically the same as a 5-15 FPS fluctuation between 45 and 60 FPS
Nothing even remotely close.
45-60 is a fluctuation of 5.5ms.
90-120 is a fluctuation of 2.8ms.
That's half the maximum frametime difference.
Adding that 2.8ms on top of a 60 fps frametime lands you at ~51.5 fps, and yes, that is also hard to notice with VRR active, although it's easier to spot on m+k due to the higher absolute input lag at such a low framerate.
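Just to make the arithmetic explicit, here's a throwaway Python sketch of those numbers (frametime_ms is only an illustrative helper, not anything from a real tool):

```python
# frametime in milliseconds for a given framerate
def frametime_ms(fps):
    return 1000.0 / fps

# fluctuation = difference between the slowest and fastest frame in each range
for low, high in [(45, 60), (90, 120)]:
    delta = frametime_ms(low) - frametime_ms(high)
    print(f"{low}-{high} fps: {frametime_ms(low):.2f} ms vs {frametime_ms(high):.2f} ms, "
          f"swing of {delta:.2f} ms")

# adding the ~2.78 ms swing on top of a 60 fps frame (16.67 ms) gives ~19.4 ms,
# i.e. roughly the ~51.5 fps mentioned above
print(f"{1000.0 / (frametime_ms(60) + 2.78):.1f} fps")
```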
 
I legitimately can't notice them on m+k in an FPS game. Don't know what's so horrible about them.

I never made any claims about how you feel about it or whether you would notice it. I'm not sure what that has to do with how I experience it?

Does the fact that I experience a fear of heights imply that you also experience a fear of heights? :p

Nothing even remotely close.
45-60 is a fluctuation of 5.5ms.
90-120 is a fluctuation of 2.8ms.
That's half the maximum frametime difference.
Adding that 2.8ms on top of a 60 fps frametime lands you at ~51.5 fps, and yes, that is also hard to notice with VRR active, although it's easier to spot on m+k due to the higher absolute input lag at such a low framerate.

Perhaps this will clear up any confusion on your end.

A 10-30 FPS fluctuation in a 90-120 FPS window would feel roughly the same as a 5-15 FPS fluctuation in a 45-60 FPS window.

[edit] OK, thinking about it more, that might not have clarified things, and looking back at my previous post, it definitely isn't entirely clear. My assumption is that whatever impacts the rendering enough to cause it to drop to X frames per second at 60 FPS would persist for a similar amount of time whether the game is running at 60 FPS or at 120 FPS.

Hence the assumption that if the game dropped to, say, 55 FPS with a 60 FPS cap, and I then adjusted settings so that it ran at 120 FPS, whatever part of the game causes that drop would persist for about 2 frames at ~110 FPS (2 consecutive frames rendered at 9.09 ms each), or something similar, say one at 9.17 ms and the next at 9.01 ms.

That assumption may not be entirely correct, which is why I plan to experiment with it again to see whether a significantly wider VRR window at 120 FPS, compared to the 1-2 FPS window at 60 FPS, would be tolerable.
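To spell that assumption out with numbers, here's a small sketch of the scenario above (nothing measured, just the scaling I'm assuming: the slowdown stretches frametimes by the same factor and lasts the same wall-clock time at either cap):

```python
# Assumption: a slowdown stretches frametimes by the same factor regardless of base
# framerate, and lasts roughly the same amount of wall-clock time in both cases.
base_60 = 1000 / 60      # 16.67 ms per frame at a 60 FPS cap
base_120 = 1000 / 120    # 8.33 ms per frame at a 120 FPS cap
slow_60 = 1000 / 55      # 18.18 ms frame while the game is dipping to 55 FPS

factor = slow_60 / base_60            # frames get ~1.09x longer during the slowdown
slow_120 = base_120 * factor          # ~9.09 ms, i.e. ~110 FPS
frames_affected = slow_60 / slow_120  # the ~18 ms window covers ~2 frames at 120 FPS

print(f"{slow_120:.2f} ms per frame (~{1000 / slow_120:.0f} FPS) "
      f"for about {frames_affected:.0f} frames")
```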

Regards,
SB
 
Not sure that makes sense. It's software like anything else running on top of hardware. If your neural network performance improves software-wise, it runs better on all hardware as well. I'm not seeing any difference here over other, non-ML-based techniques.
I don't see a need for a standard deep learning upscaling algorithm, but the cost to develop one is significant, so we won't see many. One thing is certain: unless neural networks change, they will run on tensors.

Most benchmarks leave out upscaling performance comparisons altogether and will commonly run higher resolutions and higher presets as much as possible ...

That seems counterintuitive. Given the emergence of more taxing rendering methods (Lumen, Nanite, RT), upscaling should see more usage in practice, not less. I fully expect Ada and Navi 3x to rely on upscaling in upcoming games. The 6900 XT gets under 20 fps at 4K in Dying Light 2. I don't see how the 7900 XT will get comfortably above 60 fps without upscaling.

And yes, reviewers won't (and shouldn't) treat IHV-sponsored upscaling as a standard setting. There are just too many variables.
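Rough napkin math behind that point (the 2.25x figure is just the ratio of pixel counts; real-world scaling with resolution is rarely perfectly linear):

```python
# getting from <20 fps to a comfortable 60+ fps needs more than a 3x speedup
required_speedup = 60 / 20

# rendering internally at 1440p and upscaling to 4K cuts the shaded pixel count by 2.25x
pixels_4k = 3840 * 2160
pixels_1440p = 2560 * 1440
pixel_ratio = pixels_4k / pixels_1440p

print(f"needed speedup: {required_speedup:.1f}x, "
      f"pixel saving from 1440p->4K upscaling: {pixel_ratio:.2f}x")
```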

Sure, you can point out the current worst-case scenario as an example, but it still doesn't really change my argument: the general trend we've been seeing is an increase in rendering resolution in benchmarks over the years, despite more complex rendering systems being introduced as well, and we're still seeing graphically high-end games featured in benchmarks that are released without advanced features like ray tracing ... (Nanite is still very much nonexistent in any real applications at this point)

Again, spending hardware on features that aren't going to be used in benchmarks won't help in comparisons against other vendors, especially in your example, where the game you pointed out is frequently criticized for its questionable integration of said technique. There's an argument to be had for improving ray tracing performance, but that shouldn't be conflated with an argument for improving upscaling techniques, because there are two entirely different bases behind their motivations. The former will have an actual, tangible effect on benchmarks, while the latter won't be seen in benchmarks, despite both being intended to improve performance ...
 
Sure, you can point out the current worst-case scenario as an example, but it still doesn't really change my argument: the general trend we've been seeing is an increase in rendering resolution in benchmarks over the years, despite more complex rendering systems being introduced as well, and we're still seeing graphically high-end games featured in benchmarks that are released without advanced features like ray tracing ... (Nanite is still very much nonexistent in any real applications at this point)

Today's worst-case scenario isn't going to get any better tomorrow. We're only at the tip of the iceberg for RT usage. DL2 has single-bounce GI and still uses SSR in some places. We have a lot of evidence now that native 4K rendering is extremely inefficient and often produces a worse result than the best upscaling techniques. As a developer, why would you continue to invest in an inferior approach when you can get prettier pixels and better performance going the upscaling route?

4K TAA isn’t the holy grail here.

Again, spending hardware on features that aren't going to be used in benchmarks won't help in comparisons against other vendors, especially in your example, where the game you pointed out is frequently criticized for its questionable integration of said technique. There's an argument to be had for improving ray tracing performance, but that shouldn't be conflated with an argument for improving upscaling techniques, because there are two entirely different bases behind their motivations. The former will have an actual, tangible effect on benchmarks, while the latter won't be seen in benchmarks, despite both being intended to improve performance ...

I think you’re giving far too much weight to benchmarks. Real users do more with their hardware than reviewers do. They actually use the added features that most reviewers don’t cover. Upscaling included.

Resolution isn't going to increase beyond 4K anytime soon. Rendering performance will be taxed by an increase in pixel quality, not pixel count. Even if increased resolution were the target, upscaling would still be a very valuable tool in the box.
 
VRR is also useful for allowing you to lock framerate at an arbitrary level rather than just 30/60fps.

Look how excited the console guys got over Ratchet and Clank's 40fps mode.

On PC with VRR that's possible at any framerate in every game.
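For anyone wondering why that 40fps mode needed a 120Hz output in the first place (and why VRR removes the restriction entirely), a quick sketch:

```python
# On a fixed-refresh display, a framerate only paces evenly when the refresh rate
# is an integer multiple of it. 40 fps (25 ms/frame) fits 120 Hz exactly (3 refreshes
# of 8.33 ms each) but not 60 Hz - VRR instead matches the refresh to whatever the frame takes.
for hz in (60, 120):
    for fps in (30, 40, 45, 60):
        refreshes = hz / fps
        pacing = "even pacing" if refreshes.is_integer() else "uneven pacing without VRR"
        print(f"{fps} fps on a {hz} Hz display: {refreshes:.2f} refreshes per frame ({pacing})")
```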

They were pretty pumped about SSDs and ray tracing too. Going from HDD to SSD back in 2008/2009 - now that was something: things loaded faster, no disk noise, and everything was snappy. The problem was price/storage, so usually it was just the OS and one or two favourite games on it.
 
That assumption may not be entirely correct, which is why I plan to experiment with it again to see whether a significantly wider VRR window at 120 FPS, compared to the 1-2 FPS window at 60 FPS, would be tolerable.
You should do that. I've explained why what you've said is wrong on a purely mathematical basis. But there's also the fact that fps is a relative metric, not an absolute one. The more fps you have, the smaller the impact of losing the same number of fps. That's why a slowdown from 60 to 45 is noticeable, but one from 120 to 105 is not. At higher fps numbers it's even less noticeable - provided the slowdown happens over the course of several (or dozens of) frames, i.e. isn't a hitch.

I've been playing in a 90-144 window for some time now, and most of the time I can't tell the difference between 90 and 144 without toggling the fps OSD. It's even harder on a gamepad - recently I ran into a game which would disengage VRR during video cutscenes for some reason (some weirdness with using directflip, I think), and it took me a while to even notice that I was running at ~55 fps vsynced on a 120Hz TV instead of running with VRR.
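The "fps is relative" point is easiest to see in frametime terms (same napkin math as earlier in the thread, purely illustrative):

```python
# Losing the same 15 fps costs far less frametime the higher the starting framerate,
# which is why 60 -> 45 is obvious while 120 -> 105 barely registers.
def frametime_ms(fps):
    return 1000.0 / fps

for base in (60, 120, 144):
    dropped = base - 15
    extra = frametime_ms(dropped) - frametime_ms(base)
    print(f"{base} -> {dropped} fps: +{extra:.2f} ms per frame")
```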

VRR is also useful for allowing you to lock framerate at an arbitrary level rather than just 30/60fps.

Look how excited the console guys got over Ratchet and Clank's 40fps mode.

On PC with VRR that's possible at any framerate in every game.
Yeah, but this is mostly useful for 40-55 fps locks, I think. Anything that can run above 60 would probably be better off running unlocked - that's how I went through CP2077 at least, with fps in the 50-80 range, unlocked.
But maybe you guys are more sensitive to such fps changes.
 
Yeah, but this is mostly useful for 40-55 fps locks, I think. Anything that can run above 60 would probably be better off running unlocked - that's how I went through CP2077 at least, with fps in the 50-80 range, unlocked.
But maybe you guys are more sensitive to such fps changes.

Better off?
What is the exact benefit of running the game unlocked if you cannot tell the difference between 60fps locked vs 60-80 unlocked?
 
Better off?
What is the exact benefit of running the game unlocked if you cannot tell the difference between 60fps locked vs 60-80 unlocked?
No worries about it going below the lock.
I could ask you the same - what are the benefits of running the game locked at 60 if the h/w is capable of running it at 70-80?
 
Don't care about power consumption, and I don't really notice any difference in noise and heat from locking a game that runs at 50-80 unlocked down to 60.
Many GPUs can have significant thermal differences between a varying ~80% load and a constant 100%. On my 3060, for example, a game running the GPU at 99-100% at all times means ~72-75C and a fan at 65% just to hold it there - which is far more noticeable in terms of noise than a game that bounces around 50% fan speed when the load is a variable ~80% and the temp stays under 70C.
 
No worries about it going below the lock.
I could ask you the same - what are the benefits of running the game locked at 60 if the h/w is capable of running it at 70-80?

Why would I worry about going below the lock? That makes no sense; if it can't keep 60 up, then running it unlocked won't be better.

So it seems we reached the conclusion that there are no benefits from running unlocked, while there are benefits from running locked which you do not acknowledge. Let's just leave it there....
 
Why would I worry about going below the lock? That makes no sense; if it can't keep 60 up, then running it unlocked won't be better.

So it seems we reached the conclusion that there are no benefits from running unlocked, while there are benefits from running locked which you do not acknowledge. Let's just leave it there....
Sure, the same "benefits" as from running a game at 15 fps...
If your concern is power/noise, you'd be better off with downvolting/downclocking.
The idea here is that locking a game to some fps means you're locking yourself out of the possibility that the game may run at a higher fps. While 50-60 may not be much of a difference, 50-90 would be quite a big one, and it's entirely possible that a game which runs at 50 in some scenes will run at 90 in others. Whether you prefer to play at a gimped fps all the time, simply because you can't bear it changing in this range, is up to you of course.
 
Reducing frame rate using Radeon Chill not only lowers power consumption, but also decreases input latency. I'm not aware that downvolting / downclocking would have such an effect.
 
Most benchmarks leave out upscaling performance comparisons altogether and will commonly run higher resolutions and higher presets as much as possible ...



Sure, you can point out the current worst-case scenario as an example, but it still doesn't really change my argument: the general trend we've been seeing is an increase in rendering resolution in benchmarks over the years, despite more complex rendering systems being introduced as well, and we're still seeing graphically high-end games featured in benchmarks that are released without advanced features like ray tracing ... (Nanite is still very much nonexistent in any real applications at this point)

Again, spending hardware on features that aren't going to be used in benchmarks won't help in comparisons against other vendors, especially in your example, where the game you pointed out is frequently criticized for its questionable integration of said technique. There's an argument to be had for improving ray tracing performance, but that shouldn't be conflated with an argument for improving upscaling techniques, because there are two entirely different bases behind their motivations. The former will have an actual, tangible effect on benchmarks, while the latter won't be seen in benchmarks, despite both being intended to improve performance ...
Surely you see the possibility of this changing, however?

The raw resolution was only used as a metric for optical clarity. More pixels = better, at the most basic level of measurement. That's understandable to most people. But eventually you're going to have to move into the territory of actual image clarity, which is entirely subjective: side-by-side image comparison, grading, etc. If A can output a better image than B, it shouldn't matter whether it's reconstructed or not. That's the direction we're headed. We need to move on from just making GPUs do raw computational work.
 