AMD's FSR 3 upscaling and frame interpolation

TPU tested FSR 3 vs DLSS; in summary, without VSync you get stutters all over the place, and the image quality remains far worse than DLSS.

With AMD's Frame Generation solution you get unstable frame pacing with constant stuttering even at high framerates, which results in a very sluggish and stuttery experience.

It is highly recommended to turn on VSync and set a custom framerate limit a few frames below your monitor's refresh rate, for example 58 FPS for 60 Hz or 142 FPS for 144 Hz (sketched below).
It is important not to hit your monitor's refresh rate limit, to avoid an additional ~30% increase in input latency from enabling VSync.
Also, FSR 3 Frame Generation in its current state does not work properly with Variable Refresh Rate (VRR) technologies such as G-Sync or FreeSync.
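
A minimal sketch of that framerate cap from above, assuming illustrative names (render(), the 144 Hz figure) rather than any real game or driver API:

```python
import time

# A toy frame limiter illustrating the advice above: cap a couple of FPS
# below the refresh rate so the presented framerate never collides with
# the VSync limit and its extra buffering. render() and the 144 Hz figure
# are placeholders, not any real game or driver API.

REFRESH_HZ = 144
CAP_FPS = REFRESH_HZ - 2        # e.g. 142 FPS for a 144 Hz panel
FRAME_BUDGET = 1.0 / CAP_FPS    # seconds available per frame at the cap

def render():
    pass  # stand-in for the game's real per-frame work

for _ in range(500):  # a real game loops until quit
    start = time.perf_counter()
    render()
    # Sleep off the remainder of the budget so we present just under
    # the refresh rate instead of hitting it.
    leftover = FRAME_BUDGET - (time.perf_counter() - start)
    if leftover > 0:
        time.sleep(leftover)
```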

Forspoken in particular is a fast-paced action game with a lot of small particle effects on screen during combat, and FSR's upscaler simply fails to resolve these details, producing a very blurry, pixelated and aliased image in motion, especially at 1080p and 1440p.

Also, the FSR upscaling has very noticeable disocclusion artifacts around the main character.

All of these image quality issues are carried over into the generated frames, and they are even more noticeable when Frame Generation is enabled, creating an even more unstable image in motion.
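
To make that concrete, here is a toy blend in Python - far cruder than FSR 3's actual motion-compensated interpolation - showing how an artifact in either source frame inevitably lands in the generated frame:

```python
import numpy as np

# A deliberately naive interpolation sketch: a plain 50/50 blend, far
# simpler than FSR 3's real motion-vector-based reconstruction, just to
# show why upscaler artifacts survive into generated frames - the
# generated frame is built entirely from the two flawed source frames.

h, w = 4, 4
frame_a = np.zeros((h, w))
frame_b = np.zeros((h, w))
frame_a[1, 1] = 1.0   # pretend this pixel is a disocclusion artifact
frame_b[1, 2] = 1.0   # ...which shimmers to a neighbouring pixel next frame

generated = 0.5 * (frame_a + frame_b)
print(generated[1, 1:3])   # [0.5 0.5] - the artifact smears across both pixels
```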

 
Not sure whether this is possible, but hopefully modders can combine DLSS upscaling with FSR frame generation. That would give FSR FG a better base frame to generate from.
 
The ideal case may be CPU-limited games where your GPU is not at 100% anyway.
I do not think that is a good use case, as CPU-limited tends to mean unstable frame-times - FSR 3 seems to prefer being GPU-limited, since frame-times are more consistent there, so you can actually hold the VSync threshold and not start juddering below it.
 
And with less than a native 60 FPS you should play without Adaptive-Sync. In Forspoken and Immortals FSR 3 breaks the LFC of the nVidia driver, so my display flickers because it looks like it doesn't get a new frame to refresh properly.
 
These latency numbers do not look right. I think they tested while standing still? Then AMD doesn't use buffering. When you move around you really feel that FSR 3 is off.
 
Don't mix up AFMF and FSR 3; they're not the same thing. AFMF disables itself if there's too much difference between frames, FSR 3 doesn't.
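
A rough sketch of that disable heuristic (the difference metric and threshold here are assumptions for illustration, not AMD's published algorithm):

```python
import numpy as np

# Sketch of the kind of scene-change heuristic the post attributes to AFMF:
# if two consecutive frames differ too much, present the real frame again
# instead of interpolating a garbage in-between frame. The metric and
# threshold are illustrative assumptions, not AMD's actual code.

THRESHOLD = 0.25   # illustrative cutoff on mean per-pixel difference

def present_generated(prev, curr, interpolate):
    diff = float(np.abs(curr - prev).mean())
    if diff > THRESHOLD:
        return curr                    # scene cut: fall back to the real frame
    return interpolate(prev, curr)     # similar enough: generate safely

# Toy usage: a hard cut (all-black to all-white) trips the fallback.
a, b = np.zeros((4, 4)), np.ones((4, 4))
blend = lambda x, y: 0.5 * (x + y)
assert present_generated(a, b, blend) is b
```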
 
/edit: Did another test with V-Sync off and a 100 FPS limit.

Adaptive-Sync doesn't work properly. To me it looks like there are multiple scenarios in which FSR 3 falls back to V-Sync and uses buffers to hit the refresh cycle.
A few examples with active Adaptive-Sync and 170 Hz:
  1. Without V-Sync FSR 3 is useless. There is no sync within the VRR range.
  2. With V-Sync and no frame limiter my display shows 170 Hz most of the time regardless of the render time (i.e. at 100 FPS). But FSR 3 is "resyncing" the display ASAP (I guess on the next normal frame) and the display overlay shows a lower number.
  3. With V-Sync and a frame limiter at 100 FPS Adaptive-Sync works, but only under a certain GPU usage (<60%). When GPU usage goes up, FSR 3 falls back to scenario 2.
  4. With V-Sync off and a (low) frame limiter at 100 FPS Adaptive-Sync works and latency was only about twice as high (average PC latency 25 ms -> 55 ms). Best experience...
On an nVidia GPU there are a lot of buffers involved with FSR 3. Latency is 3x+ higher than without it.

So the best result is achieved with "low" GPU usage, V-Sync off and a frame limiter... Or you could just play without it and have much the same frames, a cleaner and sharper image and lower latency. The only positive aspect of FSR 3 is that it looks smoother because of the blurrier picture. Like LCD vs. OLED at 30 FPS.

This is how it looks on the TV's OSD with a framerate limit of 90 and no VSync in NVCP.

 
Hardware Unboxed doesn't like FSR3, stating it's much worse than DLSS3.

FSR3 doesn't work with Vsync off
FSR3 doesn't work with VRR
FSR3 with Anti-Lag+ delivers higher latency than DLSS3 with Reflex
FSR3's image quality is much worse than DLSS3


Very solid overall (it's Tim of course).

One thing that stood out to me was the UI handling. AMD's approach of compositing the UI only on 'real' frames avoids any flickering of UI elements, and it's probably the best approach right now, but in the slowed-down footage that frame skipping seemed evident - I can notice UI judder that looks a little disparate next to the background update. So either you get some UI break-up, or you get the UI updating at half the final refresh rate.

Also, I get AMD's reasoning for tying it to FSR2, but that's ultimately going to hurt its adoption, I think. There are a ton of Turing/Ampere users that, if AMD can get the vsync issues sorted, might want to have it as an option in their games... if it could work with DLSS2 upscaling. Forcing those users to apply an inferior method of upscaling to get the frame generation is not going to engender more requests to developers to support FSR3.

It would also serve to put a thumb in Nvidia's eye if there were a competent frame generation feature for millions of Nvidia cards that Nvidia themselves say is not possible, but as enabling it now means a (sometimes significant) reduction in image quality, I doubt it will be more than a curiosity for those users.
 
Watching the video, I was surprised to discover that Nvidia's frame generation is performed on the final frame, after the UI is rendered? To me that makes the Nvidia technology far worse. I had wondered why screenshots would show ghosting in UI elements.

With AMD's solution you have the option to apply FG before the UI elements? Seems crazy that Nvidia doesn't offer this. If that's true, is it a weakness of how they're using the optical flow hardware to perform FG - not accessible during the render stages, only on the final output?
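
For anyone following along, here is a rough Python sketch of the two orderings being discussed (function names like fg_after_ui and compose are illustrative, not either vendor's actual API):

```python
# Two pipeline orderings for frame generation relative to UI compositing.
# Interpolating the final frame drags the UI through the interpolation,
# where it can ghost; interpolating the pre-UI frame keeps the HUD clean
# but means it only advances on real frames (the half-rate UI above).

def fg_after_ui(prev_final, curr_final, interpolate):
    # UI is already baked into both inputs, so HUD elements get warped
    # along with the scene and can ghost in the generated frame.
    return interpolate(prev_final, curr_final)

def fg_before_ui(prev_scene, curr_scene, last_ui, interpolate, compose):
    # Only the pre-UI scene is interpolated; the most recent real UI is
    # composited on top, so the HUD stays sharp but repeats at half rate.
    return compose(interpolate(prev_scene, curr_scene), last_ui)
```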
 

Nvidia does offer that. When DLSS FG launched the support wasn't there, but it has been for a while.
 