With AMD's Frame Generation solution you get unstable frame pacing with constant stuttering even at high framerates, which results in a very sluggish, stuttery experience.
It is highly recommended to turn on V-Sync and set a custom framerate limit a few frames below your monitor's refresh rate, for example 58 FPS for 60 Hz or 142 FPS for 144 Hz.
It is important not to hit your monitor's refresh rate, to avoid the additional ~30% input latency increase that comes from enabling V-Sync.
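A minimal sketch of the cap those examples imply (assuming a fixed 2 FPS margin below the refresh rate, which matches 58/60 and 142/144; the exact margin is a rule of thumb, not an official FSR 3 requirement):

```python
def frame_cap(refresh_hz: int, margin: int = 2) -> int:
    """Framerate limit a few frames below the display refresh rate.

    margin=2 reproduces the examples above (60 Hz -> 58 FPS, 144 Hz -> 142 FPS);
    the exact margin is a rule of thumb, not an official FSR 3 requirement.
    """
    return refresh_hz - margin

for hz in (60, 144, 170):
    print(f"{hz} Hz display -> cap at {frame_cap(hz)} FPS")
```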
Also, FSR 3 Frame Generation in its current state does not work properly with Variable Refresh Rate (VRR) technologies such as G-Sync or FreeSync.
Forspoken in particular is a fast-paced action game with a lot of small particle effects on screen during combat, and the FSR upscaling solution simply fails to resolve these details, producing a very blurry, pixelated and aliased image in motion, especially at 1080p and 1440p.
Also, the FSR upscaling has very noticeable disocclusion artifacts around the main character.
All of these image quality issues carry over into the generated frames and become even more noticeable when Frame Generation is enabled, creating an even more unstable image in motion.
I do not think that is a good use case, as CPU-limited tends to mean unstable frame-times - FSR 3 seems to prefer being GPU-limited, as that is more consistent in frame-times, so you can actually hit the V-Sync threshold and not start juddering below it. The ideal case may be CPU-limited games where your GPU is not at 100% anyway.
Rich's video goes out today - I think the frame-pacing will maybe prove you right?
These latency numbers do not look right. I think they were testing while standing still? Then AMD doesn't use buffering. When you move around you really feel that FSR 3 is off.
Don't mix AFMF and FSR 3, they're not the same thing. AFMF disables itself if there's too much difference between frames; FSR 3 doesn't.
/edit: Did another test with V-Sync off and a 100 FPS limit.
Adaptive-Sync doesn't work properly. For me it looks like there are multiple scenarios in which FSR 3 falls back to V-Sync and uses buffers to hit the refresh cycle.
A few examples with Adaptive-Sync active and a 170 Hz display:
On an Nvidia GPU there are a lot of buffers involved with FSR 3. Latency is 3x+ higher than without it.
- Without V-Sync, FSR 3 is useless. There is no sync within the VRR range.
- With V-Sync and no frame limiter, my display shows 170 Hz most of the time regardless of the render time (e.g. 100 FPS). But FSR 3 "resyncs" the display ASAP (I guess on the next normal frame) and the display overlay then shows a lower number.
- With V-Sync and a frame limiter of 100 FPS, Adaptive-Sync works, but only under a certain GPU usage (<60%). When the GPU usage goes up, FSR 3 falls back to behaviour 2 above.
- With V-Sync off and a (low) frame limiter of 100 FPS, Adaptive-Sync works and latency was only about twice as high (average PC latency 25 ms -> 55 ms). Best experience...
So the best result is achieved with "low" GPU usage, V-Sync off and a frame limiter... Or you could just play without it and have the same framerate, a cleaner and sharper image, and lower latency. The only positive aspect of FSR 3 is that it looks smoother because of the blurrier picture. Like LCD vs. OLED at 30 FPS.
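For reference, a small sketch that just encodes the combinations reported above as data (the settings labels and helper are mine; the behaviour notes paraphrase the post, they are not additional measurements):

```python
# Summary of the reported combinations on a 170 Hz Adaptive-Sync display with an
# Nvidia GPU. Keys/labels are illustrative; the notes paraphrase the post above.
fsr3_observations = {
    ("vsync off", "no cap"):      "useless - no sync within the VRR range",
    ("vsync on",  "no cap"):      "display mostly pinned at 170 Hz, FSR 3 'resyncs' ASAP",
    ("vsync on",  "100 FPS cap"): "Adaptive-Sync works only while GPU usage stays under ~60%",
    ("vsync off", "100 FPS cap"): "Adaptive-Sync works, ~2x latency (25 ms -> 55 ms)",
}

# The poster's conclusion: low GPU usage, V-Sync off, plus a frame limiter.
best_setup = ("vsync off", "100 FPS cap")
print(best_setup, "->", fsr3_observations[best_setup])
```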
Hardware Unboxed doesn't like FSR 3, stating it's much worse than DLSS 3:
- FSR 3 doesn't work with V-Sync off
- FSR 3 doesn't work with VRR
- FSR 3 with Anti-Lag+ delivers higher latency than DLSS 3 with Reflex
- FSR 3's image quality is much worse than DLSS 3's
One thing that stood out to me was the UI elements. AMD's approach of only rendering the UI on 'real' frames avoids any flickering with the UI, and it's probably the best approach right now, but from the slowed-down footage the frame skipping seemed evident - I can notice the UI judder, which looks a little disparate next to the background update. So either you get some UI breakup, or you get the UI updating at half the final refresh rate.
Watching the video I was surprised to discover that Nvidia's frame generation is performed on the final frame, after the UI is rendered? To me that makes the Nvidia technology far worse. I wondered why screenshots would show ghosting in UI elements.
With AMD's solution you have the option to apply FG before the UI elements? Seems crazy that Nvidia doesn't offer this. If that's true, it's a weakness of how they're using the optical flow hardware to perform FG, which isn't accessible during the render stages, only on the final output?
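A rough sketch of the two composition orders being described (pure pseudocode in Python form; the function names are placeholders, not actual FSR 3 or DLSS 3 API calls):

```python
def interpolate(frame_a, frame_b):
    # Placeholder for the frame-generation step (optical flow + interpolation).
    return f"interp({frame_a}, {frame_b})"

def composite_ui(frame, ui):
    # Placeholder for drawing the HUD/UI on top of a frame.
    return f"{frame} + {ui}"

def amd_style(prev_scene, scene, ui):
    """FG applied before the UI, as described above: generated frames never
    contain interpolated UI pixels, but the UI only advances with real frames."""
    generated = interpolate(prev_scene, scene)
    return [composite_ui(generated, ui), composite_ui(scene, ui)]

def nvidia_style(prev_final, final):
    """FG applied to the final, UI-included frame, as described above: the UI is
    interpolated along with the scene, which is where UI ghosting can come from."""
    return [interpolate(prev_final, final), final]

print(amd_style("scene0", "scene1", "hud1"))
print(nvidia_style("final0", "final1"))
```

Under that reading, the trade-off discussed above falls out directly: interpolate-before-UI gives a stable but lower-rate UI, while interpolate-after-UI gives a full-rate UI that can ghost in generated frames.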