Nvidia Blackwell Architecture Speculation

V-Sync on fixes that, but I guess it could use Reflex to do the limiting.
Reflex will do the limiting with FG, since FG force-enables Reflex. With vsync on, on a Gsync monitor, enabling Reflex will limit the fps below the monitor's maximum refresh (116 for 120Hz, 138 for 144Hz, etc.) to ensure that vsync rarely engages.
You can get the same behavior from the driver, though, by enabling ultra low latency and then either forcing vsync there or enabling it in game. This will also result in the same sub-refresh limit being active on Gsync monitors.
NULL's limiter in the driver uses the same code as Reflex, so this is something like forcing Reflex, albeit less effective than when it is integrated into a game.
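For reference, the cap figures quoted above (116 at 120Hz, 138 at 144Hz) fit the commonly cited formula of refresh minus refresh squared over 3600. A quick sketch to reproduce them (the helper name is my own):

```python
# Commonly cited Reflex/NULL fps cap: refresh - refresh^2 / 3600.
# Reproduces the figures quoted above; the helper name is my own.
def reflex_cap(refresh_hz: float) -> int:
    return int(refresh_hz - refresh_hz * refresh_hz / 3600)

for hz in (120, 144, 240, 360):
    print(f"{hz}Hz -> {reflex_cap(hz)} fps")  # 116, 138, 224, 324
```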
 
DLSS FG basically doesn't work with vsync, since vsync destroys its frame pacing and adds its own lag. With Gsync and forced vsync you're getting an automatic Reflex/driver limiter at some figure below your monitor's maximum refresh, which ensures that FG basically never hits the vsync limit, and in this case you get proper frame pacing. Without Gsync, driver-forced vsync locks the game at 1/2 refresh and uses FG to go up from there to the vsync limit.
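To make the two paths concrete, here's a rough model of where 2x FG output lands in each case (my own simplification, not NVIDIA's actual logic):

```python
# Rough model (my simplification, not NVIDIA's actual logic) of the two
# paths described above for 2x frame generation.
def fg_output_fps(base_fps: float, refresh: float, gsync: bool) -> float:
    if gsync:
        # Gsync + forced vsync: the Reflex/driver limiter caps output
        # just below max refresh, so vsync basically never engages.
        cap = refresh - refresh * refresh / 3600
        return min(2 * base_fps, cap)
    # No Gsync: forced vsync locks the base render to half refresh,
    # and FG doubles it back up toward the vsync limit.
    return 2 * min(base_fps, refresh / 2)

print(fg_output_fps(80, 144, gsync=True))   # 138.24 (limiter engages)
print(fg_output_fps(80, 144, gsync=False))  # 144.0  (locked to refresh)
```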


Well, it depends on what your framerate is, what your monitor refresh is, and whether Gsync is working. You won't get tearing inside the Gsync range with FG, but you may get some at its edge, and will get some outside of it without vsync.
So with a VRR monitor and gsync + FG you normally wouldn't get tearing unless your FPS goes higher than your refresh rate? And you have to driver enforce vsync to ensure no tearing if FPS > refresh rate?

This is incredibly complicated and I'm wondering why the optimal configuration is not the default configuration. Especially with MFG your framerate is almost guaranteed to exceed your refresh rate.
 
So with a VRR monitor and gsync + FG you normally wouldn't get tearing unless your FPS goes higher than your refresh rate? And you have to driver enforce vsync to ensure no tearing if FPS > refresh rate?

This is incredibly complicated and I'm wondering why the optimal configuration is not the default configuration. Especially with MFG your framerate is almost guaranteed to exceed your refresh rate.

When Digital Foundry looked into this at launch, it was stated that Nvidia was looking into it, but there were cases where it caused issues in terms of pacing. I think someone brought up MSFS, and that was the example in their video as well.


It would be interesting to see how the new pacing mechanisms for MFG could possibly improve all this.

My feeling originally, with the advent of frame generation, was that it could possibly in the long run replace VRR, which would in theory result in a better experience overall. Properly supporting VRR on the display side actually has a lot of minor nuances and caveats that, when combined, can lead to various issues. It's better than no VRR overall, but ideally, I would think, having the output be consistently high would be the more optimal solution.
 
So with a VRR monitor and gsync + FG you normally wouldn't get tearing unless your FPS goes higher than your refresh rate?
It's the same as without FG. You could get tearing since some frames may be faster than others and exceed the maximum refresh, but generally you shouldn't if you're well within the Gsync range.

And you have to driver enforce vsync to ensure no tearing if FPS > refresh rate?
Again, same as without FG. The only difference is that with FG you can't control vsync from inside a game, you have to force it in the driver.
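Putting the tearing rules from this exchange in one place, as I understand them (an assumed summary, not vendor documentation):

```python
# Tearing conditions as described above; an assumed summary,
# not vendor documentation.
def can_tear(fps: float, vrr_min: float, vrr_max: float,
             gsync_on: bool, vsync_forced: bool) -> bool:
    if gsync_on and vrr_min <= fps <= vrr_max:
        return False         # inside the Gsync range: frames are synced
    return not vsync_forced  # at the edge or outside the range, only
                             # driver-forced vsync prevents tearing

print(can_tear(100, 48, 144, gsync_on=True, vsync_forced=False))  # False
print(can_tear(150, 48, 144, gsync_on=True, vsync_forced=False))  # True
```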

Especially with MFG your framerate is almost guaranteed to exceed your refresh rate.
If you set it to more than 2X, sure. But we have to wait and see how it actually works.

My feeling originally, with the advent of frame generation, was that it could possibly in the long run replace VRR, which would in theory result in a better experience overall.
I doubt that FG can replace VRR. You still need to sync the frame output to the display device. VRR is the lowest possible latency solution for that.
We will be getting 500Hz+ monitors soon though, and it's an interesting question whether these will even need to use vsync.
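Some back-of-envelope numbers on that: the main cost of fixed-refresh presentation is waiting for the next scanout slot, and that window shrinks with refresh rate (simple 1/Hz arithmetic):

```python
# Worst-case extra wait a fixed-refresh display adds while a finished
# frame sits waiting for the next scanout slot (simple 1/hz arithmetic).
for hz in (120, 144, 240, 500, 1000):
    print(f"{hz:>4} Hz: up to {1000 / hz:.2f} ms added wait")
# At 500Hz the worst case is 2ms, small enough that whether vsync
# (vs VRR) still matters becomes a fair question.
```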
 
Reflex will do the limiting with FG, since FG force-enables Reflex. With vsync on, on a Gsync monitor, enabling Reflex will limit the fps below the monitor's maximum refresh (116 for 120Hz, 138 for 144Hz, etc.) to ensure that vsync rarely engages.
You can get the same behavior from the driver, though, by enabling ultra low latency and then either forcing vsync there or enabling it in game. This will also result in the same sub-refresh limit being active on Gsync monitors.
NULL's limiter in the driver uses the same code as Reflex, so this is something like forcing Reflex, albeit less effective than when it is integrated into a game.

NULL and Reflex don't share the same code, but they have the same intent. The problem is NULL doesn't work very well and Reflex does. Battle(non)sense tested this extensively, and the driver-based latency reduction methods basically suck compared to just frame-limiting your games so you don't hit 100% GPU usage. Reflex is the best in terms of working very well to reduce latency without having to do any frame capping. I would generally recommend everyone turn it on by default.
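For anyone unfamiliar with the frame-limiting approach mentioned there, a minimal sleep-based cap looks roughly like this (a sketch only; real limiters like RTSS or in-engine caps pace far more precisely):

```python
import time

# Minimal frame limiter sketch: capping fps below what the GPU can
# sustain keeps the render queue empty, which is where most of the
# latency savings come from. Illustration only; real limiters use
# spin-waits and high-resolution timers instead of sleep().
def wait_for_next_frame(last_present: float, target_fps: float) -> float:
    budget = 1.0 / target_fps
    remaining = budget - (time.perf_counter() - last_present)
    if remaining > 0:
        time.sleep(remaining)  # coarse; sleep granularity adds jitter
    return time.perf_counter()

# Usage in a render loop: last = wait_for_next_frame(last, 141)
```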

In terms of vsync it's better to use the Nvidia driver vsync if you're using gsync to get the correct double-buffered behaviour. I have it enabled globally and then turn it off for specific games.

Nvidia Control Panel V-SYNC vs. In-game V-SYNC

While NVCP V-SYNC has no input lag reduction over in-game V-SYNC, and when used with G-SYNC + FPS limit, it will never engage, some in-game V-SYNC solutions may introduce their own frame buffer or frame pacing behaviors, enable triple buffer V-SYNC automatically (not optimal for the native double buffer of G-SYNC), or simply not function at all, and, thus, NVCP V-SYNC is the safest bet.

There are rare occasions, however, where V-SYNC will only function with the in-game option enabled, so if tearing or other anomalous behavior is observed with NVCP V-SYNC (or vice versa), each solution should be tried until said behavior is resolved.

 
I doubt that FG can replace VRR. You still need to sync the frame output to the display device. VRR is the lowest possible latency solution for that.
We will be getting 500Hz+ monitors soon though, and it's an interesting question whether these will even need to use vsync.

I should be more specific here: I'm referring to relying on VRR to handle frame rate (or frame time) fluctuations (drops).

At least I'm not understanding why, in theory, FG can't replace VRR's handling of that (other than maybe extreme edge cases). This sidesteps a lot of VRR-related issues such as varying overdrive (and inconsistent response times), BFI issues, FALD local dimming issues, other sample-and-hold related issues (see that OLED discussion's low-fps issues), etc.

As an aside, I wonder how many people look at the trade-offs of what are essentially artifacts on the display end that come with supporting VRR and low frame rates. Accepting upscaler/frame-gen artifacts seems like a fair trade-off if it mitigates display-side artifacts.
 