Nvidia Blackwell Architecture Speculation

V-Sync on fixes that, but I guess it could use Reflex to do the limiting.
Reflex will do the limiting with FG, since FG force-enables Reflex. With vsync on and a Gsync monitor, enabling Reflex will limit the fps below the monitor's maximum refresh (116 for 120Hz, 138 for 144Hz, etc.) to ensure that vsync rarely engages.
You can enable the same behavior from the driver, though: enable ultra low latency and then either force vsync there or enable it in game. This will also result in the same sub-refresh limit being active on Gsync monitors.
NULL's limiter in the driver uses the same code as Reflex, so this is something like forcing Reflex, albeit less effective than when it is integrated into a game.
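For the curious, those caps (116 for 120Hz, 138 for 144Hz) follow a simple pattern. Here's a quick sketch assuming the community-derived formula (refresh minus refresh squared over 3600) holds; Nvidia doesn't document the exact limiter, so treat this as an approximation:

```python
# Approximate sub-refresh cap applied by Reflex / the NULL limiter when
# Gsync + vsync are active. Formula is community-derived, not official.
def reflex_cap(refresh_hz: float) -> float:
    return refresh_hz - (refresh_hz * refresh_hz) / 3600.0

for hz in (120, 144, 240, 360):
    print(f"{hz} Hz -> cap ~{reflex_cap(hz):.0f} fps")
# 120 Hz -> cap ~116 fps
# 144 Hz -> cap ~138 fps
# 240 Hz -> cap ~224 fps
# 360 Hz -> cap ~324 fps
```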
 
DLSS FG basically doesn't work with vsync, since vsync destroys its frame pacing and adds its own lag. With Gsync and forced vsync you get the automatic Reflex/driver limiter at some figure below your monitor's maximum refresh, which ensures that FG basically never hits the vsync limit, and in this case you get proper frame pacing. Without Gsync, driver-forced vsync locks the game at 1/2 refresh and uses FG to go up from there to the vsync limit.


Well, it depends on what your framerate is, what your monitor's refresh is, and whether Gsync is working. You won't get tearing inside the Gsync range with FG, but you may get some at its edge, and you will get some outside of it without vsync.
So with a VRR monitor and gsync + FG you normally wouldn't get tearing unless your FPS goes higher than your refresh rate? And you have to driver enforce vsync to ensure no tearing if FPS > refresh rate?

This is incredibly complicated and I'm wondering why the optimal configuration is not the default configuration. Especially with MFG your framerate is almost guaranteed to exceed your refresh rate.
 
So with a VRR monitor and gsync + FG you normally wouldn't get tearing unless your FPS goes higher than your refresh rate? And you have to driver enforce vsync to ensure no tearing if FPS > refresh rate?

This is incredibly complicated and I'm wondering why the optimal configuration is not the default configuration. Especially with MFG your framerate is almost guaranteed to exceed your refresh rate.

When Digital Foundry looked into this at launch, it was stated that Nvidia was looking into it, but there were cases where it caused issues in terms of pacing. I think someone brought up MSFS, and that was the example in their video as well.


It would be interesting to see how the new pacing mechanisms for MFG could possibly improve all this.

My original feeling with the advent of frame generation was that it could, in the long run, possibly replace VRR, which would in theory result in a better experience overall. Properly supporting VRR on the display side actually has a lot of minor nuances and caveats that, combined, can lead to various issues. VRR is still better than no VRR, but ideally, I would think, a consistently high output would be the more optimal solution overall.
 
So with a VRR monitor and gsync + FG you normally wouldn't get tearing unless your FPS goes higher than your refresh rate?
It's the same as without FG. You could get tearing, since some frames may be faster than others and exceed the maximum refresh. But generally you shouldn't if you're well within the Gsync range.
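To put numbers on "some frames may be faster than others" (made-up frametimes, purely illustrative): the average can sit comfortably inside the Gsync range while individual frames still beat the display's minimum refresh interval, and those are the ones that can tear without vsync:

```python
# 144 Hz display: any frame delivered faster than ~6.94 ms can tear
# when vsync is off, even though the average fps is below max refresh.
refresh_hz = 144
min_interval_ms = 1000.0 / refresh_hz            # ~6.94 ms

frametimes_ms = [7.4, 7.1, 6.5, 7.8, 6.8, 7.2]   # illustrative numbers
avg_fps = 1000.0 * len(frametimes_ms) / sum(frametimes_ms)
print(f"average: {avg_fps:.0f} fps")             # ~140 fps, inside range

for i, ft in enumerate(frametimes_ms):
    if ft < min_interval_ms:
        print(f"frame {i}: {ft} ms < {min_interval_ms:.2f} ms -> can tear")
```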

And you have to driver enforce vsync to ensure no tearing if FPS > refresh rate?
Again, same as without FG. The only difference is that with FG you can't control vsync from inside a game, you have to force it in the driver.

Especially with MFG your framerate is almost guaranteed to exceed your refresh rate.
If you set it to more than 2X, sure. But we have to wait and see how it actually works.

My feeling originally with the advent of frame generation was that it could possibly in the long run replace VRR, which would be in theory result in a better experience overall.
I doubt that FG can replace VRR. You still need to sync the frame output to the display device. VRR is the lowest possible latency solution for that.
We will be getting 500Hz+ monitors soon though, and it's an interesting question whether these will even need to use vsync.
 
Reflex will do the limiting with FG, since FG force-enables Reflex. With vsync on and a Gsync monitor, enabling Reflex will limit the fps below the monitor's maximum refresh (116 for 120Hz, 138 for 144Hz, etc.) to ensure that vsync rarely engages.
You can enable the same behavior from the driver, though: enable ultra low latency and then either force vsync there or enable it in game. This will also result in the same sub-refresh limit being active on Gsync monitors.
NULL's limiter in the driver uses the same code as Reflex, so this is something like forcing Reflex, albeit less effective than when it is integrated into a game.

NULL and Reflex don't share the same code, but have the same intent. The problem is NULL doesn't work very well and Reflex does. Battle(non)sense tested this extensively, and the driver-based latency reduction methods basically suck compared to just frame-limiting your games so you don't hit 100% gpu. Reflex is the best in terms of working very well to reduce latency without having to do any frame capping. I would generally recommend everyone turn it on by default.
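For anyone unfamiliar with the frame-limiting approach mentioned above, a minimal sketch of the idea, assuming nothing about any particular game's implementation: render, then wait out the remainder of a fixed frame budget so the GPU never saturates.

```python
import time

TARGET_FPS = 138                       # e.g. just under a 144 Hz refresh
FRAME_BUDGET = 1.0 / TARGET_FPS

def run_capped(render_one_frame, frames=1000):
    deadline = time.perf_counter()
    for _ in range(frames):
        render_one_frame()             # the game's simulate/render/present
        deadline += FRAME_BUDGET
        # Coarse sleep, then spin for the last ~2 ms, since time.sleep()
        # alone is too imprecise for sub-millisecond frame pacing.
        while True:
            remaining = deadline - time.perf_counter()
            if remaining <= 0:
                break
            if remaining > 0.002:
                time.sleep(remaining - 0.002)

# run_capped(lambda: None) paces an empty loop at ~138 fps
```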

In terms of vsync it's better to use the Nvidia driver vsync if you're using gsync to get the correct double-buffered behaviour. I have it enabled globally and then turn it off for specific games.

Nvidia Control Panel V-SYNC vs. In-game V-SYNC

While NVCP V-SYNC has no input lag reduction over in-game V-SYNC, and when used with G-SYNC + FPS limit, it will never engage, some in-game V-SYNC solutions may introduce their own frame buffer or frame pacing behaviors, enable triple buffer V-SYNC automatically (not optimal for the native double buffer of G-SYNC), or simply not function at all, and, thus, NVCP V-SYNC is the safest bet.

There are rare occasions, however, where V-SYNC will only function with the in-game option enabled, so if tearing or other anomalous behavior is observed with NVCP V-SYNC (or vice-versa), each solution should be tried until said behavior is resolved.

 
I doubt that FG can replace VRR. You still need to sync the frame output to the display device. VRR is the lowest possible latency solution for that.
We will be getting 500Hz+ monitors soon though, and it's an interesting question whether these will even need to use vsync.

I should be more specific here but I'm referring to relying on VRR to handle frame rate (or frame time) fluctuations (drops).

At least I'm not understanding why, in theory, FG can't replace VRR's handling of that (other than maybe extreme edge cases). This frees you from a lot of VRR-related issues such as varying overdrive (and inconsistent response times), BFI issues, FALD local dimming issues, other sample-and-hold related issues (see the low-fps issues in that OLED discussion), etc.

As an aside, I wonder how many people look at the trade-off of essentially accepting artifacts on the display end in exchange for supporting VRR at low frame rates. Accepting upscaler/frame-gen artifacts seems like a fair trade if it mitigates display-side artifacts.
 
NULL and Reflex don't share the same code, but have the same intent.
The driver side frame limiting code is the same.

The problem is NULL doesn't work very well and Reflex does. Battle(non)sense tested this extensively, and the driver-based latency reduction methods basically suck compared to just frame-limiting your games so you don't hit 100% gpu.
I've been using both for years and I don't see anything in NULL which "sucks" compared to Reflex.

Reflex is the best in terms of working very well to reduce latency without having to do any frame capping. I would generally recommend everyone turn it on by default.
Reflex is doing frame capping. NULL and Reflex are doing the same thing - trying to avoid getting the GPU to 100% load. Reflex is just better at this because it has more knowledge of the engine it is working with.
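A toy model of what avoiding 100% GPU load buys you: when the CPU submits frames faster than the GPU drains them, the render queue fills, and every queued frame adds its GPU time to input latency. The queue depth and timings below are illustrative, not measured from any real driver:

```python
# CPU samples input, then submits; GPU takes gpu_ms per frame; at most
# `queue` frames may be in flight before the CPU (and input sampling) blocks.
def avg_latency_ms(submit_interval_ms, gpu_ms=8.0, frames=300, queue=3):
    comps, gpu_free, submit, total = [], 0.0, 0.0, 0.0
    for i in range(frames):
        if i >= queue:                          # back-pressure from the queue
            submit = max(submit, comps[i - queue])
        start = max(submit, gpu_free)           # GPU picks the frame up
        gpu_free = start + gpu_ms
        comps.append(gpu_free)
        total += gpu_free - submit              # input-sample-to-done time
        submit += submit_interval_ms            # earliest next submission
    return total / frames

# GPU sustains 125 fps (8 ms). Pushing frames every 7 ms fills the queue;
# capping just below GPU throughput (116 fps) keeps it empty.
print(f"uncapped: {avg_latency_ms(7.0):.1f} ms")        # ~24 ms (3 deep)
print(f"capped:   {avg_latency_ms(1000 / 116):.1f} ms") # ~8 ms (no queue)
```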

I should be more specific here but I'm referring to relying on VRR to handle frame rate (or frame time) fluctuations (drops).
Framerate will never be 100% locked. There will always be drops and hitches whether due to performance or something else.
 
It's the same as without FG. You could get tearing, since some frames may be faster than others and exceed the maximum refresh. But generally you shouldn't if you're well within the Gsync range.


Again, same as without FG. The only difference is that with FG you can't control vsync from inside a game, you have to force it in the driver.


If you set it to more than 2X, sure. But we have to wait and see how it actually works.


I doubt that FG can replace VRR. You still need to sync the frame output to the display device. VRR is the lowest possible latency solution for that.
We will be getting 500Hz+ monitors soon though, and it's an interesting question whether these will even need to use vsync.
The part I'm missing is why FG doesn't always cause tearing, since it apparently disables vsync. Tearing doesn't only happen when fps > refresh rate. It also happens when fps < refresh rate. Does gsync function at all (sans driver override) when FG is turned on?

Thanks for bearing with me, I've become very confused.
 
@DegustatoR

One of the great gaming channels on YouTube. I feel like they'd make a really good instructor/professor. They started with netcode, but later in the channel's run it veered into input latency. The videos on Nvidia Reflex pretty clearly show why it is superior to NULL.


NULL just doesn't work nearly as well as Reflex at minimizing latency. It actually doesn't even work as well as frame limiting. The result of the video is pretty straightforward. Enable Reflex and stop worrying about having to use frame limiters. It works a lot better.

This is the great video that started it all, and I still use this method and set a frame rate limit if a game doesn't support Reflex. I keep the low latency mode in the drivers off.

Edit: To add to this, Nvidia DLSS-FG should ALWAYS be compared against configurations running Reflex. It should be FG OFF + Reflex ON vs FG ON + Reflex ON. Reflex is just a huge latency advantage, and it's easy to get caught out. I've seen YouTube content making this mistake: they're playing Cyberpunk with an uncapped framerate and the latency is terrible, then they turn DLSS and FG on and latency looks pretty similar, but that's because enabling FG turned Reflex on. Turn Reflex on, test, and then enable FG to compare.

Reflex is probably the most underrated software Nvidia has come out with. I see content creators who don't understand it doing frame rate comparisons with Reflex off and on without mentioning or measuring latency at all. On its own it is reason enough for me to buy an Nvidia GPU until Radeon Anti-Lag+ (or whatever it's called now since its re-launch) is widely supported.
 
This guy talks about what seems to be the future: asynchronous reprojection, Reflex 2... He explains it in thorough detail.


Asynchronous reprojection as a mod. If I understand how it works: if you have a 540Hz monitor and you are running a game at 30fps, it basically smooths out the framerate so the game appears to run at 540fps. It has some limitations, so it needs frame generation in some instances, but it's promising that Nvidia is starting to work on using AI with it.

With asynchronous reprojection, even if you are running the game internally at 30fps, the input lag is flawless.


About
Asynchronous Reprojection creates a second rendering context to asynchronously reproject frames from the main rendering thread with a new camera rotation and player position. It can be used to smooth out the frame rate.
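For a sense of what that description boils down to, here's a heavily reduced numpy sketch of rotational reprojection: re-sample the last rendered frame through a rotated camera. The pinhole setup, function name, and sign conventions are my own illustration, not the mod's actual code:

```python
import numpy as np

def reproject_yaw(frame, fov_deg, yaw_delta_deg):
    """Warp `frame` (h, w[, 3]) to a camera rotated by yaw_delta_deg."""
    h, w = frame.shape[:2]
    f = (w / 2) / np.tan(np.radians(fov_deg) / 2)      # focal length in px
    xs, ys = np.meshgrid(np.arange(w) - w / 2, np.arange(h) - h / 2)
    rays = np.stack([xs, ys, np.full_like(xs, f)], axis=-1)
    a = np.radians(yaw_delta_deg)                      # rotate each ray into
    rot = np.array([[np.cos(a), 0, np.sin(a)],         # the old camera frame
                    [0,         1, 0        ],
                    [-np.sin(a), 0, np.cos(a)]])
    rays = rays @ rot.T
    u = (f * rays[..., 0] / rays[..., 2] + w / 2).astype(int)
    v = (f * rays[..., 1] / rays[..., 2] + h / 2).astype(int)
    valid = (rays[..., 2] > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    out = np.zeros_like(frame)
    out[valid] = frame[v[valid], u[valid]]             # nearest-neighbour
    return out   # black borders where the old frame has no data

# e.g. warped = reproject_yaw(last_frame, fov_deg=90, yaw_delta_deg=2.0)
```

Run something like that against fresh input every display refresh and the camera responds at the display rate even though the game renders at 30fps, which is where the "flawless input lag" claim comes from; disocclusions and animation still update at 30fps.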
 
The part I'm missing is why FG doesn't always cause tearing, since it apparently disables vsync. Tearing doesn't only happen when fps > refresh rate. It also happens when fps < refresh rate. Does gsync function at all (sans driver override) when FG is turned on?
Tearing below the refresh limit when Gsync is active is generally rare and happens only when you're close enough to the maximum refresh for some frames to exceed it. FG isn't any different from non-FG in this.

NULL just doesn't work nearly as well as Reflex at minimizing latency. It actually doesn't even work as well as frame limiting. The result of the video is pretty straightforward.
It doesn't work as well but it works fine. Not all in-game frame limiting will give you better results than NULL since their implementations can be different.

This is the great video that started it all, and I still use this method and set a frame rate limit if a game doesn't support Reflex. I keep the low latency mode in the drivers off.
I use NULL almost all the time as I don't see any benefit to experimenting with in-game limiters over it. NULL doesn't work in Reflex titles though so there you have to use Reflex.
 
Maybe they could work out something like 2x for Ampere and 3x for Ada?

The performance requirement isn't based on the number of interpolated frames. As mentioned in the DF interview, generating the intermediate frames is the "easy" part. The hard part is figuring out how things shifted between the two rendered frames. Hopefully they can make the new "AI based optical flow" work on slower cards.
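To see why the interpolation itself is the easy part, here's a tiny sketch in which the motion (a per-pixel flow field) is simply assumed as an input; producing that flow accurately and cheaply is the hard problem being described. Purely illustrative numpy, not Nvidia's method:

```python
import numpy as np

def midpoint_frame(frame_a, flow):
    """Push frame A halfway along flow (flow[y, x] = (dx, dy) from A to B)."""
    h, w = frame_a.shape[:2]
    out = np.zeros_like(frame_a)
    ys, xs = np.mgrid[0:h, 0:w]
    tx = np.clip((xs + 0.5 * flow[..., 0]).round().astype(int), 0, w - 1)
    ty = np.clip((ys + 0.5 * flow[..., 1]).round().astype(int), 0, h - 1)
    out[ty, tx] = frame_a[ys, xs]    # forward splat; last write wins
    return out                       # holes remain where motion disoccludes
```

The splat is trivial; everything interesting lives in estimating `flow`, which is exactly what the optical flow hardware (and now the "AI based optical flow") is for.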
 
[Attached comparison screenshots: compare1.jpg, compare2.jpg, compare3.jpg]

 
Pretty outstanding. I do hope this trickles down to the <5090 tiers.

I am getting some weird déjà vu though. Competing with your customers is what killed 3dfx.
 