Nvidia DLSS 3 antialiasing discussion

Yep, DLSS 3 adds latency and smooths motion, and the combination seems completely pointless to me. The benefit is smoother motion, and that benefit is most needed at lower frame rates. But lower frame rates mean higher latency, and that's the worst time to add even more latency. Really looking forward to the DF analysis on this. Not sure what problem this is solving.


There is a noticeable difference in motion sharpness between 240Hz and 120Hz, as well as an improvement in image smoothness. I've been using a 240Hz display for a while, and the difference between 240 fps and 120 fps is noticeable in first- or third-person games where you control the camera. The next big perceivable jump from 240Hz is probably 480Hz, and it's just unrealistic for the CPU, memory, and GPU to scale to reach that in the near future. With frame generation of sufficient quality you can get there with some trade-offs. 1080p 500Hz displays with G-Sync modules should be out before the end of the year.
 
Lower frames also mean larger deltas in information, so more artifacts and longer durations to notice those artifacts. It'll work best in super high frame rate esports titles, but those aren't the sorts of titles that people want to be sacrificing latency for smoke and mirror visual fluidity.

Right and at super high frame rates motion is plenty smooth already. The benefit of high refresh monitors for eSports is that you get to see gameplay updates earlier. DLSS3 does the exact opposite.
 
Q&A with Nvidia here:
but the 1/2-frame figure sort of still tells me that Optical Flow is being used for extrapolation and not interpolation. If it were interpolation, latency would be 1.5 frames behind, not 0.5 frames behind.
For eSports users who might want to achieve the lowest latency, will it be possible to only enable NVIDIA DLSS 2 (Super Resolution) + Reflex without the DLSS Frame Generation that improves FPS but also increases system latency?

Our guidelines for DLSS 3 encourage developers to expose both a master DLSS On/Off switch, as well as individual feature toggles for Frame Generation, Super Resolution, and Reflex, so users have control and can configure the exact settings they prefer. Note that while DLSS Frame Generation can add roughly a half frame of latency, this latency is often mitigated by NVIDIA Reflex and DLSS Super Resolution - all part of the DLSS 3 package. NVIDIA Reflex reduces latency by synchronizing CPU and GPU, and DLSS Super Resolution reduces latency by decreasing the size of the render resolution.


Optical Flow can do both.
[Attached image: OF_SDK_001b.png]
 
Really curious to see what frame generation looks like when DLSS Super Resolution is disabled. I have games like fortnite where I can hit well over 200fps without DLSS at the settings I use. A 500Hz display with generation from native frames would be very interesting.

240Hz is 4.2 ms per frame, so half a frame (2.1 ms) plus the time for frame generation. Really comes down to how long generation takes.
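A quick back-of-envelope sketch of that arithmetic, assuming the generated frame delays presentation of the real frame by half a frame time (the fixed 1 ms generation cost here is a made-up placeholder, not a measured figure):

```python
# Rough added-latency estimate: half a real frame time, plus an
# assumed fixed cost for generating the intermediate frame.
def added_latency_ms(fps: float, generation_cost_ms: float = 1.0) -> float:
    frame_time_ms = 1000.0 / fps
    return frame_time_ms / 2 + generation_cost_ms

for fps in (60, 120, 240):
    print(f"{fps} fps: +{added_latency_ms(fps):.1f} ms")
```

The half-frame term shrinks as the base frame rate rises, which is part of why the latency cost matters more in the low-fps scenarios where frame generation would otherwise be most attractive.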
 
Q&A with Nvidia here:
but the 1/2-frame figure sort of still tells me that Optical Flow is being used for extrapolation and not interpolation. If it were interpolation, latency would be 1.5 frames behind, not 0.5 frames behind.
For eSports users who might want to achieve the lowest latency, will it be possible to only enable NVIDIA DLSS 2 (Super Resolution) + Reflex without the DLSS Frame Generation that improves FPS but also increases system latency?

Our guidelines for DLSS 3 encourage developers to expose both a master DLSS On/Off switch, as well as individual feature toggles for Frame Generation, Super Resolution, and Reflex, so users have control and can configure the exact settings they prefer. Note that while DLSS Frame Generation can add roughly a half frame of latency, this latency is often mitigated by NVIDIA Reflex and DLSS Super Resolution - all part of the DLSS 3 package. NVIDIA Reflex reduces latency by synchronizing CPU and GPU, and DLSS Super Resolution reduces latency by decreasing the size of the render resolution.

Yeah, so latency is worse than DLSS 2.0 + Reflex. I honestly see zero use for this tech in games, as it makes the gameplay experience worse, unlike DLSS 2.0. But perhaps you can argue that DLSS 3.0 provides better latency than native without DLSS or Reflex. In some ways it's akin to enabling motion smoothing on TVs to achieve a smoother-looking result at the cost of more processing time, though DLSS 3.0 will obviously provide much better results.
 
Yeah, so latency is worse than DLSS 2.0 + Reflex. I honestly see zero use for this tech in games, as it makes the gameplay experience worse, unlike DLSS 2.0. But perhaps you can argue that DLSS 3.0 provides better latency than native without DLSS or Reflex.
Yeah, it's definitely worse than DLSS 2.0 + Reflex; it's not free. But it is better than native, which wouldn't be possible with traditional interpolation.
 
I also wonder how this deals with large deltas in highly inconsistent frame time delivery. Does it produce noticeably worse results if you get a frame time spike and the delta between the two real frames grows bigger?
 
This is one of those features that's going to again produce polarizing opinions, depending on how much one values high temporal resolution in the gaming experience. Those who approach gaming from a 60fps/Hz standpoint as "optimal" in single-player games will get much less value from this than those hoping for 120fps/Hz or higher.

It'll also be interesting to see how well this plays with v-sync to possibly use in conjunction with BFI displays.
 
This is how I understand it to work: it uses the current and previous frame to generate an intermediate frame. Your increase in input lag is the time added to the render queue to generate frames, plus the delay in presenting the current frame.

View attachment 7045
If that's correct, how on earth are they messing up even UI elements when you've already rendered frames on both sides of the equation? And wouldn't interpolation (AI-assisted if you so please) be both easier and better quality? They already use it in the video equivalent of frame gen.
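A toy timeline of the interpolation model described above (this is an assumed presentation order, not NVIDIA's documented pipeline): real frames finish every `dt` ms, and each real frame is held back half a frame so the generated midpoint frame can be shown first.

```python
# Toy presentation timeline for interpolation-based frame generation.
# Assumption: real frame n is delayed by dt/2 so the generated frame
# between real frames n-1 and n can be displayed at the midpoint.
def present_times(n_frames: int, dt: float):
    """Return (display_time_ms, label) pairs for displayed frames."""
    events = []
    for n in range(1, n_frames):
        events.append((n * dt, f"gen({n-1},{n})"))       # midpoint frame
        events.append((n * dt + dt / 2, f"real({n})"))   # delayed real frame
    return events

for t, label in present_times(3, 4.0):   # dt = 4 ms, i.e. ~240 fps input
    print(f"{t:5.1f} ms  {label}")
```

In this model the half-frame delay on every real frame is exactly the latency cost discussed in the thread; an extrapolation scheme would avoid it at the price of guessing.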
 
Extrapolating frames sounds like it'd be a headache. VR gets away with it because the VR runtime handles the input polling and frame buffer: it draws an overscanned frame that it can window around inside and reproject according to an extrapolated future head pose, which can reliably be predicted because the human head has enough inertia to put a lower and upper bound on where it'll be 10-20 ms into the future. For a regular old PC game controlled by a 50g mouse, I'd think you'd feel weird undershooting and overshooting of the expected movement whenever accelerating and decelerating, not to mention the encroaching occlusion of the window frame would mean seeing artifacts on the edge you're turning towards.
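The bounded-prediction idea can be sketched in a few lines. This is an illustrative model, not any actual VR runtime's algorithm: project yaw forward at the measured angular velocity, clamped to a maximum plausible rate (the stand-in here for head inertia bounding the motion).

```python
# Illustrative bounded pose extrapolation (assumed model, not a real
# VR runtime): constant-velocity prediction with a clamped rate.
def extrapolate_yaw(yaw_deg: float, yaw_rate_dps: float,
                    dt_ms: float, max_rate_dps: float = 400.0) -> float:
    # Inertia bounds how fast the pose can plausibly change,
    # so clamp the measured rate before projecting forward.
    rate = max(-max_rate_dps, min(max_rate_dps, yaw_rate_dps))
    return yaw_deg + rate * dt_ms / 1000.0

print(extrapolate_yaw(10.0, 200.0, 15.0))   # -> 13.0
```

A mouse-driven camera breaks the clamping assumption: flick inputs can legitimately exceed any tight rate bound, so the predictor either overshoots or lags, which is the undershoot/overshoot feel described above.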
 
It's a throwback to when DLSS 1 launched alongside the Turing architecture (and ray tracing, somewhat). The amount of negativity then was even greater than it is now. I'd say wait for the reviews and see over the coming months how the new technology develops. It sure is an interesting development alongside ray tracing. We really need new technologies, since rasterization is getting a bit old-school by now.
 
Extrapolating frames sounds like it'd be a headache. VR gets away with it because the VR runtime handles the input polling and frame buffer: it draws an overscanned frame that it can window around inside and reproject according to an extrapolated future head pose, which can reliably be predicted because the human head has enough inertia to put a lower and upper bound on where it'll be 10-20 ms into the future. For a regular old PC game controlled by a 50g mouse, I'd think you'd feel weird undershooting and overshooting of the expected movement whenever accelerating and decelerating, not to mention the encroaching occlusion of the window frame would mean seeing artifacts on the edge you're turning towards.
There is virtually no reason to use ML for interpolation of barely changed frames. For hand-drawn animation it makes sense when frame rates are in the low teens; you'd need ML to blend such large gaps. But if you have the previous and current frame, a variety of algorithms can do it. And the end result is still 1.5 frames of latency, at absolute best. It doesn't line up with what Nvidia is saying here, imo.

There will be extrapolation mistakes but they are smaller the higher the resolution and the higher the frame rate.
 
What sort of algorithm do TVs use for motion smoothing? My understanding is they tend to go hand-in-hand with a lot of latency, so presumably they're utilizing a lot more than just 2 adjacent frames, and perhaps aren't simply interleaving raw frames with synthesized ones, but creating an entirely new sequence?
 
If that's correct, how on earth are they messing up even UI elements when you've already rendered frames on both sides of the equation? And wouldn't interpolation (AI-assisted if you so please) be both easier and better quality? They already use it in the video equivalent of frame gen.

If they don't have a mask for the UI, the combined motion vectors plus optical flow could suggest pixels moving into the region of the UI elements. Not sure why they don't just use frames without the UI drawn (that could be an option?), generate the frame, and overlay the UI after; there may be technical reasons.
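The suggested fix amounts to a compositing order, sketched below. All function names are illustrative placeholders, not anything from the DLSS SDK: interpolate on UI-free scene buffers, then overlay the HUD so generation never touches UI pixels.

```python
# Hypothetical compositing order for the suggested fix: the
# intermediate frame is generated from UI-free scene buffers,
# then the UI layer is overlaid unchanged on both frames.
def present_with_generation(prev_scene, cur_scene, ui_layer,
                            interpolate, composite):
    generated = interpolate(prev_scene, cur_scene)   # scene only, no UI
    yield composite(generated, ui_layer)             # UI overlaid crisp
    yield composite(cur_scene, ui_layer)

# Toy usage with scalar "frames":
interp = lambda a, b: (a + b) / 2
comp = lambda scene, ui: (scene, ui)
for frame in present_with_generation(0.0, 2.0, "hud", interp, comp):
    print(frame)
```

The cost, as the next reply suggests, is an extra synchronization point between the game and the frame-generation step to get the UI layer separately, which may be why it isn't done this way.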
 
There is virtually no reason to use ML for interpolation of barely changed frames. For hand-drawn animation it makes sense when frame rates are in the low teens; you'd need ML to blend such large gaps. But if you have the previous and current frame, a variety of algorithms can do it. And the end result is still 1.5 frames of latency, at absolute best. It doesn't line up with what Nvidia is saying here, imo.

There will be extrapolation mistakes but they are smaller the higher the resolution and the higher the frame rate.

If it's extrapolating, it could perhaps order the work so the tensor cores are busy generating the current frame (using previous-frame information) while the rest of the GPU works on the next frame, and the half-frame latency comes down to presenting/processing, etc.
 
What sort of algorithm do TVs use for motion smoothing? My understanding is they tend to go hand-in-hand with a lot of latency, so presumably they're utilizing a lot more than just 2 adjacent frames, and perhaps aren't simply interleaving raw frames with synthesized ones, but creating an entirely new sequence?

If I had to guess, TVs can also spend a lot more time processing video because the viewer isn't providing any inputs. They can just delay the audio to match the frames; pretty sure lots of receivers have an option to add audio delay for this specific reason.
 
If they don't have a mask for the UI, the combined motion vectors plus optical flow could suggest pixels moving into the region of the UI elements. Not sure why they don't just use frames without the UI drawn (that could be an option?), generate the frame, and overlay the UI after; there may be technical reasons.

They likely want to completely avoid the Game < - > DLSS sync for the generated frame to not induce additional latency on top of it.
 
I could stomach seeing the health bar wiggle in response to on-screen movement from time to time, but the idea of the crosshair shifting away in response to an enemy moving towards it is kind of amusing. Something like that I'd think you'd want to explicitly train for and/or mask out, because something pixel-thick could easily be overpowered by large moving objects?
 
What sort of algorithm do TVs use for motion smoothing? My understanding is they tend to go hand-in-hand with a lot of latency, so presumably they're utilizing a lot more than just 2 adjacent frames, and perhaps aren't simply interleaving raw frames with synthesized ones, but creating an entirely new sequence?

I just double checked, and @Dictator said he's going to compare DLSS 3 frame generation to offline interpolation tools, so we should get a pretty good idea of the differences between what a TV could do and DLSS 3. Maybe TVs aren't even as good as the offline tools; don't know. TV smoothing generally does have artifacts.
 