Nvidia DLSS 3 antialiasing discussion

The lower the framerate, the higher the latency impact.

Image-based rendering for GPU-bound VR games makes the most sense, but this really has to be done with the help of the engine. VR especially cannot abide this latency.
Cyberpunk has 160 ms system latency, so I don't think an additional 25 ms is a huge problem...
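
As a rough illustration of why lower framerates amplify the penalty (this assumes frame generation holds back roughly one rendered frame before presenting, which is an assumption for the sake of arithmetic, not a published figure):

```python
# Rough sketch: assume frame generation holds back one rendered frame before
# presenting, so the added delay is roughly one rendered-frame time.
# (Illustrative assumption; the exact pipeline cost is not public.)
for fps in (30, 60, 90, 120):
    print(f"{fps:>3} fps rendered -> ~{1000 / fps:.1f} ms of added latency")
```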
 
Thank you. That confirms that the generated frame is presented before the last rendered frame. I don’t see the point besides pushing nonsense fps graphs.
No, it doesn't. The two frames they're referring to are both previous frames and are used by the OFA. The rest uses one frame + its motion vectors.
From NVIDIA:
The DLSS Frame Generation convolutional autoencoder takes 4 inputs – current and prior game frames, an optical flow field generated by Ada’s Optical Flow Accelerator, and game engine data such as motion vectors and depth.

Ada’s Optical Flow Accelerator analyzes two sequential in-game frames and calculates an optical flow field. The optical flow field captures the direction and speed at which pixels are moving from frame 1 to frame 2. The Optical Flow Accelerator is able to capture pixel-level information such as particles, reflections, shadows, and lighting, which are not included in game engine motion vector calculations.
The 'current frame' is already a previous frame by the time the generated frame is displayed.

Can't seem to copy/paste the image from NVIDIA's article on my phone, but they even clearly list the frame pairs as 1 scaled, 2 generated, 3 scaled, 4 generated.
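
For what it's worth, here's a minimal sketch of how the inputs NVIDIA lists could fit together under the interpolation reading, where the generated frame is built from the prior and current rendered frames and presented between them. A naive blend stands in for the network, and every name here is an illustrative assumption, not NVIDIA's actual API:

```python
# Illustrative sketch only: a naive blend stands in for the frame-generation
# network, and all names here are assumptions, not NVIDIA's implementation.
import numpy as np

def naive_generate_frame(prior_frame, current_frame,
                         optical_flow=None, motion_vectors=None, depth=None):
    """The four inputs NVIDIA lists: two rendered frames, an optical flow
    field, and engine data (motion vectors, depth). Here we simply average
    the two frames to mimic an in-between image."""
    return 0.5 * prior_frame + 0.5 * current_frame

prior = np.zeros((4, 4, 3))    # rendered frame N-1
current = np.ones((4, 4, 3))   # rendered frame N
generated = naive_generate_frame(prior, current)

# Presentation order under the interpolation reading:
#   ... frame N-1 -> generated -> frame N -> ...
print(generated[0, 0])  # halfway between the two rendered frames
```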
 
No, it doesn't. The two frames they're referring to are both previous frames and are used by the OFA. The rest uses one frame + its motion vectors.

The 'current frame' is already a previous frame by the time the generated frame is displayed.

Can't seem to copy/paste the image from NVIDIA's article on my phone, but they even clearly list the frame pairs as 1 scaled, 2 generated, 3 scaled, 4 generated.

No, the language is extremely clear. Nvidia says the generated frame “transitions between” the two input frames. There is no ambiguity there.
 
No, the language is extremely clear. Nvidia says the generated frame “transitions between” the two input frames. There is no ambiguity there.
If they had the next frame to work with too, can you give a plausible explanation for how even UI elements, which don't change between two frames, get messed up?
I need to ask if we got all the session data saved up, but the publicly available data is quite different from that claim.
 
If they had the next frame to work with too, can you give a plausible explanation for how even UI elements, which don't change between two frames, get messed up?
I need to ask if we got all the session data saved up, but the publicly available data is quite different from that claim.
Why would UI elements be involved? Is this method actually using the final frames after post-processing and overlays, i.e. what is sent to the frame buffer for display? If so, that's even more useless.
 
Why would UI elements be involved? Is this method actually using the final frames after post-processing and overlays, i.e. what is sent to the frame buffer for display? If so, that's even more useless.
That's how it looks; at least Spider-Man exhibits this (I linked a post from the Ada thread in one of my earlier posts).
 
The bigger problem compared to just latency is the fact that the generated frame doesn't contain any user input, so your inputs only affect every second frame.
I really don't think that's going to be a massive deal. The types of games where this truly matters... will all already perform incredibly well without needing DLSS 3 and fake frames.

There's real potential here in the 60-120 fps range, or better yet 80-160 fps; those would likely be the perfect sweet spots for this technology. At 60-80 fps input is already quite responsive, so in all but very rare edge cases I doubt you're really going to feel any noticeable lag increase, yet on the flip side you're definitely going to notice the smoother animation and movement that comes with 100+ fps. Twitch-based fast-action shooters and fighting games are all going to run and perform beautifully regardless because they're designed for that performance, and competitive players turn down every setting for the best performance anyway. There aren't going to be competitive games that the high-end 40-series cards don't completely smash anyway.

Most gamers don't realize it, but input lag is constantly changing on PC when playing with unlocked framerates and VRR. When you're up at 130 fps and it drops down to 70 fps for a few seconds, then back up and everywhere in between, as long as it stays above 60 they generally don't notice, because it still feels extremely responsive. Usually when they DO notice, it's because there's a visual cue: they can see the framerate dipping, and that cues them to feel that the game is a bit more sluggish. With DLSS 3, input lag might increase, but the player isn't going to immediately notice one frame of lag because the game is still being presented completely smoothly.

I play games without Reflex perfectly fine. If Reflex mitigates that slight input lag penalty while presenting the game in a much more silky-smooth manner (which 120 fps certainly does compared to 60 fps), then this will be a game changer.
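
To put rough numbers on the frame times in question (simple arithmetic, not measured latency data):

```python
# Frame-time arithmetic for the scenarios mentioned above (not measured data).
for fps in (60, 70, 120, 130):
    print(f"{fps:>3} fps -> {1000 / fps:5.1f} ms per frame")

# One extra frame of delay at 120 fps (~8.3 ms) is smaller than the frame-time
# swing a VRR user already rides out when the framerate dips from 130 fps
# (~7.7 ms per frame) to 70 fps (~14.3 ms per frame).
```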
 
Presumably this also allows the generation of new animation steps that wouldn't exist with native rendering - even if native was running at the same fps as DLSS 3. So potentially more fluid animations too?
 
Presumably this also allows the generation of new animation steps that wouldn't exist with native rendering - even if native was running at the same fps as DLSS 3. So potentially more fluid animations too?
Goodbye, dodgy quarter-frame-rate animation on far-away stuff.
 
How sure are we that these artifacts are not a result of YouTube video compression?
Wouldn't they apply to more of the frames, like back to back?

If it is YouTube compression, then it would be in NVIDIA's best interest to host uncompressed, or at least artifact-free, marketing materials.
 
How sure are we that these artifacts are not a result of YouTube video compression?

Timestamped. Play this at 0.25x speed and look at Spider-Man's wrist as he is flying against the blue sky. These artifacts are definitely not YouTube compression; only DLSS 3 is generating them.

Playing back at normal speed, they are definitely not very noticeable.

edit: Ah damn, I missed that the context was about UI. UI element "A" at 1:32 seems to flicker too when playing back at low speed. Definitely not looking like a video compression issue.
 
I’m waiting for confirmation on this point. It’s not 100% clear whether generated frames are inserted before or after the last rendered frame. If it’s after, then DLSS 3 makes more sense to me. If it’s before, it seems pretty useless; in that scenario, going from 60 fps to 120 fps with DLSS 3 will do nothing for the fluidity of gameplay.
Pretty sure it’s after. Don’t need ML for prior frames.
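
To make the two readings being debated concrete, here are the presentation orders written out. This is an assumption-level illustration of the two possibilities, not a statement of how DLSS 3 actually schedules frames:

```python
# Illustrative timelines for the two orderings debated above; neither is a
# claim about NVIDIA's actual implementation.
# Interpolation: G(n,n+1) is built from rendered frames R(n) and R(n+1) and
# shown between them, so presentation has to wait for R(n+1).
interpolation = ["R1", "G(1,2)", "R2", "G(2,3)", "R3"]

# Extrapolation: a generated frame is predicted from past frames only and
# shown after the latest rendered frame, with no wait for the next one.
extrapolation = ["R1", "G(after R1)", "R2", "G(after R2)", "R3"]

print("interpolation:", " -> ".join(interpolation))
print("extrapolation:", " -> ".join(extrapolation))
```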
 