Nvidia DLSS 3 antialiasing discussion

DLSS 3 doesn't use two previous frames to extrapolate a new frame. It generates an intermediate frame between two rendered frames.
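That interpolation-vs-extrapolation distinction can be illustrated with a toy model (the real pipeline uses an optical-flow network, not simple midpoint math; this is only a sketch of the idea, with hypothetical numbers):

```python
# Toy illustration: interpolation (DLSS 3) vs. a hypothetical extrapolator.
# The generated frame sits *between* two rendered frames, so it can use
# real information from the newer frame instead of guessing past it.

def interpolate_midpoint(pos_prev: float, pos_next: float) -> float:
    """Position in a generated frame halfway between two rendered frames."""
    return (pos_prev + pos_next) / 2

def extrapolate(pos_prev2: float, pos_prev: float) -> float:
    """What an extrapolator would have to do: guess beyond the last frame."""
    return pos_prev + (pos_prev - pos_prev2)

# An object panning right at 10 px per rendered frame (made-up values):
print(interpolate_midpoint(10.0, 20.0))  # 15.0 -- between frames N and N+1
print(extrapolate(0.0, 10.0))            # 20.0 -- a pure guess past frame N
```

The point: with interpolation, panning reveals no unknown screen content, because the newer rendered frame already contains it.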

As you pan right, it doesn't need to generate content out of thin air, because the latest rendered frame is already a step ahead, and that information is used in the frame-generation process.
OK, that makes more sense. And yeah, I can see why people are talking about added latency: the finished frame sits in a queue while the generated one is shown.
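A rough way to quantify that queuing cost, assuming the generated frame is displayed for half of one rendered-frame interval (my assumption, not Nvidia's published figure):

```python
# Toy latency model: rendered frame N+1 must wait in a queue while the
# generated frame (interpolated between N and N+1) is shown first.

def added_display_latency_ms(render_fps: float) -> float:
    """Extra delay before a finished frame hits the screen, assuming the
    generated frame occupies half of one rendered-frame interval."""
    frame_time_ms = 1000.0 / render_fps
    return frame_time_ms / 2

print(added_display_latency_ms(60))   # ~8.33 ms at 60 rendered fps
print(added_display_latency_ms(120))  # ~4.17 ms at 120 rendered fps
```

So the slower the base render rate, the more the queued real frame costs you, which is presumably part of why Reflex is bundled in.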

Although not many games ship with a perfect buffer structure, having Reflex built into DLSS 3 will be an overall gain for PC gaming.
 
Is there any information on how this frame generation will work in conjunction with G-Sync/FreeSync, or with framerates close to (or above) a V-synced monitor's limit? Do you risk dropping "real" frames and displaying a series of generated ones, depending on the frame-pacing limits of your display?
 
There's no way this tech can exist if it only works on the final frame buffer. I assume it works at the same stage as DLSS, before the final overlays are added and the frame is sent to the display.

So the HUD, 2D overlays, and post-process effects would be updating at the native (i.e., half) rate then? Or is the application still actually time-stepping the simulation, updating those elements, and passing them forward to be composited? If the generated frame simply reused the same overlay as one of the adjacent frames, that might introduce some weird judder/desync for 2D elements that track with 3D elements. In RTS/strategy, MOBAs, MMORPGs, 4X, etc., where units have overlaid reticles, highlight outlines, and health bars, I'd expect to see those elements snapping, trailing, snapping, trailing every other frame.
 
Yeah, I'm not sure. After thinking about it, I can't imagine this being able to generate frames prior to the final compositing in the engine.
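The snapping/trailing effect described above can be sketched numerically, assuming the generated frame interpolates the 3D unit position but reuses the HUD overlay from the previous rendered frame (all values hypothetical):

```python
# Toy sketch of HUD desync: a health bar tracks a 3D unit, but the
# generated frame interpolates the unit's position while reusing the
# HUD overlay from the prior rendered frame (made-up numbers).

def unit_position(frame_index: int) -> float:
    """Unit moving right at 10 px per rendered frame."""
    return 10.0 * frame_index

def displayed_positions(n_rendered: int) -> list:
    """Return (unit_x, healthbar_x) for each displayed frame."""
    out = []
    for i in range(n_rendered - 1):
        a, b = unit_position(i), unit_position(i + 1)
        out.append((a, a))                # rendered frame: HUD in sync
        out.append(((a + b) / 2, a))      # generated frame: unit moved, HUD stale
    return out

for unit_x, bar_x in displayed_positions(3):
    print(unit_x, bar_x, "offset:", unit_x - bar_x)  # offset alternates 0.0, 5.0
```

The offset alternates between zero and half a frame's worth of motion, which is exactly the every-other-frame snap/trail pattern described above.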
 
So they didn't give specific fps figures here, and they didn't directly compare the 3090 Ti to the 4090 with identical settings anywhere, but a little bit of math shows that in the basic Portal RTX hall scene they were showing, the 4090 would be ~85% faster than the 3090 Ti, both using DLSS 2.
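For reference, the math here is just a relative-speedup calculation; since no exact fps figures were published, the inputs below are purely hypothetical stand-ins:

```python
# Illustrative only: the presentation gave no exact fps figures, so these
# inputs are hypothetical. The point is the shape of the arithmetic.

def relative_speedup(fps_new: float, fps_old: float) -> float:
    """Percent by which fps_new exceeds fps_old."""
    return (fps_new / fps_old - 1) * 100

# E.g. with hypothetical DLSS 2 results in the same scene:
print(round(relative_speedup(100.0, 54.0)))  # 85 (% faster)
```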

For what it's worth.

Also, given they're requiring the use of Reflex, I suppose it'll mean a lot more games adopting Reflex as well in order to support DLSS 3. Another sneaky way of boosting adoption of their proprietary software. There's negligible input-lag increase going from DLSS 2 to DLSS 3 with Reflex on for both (at least as shown here), so it suggests the actual increase might not be that significant in practice? Though this stuff will be situational.
 

So Nvidia Reflex won't work on non-Nvidia GPUs? Is there no way of achieving similar latency minimization using standard APIs? I looked through their press material but was left wondering why it won't work with every GPU, seeing how it's touted to work all the way back to the GTX 900 series.
 

As far as I'm aware, the other GPU vendors do not offer an alternative to Reflex.
 
Just watching this video now. It's very interesting how, in the video, there's very little camera panning. It's basically a 31-minute ad sponsored by Nvidia. I do appreciate Alex touching on the artifacts, because they were noticeable.
It's incredible how we've gone from screaming bloody murder over image-quality-affecting optimizations to going off the deep end trying to make excuses for why they're OK.
 
Something puzzles me. Does DLSS 3 not have quality settings? They always match it up against DLSS 2 Performance so does that mean DLSS 3 defaults to Performance as well? Or will it be available in Quality, Balanced, and Ultra Performance modes?

Edit: Oh wait, based on the video it seems you toggle DLSS with a quality setting; then, if you enable Frame Generation, it's effectively DLSS 3, because it also toggles Reflex by default.
 
Well by the same token, the Nvidia marketing crew here never gives up either.
That's just it. This is an Nvidia-sponsored pre-release video, so what did you expect? When the release-day videos of the final game build come out from all the tech sites, that's when their observations will count.
 
Watching this video now. It's very interesting how, in the video, there's very little camera panning. It's basically a 31-minute ad sponsored by Nvidia. I do appreciate Alex touching on the artifacts, because they were noticeable.
Panning is not hard for motion interpolators to do, even terrible TV-set ones. Why are you interested in seeing that easy use case? Complex fast motion and 3D transforms (skinned animation) are the hard stuff.
 
I have seen several artifacts in that video that Alex missed, but no doubt once they release their in-depth analysis they will acknowledge and cover them.
 