Nvidia DLSS 3 antialiasing discussion

Personally I would rather have Nvidia or AMD work with the engine devs so that the deep learning is done by user-land shader code rather than some black-box driver call. Not only is it more performant and more accurate, but it also opens the door for things like dynamic resolution and/or dynamic frame interpolation based on the current workload. It would also be easier to roll back mispredicted frames, or even partially mispredicted frames!
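To make that concrete, here is a minimal C++ sketch of what an engine-facing (rather than driver-side) hook could look like. Every name in it (FrameInputs, UpscalerPlugin, predictFrame) is hypothetical, not any vendor's actual API.

#include <cstdint>

// Hypothetical engine-facing interface: the engine feeds its own buffers to the
// network and stays in control of pacing, dynamic resolution and rollback.
struct FrameInputs {
    const float* color;          // rendered colour buffer (render resolution)
    const float* depth;          // depth buffer
    const float* motionVectors;  // engine-provided per-pixel motion vectors
    uint32_t width, height;
};

class UpscalerPlugin {
public:
    virtual ~UpscalerPlugin() = default;

    // Upscale the current frame as user-land shader/compute work.
    virtual void upscale(const FrameInputs& in, float* outColor,
                         uint32_t outWidth, uint32_t outHeight) = 0;

    // Optionally predict an extra frame into outColor; returns false if the
    // engine should skip it. The engine can simply drop (roll back) a
    // prediction that turned out to be wrong.
    virtual bool predictFrame(const FrameInputs& in, float* outColor) = 0;
};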
 
Well, DLSS 1 wasn't very good either, but it was a necessary step towards what we have now, which is quite good.
The necessary step being that the engine really needs to be involved.

Backwards motion vectors were always there though, so it was easy enough to pass those on. There's no easy way to get forward motion vectors without running the full AI + physics + animation for the forward frame. Time for more sanity in the C++ spaghetti code to make games more GPU limited.
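Roughly, the asymmetry looks like this (a toy 2D sketch; the names are illustrative, not an engine API):

struct Float2 { float x, y; };

// Backward motion vector: for a pixel in the current frame, where was it in the
// previous frame? The engine already knows this once the current frame has been
// simulated, so exporting it is cheap.
Float2 reprojectToPrevious(Float2 currPos, Float2 backwardMv) {
    return { currPos.x - backwardMv.x, currPos.y - backwardMv.y };
}

// A forward vector (currPos + forwardMv = position in the NEXT frame) would need
// the game to run AI, physics and animation one step ahead, which is exactly the
// expensive part that isn't available for free.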
 
Is it though, when it repeats literally every second frame?
If that were the case, you would see flickering, which I have not seen in any of the DLSS 3 videos.

And why would that be the case at all?

Typical video artefacts usually happen when some movement directions were predicted erroneously. This results in macroblocking, where blocks of pixels end up in the wrong areas.

Given that the frame generator network works with just 2 neighbouring frames, the artifacts can't persist for more than 1 frame, so they will appear and disappear over very short periods of time and only in certain frames.

The point is that you have to search for such artifacts by rewinding videos back and forth and specifically looking for corruption in freeze frames, which is not how people actually play games.

That's the main difference from temporal accumulation, where once some ghosting happens it tends to stick around for many frames due to the recursive nature of adding pixels into the history buffer.
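To illustrate that recursion with a toy example (the blend weight is made up, not DLSS's actual value):

// Simplified temporal accumulation: the output is a blend of the new sample and
// the reprojected history, and that output becomes the next frame's history.
float accumulate(float history, float newSample) {
    const float alpha = 0.1f;  // weight of the new sample (illustrative)
    return alpha * newSample + (1.0f - alpha) * history;
}
// A wrong pixel that enters the history only decays by (1 - alpha) each frame:
// at alpha = 0.1 it takes ~22 frames to drop below 10% influence, which is why
// ghosting lingers. An interpolated frame built from just two rendered frames is
// thrown away on the next present, so its mistakes can't accumulate like this.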
 
Frame generation works with previous frame(s) only, not neighbouring ones.

It is every second frame because every 2nd frame is scaled and every other one is predicted.
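As a rough sketch of that alternation (the function names are placeholders, not the real API):

#include <cstdio>

void renderAndUpscale() { std::puts("rendered + DLSS super resolution"); }       // placeholder
void generateFrame()    { std::puts("generated by frame-generation network"); }  // placeholder

int main() {
    // Every second displayed frame comes from the renderer, every other one
    // from the generator, doubling the presented framerate.
    for (int frame = 0; frame < 8; ++frame) {
        if (frame % 2 == 0) renderAndUpscale();
        else                generateFrame();
    }
}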
 
Would the image quality of generated frames worsen the lower the base framerate is, because the difference between actual rendered frames is larger and there's more of a gap to fill? (By base framerate I mean what we are getting with DLSS 2, not native, so I'm focusing on the frame generation process only here.)

If so, then the optimal use case would be to jump from high framerates to very high framerates, say 80 -> 160. And image quality would be comparatively worse when jumping from low framerates to acceptable framerates, say 30 -> 60?
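Putting rough numbers on that gap (just frame-time arithmetic, nothing DLSS-specific):

#include <cstdio>

int main() {
    // The generator has to bridge the time between two rendered frames, so the
    // lower the base framerate, the more motion it has to invent per frame.
    const double baseFps[] = {30.0, 60.0, 80.0, 120.0};
    for (double fps : baseFps)
        std::printf("base %.0f fps -> %.1f ms between rendered frames\n",
                    fps, 1000.0 / fps);
    // 30 fps leaves 33.3 ms gaps vs 12.5 ms at 80 fps, i.e. roughly 2.7x more
    // motion to reconstruct per generated frame in the 30 -> 60 case.
}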
 
It works with the latest available frame and the previous one, and generates a frame in between them. The latest available frame and the previous one are neighbouring frames.
They're previous frame(s) when you display the predicted one.
 
Would the image quality of generated frames worsen the lower the base framerate is, because the difference between actual rendered frames is larger and there's more of a gap to fill?

If so, then the optimal use case would be to jump from high framerates to very high framerates, say 80 -> 160. And image quality would be comparatively worse when jumping from low framerates to acceptable framerates, say 30 -> 60?
That's exactly what I was thinking.
 
If it's not obvious yet, DLSS 3 results should be content dependent.
I see it working fine in something like MSFS but less so in a shooter.
It is certainly a stretch to claim that it will be "artifact city" prior to even seeing any proper review.
 
They're previous frame(s) when you display the predicted one.
Of course they are not; frames are presented as follows: previous frame --> interpolated one --> latest rendered frame. What you're implying is called extrapolation, and that's not the case here.
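The data dependency that makes it interpolation, as a toy sketch (the real network uses motion vectors and optical flow, not a plain blend; this only shows that the latest rendered frame must already exist before the generated one can be shown):

#include <cstddef>
#include <vector>

// Toy in-between frame: it needs both the previous AND the latest rendered
// frame, so the latest frame has to be held back until the generated one has
// been presented, which is where the extra latency comes from.
std::vector<float> interpolateFrame(const std::vector<float>& prev,
                                    const std::vector<float>& latest,
                                    float t = 0.5f) {
    std::vector<float> out(prev.size());
    for (std::size_t i = 0; i < prev.size(); ++i)
        out[i] = (1.0f - t) * prev[i] + t * latest[i];
    return out;
}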

Losing parts of buildings and other objects, messing up the UI and everything related to the player character itself vs. that? Agree to disagree.
Unless that's visible and distracting in motion, who cares? Frame generation is all about how gameplay looks in motion. It does not matter much if frame generation does not always look good in freeze frames from a video, because you would not be able to distinguish these artifacts during gameplay anyway.
 
Of course they are not; frames are presented as follows: previous frame --> interpolated one --> latest rendered frame. What you're implying is called extrapolation, and that's not the case here.

Whoa DLSS3 is interpolation? From the marketing materials I thought it was forward prediction. This is much less impressive.

What’s the point of inserting a fake frame before an already rendered frame? It will only increase latency for no benefit. Maybe it helps smooth motion at super low frame rates, but even then the AI/physics/input loop will still “feel” slow.
 
Of course they are not; frames are presented as follows: previous frame --> interpolated one --> latest rendered frame. What you're implying is called extrapolation, and that's not the case here.

Unless that's visible and distracting in motion, who cares? Frame generation is all about how gameplay looks in motion. It does not matter much if frame generation does not always look good in freeze frames from a video, because you would not be able to distinguish these artifacts during gameplay anyway.
No, they're not, and there's no interpolation at play.
Every second frame is generated with a neural net based on the previous (latest rendered) frame, its motion vectors, and the OFA looking at motion in the last 2 frames. Every other frame is scaled normally with DLSS 2.x. And there's no buffer for frames; they disable the pre-rendered frame queue.
 
Whoa DLSS3 is interpolation? From the marketing materials I thought it was forward prediction.
Here are the marketing materials:
"For each pixel, the DLSS Frame Generation AI network decides how to use information from the game motion vectors, the optical flow field, and the sequential game frames to create intermediate frames."

What’s the point of inserting a fake frame before an already rendered frame. It will only increase latency for no benefit.
The frame generation itself will increase latency ever so slightly, and this is why it's bundled with Reflex, so that it can reduce latency significantly by getting rid of the frame queue between the CPU and GPU. Add DLSS super resolution into the equation, and all combined will have way lower latency in comparison with native rendering.
The benefit of frame generation is more visually fluid gameplay.
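Very rough arithmetic to make that trade-off concrete (all numbers here are illustrative assumptions, not measurements):

#include <cstdio>

int main() {
    // Illustrative only: latency contributions at an assumed 60 fps render rate.
    const double frameTimeMs  = 1000.0 / 60.0;  // ~16.7 ms per rendered frame
    const double queuedFrames = 2.0;            // assumed pre-render queue depth
    std::printf("queue latency Reflex removes: ~%.0f ms\n",
                queuedFrames * frameTimeMs);
    std::printf("extra delay added by frame generation: on the order of one frame, ~%.0f ms\n",
                frameTimeMs);
    // If the removed queue costs more than the generation step adds, the
    // combination can still come out ahead of native rendering without Reflex.
}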

Maybe it helps smooth motion at super low frame rates but even then the AI/physics/input loop will still “feel” slow.
I guess we must wait for DF's and other reviews for more feedback on this.
 
Here are the marketing materials:
"For each pixel, the DLSS Frame Generation AI network decides how to use information from the game motion vectors, the optical flow field, and the sequential game frames to create intermediate frames."
Intermediate frames between scaled ones (as said, every second frame is scaled and every other one generated), but that doesn't mean they're intermediate between two already rendered frames.

The marketing material you linked is very clear, and it shouldn't be that hard to understand.
There's no interpolation, no future frames or anything like that. Generated frames are based on the previous frame, its motion vectors, and the OFA looking at the 2 previous frames to fix the motion vectors (the only thing not clear is whether that's 2 scaled frames or literally the 2 previous frames, i.e. 1 scaled and 1 generated).
 
The frame generation itself will increase latency ever so slightly, and this is why it's bundled with Reflex, so that it can reduce latency significantly by getting rid of the frame queue between the CPU and GPU. Add DLSS super resolution into the equation, and all combined will have way lower latency in comparison with native rendering.
The benefit of frame generation is more visually fluid gameplay.
So higher latency when comparing DLSS + Reflex + frame generation to DLSS + Reflex only? (This is the only reasonable comparison imo.)
 
So higher latency when comparing DLSS + Reflex + frame generation to DLSS + Reflex only? (This is the only reasonable comparison imo.)
The bigger problem compared to just latency is the fact that the generated frame doesn't have any user input, so your inputs only affect every second frame.
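In numbers (illustrative example rates):

#include <cstdio>

int main() {
    // With frame generation the display rate doubles, but fresh input is only
    // sampled for the rendered frames.
    const double displayedFps = 120.0;               // example output rate
    const double renderedFps  = displayedFps / 2.0;  // every second frame is real
    std::printf("displayed: %.0f fps, but input is reflected at only %.0f fps\n",
                displayedFps, renderedFps);
}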
 