Nvidia DLSS 3 antialiasing discussion

Add DLSS super resolution into the equation, and the combination will have much lower latency than native rendering.
Also, DLSS 3 is just an added mode on top of DLSS 2: it benefits from the latency reduction gained by DLSS 2, and also from the latency reduction of Reflex.

So, it will probably look like this:

1080p native latency: 9ms
2160p native latency: 20ms
1080p to 2160p DLSS2 upscale latency: 10ms
1080p to 2160p DLSS3 frame generation latency: 12ms
1080p to 2160p DLSS3 + Reflex latency: 11ms
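Those hypothetical numbers can be turned into a quick back-of-the-envelope sketch. This is illustrative only; the 2 ms generation cost and the "one held frame" model are assumptions, not measurements.

```python
def frame_time_ms(fps: float) -> float:
    """Frame time in milliseconds at a given framerate."""
    return 1000.0 / fps

def framegen_added_latency_ms(source_fps: float, gen_cost_ms: float = 2.0) -> float:
    """Rough extra delay from interpolation-style frame generation:
    the newest rendered frame is held back about one source frame
    time so a generated in-between frame can be shown first, plus
    the generation cost itself (gen_cost_ms is an assumed figure)."""
    return frame_time_ms(source_fps) + gen_cost_ms

# At a 60 FPS source, one held frame is ~16.7 ms, so interpolation
# costs roughly one source frame time on top of the upscaling pipeline.
penalty = framegen_added_latency_ms(60)
```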
 
Maybe, but I'm just trying to understand the basics first, since there are some conflicting statements about this. "No latency."

It shouldn't be a maybe: the extra frame comes solely from the GPU with no CPU involvement, so there can't be new inputs, can there?

So, it will probably look like this:

1080p native latency: 9ms
2160p native latency: 20ms
1080p to 2160p DLSS2 upscale latency: 10ms
1080p to 2160p DLSS3 frame generation latency: 12ms
1080p to 2160p DLSS3 + Reflex latency: 11ms
That doesn't take into account that generated frames don't carry user input, so they're effectively pure added latency
 
That doesn't take into account that generated frames don't carry user input, so they're effectively pure added latency
No different from pre-rendered frames: the user never actually has control over every single frame, just over every "sum" of frames, which varies with the animation and scene composition.

See Red Dead Redemption 2 for a clear example of this.
 
And there's no buffer for frames, they disable pre-rendered frame queue
Gosh, the pre-rendered frame queue and frame buffering (double, triple with v-sync, or any other) prior to display are totally different things, and you somehow mixed them all up :)

The marketing material you linked is very clear, and it shouldn't be that hard to understand.
Agree, it is very clear and easy to grasp ;)

So higher latency when comparing DLSS + Reflex + frame generation to DLSS + Reflex only?
Yes, all things being equal, DLSS 2.0 + Reflex alone will be better latency-wise.

The bigger problem, compared to just latency, is the fact that generated frames don't carry any user input
Which isn't a problem at all, because the pre-rendered frame queue you've mentioned already captures input for up to 4 frames; by getting rid of it there are already huge latency savings, and DLSS's lower-resolution rendering decreases latency even further. Have you ever seen end-to-end input latency numbers? They are not 16 ms in your typical 60 FPS game, but rather close to 80 ms. What a shock!
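The ~80 ms figure becomes plausible once you add up the whole pipeline. Every stage value below is an assumption chosen for illustration (not a measurement of any particular game), but the sum lands in the right ballpark.

```python
frame = 1000.0 / 60.0  # ≈16.7 ms per frame at 60 FPS

# Illustrative end-to-end input latency budget; stage costs are assumed.
pipeline_ms = {
    "input sampling / OS":  5.0,
    "CPU simulation":       frame,
    "pre-rendered queue":   2 * frame,  # a couple of queued frames
    "GPU render":           frame,
    "scanout / display":    10.0,
}

total_ms = sum(pipeline_ms.values())  # comes out near the ~80 ms quoted above
```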

1080p native latency: 9ms
2160p native latency: 20ms
1080p to 2160p DLSS2 upscale latency: 10ms
1080p to 2160p DLSS3 frame generation latency: 12ms
1080p to 2160p DLSS3 + Reflex latency: 11ms
Yes, numbers may vary of course, but in general you are right.
 
Gosh, pre-rendered frame queue and frames buffering (double, triple with v-sync or any other) prior to displaying things are totally different things, and you somehow mixed them all :)
Double/triple buffering etc. are completely separate from any of this; I seriously didn't think you would mean those. English isn't my native language, so I'll gladly try to clarify anything I write that might be misunderstood
 
So, like it is today? Input latency is not bound to render time.
No, not like today. Even when input latency isn't bound to render time, your inputs affect every frame. With Frame Generation they only affect every 2nd frame.
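The "every 2nd frame" point can be sketched with a toy display stream (a hypothetical model, not any real API): with 2x frame generation, each generated frame reuses the input already sampled for the surrounding real frames.

```python
def displayed_frames(n_rendered: int):
    """Toy model of 2x frame generation: between every pair of real
    rendered frames, one generated frame is shown that carries no
    new user input."""
    out = []
    for i in range(n_rendered - 1):
        out.append(("rendered", i))    # reflects input sampled for frame i
        out.append(("generated", i))   # interpolated, no new input
    out.append(("rendered", n_rendered - 1))
    return out

frames = displayed_frames(4)
real = [f for f in frames if f[0] == "rendered"]
# 7 frames reach the display, but only 4 of them reflect user input.
```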
 
Maybe, but I'm just trying to understand the basics first, since there seem to be some conflicting statements about this. "No latency."
It seems there is confusion about the word "interpolation". Some people think of it as a method of generating the final frame, i.e. as if it were done with a linear combination of 2 frames like on some TVs, while other people, myself included, use it for the concept of generating intermediate frames. The methods may vary, of course; in the case of DLSS 3 it's optical flow + in-game motion vectors + a neural net that does the heavy lifting of generating the new intermediate frame.
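To make the "linear combination of 2 frames" reading concrete, here is the naive TV-style blend. To be clear, DLSS 3 does not work this way (per the above, it uses optical flow, game motion vectors, and a neural net); this only illustrates what that reading of "interpolation" means.

```python
def blend(frame_a, frame_b, t=0.5):
    """Per-pixel linear mix of two equal-sized grayscale frames
    (plain lists of floats stand in for real image buffers)."""
    return [(1.0 - t) * a + t * b for a, b in zip(frame_a, frame_b)]

# The halfway frame between two 2-pixel frames:
mid = blend([0.0, 100.0], [100.0, 200.0])  # [50.0, 150.0]
```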
 
double triple etc buffering are completely separate from any of this, seriously didn't think you would mean those
I mentioned double and triple frame buffering because this is exactly the type of buffering that would be required for a frame generation algorithm that attempts to insert frames in between other frames (since it would obviously need to hold on to the latest rendered frame). The pre-rendered queue is a different story altogether: that's about buffering CPU commands and data for follow-up frames.
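A minimal sketch of why that extra held frame matters: the generator can only produce an in-between frame once the *next* real frame exists, so each rendered frame reaches the display one slot late. This is a toy presentation loop, not any real swapchain API.

```python
def present_with_interpolation(rendered):
    """Toy presentation loop: hold each rendered frame until its
    successor arrives, then show the held frame followed by a
    generated frame in between the two."""
    shown, held = [], None
    for frame in rendered:
        if held is not None:
            shown.append(held)                     # real frame, shown late
            shown.append(f"gen({held}->{frame})")  # interpolated frame
        held = frame                               # newest frame stays buffered
    return shown

# Note R2 is still sitting in the buffer when the loop ends.
out = present_with_interpolation(["R0", "R1", "R2"])
# out == ['R0', 'gen(R0->R1)', 'R1', 'gen(R1->R2)']
```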
 
They're intermediate frames between upscaled ones (as said, every second frame is upscaled and every other one is generated), but that doesn't mean they're intermediate between already-rendered frames.

I’m waiting for confirmation on this point. It’s not 100% clear whether generated frames are inserted before or after the last rendered frame. If it’s after then DLSS3 makes more sense to me. If it’s before it seems pretty useless. In that scenario going from 60 fps to 120 fps with DLSS3 will do nothing for fluidity of gameplay.
 
The lower the framerate the higher the latency impact.
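That scaling is easy to put numbers on: the held-frame penalty is roughly one source frame time (an approximation that ignores generation cost), so it balloons at low framerates.

```python
# Held-frame penalty ≈ one source frame time at each source framerate.
penalty_ms = {fps: 1000.0 / fps for fps in (30, 60, 120)}
# 30 FPS: ~33.3 ms, 60 FPS: ~16.7 ms, 120 FPS: ~8.3 ms
```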

Image-based rendering makes the most sense for GPU-bound VR games, but this really has to be done with the engine's help. VR especially cannot tolerate this latency.
 