Thank you. That confirms that the generated frame is presented before the last rendered frame. I don’t see the point besides pushing nonsense fps graphs.
So we need a new benchmark metric. RealFrames™ per second, and StatPadFrames™ per second.
> The lower the framerate the higher the latency impact.
Cyberpunk has 160ms system latency, so I don't think that additional 25ms is a huge problem...
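To put rough numbers on this exchange: a back-of-envelope sketch, assuming interpolation holds the newest rendered frame back by about half a render interval plus a small fixed generation cost. The 3 ms generation cost is an assumption for illustration; only the 160 ms figure comes from the post above.

```python
# Illustrative only: added latency from pairwise interpolation, assuming the
# newest rendered frame is held back ~half a render interval plus a fixed
# generation cost (the 3 ms cost is an assumption, not a measured figure).
def added_latency_ms(base_fps: float, generation_cost_ms: float = 3.0) -> float:
    render_interval_ms = 1000.0 / base_fps
    return render_interval_ms / 2 + generation_cost_ms

for base_fps in (120, 60, 30):
    print(f"{base_fps:>3} fps base: +{added_latency_ms(base_fps):4.1f} ms "
          f"on top of a ~160 ms end-to-end pipeline")
```

The absolute penalty grows as the base framerate drops, which is the point in the quote, while staying small relative to a 160 ms pipeline, which is the point in the reply. At VR frame budgets even a few extra milliseconds are harder to hide, which is the concern raised below.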
Image-based rendering for GPU-bound VR games makes the most sense, but this really has to be done with the help of the engine. VR especially cannot abide this latency.
> Finally understood what half frame latency means... View attachment 7042
The 0 moves?
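Since the attachment isn't visible here, a minimal sketch of what "half frame latency" usually refers to with interpolation: the generated frame G(n-1,n) has to be shown before rendered frame R(n), so R(n) is presented roughly half a render interval after it finished rendering. The numbers and pacing below are illustrative, not NVIDIA's actual scheduler.

```python
# Toy presentation timeline for pairwise interpolation (generation cost ignored).
RENDER_INTERVAL_MS = 16.7  # illustrative 60 fps render rate
HALF = RENDER_INTERVAL_MS / 2

events = []  # (presented_at_ms, label, available_at_ms)
for n in range(3):
    rendered_at = n * RENDER_INTERVAL_MS
    if n == 0:
        events.append((rendered_at, "R(0)", rendered_at))
    else:
        # G(n-1,n) can only exist once R(n) has been rendered; it is shown
        # first, and the newer real frame R(n) is held back half an interval.
        events.append((rendered_at, f"G({n-1},{n})", rendered_at))
        events.append((rendered_at + HALF, f"R({n})", rendered_at))

for presented_at, label, available_at in events:
    print(f"{label:8} ready {available_at:5.1f} ms   presented {presented_at:5.1f} ms")
```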
> Thank you. That confirms that the generated frame is presented before the last rendered frame. I don’t see the point besides pushing nonsense fps graphs.
No it doesn't. The two frames they're referring to are both previous frames and used by OFA. The rest uses one frame + its motion vectors.
The 'current frame' is already previous when the generated is displayed.

From NVIDIA:
> The DLSS Frame Generation convolutional autoencoder takes 4 inputs – current and prior game frames, an optical flow field generated by Ada’s Optical Flow Accelerator, and game engine data such as motion vectors and depth.
> Ada’s Optical Flow Accelerator analyzes two sequential in-game frames and calculates an optical flow field. The optical flow field captures the direction and speed at which pixels are moving from frame 1 to frame 2. The Optical Flow Accelerator is able to capture pixel-level information such as particles, reflections, shadows, and lighting, which are not included in game engine motion vector calculations.
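DLSS Frame Generation itself is a proprietary network, so the following is only a generic, hand-rolled illustration of the idea in the quoted text: given two already-rendered frames and a per-pixel flow field between them, you can warp each frame halfway along the flow and blend to synthesize the in-between image. All names here are mine, and a real implementation would also use the engine motion vectors and depth that NVIDIA mentions.

```python
import numpy as np

def generate_midpoint_frame(prev_frame: np.ndarray,
                            curr_frame: np.ndarray,
                            flow: np.ndarray) -> np.ndarray:
    """Naive optical-flow interpolation sketch.

    prev_frame/curr_frame: HxWx3 images, flow: HxWx2 per-pixel motion (in
    pixels) from prev_frame to curr_frame. Returns a frame halfway between
    the two; both inputs are frames that have already been rendered.
    """
    h, w = flow.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]

    def sample(img, x, y):
        # Nearest-neighbour sampling with edge clamping, for brevity.
        xi = np.clip(np.round(x).astype(int), 0, w - 1)
        yi = np.clip(np.round(y).astype(int), 0, h - 1)
        return img[yi, xi].astype(np.float32)

    # Pull the midpoint pixels back half the flow from prev, push forward
    # half the flow into curr, then average the two estimates.
    from_prev = sample(prev_frame, xs - 0.5 * flow[..., 0], ys - 0.5 * flow[..., 1])
    from_curr = sample(curr_frame, xs + 0.5 * flow[..., 0], ys + 0.5 * flow[..., 1])
    return (from_prev + from_curr) / 2
```

Disocclusions, UI elements composited after post-processing, and anything the flow field misses are exactly where this kind of interpolation tends to produce the artifacts discussed elsewhere in the thread.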
Can't seem to copy/paste the image from NVIDIA's article on my phone, but they even clearly list the frame order as 1 scaled, 2 generated, 3 scaled, 4 generated.
> No, the language is extremely clear. Nvidia says the generated frame “transitions between” the two input frames. There is no ambiguity there.
If they had the next frame to work with too, can you give a plausible explanation for how even UI elements, which don't change between two frames, get messed up?
> If they had the next frame to work with too, can you give a plausible explanation for how even UI elements, which don't change between two frames, get messed up?
Why would UI elements be involved? Is this method actually using the final frames after post-process and overlays, i.e. what is sent to the frame buffer for display? If so, that's even more useless.
I need to ask whether we have all the session data saved, but the publicly available data is quite different from that claim.
Haven't considered enough how the first frame is presented. Don't take it seriously.
> Why would UI elements be involved? Is this method actually using the final frames after post-process and overlays, i.e. what is sent to the frame buffer for display? If so, that's even more useless.
That's how it looks, at least Spiderman exhibits this (linked post from Ada-thread in one of my earlier posts).
> The bigger problem compared to just latency is the fact that the generated frame doesn't have any user input, so your inputs only affect every second frame.
I really don't think that's going to be a massive deal. The types of games that this truly matters in... will all already perform incredibly without the need for DLSS3 and fake frames.
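The input-cadence point in the quote is just arithmetic, under the usual assumption that input is only sampled for frames the engine actually renders:

```python
# With every second displayed frame generated rather than rendered, inputs are
# sampled at the rendered rate, not the displayed rate (illustrative numbers).
displayed_fps = 120                # what the fps counter shows with frame generation
rendered_fps = displayed_fps / 2   # frames that actually sample player input
print(f"{displayed_fps} fps displayed -> input sampled every {1000 / rendered_fps:.1f} ms, "
      f"same cadence as plain {rendered_fps:.0f} fps")
```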
> That's how it looks, at least Spiderman exhibits this (linked post from Ada-thread in one of my earlier posts).
How sure are we that these artifacts are not a result of YouTube video compression?
> Presumably this also allows the generation of new animation steps that wouldn't exist with native rendering - even if native was running at the same fps as DLSS 3. So potentially more fluid animations too?
Goodbye dodgy quarter frame rate animation on far away stuff.
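A toy illustration of the quarter-rate animation point (my own example, nothing engine- or DLSS-specific): if a distant object's animation only steps every 4th rendered frame, interpolated frames can insert poses the native sequence never contained. Whether DLSS 3 actually handles such cases cleanly is exactly what the post above is speculating about.

```python
# Distant-object animation stepped only every 4th rendered frame (a common
# LOD optimisation), then a generated frame inserted between each rendered pair.
rendered_poses = [p // 4 * 4 for p in range(8)]   # 0,0,0,0,4,4,4,4
displayed = []
for prev, curr in zip(rendered_poses, rendered_poses[1:]):
    displayed.append((prev + curr) / 2)   # interpolated frame between the pair
    displayed.append(curr)                # the rendered frame itself
print(displayed)  # note the extra 2.0 step that native rendering would never show
```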
> How sure are we that these artifacts are not a result of YouTube video compression?
Wouldn't they apply to more of the frames, like back to back?
> How sure are we that these artifacts are not a result of YouTube video compression?
Very sure, since they repeat with clear frames in between each, exactly, and there's constantly DLSS 2.x for reference next to it which doesn't show anything similar (as seen in said screenshots).
> I’m waiting for confirmation on this point. It’s not 100% clear whether generated frames are inserted before or after the last rendered frame. If it’s after then DLSS3 makes more sense to me. If it’s before it seems pretty useless. In that scenario going from 60 fps to 120 fps with DLSS3 will do nothing for fluidity of gameplay.
Pretty sure it’s after. Don’t need ML for prior frames.
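The "before or after" question is really the interpolation-vs-extrapolation distinction, sketched below purely to show the two options being argued about (this is not a claim about which one DLSS 3 implements):

```python
# Interpolation: G is built from R(n-1) and R(n), so R(n) must already exist
#                and the generated frame lands before the last rendered frame.
# Extrapolation: G is predicted forward from past frames and shown after R(n),
#                before R(n+1) has been rendered.
def display_order(mode: str) -> list[str]:
    if mode == "interpolation":
        return ["R(n-1)", "G(n-1,n)", "R(n)"]
    if mode == "extrapolation":
        return ["R(n-1)", "R(n)", "G(n->n+1)"]
    raise ValueError(mode)

for mode in ("interpolation", "extrapolation"):
    print(f"{mode:13}: " + " -> ".join(display_order(mode)))
```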