You pass a UI mask so it knows to ignore those pixels for frame generation. I'm just trying to find update details; it wasn't like that at launch.
Streamline Integration Framework: https://github.com/NVIDIAGameWorks/Streamline
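Conceptually, the UI color + alpha buffer mentioned above boils down to a per-pixel coverage mask telling frame generation which pixels belong to the HUD rather than the 3D scene. A toy numpy sketch of that idea (my own names and threshold, nothing to do with the actual Streamline API):

```python
# Toy sketch, not the real Streamline/DLSS interface: derive a per-pixel
# "ignore" mask from a UI color+alpha buffer so frame generation can treat
# UI-covered pixels differently from the rest of the scene.
import numpy as np

def ui_ignore_mask(ui_rgba: np.ndarray, threshold: float = 0.0) -> np.ndarray:
    """ui_rgba: HxWx4 floats in [0, 1]; returns an HxW boolean mask that is
    True wherever the UI covers the pixel (alpha above the threshold)."""
    return ui_rgba[..., 3] > threshold

# Example: a 1080p UI layer with one opaque HUD bar in the corner.
ui = np.zeros((1080, 1920, 4), dtype=np.float32)
ui[50:120, 50:400, 3] = 1.0
mask = ui_ignore_mask(ui)
print(mask.sum(), "pixels flagged as UI")
```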
It uses all this information as parameters for the model, but I think it's not directly interpolating the final image (Final Color Pass). Instead, it interpolates the hudless frame, taking Depth and Motion Vectors into consideration, then interpolates the UI (UI Color and Alpha) in a different pass, and finally merges both results to produce the final interpolated frame. The Final Color Pass may serve for additional adjustments. That's just my educated guess; they could very well be interpolating everything in a single pass while providing everything as parameters to the model. I just don't think they would do that, because working with two specialized models - one for UI and one for the scene - is way easier than trying to tweak a single generic model.
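To make that guess concrete, here is a toy numpy sketch of the two-pass split; the plain lerp is only a stand-in for whatever the real models do with depth and motion vectors, and none of this reflects Nvidia's actual implementation:

```python
# Toy sketch of the two-pass guess: interpolate the hudless scene and the
# UI layer separately, then merge them with a standard "over" composite.
import numpy as np

def lerp(a: np.ndarray, b: np.ndarray, t: float) -> np.ndarray:
    return (1.0 - t) * a + t * b

def generate_frame(hudless_a, hudless_b, ui_a, ui_b, t=0.5):
    """hudless_*: HxWx3 scene color without UI; ui_*: HxWx4 UI color+alpha."""
    scene = lerp(hudless_a, hudless_b, t)   # pass 1: interpolate the scene
    ui = lerp(ui_a, ui_b, t)                # pass 2: interpolate the UI layer
    alpha = ui[..., 3:4]
    return ui[..., :3] * alpha + scene * (1.0 - alpha)  # merge: UI over scene
```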
Watching the video, I was surprised to discover that Nvidia's frame generation is performed on the final frame, after the UI is rendered. To me that makes the Nvidia technology far worse. I had wondered why screenshots would show ghosting in UI elements.
With AMD's solution you have the option to apply FG before the UI elements are drawn? It seems crazy that Nvidia doesn't offer this. If that's true, is it a weakness of how they use the optical flow hardware to perform FG, which isn't accessible during the render stages, only on the final output?
The main difference, based on the information we have about FSR3 FG, is that FSR3 FG allows the UI to be rendered decoupled from the scene, so in theory the UI can be rendered at the display frame rate rather than at the native/engine frame rate (AMD calls the final frame rate, the one we see in statistics, the "display frame rate"). For example, with the engine running at 60 fps and frame generation doubling the output to 120 fps, the decoupled UI could in theory be refreshed at the full 120 fps.
In practice, this means that the interpolated frame will have the UI rendered inside the engine and placed on top of the final result, producing a more accurate result with regard to UI elements, without artifacts. In this "decoupled" model, the engine provides a callback, which is basically a function that FSR3 FG can call to render the UI into a buffer (a region of memory). The idea is that if you can render everything without the UI for upscaling, you should be able to render the UI without everything else for the interpolation.
Obviously, that implies rendering the UI after "Frame A" but prior to "Frame B"; once "Frame B" is done, it interpolates A and B, then places the previously rendered UI on top of the result. This is a little more work for developers, but totally doable, since the UI is very light to render (both CPU- and GPU-wise).
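To sketch that decoupled flow (toy numpy; the callback name and signature are mine, not AMD's actual FSR3 API):

```python
# Toy sketch of the decoupled model: the UI is rendered once via an engine
# callback (not interpolated) and composited over the interpolated hudless
# frame. The plain lerp stands in for the real interpolation pass.
import numpy as np
from typing import Callable

def decoupled_generated_frame(
    hudless_a: np.ndarray,                 # HxWx3 scene color of frame A, no UI
    hudless_b: np.ndarray,                 # HxWx3 scene color of frame B, no UI
    render_ui: Callable[[], np.ndarray],   # engine callback returning HxWx4 UI color+alpha
    t: float = 0.5,
) -> np.ndarray:
    ui = render_ui()                                 # real UI, rendered by the engine
    scene = (1.0 - t) * hudless_a + t * hudless_b    # interpolated hudless scene
    alpha = ui[..., 3:4]
    return ui[..., :3] * alpha + scene * (1.0 - alpha)  # real UI over interpolated scene
```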
DLSS3 FG still has problems with UI, but that is because it's interpolating the UI, and although this isn't a problem exclusive to AI, it's well known that AI is bad at reconstructing small elements in general (glyphs, lines, vector paths, etc.). Have you ever noticed that we still don't have the technology to upscale blurry text without destroying it completely?
Nvidia is trying to solve the UI problems by tweaking the model, and probably with some specialized algorithms, but I don't think they can ever fully solve the problem if they rely entirely on AI for this job; however, things may change in the future. Using this decoupled model should work for Nvidia as well, but whether to adopt the same strategy is entirely up to them.
FSR3 FG certainly has other modes that may cause artifacting as well, but at least developers get a choice here.