AMD's FSR 3 upscaling and frame interpolation *spawn

Ok good. I hope HUB's video doesn't imply that inaccuracy.

You pass a UI mask so it knows to ignore those pixels for frame generation. I'm just trying to find the update details - it wasn't like that at launch.
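As a rough illustration of what a UI mask buys you, here is a minimal sketch. All names are hypothetical; the real DLSS-G/FSR3 interfaces operate on GPU resources, not NumPy arrays, and the naive half-way blend below stands in for the actual motion-compensated interpolation.

```python
import numpy as np

def interpolate_with_ui_mask(frame_a, frame_b, ui_mask, ui_pixels):
    # Naive 50/50 blend in place of the real optical-flow-driven warp.
    mid = (frame_a.astype(np.float32) + frame_b.astype(np.float32)) / 2.0
    # Wherever the mask flags a UI pixel, keep the rendered UI untouched
    # instead of interpolated scene content, avoiding HUD ghosting.
    out = np.where(ui_mask[..., None], ui_pixels, mid)
    return out.astype(frame_a.dtype)
```

The point is simply that masked pixels bypass interpolation entirely, so HUD elements can't smear between frames.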

5.0 TAG ALL REQUIRED RESOURCES

Buffers to tag

DLSS-G requires depth and motion vectors buffers.


Additionally, for maximal image quality, it is critical to integrate UI Color and Alpha or Hudless buffers:


  • UI Color and Alpha buffer provides significant image quality improvements on UI elements like name plates and on-screen hud. If your application/game has this available, we strongly recommend you integrate this buffer.
  • If UI Color and Alpha is not available, Hudless integration can also significantly improve image quality on UI elements.
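The required-versus-recommended split in the quoted documentation could be expressed as a simple validation step. This is an illustrative sketch only - the buffer names and function are made up, and the real DLSS-G/Streamline integration is a C++ API, not Python.

```python
# Hypothetical buffer names, for illustration of the doc's requirements.
REQUIRED_BUFFERS = {"depth", "motion_vectors"}
QUALITY_BUFFERS = {"ui_color_and_alpha", "hudless"}  # strongly recommended

def check_tags(tagged):
    """tagged: set of buffer names the app has provided this frame."""
    missing = REQUIRED_BUFFERS - tagged
    if missing:
        raise ValueError(f"frame generation needs: {sorted(missing)}")
    # Report whether either UI-quality buffer is present; without one,
    # HUD elements get interpolated along with the scene and may ghost.
    return bool(QUALITY_BUFFERS & tagged)
```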

 
Watching the video, I was surprised to discover that Nvidia's frame generation is performed on the final frame, after the UI is rendered. To me that makes the Nvidia technology far worse. I had wondered why screenshots would show ghosting in UI elements.

Hasn't that been obvious since day one from both videos and screenshots?
 
Ouch, even DLSS3 + frame gen looks like crap with the "swimming" and broken image. Long way to go before any frame gen tech is usable by me (which is not to say that others don't find it useful).

Regards,
SB

The artifacts in the FSR3 image are due to FSR2's upscaling, not the frame generation. That motion in Immortals in particular behaves very poorly with FSR2.

As for the DLSS3 image, freezing the individual frame where you see egregious artifacts is exactly the evaluation problem Alex mentioned in his first look at the technology: pixel-peeping frozen frames in a compressed stream does not accurately convey the actual experience in a game scenario. Latency aside, you want a starting point of 60+ fps before engaging frame generation precisely because, when there are elements where the artifacting is more prominent, they're on-screen for a tiny fraction of the time.

Yes, perfect recreation of every frame is the end goal, and the fewer artifacts the better in any reconstruction tech; lord knows I've complained about DLSS2 artifacts on this site before. But it has to be looked at in context. Framegen is a very difficult technology to communicate in a video: you can't convey what the latency feels like, and you can't really convey what it looks like either when most analysis will be done on 60fps displays. Frame generation can produce artifacts, but not only is each generated frame on a 120+fps display shown for just ~8ms, the degree of artifacting can vary quite a bit frame to frame - the most egregiously artifacted frames may constitute a tiny fraction of the whole even over a several-second span. This is quite different from, say, artifacts from upsampling, which can exist on every single frame.
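The display-time arithmetic behind that argument is simple; here is a quick sketch (the ~8ms figure falls straight out of the refresh rate):

```python
def frame_time_ms(fps):
    # Each presented frame is on screen for 1000/fps milliseconds.
    return 1000.0 / fps

def artifact_visibility(artifacted_frames, total_frames):
    # Fraction of total screen time occupied by badly artifacted
    # generated frames (every frame is shown for the same duration).
    return artifacted_frames / total_frames
```

At 120fps each frame is shown for about 8.3ms, so e.g. 3 badly artifacted frames in a 600-frame (five-second) span occupy only 0.5% of the viewer's screen time.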


 
You pass a UI mask so it knows to ignore those pixels for frame generation. I'm just trying to find the update details - it wasn't like that at launch.


It uses all this information as parameters for the model, but I think it's not directly interpolating the final image (Final Color Pass). Instead, it interpolates the Hudless frame, taking Depth and Motion Vectors into consideration, interpolates the UI (UI Color and Alpha) in a different pass, and finally merges both results to produce the final interpolated frame, with the Final Color Pass perhaps serving for additional adjustments. But that's just my educated guess; they could well be interpolating everything in a single pass while providing everything as parameters to the model. I just don't think they would do that, because working with two specialized models - one for the UI and one for the scene - is way easier than trying to tweak a single generic model.
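The final merge step in that guess boils down to a standard alpha "over" composite. Here is a sketch of the guess above - not NVIDIA's actual pipeline, and the function name is invented:

```python
import numpy as np

def composite_ui_over(hudless_interp, ui_rgba):
    # Standard alpha "over" operator: place the (separately handled)
    # UI Color and Alpha buffer on top of the interpolated hudless frame.
    alpha = ui_rgba[..., 3:4].astype(np.float32) / 255.0
    ui_rgb = ui_rgba[..., :3].astype(np.float32)
    scene = hudless_interp.astype(np.float32)
    out = ui_rgb * alpha + scene * (1.0 - alpha)
    return out.astype(np.uint8)
```

Wherever the UI's alpha is zero, the interpolated scene shows through untouched; opaque UI pixels fully replace the scene.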
Watching the video, I was surprised to discover that Nvidia's frame generation is performed on the final frame, after the UI is rendered. To me that makes the Nvidia technology far worse. I had wondered why screenshots would show ghosting in UI elements.

With AMD's solution you have the option to apply FG before UI elements? Seems crazy Nvidia doesn't offer this. If that's true, is it a weakness of how they're using the optical flow hardware to perform FG - hardware that isn't accessible during the render stages, only on the final output?
The main difference, based on the information we have about FSR3 FG, is that FSR3 FG allows the UI to be rendered decoupled from the scene, so in theory the UI can be rendered at the display frame rate rather than at the native/engine frame rate (AMD calls the final frame rate - the one we see in statistics - the "display frame rate").

In practice, this means that the interpolated frame will actually have the UI rendered inside the engine and placed on top of the final result, producing a more accurate result for UI elements, without any artifacts. In this "decoupled" model, the engine provides a callback - basically a function that FSR3 FG can call to render the UI into a buffer (a region in memory). The idea is that if you can render everything without the UI for upscaling, you should be able to render the UI without everything else for the interpolation.

Obviously, that implies rendering the UI after "Frame A" but prior to "Frame B"; once "Frame B" is done, FSR3 interpolates A and B, then places the previously rendered UI on top of the result. This is a little more work for developers, but totally doable - the UI is very light to render (both CPU- and GPU-wise).
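In pseudocode, the decoupled flow described above might look like this. Every name is hypothetical - FSR3's real API is C++ and differs in detail - and the stage functions are passed in so the skeleton stays self-contained:

```python
def present_sequence(frame_a, frame_b, render_ui, interpolate, compose):
    # Engine-provided callback renders only the UI into its own buffer,
    # after frame A is done but before frame B is finished.
    ui = render_ui()
    # Once frame B exists, interpolate the two hudless frames...
    mid = interpolate(frame_a, frame_b)
    # ...then place the already-rendered UI on top of both the
    # interpolated frame and the real frame before presenting.
    return [compose(mid, ui), compose(frame_b, ui)]
```

The key property is that the UI buffer is produced once per displayed pair and composited, never interpolated, so HUD elements stay artifact-free.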

DLSS3 FG still has problems with UI, but that is because they're interpolating the UI, and although the problem isn't exclusive to AI, it's well known that AI is bad at reconstructing small elements in general (glyphs, lines, vector paths, etc.). Have you ever noticed that we still don't have the technology to upscale blurry text without destroying it completely?

Nvidia is trying to solve the UI problems by tweaking the model, and probably with some specialized algorithms, but I don't think they can ever fully solve the problem if they rely completely on AI for this job - though things may change in the future. Using this decoupled model should work for Nvidia as well, but it's entirely up to them whether to adopt the same strategy.

FSR3 FG certainly has other modes that may cause artifacting as well, but at least developers got a choice here.
 
Both games with FSR3 so far run the HUD at the internal frame rate - I do not think many will opt to run the HUD at native, due to CPU concerns and because it does not make much sense if you have a diegetic/3D HUD.
 
To be fair to DLSS 3, wasn't it marketed for when you're CPU limited and not necessarily GPU limited?

Or is my memory fuzzy.
You can use FG in every situation.

There are two best cases:
being CPU limited, and being heavily GPU limited while staying well under the upper VRR range limit (say 100 FPS -> 160 FPS on a 240Hz display). The latter case becomes a problem when you hit the VRR limit: native FPS is reduced to half the refresh rate and latency gets worse (instead of 100 FPS it would fall to 80 FPS on a 160Hz display).
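The arithmetic behind that VRR-cap example can be sketched as follows, assuming FG ideally doubles the presented frame rate (the 100 -> 160 figure in the post reflects real-world FG overhead; this sketch ignores it):

```python
def fg_presented_fps(native_fps, display_hz):
    # Frame generation ideally doubles the presented frame rate, but the
    # display cannot show more than its refresh rate. When the doubled
    # rate would exceed it, the native rate is forced down to half the
    # refresh rate, which is where the extra latency comes from.
    if native_fps * 2 <= display_hz:
        return native_fps, native_fps * 2
    return display_hz / 2, display_hz
```

With 100 FPS native on a 160Hz display, the cap drags the native rate down to 80 FPS to present 160; on a 240Hz display the same input stays at 100 FPS native.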
 
You can use FG in every situation.

There are two best cases:
being CPU limited, and being heavily GPU limited while staying well under the upper VRR range limit (say 100 FPS -> 160 FPS on a 240Hz display). The latter case becomes a problem when you hit the VRR limit: native FPS is reduced to half the refresh rate and latency gets worse (instead of 100 FPS it would fall to 80 FPS on a 160Hz display).

I didn't really ask that.

My comment was about what DLSS 3 was originally marketed for, which as I remember was only CPU-limited situations.
 

From your link

"NVIDIA DLSS 3 Can Double CPU Bound Performance

DLSS Frame Generation executes as a post-process on the GPU, allowing the AI network to boost frame rates even when the game is bottlenecked by the CPU. For CPU-limited games, such as those that are physics-heavy or involve large worlds, DLSS 3 allows GeForce RTX 40 Series graphics cards to render at up to twice the frame rate over what the CPU is able to compute. In Microsoft Flight Simulator for example, with the 1:1 real-world recreation of our planet, DLSS 3 boosts FPS by up to 2X"

I always remember it being pushed more for CPU limited scenarios when it released rather than GPU ones.
 
From your link

"NVIDIA DLSS 3 Can Double CPU Bound Performance

DLSS Frame Generation executes as a post-process on the GPU, allowing the AI network to boost frame rates even when the game is bottlenecked by the CPU. For CPU-limited games, such as those that are physics-heavy or involve large worlds, DLSS 3 allows GeForce RTX 40 Series graphics cards to render at up to twice the frame rate over what the CPU is able to compute. In Microsoft Flight Simulator for example, with the 1:1 real-world recreation of our planet, DLSS 3 boosts FPS by up to 2X"

I always remember it being pushed more for CPU limited scenarios when it released rather than GPU ones.

"Boosting game performance over 4 times, compared to brute force rendering". Yes, it can help in CPU-limited scenarios, which is a unique feature of frame generation compared to upscaling, but it certainly wasn't marketed as only for CPU-limited situations. Nvidia marketed it as a gen-on-gen performance boost overall.
 
This is 200% zoom at 6% speed, and the output has nothing to do with FSR 3; it's an FSR 2 artifact that existed prior to FSR 3. I've seen the FSR 2+3 implementation myself, and while it's not as good as DLSS, it's more than fine at sensible frame rates.

I don't care if it's because of FSR2 or 3.

I care about what's presented to me on my screen, and what's in that screenshot is what was presented to people.

And FYI - it's noticeable in actual gameplay too.
 
I don't care if it's because of FSR2 or 3.

I care about what's presented to me on my screen, and what's in that screenshot is what was presented to people.

And FYI - it's noticeable in actual gameplay too.
In the process of bashing the wrong target, you forgot that framegen ("FSR3") can be used without upscaling ("FSR2"), which frees you from those issues. But that's not important, is it?
 