Digital Foundry Article Technical Discussion [2023]

I wonder if the ability to generate intermediate frames has given them the confidence to spend more GPU resources on higher quality upscaled frames, improving image quality further?

Of course they may have simply done both at the same time... but again, I'm skeptical. Guess we'll find out soon enough. I'm particularly excited for what this tech could provide for handheld PCs like the Deck, Ally, and future iterations.

This quote from the DF article:

A combination of motion vector input from FSR 2 and optical flow analysis is used. DLSS 3 uses a hardware block to achieve the latter, of course, while FSR 3 uses software instead, running using asynchronous compute. The more a game uses async compute, the less resources there are for FSR 3 meaning that the time taken to generate the interpolated frame is longer.

That makes me think two things.

1. The older GPUs this works on might see a massively reduced uplift, as they won't have the spare compute available to process the optical flow data.

2. How much of an improvement are compute-heavy engines like UE5 going to see, even on big RDNA 3 GPUs?

The optical flow analysis might become a bottleneck and limit how much of a boost FSR 3 gives, and remember compute is also used for RT work.

So AMD GPUs will have graphics, RT and now frame generation all fighting for those compute cycles.

At least on Nvidia the optical flow analysis runs on a dedicated hardware unit, so it shouldn't cause a bottleneck.
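Purely to illustrate that contention argument, here's a tiny back-of-envelope model (my own made-up numbers and cost assumptions, nothing AMD has published) of how shrinking spare async compute could eat into the frame-gen uplift:

```python
# Toy model of the point above: the less spare async compute a game leaves,
# the longer the software interpolation pass takes, and the smaller the uplift.
# All numbers are hypothetical.

def fsr3_uplift(render_ms, interp_ms_idle, async_load):
    """render_ms: time to render one real frame.
    interp_ms_idle: assumed cost of one generated frame on an otherwise idle GPU.
    async_load: 0..1, fraction of async-compute capacity the game already uses
                (RT, GI, post-processing, etc.)."""
    spare = max(1.0 - async_load, 0.05)
    interp_ms = interp_ms_idle / spare   # interpolation slows as spare compute shrinks
    base_fps = 1000.0 / render_ms
    # Crude simplification: treat rendering and interpolation as sharing one
    # budget, so each presented pair of frames costs render_ms + interp_ms.
    out_fps = 2000.0 / (render_ms + interp_ms)
    return base_fps, out_fps

for load in (0.1, 0.5, 0.8):
    base, out = fsr3_uplift(render_ms=16.7, interp_ms_idle=3.0, async_load=load)
    print(f"async load {load:.0%}: {base:.0f} fps -> ~{out:.0f} fps with frame gen")
```

With those made-up figures the uplift shrinks from roughly 1.7x to barely anything as async load climbs, which is the worry for compute-heavy engines and smaller GPUs.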
 
Did they improve the quality of FSR2 in FSR3?

I'm doubting its ability to remain as sharp and clear as DLSS while in motion... which is where FSR2 really breaks down, especially at lower internal resolutions. If they've managed to improve that, and the frame gen holds up, then massive kudos to them.
The gentlemen of DF seem to think FSR3 holds up against DLSS 3.5, though I believe they said the upscaling itself was FSR 2 quality. So perhaps there have been additional optimizations to FSR 2.
 
I will 100% use FSR3, even if the quality is not quite as good as DLSS. The only exception to that might be if it just had unbearable ghosting. I generally play at very high framerates, so the artifacts from DLSS and FSR are not as noticeable because the difference between frames is smaller. I'll likely be trying to get from 100+ to 200+ fps with this, if possible.
 
To be fair... FSR2 already achieves similar frame times to DLSS2 (similar framerate gains) and yet the output is obviously worse, so I'm not sure that doubling framerates with intermediate frames is going to make it any better. You're still rendering a similar number of "genuine" frames.

Am I wrong on that? Maybe it blurs better? Or maybe they just improved the algorithm :D
 
I mean the artifacts from DLSS upscaling and FSR2 seem to look worse if your input framerate is 60 vs 120. The lower your framerate, the bigger the difference between frames is going to be, probably making it harder to nicely reuse samples while upscaling, causing more artifacts. The same is going to be true with DLSS frame generation and FSR3. They recommend at least 60 fps as the input, because it'll be easier to generate frames from two frames with smaller differences. So if I'm playing a game at 120 fps and turn on FSR3 to try to get to 240, it'll probably look a lot cleaner than if I was playing at 60 and trying to get to 120.
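To put a rough number on that, the motion the interpolator has to bridge between two real frames scales inversely with the input framerate (illustrative figures only, the pan speed is made up):

```python
# Illustrative arithmetic: how far a feature moves between two consecutive
# real frames at different input framerates.

def per_frame_motion(px_per_second, input_fps):
    """Pixels of motion between two consecutive real frames."""
    return px_per_second / input_fps

for fps in (30, 60, 120):
    gap = per_frame_motion(px_per_second=1200, input_fps=fps)  # a fast camera pan
    print(f"{fps} fps input: ~{gap:.0f} px gap per generated frame")

# 30 fps -> 40 px, 60 fps -> 20 px, 120 fps -> 10 px.  Smaller gaps are easier
# for motion vectors and optical flow to cover, hence fewer visible artifacts.
```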
 
I'll be real... my expectation is that developers will use this to bump from ~30 to 60/120. So I'm expecting it to look similar to FSR2 as it stands, even at double the framerate.
 
I doubt any console games will use it to go from 30 to 60. It'll probably look horrible. What I do think might happen is they use it for 120 fps modes instead of trying to manage resolution and settings. 120 fps mode will just be frame generation turned on, which might not be ideal for multiplayer games.

On PC it's really up to the user. They just add the technology and the user adjusts their settings. I'll happily turn settings to low and turn on frame generation to get to 240 Hz and match my display, or to power through CPU-limited games. Ideally I'll try to get 130-140 fps consistently and then turn it on to hit 236 capped, so I can stay within G-Sync and have overhead to handle that async compute cost.
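For what it's worth, the cap math I'm describing looks like this (my own reasoning for my setup, not an official FSR 3 or G-Sync guideline):

```python
# Sketch of the cap math from the post above.  Numbers are for a 240 Hz panel.

refresh_hz = 240
vrr_margin = 4                           # stay a few fps under max refresh so VRR stays engaged
output_cap = refresh_hz - vrr_margin     # 236 fps presented
real_fps_needed = output_cap / 2         # ~118 real frames/s with 1:1 frame generation
render_target = 135                      # aim for ~130-140 fps before enabling frame gen

print(f"output cap: {output_cap} fps, real frames needed: {real_fps_needed:.0f} fps")
print(f"headroom left for the interpolation's async-compute cost: "
      f"{render_target - real_fps_needed:.0f} fps")
```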
 
The games aren't technically locked to exactly 30 anyway. So if there is some method on console to inject enough frames to bring what's there up to 60, that would be great for the consoles.
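To be clear, FSR 3 as announced only generates one frame per real frame pair, so this is purely a hypothetical sketch (made-up helper, not any real API) of what a variable-ratio mode would have to decide:

```python
# Hypothetical only: a variable-ratio mode would have to decide, per real
# frame, how many generated frames to slot in to fill a 60 Hz cadence.

DISPLAY_HZ = 60
SLOT_MS = 1000.0 / DISPLAY_HZ            # ~16.7 ms per displayed frame

def frames_to_inject(real_frame_ms):
    """Generated frames needed after a real frame so every 60 Hz slot shows something new."""
    slots_spanned = round(real_frame_ms / SLOT_MS)
    return max(slots_spanned - 1, 0)

for ms in (33.3, 40.0, 50.0):            # a "30 fps" game that isn't 30 on the dot
    print(f"real frame of {ms:.1f} ms -> inject {frames_to_inject(ms)} generated frame(s)")
```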
 
The "for every DX11/DX12 game" part only works at the driver level on AMD cards. That's a really good feature though.
It’s a big lift technically since all consoles should fall under this umbrella.

Some of those older games could get quite a boost. Only wish this worked well with 30 fps titles, since those were the majority of last generation.
 
Can FSR 3 work without upscaling being used? (As I believe FG can be on Nvidia?)
Hmmm, I'm fairly sure it should be able to. They DID say that the frame generation uses the motion vector information from FSR2... but that doesn't necessarily mean that FSR2 needs to be active, just that the game (which supports FSR2) will need to provide the motion vector info for FSR3 to work natively.
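A hypothetical way to picture it (this is NOT the real FSR 3 API, just a sketch of the data dependencies): the interpolation pass needs the same motion vector and depth inputs an FSR 2 integration already wires up, regardless of whether the upscaling pass itself runs.

```python
# Hypothetical sketch, not the real FSR 3 interface: the frame-generation pass
# depends on the game's motion vectors and depth (the same data an FSR 2
# integration already provides), not on the upscaler actually being enabled.

from dataclasses import dataclass
from typing import Any

@dataclass
class FrameInputs:
    color: Any           # final color buffer: native, FSR 2 output, or something else
    motion_vectors: Any  # per-pixel game motion vectors, as consumed by FSR 2
    depth: Any

def generate_intermediate(prev: FrameInputs, curr: FrameInputs) -> Any:
    """Placeholder for the interpolation pass: combine game motion vectors with
    software optical flow to synthesize a frame between prev and curr."""
    raise NotImplementedError("illustrative stub only")

# Nothing above requires `color` to have come from the FSR 2 upscaler; in
# principle it could be a native-resolution frame, which is why decoupling
# seems plausible, though unconfirmed.
```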
 
If I could use DLSS upscaling with FSR3 frame generation, I'd be very happy, but I doubt that'll work.
 
I agree with point 1 about the older GPUs. Most other games are going to be console-first games anyway, so I'm not sure how big of a deal it's going to be. Outside of RT, most of these GPUs on AMD's side are going to be underutilized for the rest of this generation, IMO.
 