Digital Foundry Article Technical Discussion [2025]

I see various people commenting on and showing the artifacting and other issues that DLSS4 exhibits in this DF video, but I really hope they keep in mind what DF were careful to point out about frame persistence being much lower. Once you're in the 240Hz+ zone, each frame is on screen so briefly that I'm sure most people wouldn't see the artifacts, or wouldn't think they're that bad. Their capture setup isn't even capturing half the frames the GPU produces, and YouTube can't display every frame captured either... and that's at a 50% speed reduction meant to bring more attention to these very issues.

Obviously there are artifacts, and there are still some obvious issues, but we're not getting a fully representative experience this way. We don't play games zoomed 400x into a group of pixels, and we don't slow things down to half speed to show every frame over a period of time either. So it's more important than ever that the player experience is discussed in better detail. How does it feel? How does it look? How does it present to the player compared to the previous generation of products? Not just numbers, because those are becoming increasingly irrelevant every generation.
 
According to the DF video, latency for FGx2 seems to be 51ms average, 55ms for FGx3 and 57ms for FGx4, which imho isn't bad at all, although I haven't seen native latency without FG; that'd be the most interesting number.
 
Yes, but the point I think Andrew was making with DLSS4 is that frame generation is now even further removed from the latency characteristics you'd get from a game running at that same framerate natively. That would still be the case even if DLSS4 added no latency at all over DLSS3.

So showing something running at "240fps" with DLSS3 would mean you're getting the latency of that game running at 120fps without framegen. But if you have a game you're advertising as '240fps with DLSS4', that could mean you'd be getting the latency of that game running at ~60fps.
Not quite 60fps lol. But the point is made.
Whether you add 1 more frame or 3 more frames, the latency isn't multiplicative.
So say you have a 30fps base.
That's ~33ms per frame, and that latency largely carries over when frame gen takes you to 60fps.
When you go to 120fps it will still sit around 33ms, maybe 40ms depending on GPU load. Instead of generating 1 frame in between it generates 3, but that doesn't increase latency by 4x, since you're also cutting the time between displayed frames by 4x.
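
A rough back-of-the-envelope sketch of that relationship (purely illustrative; the `pipeline_frames` simplification is an assumption, and real frame pacing, Reflex and GPU load will shift these numbers):

```python
# Rough illustration only: frame generation multiplies displayed frames,
# but latency tracks the base (rendered) framerate.

def displayed_fps(base_fps: float, fg_multiplier: int) -> float:
    """Displayed framerate with an interpolating frame generator."""
    return base_fps * fg_multiplier

def approx_latency_ms(base_fps: float, pipeline_frames: float = 1.0) -> float:
    """Crude latency floor: some number of base-rate frame times."""
    return pipeline_frames * 1000.0 / base_fps

base = 30.0
for mult in (1, 2, 3, 4):
    print(f"{base:.0f} fps base x{mult} -> {displayed_fps(base, mult):.0f} fps shown, "
          f"~{approx_latency_ms(base):.0f} ms latency floor")
# 30 fps base x4 -> 120 fps shown, but the ~33 ms floor doesn't move
```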

But there is value in having the frame rate reach the maximum refresh rate of the monitor. I think this part is being missed in the discussion. Motion resolution is a critical piece of image quality as well. A higher refresh rate isn't just smoother; it can also bring significantly higher motion clarity.
 
I agree with this in principle but have no idea how you properly communicate the nuance to the general public.
I mean... just call it motion interpolation. People buying $2000 (or even $550) GPUs generally know what motion interpolation is; TVs have had it for many years now.

What makes it complicated is that Nvidia is trying to take this feature and present it as a genuine 4x performance increase, which for the reasons discussed is not exactly true.
 
According to the DF video, latency for FGx2 seems to be 51ms average, 55ms for FGx3 and 57ms for FGx4, which imho isn't bad at all, although I haven't seen native latency without FG; that'd be the most interesting number.
Yes, FG doesn't actually add much latency at all, but it doesn't *decrease* latency like a real doubling of frames would. That's my principal issue with calling interpolation 'performance', and others here are going down the same road.
 
FG is best used when frame rate is already high to start with.

60fps or more for SP games
100fps or more for MP games

Goal is to get you closer to your monitor's native refresh rate so you can achieve maximum motion clarity. That's all FG does.

The latency you feel tracks your starting baseline fps more than anything else. Ideally, a counter that breaks your framerate apart into native fps and FG-derived fps would make it easier for users and reviewers (see the sketch below).
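
Something like this hypothetical split is all such a counter would need; the names, and the assumption that the overlay knows the FG multiplier, are mine and not from any real overlay API:

```python
# Hypothetical fps-counter split; illustrative only, not a real overlay API.

def split_fps(displayed_fps: float, fg_multiplier: int) -> tuple[float, float]:
    """Return (native_fps, generated_fps) for a given displayed framerate."""
    native = displayed_fps / fg_multiplier
    return native, displayed_fps - native

native, generated = split_fps(240.0, 4)
print(f"240 fps displayed = {native:.0f} native + {generated:.0f} generated")
# -> 240 fps displayed = 60 native + 180 generated
```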
 
Even then, how do you compare 4x framegen across IHVs when there can be massive quality, consistency and latency differences? There's no perfect way to do it that works for marketing to the masses.
There can be differences in latency between IHVs even without frame gen. Just turning on Reflex or Anti-Lag can make a noticeable difference in lag in many cases, and those don't perform the same either. nVidia's slides show a 33% reduction in input delay in Destiny just by turning on Reflex at 60FPS. And turning on Reflex often results in slightly lower FPS, so this talk of equating latency with performance gets even muddier.

We've had this conversation here before, in fact. I don't think a consensus was reached on what "performance" means, and I doubt there ever will be. But for me, FPS is clearly defined as frames per second. I don't care how the frames are produced (rasterized, generated using AI, or simple interpolation), because if they are frames, they count as frames. I think other people attach benefits to increasing frame rate that aren't always constant. Look at that nVidia slide from earlier in my post: they are saying Destiny has 75ms of latency at 60Hz! That's 4-5 frames of latency. With Reflex, moving to a 3080, and 360Hz they reach 31ms. Valorant has 30ms of latency with Reflex off at 60Hz. And this is why I've never felt like FPS and latency were interchangeable the way others do. Related, yes, but latency is wildly variable by game, by settings, and, worst of all, subjective to user perception.
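
For context, that "frames of latency" figure is just end-to-end latency divided by the frame time; a quick sketch using the numbers quoted from the slide above:

```python
# Convert end-to-end latency into "frames of latency" at a given framerate.
# The 75 ms @ 60 Hz and 31 ms @ 360 Hz figures are the slide numbers quoted above.

def frames_of_latency(latency_ms: float, fps: float) -> float:
    return latency_ms / (1000.0 / fps)

print(f"{frames_of_latency(75.0, 60.0):.1f}")   # ~4.5 frames at 60 Hz
print(f"{frames_of_latency(31.0, 360.0):.1f}")  # ~11 frames at 360 Hz, despite far fewer ms
```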

I've personally found the added latency from frame generation to be marginal in most cases, but I've found the benefits of FG to be extremely noticeable, especially with higher multipliers. I find games that run in the 40-50 range to be jittery messes, but frame gen'd up to 80-100 they are quite playable. The real question is: if I were to get new hardware that would allow me to hit 60-70fps native, would that be better than the 80-100fps I get using frame gen? Would I notice the latency difference? Or would I just keep frame gen on and go for 120-140, still living with the latency penalty of frame generation?
 
I do find it ironic that Nvidia tried to push input latency onto hardware reviews years back when they had the Reflex advantage, but no one really seemed to want to push their narrative. If reviewers had run with it back then and frame/input latency had become a common metric, it could possibly be used against Nvidia now. But they didn't, so we end up with "but, but... latency".

The general user who consumes that type of content would by now be somewhat comfortable with the metric, and would understand it well enough that they wouldn't need to be screamed at from the mountaintops about why the 5070 isn't really as powerful as a 4090 in traditional metrics.
 
I think this will complicate video card reviews quite a lot. That could be good news and bad news for reviewers. Good news being that there is now so much more to do to differentiate themselves from other reviewers; bad news being that there is now so much more to do :)

There are a few competitive gamers who like to have the highest frame rate possible, and for them the job is much easier: just benchmark the GPU at the lowest settings and a very low resolution. Most people, though, would probably prefer a stable frame rate over the absolute highest frame rate, so the optimization would be to set graphics quality as high as possible while staying stable at the desired frame rate. This is hard to quantify and hard to compare (one rough way is sketched below).
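
One rough, commonly used way to quantify that stability is percentile lows of frame time; a minimal sketch, with invented frame-time data rather than measurements:

```python
# Minimal sketch of quantifying frame-rate stability via "1% low" fps.
# The frame_times_ms list is invented data, not a measurement.

def one_percent_low_fps(frame_times_ms: list[float]) -> float:
    """Fps implied by the 99th-percentile (i.e. worst 1%) frame time."""
    ordered = sorted(frame_times_ms)
    worst = ordered[min(int(len(ordered) * 0.99), len(ordered) - 1)]
    return 1000.0 / worst

frame_times_ms = [8.3] * 95 + [25.0] * 5   # mostly ~120 fps with occasional spikes
average_fps = 1000.0 * len(frame_times_ms) / sum(frame_times_ms)
print(f"average fps: {average_fps:.0f}")                          # ~109
print(f"1% low fps:  {one_percent_low_fps(frame_times_ms):.0f}")  # ~40
```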

However, what we do know is that a GPU with more power ("rawr power!") is generally able to handle more difficult situations than a less powerful GPU. So a traditional "non-frame gen, non-upscaling" benchmark is still valuable because it tells you what the baseline performance is. The nuances would be something like "upscaling tech A is generally better than upscaling tech B, so cards from vendor A generally need less raw power to perform at the same level."

Another measurement that can be taken is latency. If a GPU has lower latency than another GPU when playing the same game, it's probably better. However, latency is not that important to most people (although it can be quite important to some), it has larger variations, and it's harder to control (e.g. the contribution from the CPU is more significant).
 
Since you can have a game with high latency that still feels good, can you have a game with low latency that feels bad? 👹

That would add a caveat of subjective meaninglessness to the end of any latency review section.
 
dunno tbh, might watch the video a second time.

Anyways, nVidia hasn't only destroyed the competition this gen (AMD and Intel) but also current consoles. Who is going to justify the ridiculous price of current gen consoles that are like 16 times slower than a 5090?
Anyways, nVidia hasn't only destroyed the competition this gen (AMD and Intel) but also the Switch. Who is going to justify the ridiculous price of the Switch, which is like 256 times slower than a 5090?

Tech enthusiasts are always going to prefer the PC platform; this isn't something new to this generation. If you want the best image quality and framerate, the choice is always clear, given a high budget.

There isn't much that console manufacturers can do, since GPU manufacturers come out with new and more costly GPUs every 2 years.

But for most people image quality and framerate aren't the most important things, and they settle for "good enough" with a streamlined experience. That's why consoles exist.
 
About the frame gen discussion, if I said...
"You get the best gameplay experience at 30 fps instead of 30 to 120 with frame gen X4 since you'll get the lowest input lag"

Would I be right? I would say yes, unless we were talking about a really slow experience like a strategy game or a visual novel.

PS: Reflex or Anti-Lag don't really matter here honestly, since you can activate them with or without frame gen active, so the base framerate will always keep its latency advantage.
 
I shouldn't care about what the relative latency between two GPUs is in a single game? Because FPS varies when I change settings, does that mean I shouldn't care about the relative smoothness of different GPUs at the same settings?
You should care of course, but going as far as to claim frame gen is not performance is extreme.

A 5070 absolutely will not be an equivalent experience to me as my 4090
Yeah, I think we can all agree this is outrageous, and shouldn't happen. However, in the interest of continuing the discussion, what if:
The 4090 in game A provided 120fps of frame-generated output at 50ms latency, while the 5070 provided 240fps at 60ms in that very same game. Which one is better? Latency did increase by 20% with the 5070, but visual smoothness increased by 2x. It's a trade-off you take based on your needs (just like enabling vertical sync, Ultra rendering, etc). Better yet, what if the generated frames on the 5070 were of higher quality (fewer artifacts, better upscaling)? In that case you would also gain visual quality in addition to visual smoothness.

Or how about this: a 4090 doing 60fps of native frames at 40ms latency, vs the 5070 doing a generated 240fps at 60ms? While latency is 50% higher on the 5070, visual smoothness is 4x higher. Which is better? It's another trade-off you need to make.

It's the same kind of trade-off as rendering at Ultra quality in a game like Red Dead Redemption 2: you gain visual quality, but you decrease fps and, as a result, increase latency.
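
A tiny sketch of how those hypothetical scenarios could be laid out side by side; all numbers are the invented ones from the examples above, not benchmarks:

```python
# Hypothetical scenarios from the examples above; none of these are measured numbers.
scenarios = [
    {"gpu": "4090 (FG)", "displayed_fps": 120, "latency_ms": 50},
    {"gpu": "5070 (FG)", "displayed_fps": 240, "latency_ms": 60},
]

baseline = scenarios[0]
for s in scenarios[1:]:
    smoothness_gain = s["displayed_fps"] / baseline["displayed_fps"]
    latency_penalty = (s["latency_ms"] - baseline["latency_ms"]) / baseline["latency_ms"]
    print(f"{s['gpu']} vs {baseline['gpu']}: "
          f"{smoothness_gain:.1f}x smoothness for +{latency_penalty:.0%} latency")
# -> 5070 (FG) vs 4090 (FG): 2.0x smoothness for +20% latency
```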
Therefore I think we need to explicitly report the responsiveness differences
Agreed. This data should be presented for the user to decide, but I am telling you it's not going to matter much. Latency varies all over the place from game to game, and users will not be able to decide what their preferred latency is. It's not the same as frame rate (where most users prefer 60fps); as long as latency is within acceptable ranges, most users will be satisfied.

Besides, esports titles are rarely benchmarked (they are often CPU limited with high-end GPUs), and benchmark suites are often full of single-player games (where small variances in latency rarely matter).

never compare the "FPS" of frame generation to non-frame generation, because it's apples and oranges
That's debatable. As explained above, it's a tradeoff.
 
I think this will complicate video card reviews quite a lot.

I don't think it will. Nearly all reviewers ignored framegen on Ada and they'll probably do the same here. Reviewers stick to benchmark graphs and that's it. Heck, most reviewers still treat RT like some special setting that needs to be benchmarked separately. GPU reviews have been quite pedestrian for a long time.

Ironically this probably drives marketing teams bonkers and encourages them to oversell features even more since reviewers mostly ignore them.
 
60fps (or Hz) and 120 is really thinking about it from a TV and console perspective, not a PC and monitor perspective.

Very few people would have 120Hz monitors. Outside of the first couple of releases, high refresh monitors almost immediately moved to 144Hz as the de facto standard for high refresh. For the last couple of years the low end for high refresh has really been 165Hz, with 170-180Hz now common before the jump up to 240Hz.

The vast majority of people buying these GPUs have high refresh monitors, as the cost (and other compromises) of adopting them has been basically negligible for years now. Really, if you're still using a 60Hz monitor, that should be your upgrade point before anything else, just to get proper VRR support with a decent low end via LFC.

Lastly, the 60fps baseline is, as far as I know, not as settled as you frame it. We've debated it down to as low as ~40fps, and there's also debate on how it varies depending on the implementation, so there is no universal baseline.
Clearly, when I'm talking about 120Hz monitors/displays, I also mean 144Hz monitors. Sorry I didn't clarify, but it should be obvious they are very much in the same, quite small ballpark here. 144Hz isn't some different 'super high refresh rate' category from a 120Hz display, so I have no idea why you're treating it like one in the context of my argument. They're for all intents and purposes the same thing.

So no, the vast majority of PC gamers who will be buying 50 series parts are NOT going to have displays above 120/144Hz. Not even remotely close. I'd be very surprised if they make up even 10% of the total market for such buyers (remember, most people who want super high refresh rate monitors are doing it for low-demand competitive games, and want "real" frames for the input lag benefits). Don't make the mistake of living in a super-enthusiast bubble online where such people are vastly overrepresented compared to the real world.

And yes, we've had a 'debate' on this tiny forum of like 10 people, but overall it's still widely accepted that 60fps is what people should be aiming for to achieve optimal frame generation quality, and Nvidia and AMD say the same.
 
In the UE5 era, having the option of proper upscaling and FG is a godsend. What is the point of DP2.1, high-Hz displays and something like 4K@240Hz, when a typical UE5 game runs at best at 120fps in 1080p on a 4090?!

So complaining about a possible solution to a third party's problem is behind me. Don't like FG? Lobby for better engines.
 
A real world example:

I play CoD 6 MP at 3440x1440 on my AW3423DW, which is a 175Hz panel.

With everything maxed out, I average around 110fps on a 4090. Extreme preset with motion blur disabled and DLAA.

Playing with the above but with FG enabled nets 170fps, which is a much smoother experience. I don't notice any more input lag, but I do notice the smoothness from the motion clarity of the monitor running close to its native refresh rate.

Am I trying to make a career out of esports? Nope. If that were the case, I'd get a 27-inch 480Hz panel, but for my use case, FG enabled even in MP is nicer than without.

Again, if your base FPS is high enough, FG is great. It's not meant for going from 30 to 50fps, which for some reason people get fixated on.
 
Again, if your base FPS is high enough, FG is great. It's not meant for going from 30 to 50fps, which for some reason people get fixated on.

I think the main use case is taking low frame rates and propping them up. We can always achieve super high frame rates by lowering settings.
 