Digital Foundry Article Technical Discussion [2025]

So it really was upscaling and framegen that you were most unhappy with in review coverage?

The topic is how these things should be communicated if they're not part of the FPS number. I said reviewers have a poor record of reporting on features outside of avg FPS. Which part of that do you disagree with?

Do you rate feature coverage in GPU reviews as excellent, average, poor or unimportant? Simple question.

Edit: actually you have already answered that. You think the current review scene is “fine”. I think it’s not. No more to say on that.
 
Reviews could always be better; nothing is perfect. That being said, I don't feel like there is much I'm missing out on after checking the group of sites I rely on. Input lag added to the benchmark charts would be nice, but I think it would be pretty hard for anyone to sample a few reviews and not come away with a fairly accurate idea of how these GPUs stack up against each other in performance and features.
 

Here are a few examples of what I'm referring to, from my personal experience. Anyone can claim they're irrelevant to their own use case and therefore not worth covering in a review, but I am a heavy user.

In-home streaming: NVENC on the 3090 is incapable of encoding 4K above roughly 80fps. I had to discover this on my own through trial and error; I thought my card was broken before I found corroborating experiences online. I can't think of a single outlet that has investigated encoding or streaming performance in the past four years, and Nvidia certainly doesn't advertise those limits.
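For anyone who wants to reproduce that kind of check, here's a rough sketch of how I'd measure raw NVENC 4K throughput. It assumes an ffmpeg build with NVENC support and uses a synthetic test source; generating the 4K frames is CPU-side work, so treat the result as a rough lower bound rather than a precise encoder limit.

```python
# Rough NVENC 4K throughput check (assumes ffmpeg built with NVENC support).
# testsrc2 generation runs on the CPU, so a slow CPU can understate the encoder's real limit.
import subprocess
import time

FRAMES = 120 * 5  # 5 seconds of synthetic 4K120 input

cmd = [
    "ffmpeg", "-y",
    "-f", "lavfi", "-i", "testsrc2=size=3840x2160:rate=120:duration=5",
    "-c:v", "hevc_nvenc", "-preset", "p5",  # p1 (fastest) .. p7 (slowest quality presets)
    "-f", "null", "-",                      # encode, then discard the output
]

start = time.perf_counter()
subprocess.run(cmd, check=True, capture_output=True)
elapsed = time.perf_counter() - start

print(f"Encoded {FRAMES} frames in {elapsed:.1f}s -> ~{FRAMES / elapsed:.0f} fps")
```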

DSR/DLDSR: I had to trawl through Reddit posts to find any kind of explanation of how the "smoothness" setting effectively works in reverse between DSR and DLDSR. It's confusing as heck. I recently started playing with RTX HDR and it's the same story there.

This is what I mean by the experience of actually using a graphics card not being captured in avg FPS graphs.
 
Just in case you hadn't seen it: I happened upon this way back when I was digging through Nvidia's SDK docs for the optical flow accelerator block, trying to work out the performance differences between the Ampere and Ada generations.

They only give figures for 1080p 8-bit encoding, but if you take the HEVC p3 preset numbers and scale them down for 4K having 4x the pixels to encode, you land very much in the ballpark of ~80fps, doubly so if you were doing a 10-bit encode.
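As a quick back-of-the-envelope version of that scaling (the 1080p figure below is a stand-in, not the actual value from Nvidia's table):

```python
# Back-of-the-envelope scaling of Nvidia's 1080p NVENC figures to 4K.
# hevc_p3_1080p_fps is a placeholder -- substitute the real value from the SDK table.
hevc_p3_1080p_fps = 330           # hypothetical example: 1080p 8-bit, HEVC preset p3

pixels_1080p = 1920 * 1080
pixels_4k = 3840 * 2160
scale = pixels_4k / pixels_1080p  # = 4.0

est_4k_fps = hevc_p3_1080p_fps / scale
print(f"Estimated 4K 8-bit throughput: ~{est_4k_fps:.0f} fps")
# A 10-bit encode would push this lower still (the table doesn't quantify by how much).
```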

Edit: I do have to give Nvidia some credit for transparency in that table, since the numbers aren't exactly flattering. Encoding performance has stayed largely flat since Pascal, Ada actually regresses in a lot of the presets compared to Ampere, and on some of the slower (higher-quality) ones even Pascal is significantly faster. They mention in one of the footnotes that performance scales essentially linearly with the video clocks, and then list those same video clocks as examples, with Ada clocked head and shoulders above the rest and yet performing worse.

Like @trinibwoy says, it'd be a very interesting deep dive to try to figure out why encoder performance seems to regress. Do they just remove "slices" or chunks of the encoder block to save die space every generation, knowing that the higher clocks will even everything out and keep performance roughly static?
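One way to sanity-check that theory from the public table alone would be to normalise each generation's throughput by its quoted video clock. The figures below are placeholders for illustration, not the real table values:

```python
# Throughput-per-video-clock comparison across encoder generations.
# All figures are placeholders -- replace them with the values from Nvidia's table.
table = {
    # gen:    (hevc_p3_1080p_fps, video_clock_mhz)  -- hypothetical values
    "Pascal": (300, 1500),
    "Ampere": (330, 1700),
    "Ada":    (320, 2200),
}

for gen, (fps, clock_mhz) in table.items():
    per_clock = fps / clock_mhz
    print(f"{gen:>7}: {fps} fps @ {clock_mhz} MHz -> {per_clock:.3f} fps/MHz")

# If fps/MHz shrinks each generation while the clock grows, that's consistent with the
# encoder block itself getting smaller and relying on clock speed to hold throughput flat.
```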
 
That's the marketed use case but not how it plays out in practice. The input lag penalty when using a low base frame rate is so high you'd never want to use it, and it also introduces more motion artefacts. Under no circumstances would I recommend anyone take a 30fps output and run FG on it.
What if a game is locked at a low framerate? I recently played Space Hulk: Vengeance of the Blood Angels, a game from the mid-90s that is locked to 15fps, and used Lossless Scaling frame generation to 4x that to 60. It wasn't an artifact-free experience, and I found the input lag to be essentially the same (15fps is heavy anyway), but the motion smoothness was insanely better. I've also experimented with Diablo 2 and LSFG, as that game is locked to 24fps. Again, the smoothness of motion benefits the game quite a bit. I'd also imagine a game like Riven would benefit from frame generation, because the FMV runs at pretty low frame rates.
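The frame-time arithmetic helps explain why this case is more forgiving than boosting a 30fps game. A rough sketch, assuming interpolation-style frame generation that has to hold one real frame in flight:

```python
# Frame-time arithmetic for interpolation-based frame generation.
# Assumes the generator buffers the next real frame, adding up to one base frame of delay.
def fg_numbers(base_fps: float, multiplier: int) -> None:
    base_frame_ms = 1000.0 / base_fps
    presented_interval_ms = 1000.0 / (base_fps * multiplier)
    added_latency_ms = base_frame_ms  # worst case: one extra real frame buffered

    print(f"{base_fps:.0f}fps x{multiplier}: presents every {presented_interval_ms:.1f}ms "
          f"(vs {base_frame_ms:.1f}ms native), up to ~{added_latency_ms:.0f}ms extra latency")

fg_numbers(15, 4)  # locked 15fps: frames are already ~67ms apart, so the extra delay barely changes the feel
fg_numbers(30, 2)  # 30fps base: smoother picture, but the added latency is what the post above objects to
```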
It's an interesting question, though. If we've already accepted that the images we see on our monitors are in the "past" relative to the CPU simulation time, then FG is technically interpolating "simulation steps" as well as "image updates". The hangup with FG is all about latency and reaction time.
Now imagine you are playing a multiplayer game, like COD: Warzone, with 50-100ms of network lag and a server tick rate of 20. Everything your graphics card is doing between those ticks is inferred, and everything that's on your monitor is even more in the past.
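To put some illustrative numbers on how far "in the past" that on-screen state already is (every figure below is an assumption for the sake of the example, not measured Warzone data):

```python
# Illustrative end-to-end staleness budget for a client-server shooter.
# Every number here is an assumption for illustration, not a measured Warzone figure.
tick_rate_hz = 20
tick_interval_ms = 1000 / tick_rate_hz     # 50ms between authoritative world updates
one_way_network_ms = 40                    # middle of the 50-100ms round-trip range quoted above
interp_buffer_ms = 2 * tick_interval_ms    # many netcodes render ~2 ticks behind to hide jitter
render_and_display_ms = 40                 # CPU/GPU pipeline plus monitor scanout, rough guess

staleness_ms = one_way_network_ms + interp_buffer_ms + render_and_display_ms
print(f"World state on screen is roughly {staleness_ms:.0f}ms old before any frame generation")
```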
 
I do have to give Nvidia some credit for transparency in that table, since the numbers aren't exactly flattering. Encoding performance has stayed largely flat since Pascal, Ada actually regresses in a lot of the presets compared to Ampere, and on some of the slower (higher-quality) ones even Pascal is significantly faster. They mention in one of the footnotes that performance scales essentially linearly with the video clocks, and then list those same video clocks as examples, with Ada clocked head and shoulders above the rest and yet performing worse.

That's disappointing. Nvidia is claiming something like 4x encoding performance on Blackwell. Maybe they're making up for lower per-encoder performance by including multiple encoders? The 5080 has 2, the 5090 has 3.
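If that's the strategy, the aggregate math is straightforward, with the caveat that splitting a single stream across encoders depends on codec and driver support rather than being guaranteed, and the per-encoder figure below is just a placeholder:

```python
# Aggregate NVENC throughput if work is spread across multiple encoder blocks.
# per_encoder_4k_fps is a placeholder; whether one stream can actually be split across
# encoders depends on the codec and driver, so treat these as upper bounds.
per_encoder_4k_fps = 80  # placeholder, in line with the ~80fps 4K limit discussed above

for card, encoders in {"RTX 5080": 2, "RTX 5090": 3}.items():
    print(f"{card}: up to ~{per_encoder_4k_fps * encoders} fps of 4K encode across {encoders} encoders")
```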

Now imagine you are playing a multiplayer game, like COD: Warzone, with 50-100ms of network lag and a server tick rate of 20. Everything your graphics card is doing between those ticks is inferred, and everything that's on your monitor is even more in the past.

Yeah, one problem with this latency conversation is that people seem to think the current situation isn't already quite messy. We shouldn't be comparing frame gen to some nirvana state that doesn't exist.
 
Now imagine you are playing a multiplayer game, like COD: Warzone, with 50-100ms of network lag and a server tick rate of 20. Everything your graphics card is doing between those ticks is inferred, and everything that's on your monitor is even more in the past.
People who play FPSes have actually talked about this a lot historically. Poor networking can turn a decent player into garbage. I don't think I've seen Warzone get to 100ms of lag, though it would explain hitreg!

I remember back when I played competitive TF2, there were entire videos and articles written on network interpolation in the Source engine. It was quasi-required to tinker with that stuff to be competitive.
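For anyone curious what that tinkering looked like, the Source engine's effective interpolation delay came down to a couple of cvars. A small sketch of the usual formula (the cvar values here are just common settings from that era, not canonical defaults for every game):

```python
# Source engine interpolation delay, as commonly tuned in competitive TF2/CS.
# Effective lerp = max(cl_interp, cl_interp_ratio / cl_updaterate).
def lerp_ms(cl_interp: float, cl_interp_ratio: float, cl_updaterate: float) -> float:
    return max(cl_interp, cl_interp_ratio / cl_updaterate) * 1000.0

print(lerp_ms(0.1, 2, 20))     # ~100ms: default-ish settings of the time
print(lerp_ms(0.0152, 1, 66))  # ~15.2ms: a typical competitive config on 66-tick servers
```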
 