Digital Foundry Article Technical Discussion [2025]

So it really was upscaling and framegen that you were most unhappy with in the review coverage?

The topic is how these things should be communicated if they're not part of the FPS number. I said reviewers have a poor record of reporting on features outside of avg FPS. Which part of that do you disagree with?

Do you rate feature coverage in GPU reviews as excellent, average, poor or unimportant? Simple question.

Edit: actually you have already answered that. You think the current review scene is “fine”. I think it’s not. No more to say on that.
 
The topic is how these things should be communicated if they're not part of the FPS number. I said reviewers have a poor record of reporting on features outside of avg FPS. Which part of that do you disagree with?

Do you rate feature coverage in GPU reviews as excellent, average, poor or unimportant? Simple question.

Edit: actually you have already answered that. You think the current review scene is “fine”. I think it’s not. No more to say on that.
Reviews could always be better, nothing is perfect. That being said, I don't feel like there is much I'm missing out on after checking out the group of sites I rely on. Input lag being added to the benchmark charts would be nice, but I think it would be pretty hard for anyone to sample a few reviews and not come away with a fairly accurate idea of how these GPUs stack up to each other in performance and features.
 
Reviews could always be better, nothing is perfect. That being said, I don't feel like there is much I'm missing out on after checking out the group of sites I rely on. Input lag being added to the benchmark charts would be nice, but I think it would be pretty hard for anyone to sample a few reviews and not come away with a fairly accurate idea of how these GPUs stack up to each other in performance and features.

Here are a few examples of what I'm referring to from my personal experience. Now, anyone can claim they're irrelevant to their personal use case and therefore not worthy of a review, but I am a heavy user.

In-home streaming: NVENC on the 3090 is incapable of encoding 4K above ~80fps. I had to discover this on my own through trial and error. I thought my card was broken before I found corroborating experiences online. I can't think of a single outlet that has investigated encoding or streaming performance in the past 4 years. Nvidia certainly doesn't advertise those limits.

DSR/DLDSR. I had to trawl through Reddit posts to get some kind of explanation of how the “smoothness” setting is actually flipped when you’re using DSR vs DLDSR. It’s confusing as heck. I recently started playing with RTX HDR and it’s the same story there.

This is what I mean by the experience of actually using a graphics card not being captured in avg FPS graphs.
 
Here are a few examples of what I'm referring to from my personal experience. Now, anyone can claim they're irrelevant to their personal use case and therefore not worthy of a review, but I am a heavy user.

In-home streaming: NVENC on the 3090 is incapable of encoding 4K above ~80fps. I had to discover this on my own through trial and error. I thought my card was broken before I found corroborating experiences online. I can't think of a single outlet that has investigated encoding or streaming performance in the past 4 years. Nvidia certainly doesn't advertise those limits.

DSR/DLDSR. I had to trawl through Reddit posts to get some kind of explanation of how the “smoothness” setting is actually flipped when you’re using DSR vs DLDSR. It’s confusing as heck. I recently started playing with RTX HDR and it’s the same story there.

This is what I mean by the experience of actually using a graphics card not being captured in avg FPS graphs.
Just in case you hadn't seen it: I happened upon this way back when I was digging through their SDK docs for the optical flow accelerator block, trying to see what the performance differences were between the Ampere and Ada generations.


They only give info for 1080p 8-bit encoding, but if you take the HEVC p3 preset numbers and scale them downward based on 4k having 4x the number of pixels to encode, you're very much in the ballpark of ~80fps, doubly so if you were doing a 10-bit encode.
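As a rough sanity check, here's the kind of back-of-the-envelope scaling being described. The 1080p throughput figure and the 10-bit penalty below are placeholders, not numbers from NVIDIA's table, so treat this as a sketch of the method rather than the SDK data:

```python
# Back-of-the-envelope scaling of a published 1080p NVENC figure to 4K.
# PLACEHOLDER_1080P_HEVC_P3_FPS is an assumed value, not NVIDIA's actual number;
# substitute the real HEVC p3 figure from the SDK docs.

PLACEHOLDER_1080P_HEVC_P3_FPS = 330          # assumed 8-bit 1080p encode throughput
PIXEL_RATIO_4K_VS_1080P = (3840 * 2160) / (1920 * 1080)   # = 4.0x the pixels

est_4k_8bit = PLACEHOLDER_1080P_HEVC_P3_FPS / PIXEL_RATIO_4K_VS_1080P
est_4k_10bit = est_4k_8bit * 0.8             # rough 10-bit penalty, also an assumption

print(f"Estimated 4K 8-bit HEVC p3 throughput:  ~{est_4k_8bit:.0f} fps")
print(f"Estimated 4K 10-bit HEVC p3 throughput: ~{est_4k_10bit:.0f} fps")
```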

Edit: I do have to give Nvidia some credit for transparency in their table too, when the numbers aren't exactly flattering. Not only has encoding performance overall stayed largely flat since Pascal, but Ada regresses in a lot of the presets compared to Ampere, and on some of the slower (higher quality) presets even Pascal is significantly faster. They mention in one of the footnotes that performance scales essentially linearly with the video clocks, and then give those same video clocks as examples, with Ada clocked head and shoulders above the rest and yet performing worse.

Like @trinibwoy says, it'd be a very interesting deep dive to try to figure out why encoder performance seems to regress. Do they just remove 'slices' or chunks of the encoder block to save die space every generation, knowing that the higher clocks will even everything out and keep performance roughly static?
 
That's the marketed use case but not how it plays out in practice. The input lag penalty when using a low base frame rate is so high you'd never want to use it. It also introduces more motion artefacts. Under no circumstances would I recommend anyone take a 30fps output and run FG on it.
What if a game is locked at a low framerate? I recently played Space Hulk: Vengeance of the Blood Angels, a game from the early 90s that is locked to 15FPS, and used Lossless Scaling frame generation to 4x scale that to 60. It wasn't an artifact-free experience, but I found the input lag to be essentially the same (15FPS is heavy anyway), while the motion smoothness was insanely better. I've also experimented with Diablo 2 and LSFG, as that game is locked to 24fps. Again, the smoothness of motion benefits the game quite a bit. I'd also imagine a game like Riven would benefit from frame generation, because the FMV is at pretty low frame rates.
It's an interesting question though. If we've already accepted that the images we see on our monitors are in the "past" relative to the CPU simulation time then FG is technically interpolating "simulation steps" as well as "image updates". The hangup with FG is all about latency and reaction time.
Now imagine you are playing a multiplayer game, like COD: Warzone, with 50-100ms of network lag and a server tick rate of 20. Everything your graphics card is doing between those ticks is inferred, and everything that's on your monitor is even more in the past.
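To make the latency side concrete, here's a deliberately simplified model of why interpolation-based frame generation hurts more at low base framerates: the generated frame can't be shown until the next real frame exists, so the real frame is held back by roughly one base-frame interval. The model and numbers are illustrative assumptions, ignoring render queues, scanout and Reflex:

```python
# Idealized model: interpolation-based FG needs the *next* real frame before it
# can present anything, so presentation is pushed back by roughly one base-frame
# interval. This ignores render queues, scanout and Reflex; it's only a sketch.

def added_latency_ms(base_fps: float) -> float:
    """Approximate extra delay from holding back the real frame."""
    return 1000.0 / base_fps

for base_fps in (30, 60, 120):
    print(f"base {base_fps:>3} fps -> roughly +{added_latency_ms(base_fps):.1f} ms added latency")
```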
 
I do have to give Nvidia some credit for transparency in their table too, when the numbers aren't exactly flattering. Not only has encoding performance overall stayed largely flat since Pascal, but Ada regresses in a lot of the presets compared to Ampere, and on some of the slower (higher quality) presets even Pascal is significantly faster. They mention in one of the footnotes that performance scales essentially linearly with the video clocks, and then give those same video clocks as examples, with Ada clocked head and shoulders above the rest and yet performing worse.

That's disappointing. Nvidia is claiming something like 4x encoding performance on Blackwell. Maybe they're making up for lower performance by including multiple encoders? The 5080 has 2, the 5090 has 3.

Now imagine you are playing a multiplayer game, like COD: Warzone, with 50-100ms of network lag and a server tick rate of 20. Everything your graphics card is doing between those ticks is inferred, and everything that's on your monitor is even more in the past.

Yeah one problem with this latency conversation is people seem to think the current situation isn't already quite messy. We shouldn't be comparing frame gen to some nirvana state that doesn't exist.
 
Now imagine you are playing a multiplayer game, like COD: Warzone, with 50-100ms of network lag and a server tick rate of 20. Everything your graphics card is doing between those ticks is inferred, and everything that's on your monitor is even more in the past.
People who play FPSes have historically talked about this a lot actually. Poor networking can turn a decent player into garbage. I don't think I've seen Warzone get to 100 ms of lag though, although it would explain hitreg!

I remember back when I played competitive TF2 there were entire videos and articles written on network interpolation in the source engine. It was quasi-required to tinker with this stuff to be competitive.
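For anyone who never tinkered with it, here's a rough sketch of what that Source-style entity interpolation does: the client renders the world a fixed delay in the past and lerps between the two server snapshots that bracket the render time. Names and numbers are illustrative, not actual engine code:

```python
# Sketch of Source-style entity interpolation: render a fixed delay in the past
# and lerp between the two server snapshots that bracket the render time.
# All names and numbers here are illustrative, not actual engine code.

from bisect import bisect_left

SNAPSHOT_RATE = 20                      # server updates per second (20 Hz tick)
INTERP_DELAY = 2 / SNAPSHOT_RATE        # render two snapshots in the past (100 ms)

snapshots = []                          # (server_time, position) tuples, oldest first

def add_snapshot(server_time, position):
    snapshots.append((server_time, position))

def interpolated_position(client_time):
    render_time = client_time - INTERP_DELAY
    times = [t for t, _ in snapshots]
    i = bisect_left(times, render_time)
    if i == 0:
        return snapshots[0][1]
    if i >= len(snapshots):
        return snapshots[-1][1]         # a real engine would extrapolate here
    (t0, p0), (t1, p1) = snapshots[i - 1], snapshots[i]
    alpha = (render_time - t0) / (t1 - t0)
    return p0 + (p1 - p0) * alpha

# Usage: 20 Hz snapshots of an entity moving one unit per tick, sampled mid-frame.
for k in range(5):
    add_snapshot(k / SNAPSHOT_RATE, float(k))
print(interpolated_position(client_time=0.17))   # ~1.4, i.e. 100 ms behind "now"
```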
 
FG also helps solve a bit of the chicken-and-egg dilemma when it comes to high refresh rate monitors.

This year Samsung will have 500Hz QD-OLED monitors, which will be sublime for motion clarity. Without FG you've got no chance of reaching that outside of esports titles with PS2-era visuals (napkin math below).

At the end of the day, IQ means nothing if it disappears the moment your mouse moves. That's why I'm a big advocate for OLED-based high refresh rate monitors. Yes, I understand the burn-in risk, but I'm happy with the trade-off.
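The napkin math for feeding a 500Hz panel at various frame-generation factors; nothing more than the obvious division, shown for illustration:

```python
# Base framerate needed to saturate a 500 Hz panel at each frame-generation factor.
TARGET_HZ = 500
for fg_factor in (1, 2, 3, 4):
    print(f"{fg_factor}x FG -> needs ~{TARGET_HZ / fg_factor:.0f} fps rendered natively")
```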
These are very exciting times, technology-wise. Maybe not to the extent of the late 90s and early 2000s (I got a Matrox G400 because of 32-bit colour and bump mapping; everyone was innovating), but this reminds me of the 90s and the introduction of accelerator cards.

We transitioned from rasterizing in software on the CPU to the GPU, although my Monster 3DFX wasn't considered a GPU.

Now the future is to move from the GPU to the GPU plus, in part, AI, so instead of a GPU, Nvidia is selling us an AI accelerator.

It doesn't matter how little or how much raster power a GPU has; if the AI corrects for it, goes further, and performs the same as or better than a graphics card of the same tier, then it's fine.

As it evolves, the AI presents increasingly refined frames.
 
FG uses two frames in the past.
I meant that the FG frame is placed in the past, between the last and the current "real" frame. From there, every new FG frame contains new information provided by a "system", and with that information FG renders a new frame. I don't see a difference from normal rendering outside of input processing.

It reminds me of path tracing. Without a denoiser the rendered frame is full of gaps between pixels, and denoising is used to fill them. But with current software the end result differs from the intention of the host system, and different denoisers can produce different results. FG is just filling gaps "in time".

Rendering is full of artefacts. FG's biggest problem is latency, but its image quality is really good, especially in games with ray tracing and path tracing. In those games the process works much better than upscaling, maybe because the native image contains a lot more information.
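To illustrate the "filling gaps in time" framing only, here's a toy midpoint frame between two real frames. Actual frame generation uses motion vectors / optical flow and learned models rather than a plain blend (which would ghost badly); this just shows where the generated frame sits on the timeline:

```python
# Toy illustration only: a generated frame halfway between two rendered frames.
# Real FG uses motion vectors / optical flow and ML models; a plain 50/50 blend
# like this would ghost badly. It only shows the temporal position of the frame.

import numpy as np

def naive_midpoint_frame(frame_prev: np.ndarray, frame_next: np.ndarray) -> np.ndarray:
    """Blend the previous and next real frames at t = 0.5."""
    blended = 0.5 * frame_prev.astype(np.float32) + 0.5 * frame_next.astype(np.float32)
    return blended.astype(frame_prev.dtype)

# Usage with two dummy 1080p RGB frames:
prev_frame = np.zeros((1080, 1920, 3), dtype=np.uint8)
next_frame = np.full((1080, 1920, 3), 255, dtype=np.uint8)
print(naive_midpoint_frame(prev_frame, next_frame)[0, 0])   # mid-grey: [127 127 127]
```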
 
It's not possible to normalize quality that way. What's considered equal quality is going to be totally subjective.
It's possible. DF classified the differences between upscalers into about a dozen items, with each item getting a score. DF already did it in a very simple and elegant way.

What reviewers should do is pick 13 games, cover each one separately in a side piece, and compare upscaler quality (they already do this for new games, but only for performance), then use the conclusions from these side pieces to establish benchmarking rules in the next review.

For example, if they have already established that DLSS Performance in Warhammer Darktide is better than FSR Quality, then all NVIDIA GPUs will be tested as such and compared against AMD GPUs running FSR Quality.

Another example: if they established that DLSS Quality is in fact better than native in Spider-Man, then all NVIDIA GPUs will be tested with DLSS Quality, and other GPUs will be tested with native.

We need to cater to the actual user experience, not some idealized outcome far from reality. Most UE5 titles now require upscaling to perform in a satisfying way, most games have terrible TAA implementations and thus terrible native image quality and require DLSS or TSR to cover TAA's shortcomings, and most users with average hardware use upscaling to gain performance in heavy titles. That's the reality of the situation; native is no longer something most users want. If reviews don't factor this into their process, then they are detached from reality and need to account for the new variables.
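For what it's worth, a scoring scheme like that is easy to make concrete. A minimal sketch, using the DF categories quoted later in the thread; the weights and example scores are made up for illustration, not DF's actual methodology:

```python
# Sketch of a per-category upscaler quality rubric. Categories follow the DF list
# quoted later in the thread; weights and example scores are invented for illustration.

CATEGORIES = [
    "static views", "camera movement", "animation movement", "flicker and moire",
    "thin objects", "transparencies", "particles", "hair", "vegetation",
]

def overall_score(scores, weights=None):
    """Weighted average of per-category scores (0-10)."""
    weights = weights or {c: 1.0 for c in CATEGORIES}
    total_weight = sum(weights[c] for c in CATEGORIES)
    return sum(scores[c] * weights[c] for c in CATEGORIES) / total_weight

# Hypothetical per-game decision: which presets get compared in the benchmark run.
dlss_performance = {c: 8.0 for c in CATEGORIES}
fsr_quality = {c: 7.0 for c in CATEGORIES}
if overall_score(dlss_performance) >= overall_score(fsr_quality):
    print("Benchmark NVIDIA at DLSS Performance vs AMD at FSR Quality in this title")
```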
 
It's possible. DF classified the differences between upscalers into about a dozen items, with each item getting a score. DF already did it in a very simple and elegant way.

What reviewers should do is pick 13 games, cover each one separately in a side piece, and compare upscaler quality (they already do this for new games, but only for performance), then use the conclusions from these side pieces to establish benchmarking rules in the next review.

For example, if they have already established that DLSS Performance in Warhammer Darktide is better than FSR Quality, then all NVIDIA GPUs will be tested as such and compared against AMD GPUs running FSR Quality.

Another example: if they established that DLSS Quality is in fact better than native in Spider-Man, then all NVIDIA GPUs will be tested with DLSS Quality, and other GPUs will be tested with native.

We need to cater to the actual user experience, not some idealized outcome far from reality. Most UE5 titles now require upscaling to perform in a satisfying way, most games have terrible TAA implementations and thus terrible native image quality and require DLSS or TSR to cover TAA's shortcomings, and most users with average hardware use upscaling to gain performance in heavy titles. That's the reality of the situation; native is no longer something most users want. If reviews don't factor this into their process, then they are detached from reality and need to account for the new variables.

You could argue this is similar to the average-framerate-only comparison bars we got in the past - particularly with dual-GPU setups - where the primary info given in the review is the bar chart showing one result, but there may be a side note in the article somewhere pointing out that the "faster" setup is actually less smooth due to juddery framerates.

That info would later be included in the primary info as 1% and 0.1% lows, and later refined further with the likes of frametime graphs.
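For readers unfamiliar with how those figures are produced, here's one common way 1% and 0.1% lows are derived from a frametime capture. Methodologies vary between outlets, so treat this as one plausible variant rather than a standard:

```python
# One common way to derive 1% / 0.1% lows from a frametime capture: average the
# slowest 1% (or 0.1%) of frames and report that as a framerate. Outlets differ
# in the exact method; this is just one plausible variant.

def percentile_low_fps(frametimes_ms, worst_fraction):
    slowest = sorted(frametimes_ms, reverse=True)
    count = max(1, int(len(slowest) * worst_fraction))
    avg_ms = sum(slowest[:count]) / count
    return 1000.0 / avg_ms

frametimes = [16.7] * 990 + [40.0] * 10      # mostly 60 fps with ten 40 ms spikes
print(f"avg fps:  {1000.0 / (sum(frametimes) / len(frametimes)):.1f}")
print(f"1% low:   {percentile_low_fps(frametimes, 0.01):.1f} fps")
print(f"0.1% low: {percentile_low_fps(frametimes, 0.001):.1f} fps")
```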

Right now, superior image quality from upscalers and denoisers or lower latency from the likes of Reflex is often just a side note that a reader might skip over in favour of the primary performance info, but I do agree that those aspects should at least be elevated to the same level of visibility as the main performance metrics. That said, I don't think they should necessarily replace them.
 
You can ignore new features at the beginning, but it would be silly to ignore them years later. In the latest B580 video from HUB, nearly every game supports Reflex - either as a standalone option or in combination with FG.
How could you just ignore a feature of a card which massively improves the experience of the buyer?

These reviewers only care about their revenue. So if your videos get more views when you hate on a company, then every positive will be downplayed or just ignored.
 
Now imagine you are playing a multiplayer game, like COD: Warzone, with 50-100ms of network lag and a server tick rate of 20. Everything your graphics card is doing between those ticks is inferred, and everything that's on your monitor is even more in the past.
This isn't how (most) online games work -- each client isn't moving in lockstep with the server and inferring what's happening -- clients are simulating the game locally with normal 16ms updates and sending what happened up to the server. The server is then, less frequently and with a delay, validating what happened, resolving differences between clients acceptably, and pushing the necessary updates back.

There's a big difference between only taking input every n visual frames (producing input latency, as in a framegen game) and getting a message every few frames that corrects or reverses some results of your input (producing rubber-banding, as in a networked game).
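A minimal sketch of that split, assuming a generic client-prediction-plus-reconciliation scheme; names and numbers are illustrative, not any particular game's netcode:

```python
# Client-side prediction with server reconciliation, in miniature: the client
# applies every input immediately and the server later confirms or corrects it.
# Everything here is illustrative, not any particular game's netcode.

pending_inputs = []   # (sequence_number, move_delta) not yet acknowledged by the server
predicted_pos = 0

def local_tick(seq, move):
    """Runs every client frame (~16 ms): apply the input right away."""
    global predicted_pos
    predicted_pos += move
    pending_inputs.append((seq, move))

def on_server_update(acked_seq, server_pos):
    """Arrives at the server tick rate, delayed by latency: rewind to the
    authoritative position, then replay inputs the server hasn't seen yet."""
    global predicted_pos, pending_inputs
    pending_inputs = [(s, m) for s, m in pending_inputs if s > acked_seq]
    predicted_pos = server_pos + sum(m for _, m in pending_inputs)

# Usage: three local frames of movement, then a server update acking the first one.
for seq in (1, 2, 3):
    local_tick(seq, move=1)
on_server_update(acked_seq=1, server_pos=1)
print(predicted_pos)   # 3 -> prediction matched; a mismatch here is the rubber-band
```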
 
People who play FPSes have historically talked about this a lot actually. Poor networking can turn a decent player into garbage. I don't think I've seen Warzone get to 100 ms of lag though, although it would explain hitreg!

I remember back when I played competitive TF2 there were entire videos and articles written on network interpolation in the source engine. It was quasi-required to tinker with this stuff to be competitive.
Yeah, Q3A: once you go above 150 fps IIRC, you start being able to glitch certain jumps you can't normally make. Your bunny hop acceleration, I think, was also tied to your frame rate; it was easier to get up to speed with higher fps.
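A quick sketch of why that happens in general terms: when movement is integrated per rendered frame, the outcome depends on the timestep. This is generic Euler integration with made-up constants, not Quake 3's actual movement code (Q3's specific quirk also involved how it rounded frame times):

```python
# Why framerate-dependent physics behaves differently at different fps: the same
# jump integrated with per-frame Euler steps peaks at a different height depending
# on the timestep. Constants are generic, not Quake 3's actual movement code.

def peak_jump_height(fps, jump_velocity=270.0, gravity=800.0):
    dt = 1.0 / fps
    velocity, height, peak = jump_velocity, 0.0, 0.0
    while velocity > 0.0 or height > 0.0:
        height += velocity * dt          # position advanced with the current velocity
        velocity -= gravity * dt
        peak = max(peak, height)
    return peak

for fps in (60, 125, 250):
    print(f"{fps:>3} fps -> peak height {peak_jump_height(fps):.2f} units")
```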
 
It's possible. DF classified the differences between upscalers into about a dozen items, with each item getting a score. DF already did it in a very simple and elegant way.
To reiterate this point, here is how DF classified the items (timestamped):

1-Static views
2-Camera movement
3-Animation movement
4-Flicker and Moire
5-Thin objects
6-Transparencies
7-Particles
8-Hair
9-Vegetation

 
Rich guest stars on IGN and reiterates his views on framegen performance. He tries to make a distinction between “framerate” and “performance” but I don’t know if that’s something the industry can get behind. Too nuanced for the average buyer.

 
So regarding the new NVIDIA cards, Blackwell data center GPUs have a dedicated decompression block. I wonder if the consumer cards will as well.

I would have thought they would have mentioned that during the keynote though?

There have been rumours that a dedicated GDeflate (or more generalized) decompression block is how both companies would tackle DirectStorage GPU decompression in upcoming generations, since the current implementation looks to be largely a bust. I was hoping for more on that, but perhaps they're focused more on their neural texture initiative... though that's a long way off from being adopted by developers.
 