Digital Foundry Article Technical Discussion [2025]

It's not meant for going from 30 to 50fps, which for some reason people get fixated on.
I think it gets attention because in very demanding games, or at least games where the visuals are noteworthy and people want to be able to crank up image quality and settings, the main thing they need from their GPU is to get them to that 60fps mark in the first place. And if frame generation can't help them there, then its utility is limited.

Frame generation is a bit like the rich becoming richer. Once you achieve a certain amount of money, your ability to then gain much more money improves dramatically. But just getting to that 'certain amount of money' in the first place is what most people are fighting for.
 
I think the main use case is taking low frame rates and propping them up. We can always achieve super high frame rates by lowering settings.
Lowering general settings often gets you only so far, and the biggest gains usually come with very painful visual compromises. Similarly, you get big gains by lowering resolution a chunk, but many people will not want to sacrifice image quality to that degree.

I think most people want frame generation to help them get higher framerates without having to make big visual compromises.
 
There can be differences in latency between IHVs even without frame gen. Just turning on Reflex or Anti-Lag can make a noticeable difference in lag in many cases, and those features do not perform the same. nVidia's slides show a 33% reduction in input delay in Destiny just by turning on Reflex at 60FPS. And turning on Reflex often results in slightly lower FPS, so this talk of equating latency with performance gets even muddier.

We've had this conversation here before, in fact. I don't think a consensus was reached on what "performance" means, and I doubt there ever will be. But for me, FPS is clearly defined as frames per second. I don't care how the frames are produced - rasterized, generated using AI, or simple interpolation - because if they are frames, they count as frames. I think other people attach benefits to increasing frame rate that aren't always constant. Look at that nVidia slide from earlier in my post: they are saying Destiny has 75ms of latency at 60Hz! That's 4-5 frames of latency. With Reflex, moving to a 3080, and 360Hz, they reach 31ms. Valorant has 30ms of latency with Reflex off at 60Hz. And this is why I've never felt that FPS and latency were interchangeable the way others do. Related, yes, but latency is so wildly variable by game, by settings, and, worst of all, subject to user perception.
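As a quick sanity check on those numbers, here's a minimal Python sketch that converts end-to-end latency into "frames of latency" at a given refresh rate; the figures are just the ones quoted above, treated as approximate:

```python
# Convert end-to-end latency (ms) into "frames of latency" at a given refresh rate.

def frames_of_latency(latency_ms: float, refresh_hz: float) -> float:
    frame_time_ms = 1000.0 / refresh_hz   # duration of one refresh interval
    return latency_ms / frame_time_ms

# Figures quoted above, treated as approximate:
print(frames_of_latency(75, 60))   # Destiny, 60Hz, Reflex off  -> 4.5 frames
print(frames_of_latency(30, 60))   # Valorant, 60Hz, Reflex off -> 1.8 frames
```

Same 60fps in both games, very different latency, which is exactly why the two aren't interchangeable.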

I've personally found the added latency from frame generation to be marginal in most cases, but I've found the benefits of FG to be extremely noticeable, especially with higher multipliers. I find games that run in the 40-50 range to be jittery messes, but frame-gen'd up to 80-100 they are quite playable. The real question is: if I were to get new hardware that would allow me to hit 60-70fps native, would that be better than the 80-100fps I get using frame gen? Would I notice the latency difference? Or would I just keep frame gen on and go for 120-140, still living with the latency penalty of frame generation?
The thing is that nVidia has pointed out that the RTX 5000 series runs games with lower latency and fewer artifacts while delivering much higher framerates, and that's true.

Linus has been playing games on an RTX 5090 and an RTX 4090, and you can see how it went...

[attached screenshot: BRweytc.png]


High latency, where?

No company is going to focus on raster anymore, because you'd end up with a machine that draws 4000W like the world's most powerful hairdryer, doesn't fit in your room, and still doesn't natively run PT at more than 30 FPS.
 
Lowering general settings often gets you only so far, and the biggest gains usually come with very painful visual compromises. Similarly, you get big gains by lowering resolution a chunk, but many people will not want to sacrifice image quality to that degree.

I think most people want frame generation to help them get higher framerates without having to make big visual compromises.
If you want maximum graphical settings with maximum frame rate, then that is what FG is for. Outside of engine rendering speed, if brute-force silicon is cost-prohibitive, I'm not seeing a path other than AI that seems to have any real impact. To obtain the speeds that DLSS 4 is obtaining would require significantly more silicon and power.
 
Taking into account that frame generation is, from my understanding, only good from at least around 60fps, we are talking about the difference between 120fps and 200fps (old generation vs. new generation), which imo is a marginal difference and not worth the additional latency.
 
If we ignore FG (and upscaling), then there is no future for high-speed displays and higher resolutions anymore. Here is Black Myth with full settings (Lumen) and the best possible performance on a 4090:


DLSS Performance at 4K: 86 FPS. How many people can buy a 4090? Or, in the future, a 5080? Even a 5070 costs over $500.
 
Taking into account that frame generation is, from my understanding, only good from at least around 60fps, we are talking about the difference between 120fps and 200fps (old generation vs. new generation), which imo is a marginal difference and not worth the additional latency.
Well, from personal experience with Lossless Scaling and FG x4, you don't need a 60fps base to get a good experience. At a 30fps base, artifacts are more common, but I've locked many games at 41fps (1/4 of the max refresh of my 1440p 165Hz display) and the experience is much, much better than a 30fps base.

A 55fps base + FG x3 is also a great experience. An 82fps base + FG x2 might be the best experience in that sense, but it's at the lower base framerates that the benefits from FG become more obvious.
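Those locks are just the display refresh divided by the FG multiplier; a minimal sketch of the arithmetic, assuming the same 165Hz panel as above:

```python
# Base framerate caps chosen so base_fps * multiplier lands exactly on the display refresh.

def base_cap(refresh_hz: float, fg_multiplier: int) -> float:
    return refresh_hz / fg_multiplier

for mult in (2, 3, 4):
    print(f"165Hz / x{mult} FG -> cap the base at {base_cap(165, mult):.2f} fps")
# x2 -> 82.50, x3 -> 55.00, x4 -> 41.25, matching the 82/55/41 locks above
```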
 
If you want maximum graphical settings with maximum frame rate, then that is what FG is for. Outside of engine rendering speed, if brute-force silicon is cost-prohibitive, I'm not seeing a path other than AI that seems to have any real impact. To obtain the speeds that DLSS 4 is obtaining would require significantly more silicon and power.
Yes, that's what it is for, but you need a certain baseline framerate to make it usable/optimal. This is not a big issue for people buying $700+ GPUs, but for everybody else, getting to that baseline framerate in the first place without notable compromises can be difficult in the heaviest modern games.

You say FG's main use is taking 'low framerates' and boosting them up. Perhaps this is just arguing semantics, but I don't consider 60fps to be a low framerate, so I'd say it's more like taking already-decent framerates and boosting them up.
 
Yes, that's what it is for, but you need a certain baseline framerate to make it usable/optimal. This is not a big issue for people buying $700+ GPUs, but for everybody else, getting to that baseline framerate in the first place without notable compromises can be difficult in the heaviest modern games.

You say FG's main use is taking 'low framerates' and boosting them up. Perhaps this is just arguing semantics, but I don't consider 60fps to be a low framerate, so I'd say it's more like taking already-decent framerates and boosting them up.
I mean,
If you can't achieve close to 60fps with some sort of upscaling algorithm, then yes, settings will need to come down. From there FG can take off (roughly the order sketched at the end of this post).

I’m not sure what other options are available other than game developers finding various ways to render differently.

From some perspectives it just seems relative: whether game developers allow a game to scale up to path tracing or not, and how much of the mainstream crowd is willing to pay the price to achieve that level of fidelity. It's a complex question; of course we would like everyone to have the maximum experience, but the economics can't seem to support that.

I’m not sure of a way around it.
 
If we ignore FG (and upscaling), then there is no future for high-speed displays and higher resolutions anymore.

There was never any future for those displays with raw frames. Game developers won't be targeting 4K 360Hz anytime soon (maybe never). Upscaling and frame gen are the only options unless you're playing Counter-Strike.
 
The thing is that nVidia has pointed out that the RTX 5000 series runs games with lower latency and fewer artifacts while delivering much higher framerates, and that's true.

Linus has been playing games on an RTX 5090 and an RTX 4090, and you can see how it went...

[attached screenshot: BRweytc.png]


High latency, where?

No company is going to focus on raster anymore, because you'd end up with a machine that draws 4000W like the world's most powerful hairdryer, doesn't fit in your room, and still doesn't natively run PT at more than 30 FPS.
Huh? They are both using frame gen, and the 5090 should have a base framerate that's like 30% higher... obviously the latency would be lower? I'm not sure what you're trying to prove with this screenshot, but it's not what you think. The additional latency of the 3rd and 4th generated frames is something like 6ms. As for running with fewer artifacts: based on the videos we've seen so far, that's not even remotely true. MFG has significantly more artifacts than standard FG.
 
I mean,
If you can't achieve close to 60fps with some sort of upscaling algorithm, then yes, settings will need to come down. From there FG can take off.
And that's my point. To repeat what I said initially: what I think a lot of people want from frame generation is to get to higher framerates without having to make these settings compromises. Its utility is a lot more limited if it doesn't work at lower framerates, or requires people to sacrifice RT or resolution or other notable visual settings to be able to use it.

I could again bring up how this wouldn't be such an issue if GPUs were simply named and priced better, bringing the cost threshold for the performance needed to use FG down to a more palatable level. I think people would be more enthusiastic about FG in that situation, much like the gripes about the performance cost of ray tracing would go down if it simply didn't cost so much money to use it without big compromises in resolution or performance.
 
PS: Reflex or Anti-Lag don't really matter here honestly, since you can activate them with or without frame gen active, so the base framerate will always have an advantage.
Except that, at least in Reflex's case, turning it on will often lower your fps. So what is the base we should be looking at? The base with the most frames, or the one with the best latency? And if the one with the best latency has the lower framerate, how can we equate all performance to framerate?

I keep seeing people say things like "framerate always meant performance" and "latency was always tied to framerate", but I don't think that's the case. Some games have forced mouse smoothing on, which has a larger negative impact on how a game feels to me than frame generation does. And latency is wildly variable by game. But, and I think this is the most important thing, we used to just look at frame rates as the end-all, be-all. Then, somewhere in the GeForce 3/4 + Radeon 8500 era, we started getting more detailed reviews talking about image quality. And more recently, we have added things like 1% lows and frametimes. Reviews are going to have to evolve, yes, but they have in the past as well.
So no, the vast majority of PC gamers who will be buying 50 series parts in general are NOT going to have displays that are above 120/144Hz. Not even remotely close.
I don't know, I think 165Hz is pretty common in budget monitors nowadays. Also, going from 120Hz to 144Hz is a 20% increase in refresh rate, which I think is a tangible uplift in many cases. I mean, look at all of the people claiming that the 1+ frame of latency incurred by frame gen is a deal-breaker, when that often adds less than 20% to end-to-end latency.
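For scale, here are the two percentages being compared, as a minimal sketch; the 90ms end-to-end figure and the 60fps base are assumptions purely for illustration:

```python
# Compare a 120Hz -> 144Hz refresh bump against one base frame of FG latency overhead.

refresh_gain = (144 - 120) / 120 * 100
print(f"120Hz -> 144Hz: +{refresh_gain:.0f}% refresh rate")        # +20%

base_fps = 60                              # assumed base framerate under FG
frame_cost_ms = 1000 / base_fps            # FG holds back roughly one base frame
end_to_end_ms = 90                         # assumed total click-to-photon latency
overhead = frame_cost_ms / end_to_end_ms * 100
print(f"+{frame_cost_ms:.1f}ms on {end_to_end_ms}ms total: +{overhead:.0f}% latency")  # ~+19%
```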
 
I think the main use case is taking low frame rates and propping them up. We can always achieve super high frame rates by lowering settings.

That's the marketed use case, but not how it plays out in practice. The input lag penalty when using a low base frame rate is so high you'd never want to use it, and it also introduces more motion artefacts. Under no circumstances would I recommend anyone take a 30fps output and run FG on it.
 
That's the marketed use case, but not how it plays out in practice. The input lag penalty when using a low base frame rate is so high you'd never want to use it, and it also introduces more motion artefacts. Under no circumstances would I recommend anyone take a 30fps output and run FG on it.
Yeah, we may be having a discussion of what constitutes a low frame rate; I would agree anything less than 60 would be painful for FG to make up for. But for the sake of discussion, this is worth testing on Blackwell to see how the behaviour compares to past FG algorithms.
 
There was never any future for those displays with raw frames. Game developers won't be targeting 4K 360Hz anytime soon (maybe never). Upscaling and frame gen are the only options unless you're playing Counter-Strike.
But "raw frames" are not the problem. I only read that latency is the only relevant metric. And we should base rendering solely on the impact of latency.

Here is an example from Black Myth with 4K+DLSS Performance+FG+Reflex vs. pure raw real frames without Reflex:

Latency from the overlay was 45ms with the 'fake' rendering vs. 80ms for the pure raw frames. The discussion loses its whole purpose when there is software outside of rendering that impacts latency more than higher FPS alone does.
 
I'm going to try one more time to steer this back on topic. My intention was very much not to start a general discussion about the pros and cons of frame gen. As I've said in every single post I've made, frame gen is great tech to have available and obviously in a lot of cases it's great to get additional smoothness. I'll state - yet again - that it's normally a better way to fill out additional vblanks on high refresh monitors than just repeating previous frames.

Everyone instead seems to have gotten fixated on the notion that the problem with it is that it adds some latency, and then arguing about whether it's "worth it" vs. the motion clarity improvements. This is a strawman to the original discussion. We can split it off to a separate thread if anyone actually wants to have this argument, but it's not an argument I'm making, and it's somewhat off-topic to the point that I made here specifically because it relates to the press. Also no one is saying that motion clarity doesn't matter; that is another strawman. I'll try one last time:

Generally the expectation if you report "card A at 120fps and card B at 60fps" (or 60 vs 30, or whatever) is that card A is notably more responsive than card B. Frame generation - even if it were completely free, had identical latency to the base frame rate and zero artifacts - breaks these assumptions in a major way. Since the feel of a game is pretty indisputably an important part of why we measure these things (of course there are diminishing returns, just like there are with motion smoothness!), that sort of reporting seems like a problem to me. If we are going to repurpose "FPS" to speak only to motion smoothness then we need to more consistently report a different metric for responsiveness.

As this is the DF thread, I'd propose we try and keep the discussion related to media reporting and benchmarking of FG cases rather than all the separate discussions, although we can start threads for those as desired. I just see people increasingly fighting strawmen and arguing past each other here in a way that isn't making any forward progress.
 
Spin off a new thread....

It's very hard to separate the motion smoothness which FG provides from just lowered input latency, because ultimately the colloquial phrase 'gameplay is smooth' refers to a combination of input responsiveness and motion clarity. Your scenario would be more like: does a gamer prefer 60fps + FG with graphics turned up, or 120fps + no FG with graphics turned down? And there you're in the realm of type of game, type of system, subjective preference and so on.

As for measuring, as I stated already, you can ideally separate native FPS and FG-enabled FPS on the counters. Whether IHVs are willing to do that is the question. Perhaps game engines can do it if IHVs won't?
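One purely hypothetical shape for such a split, just to illustrate the idea; none of these field names correspond to any actual IHV counter or API:

```python
# Hypothetical benchmark record that keeps smoothness and responsiveness separate.
from dataclasses import dataclass

@dataclass
class BenchResult:
    card: str
    rendered_fps: float    # frames actually rendered by the game/GPU pipeline
    presented_fps: float   # frames shown on screen, including generated ones
    latency_ms: float      # measured end-to-end (click-to-photon) latency

# Illustrative numbers only, not measurements:
a = BenchResult("Card A", rendered_fps=60, presented_fps=240, latency_ms=55)
b = BenchResult("Card B", rendered_fps=120, presented_fps=120, latency_ms=40)
# A bare "fps" bar would put Card A well ahead, while Card B is the more responsive one.
```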
 
Generally the expectation if you report "card A at 120fps and card B at 60fps" (or 60 vs 30, or whatever) is that card A is notably more responsive than card B. Frame generation - even if it were completely free, had identical latency to the base frame rate and zero artifacts - breaks these assumptions in a major way. Since the feel of a game is pretty indisputably an important part of why we measure these things (of course there are diminishing returns, just like there are with motion smoothness!), that sort of reporting seems like a problem to me. If we are going to repurpose "FPS" to speak only to motion smoothness then we need to more consistently report a different metric for responsiveness.
It also gets a little weirder if you are CPU-bound and your GPU is frame-generating above that.

From a technical perspective, I agree that this forum needs to get on the same page and call a spade a spade. Frames per second is a technical definition that encompasses the CPU and GPU working together to create that frame time. We will need a new term for hallucinated frames.

I'm not sure what you'd want to call it: something like Rendered Frames per Second, or AIFPS, HFPS, etc., so that we can denote things going forward. I suppose the community can come to a decision on the easiest naming convention.

We definitely need AIFPS. I don't think we should be debating that, but we shouldn't adjust our definition of what we consider FPS. We know what it means, and have known for some time; benchmarking has made that clear.

But as a side note, the gaming community needs to get on board with AI-generated frames as a path forward. If the size of the new AMD cards is any indication, greed is not the reason these cards cost so much. We desperately need a new mode of rendering that keeps power and costs lower.
 
It also gets a little weirder if you are CPU-bound and your GPU is frame-generating above that.

From a technical perspective, I agree that this forum needs to get on the same page and call a spade a spade. Frames per second is a technical definition that encompasses the CPU and GPU working together to create that frame time. We will need a new term for hallucinated frames.
That is something I don't understand: can a frame only include information from the same "timeline"? What happens when a frame contains information from past frame(s)? Isn't that just a "hallucinated frame" too?
 