CES 2025 Thread (AMD, Intel, Nvidia, and others!)

MOD MODE: I love the conversation about technically evaluating AI upscaling performance, so I spun it out into its own thread here: https://forum.beyond3d.com/threads/spinoff-technical-evaluation-of-upscaler-performance.63915/ It was hard to decide where to split it, so I left all the conversation about "what is the better use of transistors" in here.

ALSO MOD MODE: This is to nobody in particular, but also everybody: please make sure to keep the conversation rooted in your perspectives on the technology rather than your differences with the people in the thread. It's getting a little warm in here, and honestly I would expect some vigorous discussion on this topic of where to spend transistors, because there will never be one right and true answer. Just make sure to stick to the topic and keep away from being angry at anyone; they have a right to their opinion just as you have a right to yours.

Back to just me posting: I suspect the raw RT power budget will continue to increase, if only because I doubt anyone thinks "we have this solved now." At the same time, I'm under no illusions here: the move to more AI will continue in earnest for the foreseeable future. The whole reason DLSS and FG came about was as a nod to a general lack of underlying brute force to get "full frames" onto our screens quickly, and it was cheaper in transistors to upscale and frame-generate than to actually brute force it.

As for this whole "fake frames" thing -- I still don't agree with this stance, any more than I agree with the same criticism of DLSS. Let's start with DLSS tho: everyone should remember how early DLSS versions were plagued with artifacts, severe object ghosting under motion being the prime artifact in my memory, but others included an unstable, swimming effect on otherwise regular patterns, and ringing artifacts along very thin high-contrast edges or point lights. A few of those still exist depending on settings; however, I'm of the opinion that DLSS 3 and later have largely resolved these issues. I'm very happy with modern DLSS, because it allows even my 4090 to deliver more frames per second at the most absurdly high graphical settings, at measurably lower power, with (what I perceive as) excellent image quality.

Now let's bring it back to frame generation: with DLSS enabled even at the Quality preset, you're still "generating fake pixels", just in a fractional format. IIRC DLSS Quality renders at roughly 67% of native resolution per axis (I bet I'm off by a tad, but whatever), which works out to only about 44% of the native pixel count, so more than half of the displayed pixels are already "fake." Balanced mode renders at around 58% per axis, which is only about a third of the pixels, so roughly two thirds of what you see is reconstructed. Those reconstructed pixels are still subject to the same possibilities for creating artifacts for items coming on screen (eg the "real" raster misses a new object by a pixel or two, but the upscaled image has to generate those pixels without knowing there should be an object present...) Funny thing is, NVIDIA's framegen is still using the same underlying data as DLSS in terms of motion vectors, so it still understands how things are moving in a depth-parallax sense. Is it prone to more artifacts, like DLSS v1 was? You bet. Is it going to continue to get tuned over the next five years, like DLSS 1 was? Yup, it certainly is.
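For anyone who wants to sanity-check that arithmetic, here's a rough sketch using the commonly cited per-axis scale factors (treat the exact percentages as approximate):

```python
# Rough arithmetic: how many displayed pixels are natively rendered vs.
# reconstructed at the commonly cited DLSS per-axis render scales.
presets = {
    "Quality": 0.667,      # ~67% of native resolution per axis
    "Balanced": 0.58,
    "Performance": 0.50,
}

for name, scale in presets.items():
    rendered = scale ** 2        # fraction of pixels actually rasterized
    reconstructed = 1 - rendered # fraction filled in by the upscaler
    print(f"{name:>11}: {rendered:.0%} rendered, {reconstructed:.0%} reconstructed")

# Prints roughly:
#     Quality: 44% rendered, 56% reconstructed
#    Balanced: 34% rendered, 66% reconstructed
# Performance: 25% rendered, 75% reconstructed
```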

So, if you think you hate "fake frames", please at least consider remaining logically consistent and disable DLSS. Thanks ;)
I think the larger issue people have with using FG for benchmarking purposes is that those frames don't advance the game state. The artifacts don't seem to bother the vast majority.

WRT AMD, is the quality deficit of their FG noticeable relative to Nvidia without freeze framing? Assuming we equalize the upscaling tech of course.
 
I think the larger issue people have with using FG for benchmarking purposes is that those frames don't advance the game state.
Elucidate further on this statement for me... What do you mean when you say FG doesn't "advance the game state"?
 
WRT AMD, is the quality deficit of their FG noticeable relative to Nvidia without freeze framing?
Generally I would say no. FSR FG's issue is more with how finicky it can be to run without issues (microstutters and/or "runt frames") on a wide range of h/w and games. Some fare better than others here, while with DLSS it's mostly the same configuration approach for every card and title (namely enable Gsync and force vsync in the driver, and you're done).
 
Generally I would say no
According to Digital Foundry, DLSS FG still retains the visual quality crown compared to FSR FG, producing fewer errors (timestamped in the video). At any rate, AMD said they are doing FSR4 FG through machine learning ... if FSR3 FG were capable enough, AMD wouldn't have upgraded it to machine learning in FSR4.

 
According to Digital Foundry, DLSS FG still retains the visual quality crown compared to FSR FG, producing fewer errors (timestamped in the video).
While this is true, it is generally hard to notice these errors at all, let alone the difference between the two, when you're actually playing at 100+ fps.
What can be noticeable are recurring issues like the HUD glitches or the breakup of regular patterns when the camera pans over them slowly (these usually happen in cutscenes).
FSR FG has a mode which essentially solves the HUD issues (albeit it costs more performance AFAIR, which is why it's not used universally), but the latter can happen on both, and I wouldn't say that DLSS has a noticeable advantage there.

if FSR3 FG was capable enough, AMD wouldn't have upgraded it to machine learning in FSR4
We haven't heard anything on FG in FSR4 yet. I think it's safe to assume that the ML component is only for SR for now.
 
Elucidate further on this statement for me... What do you mean when you say FG doesn't "advance the game state"?
The game logic is completed in the update() function.
The render() function runs after update and is responsible for rendering the image to the screen. Typically a frame is considered an update() followed by a render().

With FG, up to three extra frames are generated between the previous and current rendered frame, meaning there is no update() happening between them. So you experience latency, and what appears to be reactive isn't really reactive, because no update() has occurred (the game logic has not changed).
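To make the point concrete, here's a minimal sketch of the classic coupled loop (hypothetical placeholder functions, not any particular engine's API), with a note on where generated frames fit in:

```python
import time

def poll_input():
    """Placeholder: read controller/keyboard state."""
    return {}

def update(dt, input_state):
    """Placeholder: advance the simulation (physics, AI, hit detection)."""
    pass

def render(frame_index):
    """Placeholder: draw the current game state."""
    print(f"rendered frame {frame_index}")

# Classic coupled loop: one update() per rendered frame.
previous = time.perf_counter()
for frame_index in range(3):       # a few iterations for illustration
    now = time.perf_counter()
    dt, previous = now - previous, now

    update(dt, poll_input())       # game state (and input response) advances here
    render(frame_index)            # a "real" frame

    # Frame generation sits outside this loop: the GPU inserts extra images
    # between two rendered frames. No update() runs for those images, so they
    # add visual smoothness but not responsiveness.
```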

It's less of an issue the higher your base frame rate is, and a much larger issue the lower your base frame rate is.

All FG is doing is smoothing the motion between updates, taking on some additional latency as a compromise. The generated images aren't technically frames in the traditional sense.

That being said, there is still value in that as well, just not in the way it's marketed by Nvidia. Making frame rates on a 5070 run as smoothly as on a 4090 is not the same as it performing like a 4090; each frame on the 4090 would be a non-generated frame.

I could be wrong though, so I guess I gotta see the data. Still not sure how that comparison worked out
 
I think it depends on how games do the loop. Some games do not tie the game logic to rendering, so it's not necessarily "update and render".
Of course, with FG it does not really matter how games do the loop, because the interpolated frames depend only on the "real" frames and are not affected by game logic, even if the game logic is not tied to the frame rate.
So let's say a game keeps its game logic at 60 fps but the rendering frame rate fluctuates. If the base frame rate is higher than 60 fps, then FG would be very close to "real" frame rates, because the actual rendered frames are interpolated in a sense anyway. On the other hand, if the base frame rate is low (e.g. 30 fps or lower), then the frames generated by FG would be more likely to be inconsistent with the actual game state.
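For reference, a minimal sketch of that kind of decoupled loop (fixed 60 Hz logic, variable render rate; placeholder function names, not any specific engine):

```python
import time

LOGIC_DT = 1.0 / 60.0  # game logic fixed at 60 Hz

def update(dt):
    """Placeholder: advance the simulation by one fixed step."""
    pass

def render(alpha):
    """Placeholder: draw, blending between the last two logic states by alpha."""
    pass

accumulator = 0.0
previous = time.perf_counter()
for _ in range(5):                    # a few rendered frames for illustration
    now = time.perf_counter()
    accumulator += now - previous
    previous = now

    while accumulator >= LOGIC_DT:    # zero, one, or several logic steps per rendered frame
        update(LOGIC_DT)
        accumulator -= LOGIC_DT

    # Rendered frames already interpolate between logic ticks; FG then
    # interpolates between these rendered frames, one level further removed
    # from the 60 Hz game state.
    render(accumulator / LOGIC_DT)
```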
 
I think it depends on how games do the loop. Some games do not tie the game logic to rendering, so it's not necessarily "update and render".
Of course, with FG it does not really matter how games do the loop, because the interpolated frames depend only on the "real" frames and are not affected by game logic, even if the game logic is not tied to the frame rate.
So let's say a game keeps its game logic at 60 fps but the rendering frame rate fluctuates. If the base frame rate is higher than 60 fps, then FG would be very close to "real" frame rates, because the actual rendered frames are interpolated in a sense anyway. On the other hand, if the base frame rate is low (e.g. 30 fps or lower), then the frames generated by FG would be more likely to be inconsistent with the actual game state.
That's true, I forgot how modern engines have decoupled this entirely. They have all sorts of logic running at various frequencies now.
 
Elucidate further on this statement for me... What do you mean when you say FG doesn't "advance the game state"?
They are frames the algorithm generates and inserts between traditionally rendered frames. These AI frames however don’t offer any of the benefits of higher performance other than increased visual smoothness. In a game where the simulation was tied to framerate, FG would not speed this up like a traditionally rendered higher framerate would.
 
They are frames the algorithm generates and inserts between traditionally rendered frames. These AI frames however don't offer any of the benefits of higher performance other than increased visual smoothness. In a game where the simulation was tied to framerate, FG would not speed this up like a traditionally rendered higher framerate would.
@pcchen answered this exactly the way I would have.

Further, your reply, like others, assumes the consumer's PC is even remotely capable of rasterizing the game at such an increased framerate. I feel like we're missing the forest for the trees: the supermajority of use cases for FG (just like DLSS) exist because the customer's computer simply can't achieve those higher framerates. Why then are we myopically focused on what the comparison might be if the customer had some mythological (to them) machine which could magically achieve all their wishes? They're using FG because there's no other way for them to get that framerate; magic doesn't exist.

Yes, if your computer can somehow muster all the framerate you want and never needs FG, then congrats! If, however, your computer can't ever hope to attain those framerates, are you OK with one extra frame of latency as a tradeoff for a massively smoother visual experience? Ultimately the answer is up to you, the customer... nothing is forcing anyone to turn it on.
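To put a rough number on that "one extra frame of latency" (simple frame-time arithmetic, assuming interpolation-based FG holds back one rendered frame; things like Reflex shift these numbers around):

```python
# Ballpark cost of holding back one rendered frame for interpolation-based FG,
# at a few base (pre-FG) frame rates. Illustrative arithmetic only.
for base_fps in (30, 60, 120):
    frame_time_ms = 1000.0 / base_fps
    print(f"{base_fps:>3} fps base -> ~{frame_time_ms:.1f} ms of added latency")

# Prints roughly:
#  30 fps base -> ~33.3 ms of added latency
#  60 fps base -> ~16.7 ms of added latency
# 120 fps base -> ~8.3 ms of added latency
```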

Everyone needs to be reminded: raw performance is going to hit a wall sooner rather than later, as we can't continue shrinking transistors into infinity, nor can we keep piling ever more of them onto the same singular piece of silicon (or even graphene). At some point the monolith is unsustainable, and performance will be found in new and different ways. Even with chiplets, the wall still exists...

Frame generation and upscaling are here to stay for a very, very long time.
 
They are frames the algorithm generates and inserts between traditionally rendered frames. These AI frames however don't offer any of the benefits of higher performance other than increased visual smoothness. In a game where the simulation was tied to framerate, FG would not speed this up like a traditionally rendered higher framerate would.
What simulation is that? If it's simulating something that you see then FG will affect the framerate of that just as anything else which is being rendered.
The only thing which FG doesn't affect is the game's input logic.
Which kinda sends us on an interesting train of thought on how exactly Reflex 2 will work with FG - and will it even work?
 
@pcchen answered all the way in which I would answer.

Further, again, this all assumes you even COULD rasterize the game at an increased framerate. I feel like we're missing the forest for the trees: the supermajority of use cases for FG (just like DLSS) exist because the customer's computer simply can't achieve those higher framerates. What then does it matter what the comparison might be if the customer had some mythological (to them) machine which could magically achieve all their wishes? They're using FG because there's no other way for them to get that framerate.

So, in essence, if your computer could somehow muster all the framerate you want and you never needed FG, then congrats! If, however, your computer can't ever hope to attain those framerates, are you OK with one extra frame of latency as a tradeoff for a massively smoother experience? Ultimately the answer is up to the customer; nothing is forcing them to turn it on. The reality is that performance is going to hit a wall sooner rather than later; we can't continue shrinking transistors into infinity, nor can we keep piling ever more of them onto the same singular piece of silicon (or even graphene). At some point the monolith is unsustainable, and performance will be found in new and different ways.

Frame generation and upscaling are here to stay for a very, very long time.
I’m not saying FG isn’t useful. I just don’t think it should be used to measure performance against traditionally rendered framerates.

What simulation is that? If it's simulating something that you see then FG will affect the framerate of that just as anything else which is being rendered.
The only thing which FG doesn't affect is the game's input logic.
Which kinda sends us on an interesting train of thought on how exactly Reflex 2 will work with FG - and will it even work?
I meant a hypothetical game where the simulation and game state were tied to framerate.
 
I meant a hypothetical game where the simulation and game state were tied to framerate.
Would still be interpolated by FG just fine. You would need to make sure that the game is locked to the framerate it is supposed to run at though. People are using Lossless Scaling FG just for that - to double the fps in games locked at 30/60.
 
I’m not saying FG isn’t useful. I just don’t think it should be used to measure performance against traditionally rendered framerates.
Show me anyone in this thread, or even in this forum, who you believe disagrees with you. I'm not sure why this statement keeps coming up, as if somehow there are folks here evangelizing for it?
 
This debate happened in the DF 2025 thread.
I don't see anyone arguing that FG framerate should be directly comparable to non-FG framerate, even in that thread. And if it's in that thread, let's keep the argument in that thread.
 
In a world where GPUs just generate a series of whole frames between game state updates, there's no longer a reason to interpret "game performance" and "visual performance" as a single perception of how many times per second you see updated frames; they should be treated separately, and reported as such.

Visual performance = images seen per second (choppy, smoother, incredibly smooth)
Gameplay performance = input response (laggy, unresponsive, responsive, very responsive)

Does it matter how low the input resolution of reconstructed frames is? No, what matters is how sharp and clear the output image is. Does it matter how many frames are "generated by AI" per second? No, what matters is how responsive the game feels to your inputs. So 240 fps can look extremely smooth, and there's nothing "not real" about it looking that way, but input response becomes the way to measure the game's performance more objectively.

We have ways of measuring all this stuff already. So reviewers and benchmarks need to put far more weight onto the input-response side of "performance", since any game could easily generate tons of frames now. It will be up to players to decide how responsive they want their games to be relative to how smoothly they want them presented.
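Something as simple as reporting the two numbers side by side would already capture it. A sketch of what that could look like (made-up field names and illustrative values, not any existing benchmark tool):

```python
from dataclasses import dataclass

@dataclass
class FrameReport:
    """Hypothetical benchmark summary that keeps the two 'performances' separate."""
    displayed_fps: float         # everything the monitor shows, generated frames included
    rendered_fps: float          # frames the game actually rendered (one update() each)
    avg_input_latency_ms: float  # click-to-photon or similar responsiveness measurement

    def describe(self) -> str:
        return (f"visual: {self.displayed_fps:.0f} fps shown "
                f"({self.rendered_fps:.0f} rendered), "
                f"gameplay: {self.avg_input_latency_ms:.0f} ms input latency")

# Illustrative numbers only: a hypothetical 4x multi-frame-generation run with a 60 fps base.
print(FrameReport(displayed_fps=240, rendered_fps=60, avg_input_latency_ms=45).describe())
```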
 
Yup, agreed entirely. Technology has now come to the point where visual fidelity isn't necessarily linked to our input. There are plenty of interesting ways to measure this, and one of the many smart people in this world will come up with one or more metrics that make it all make sense.

I look forward to it, because I bet it's not far off now. :)
 