CES 2025 Thread (AMD, Intel, Nvidia, and others!)

@Cappuccino where did “iso-quality” come from in the context of PC hardware reviews? ISO is an actual standard with certification. I don’t really like pretending that’s even remotely what PC hardware reviewers do.
I don't mean ISO, I mean the word iso, meaning same. When we benchmark two cards, we benchmark them at the same quality settings on the same level.

In terms of a TV only accepting a 60Hz signal but interpolating and displaying 120 frames, it’s definitely a 120Hz panel and it’s displaying 120 frames. Half of the frames are just really shitty. Maybe you’d say it’s not a 120Hz TV, because it’s not 120Hz end-to-end. Technically it is displaying 120 frames per second.
Technically sure, but nobody actually argued this.

In terms of comparisons, I think things should be compared in terms of how people use them. They may want to see 4x frame gen on one vendor against 2x on another to decide if the image quality differences are reasonable. That could be one of several factors that decide a purchase in either direction. I think the idea that comparison always has to be at matched settings could limit information for consumers, though matched-settings testing has a ton of value.
You need to have matched settings for at least a portion to determine which card is actually faster, though.
 
You need to have matched settings for at least a portion to determine which card is actually faster, though.
You’re defining “faster” as the speed at which a frame is produced in a _specific_ way prescribed by you. Although this way was the only way we knew how to generate a frame in the past, that’s no longer the case.

Open your mind. Frames can be produced in other ways (with their own pros and cons). Your narrower definition of “faster” isn’t _wrong_, but going forward it’s not going to be that useful because we are going to get more and more creative about how we generate frames. FG is just the beginning.

Think about the raster/RT rendering pipeline in a different way. Its job isn’t to produce frames (not even a subset). Its job is to produce input data for an AI model to ingest and produce the frames that we see. Today that input data is a subset of frames (+ other metadata). Tomorrow it could be some other data structure that makes no sense to us visually, but is a much more efficient way to deliver the sparse information to the NN.

In this future world there will be no way for you to measure performance using your limited metric. You have to compare the quantity and quality of the end result.
 
You’re defining “faster” as the speed at which a frame is produced in a _specific_ way prescribed by you. Although this way was the only way we knew how to generate a frame in the past, that’s no longer the case.
We’ve always had motion interpolation, it just wasn’t very good. We’ve never considered it equivalent to real actual performance.

My mind is plenty opened, almost every comment I leave about framegen I say that it’s good tech and is one of many reasons people buy Nvidia. However it’s not performance in the traditional sense, it’s very good motion smoothing, as we have attempted on TVs for decades.
 
@Cappuccino Frame gen is both smoothing and motion blur reduction.

I think maybe it’s time that reviewers define performance. I guess the standard to look for is something like average smoothness, latency, frame pacing, power consumption and motion blur, controlled for image quality, which includes in-game settings, resolution and upscaling.

That’s the best I can come up with to articulate the expectation to this point. So if you improve a number of the performance metrics without regression to the others, you have increased overall performance. You can lower image quality to increase performance, but when comparing performance, image quality should be very close to equal.

I think the point of defining performance this way is that it’s not reliant on frames. We now have frame gen, which increases frame rate but causes a latency regression, which would betray the definition. It will be better to measure latency and all of the other performance metrics directly. In some cases that will still be frame time, like average smoothness and motion blur.

I do think it’s tricky still. If I run uncapped and my GPU hits 100%, that decreases frame times but increases latency. If I frame cap, I can increase frame times slightly but lower latency and improve frame pacing. Is that better or worse performance? Same with Reflex: lower latency but a tiny bit higher average frame time and worse frame pacing.
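To make that concrete, here's a rough sketch of what reporting those metrics separately could look like. The frame-time and latency numbers and the helper below are made up purely for illustration, not taken from any real capture tool:

```python
# Rough sketch: summarizing a capture as separate metrics instead of one "fps" number.
# The sample frame times and latencies are invented; real data would come from a
# capture tool, which isn't modeled here.
from statistics import mean, stdev

def summarize(frame_times_ms, latencies_ms):
    """Report smoothness, pacing and latency as separate numbers."""
    avg_ft = mean(frame_times_ms)                                         # average smoothness
    worst_1pct = sorted(frame_times_ms)[int(len(frame_times_ms) * 0.99)]  # 99th-percentile frame time
    return {
        "avg_fps": 1000.0 / avg_ft,
        "1%_low_fps": 1000.0 / worst_1pct,
        "pacing_jitter_ms": stdev(frame_times_ms),   # crude frame-pacing proxy
        "avg_latency_ms": mean(latencies_ms),        # measured separately from frame rate
    }

# Illustrative comparison: uncapped (GPU at 100%) vs a frame-capped run of the same scene.
uncapped = summarize([8.3, 8.1, 8.6, 9.0, 8.2] * 20, [55, 58, 60, 57, 56] * 20)
capped   = summarize([9.1, 9.0, 9.1, 9.0, 9.1] * 20, [38, 40, 39, 41, 40] * 20)
print(uncapped)  # higher fps, higher latency, looser pacing
print(capped)    # slightly lower fps, lower latency, tighter pacing
```

The point being that frame rate becomes one row in the report rather than the headline number.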
 
We’ve always had motion interpolation, it just wasn’t very good. We’ve never considered it equivalent to real actual performance.

My mind is plenty opened, almost every comment I leave about framegen I say that it’s good tech and is one of many reasons people buy Nvidia. However it’s not performance in the traditional sense, it’s very good motion smoothing, as we have attempted on TVs for decades.
My point wasn't about whether framegen's output was good or bad. It's about whether generated frames are a measure of performance.

Did you read the second half of my post? What if in a future GPU the traditional/RT pipeline is only used to generate intermediate metadata that a neural network then ingests to generate *all* frames? Would you measure such a GPU's performance as 0? You can, if you insist on "performance" being defined by brute-force generated frames, but it's not a particularly useful metric.
 
@Cappuccino Frame gen is both smoothing and motion blur reduction.

I think maybe it’s time that reviewers define performance. I guess the standard to look for is something like average smoothness, latency, frame pacing, power consumption and motion blur, controlled for image quality, which includes in-game settings, resolution and upscaling.

That’s the best I can come up with to articulate the expectation to this point. So if you improve a number of the performance metrics without regression to the others, you have increased overall performance. You can lower image quality to increase performance, but when comparing performance, image quality should be very close to equal.

I think the point of defining performance this way is that it’s not reliant on frames. We now have frame gen, which increases frame rate but causes a latency regression, which would betray the definition. It will be better to measure latency and all of the other performance metrics directly. In some cases that will still be frame time, like average smoothness and motion blur.

I do think it’s tricky still. If I run uncapped and my GPU hits 100%, that decreases frame times but increases latency. If I frame cap, I can increase frame times slightly but lower latency and improve frame pacing. Is that better or worse performance? Same with Reflex: lower latency but a tiny bit higher average frame time and worse frame pacing.
Sure yeah motion smoothing and blur reduction. I agree.
 
My point wasn't about whether framegen's output was good or bad. It's about whether generated frames are a measure of performance.
Okay, but that wasn't what my point was about really. It's about whether or not you can compare FG to MFG without a giant caveat that the only reason 'performance' 'doubled' is due to increased interpolation.

Did you read the second half of my post? What if in a future GPU the traditional/RT pipeline is only used to generate intermediate metadata that a neural network then ingests to generate *all* frames? Would you measure such a GPU's performance as 0? You can, if you insist on "performance" being defined by brute-force generated frames, but it's not a particularly useful metric.
I have literally no idea how we would judge GPUs in this incredibly theoretical and out there scenario. I find it funny that you call traditional performance a 'not particularly useful metric' but we're talking about a future where entire games are generated with AI? This is fun speculation but not really relevant to the current industry.
 
I have literally no idea how we would judge GPUs in this incredibly theoretical and out there scenario. I find it funny that you call traditional performance a 'not particularly useful metric' but we're talking about a future where entire games are generated with AI? This is fun speculation but not really relevant to the current industry.
I think you misunderstood. I wasn't talking about an "entire game" being AI-generated -- the entire game engine + rendering pipeline works as it does today, but they would output some intermediate data that the AI composes into RGB pixels. Even today with SR + 4xMFG only a tiny fraction (3%) of pixels are generated with brute force. Everything else is synthesized. And actually because DLSS-SR aggregates pixels from multiple frames to produce the final frame, you could argue that 100% of the image is already AI-generated. I'm suggesting a future in which those 3% aren't actually RGB pixels, but maybe intermediate G-buffer + MV data or something. It probably doesn't make sense to do that with today's algorithms, but it could in future. So this insistence on a "frame" being something sacrosanct that's generated using non-AI methods just doesn't make sense, not even today.
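To put rough numbers on that, here's a quick back-of-the-envelope. The upscaling ratio and MFG factor are assumptions on my part, and the exact percentage moves around depending on which mode you assume:

```python
# Back-of-the-envelope: fraction of displayed pixels produced by the traditional
# render pipeline. Both ratios below are assumptions, not measurements.
upscale_ratio = 1 / 4   # assume DLSS Performance: ~1/4 of output pixels are rendered
mfg_factor = 4          # assume 4x MFG: 1 rendered frame per 4 displayed frames

fraction = upscale_ratio / mfg_factor
print(f"~{fraction:.1%} of displayed pixels come from the traditional pipeline")
# ~6.2% with these ratios; a heavier upscale (~1/9 of output pixels rendered) lands around 3%.
```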
 
I have seen multiple comments over the past week stating we should explicitly re-define what performance means as we transition away from node shrinks as the primary mode of generational improvement.

What I’ve seen is people debating how to quantify or qualify the trade-offs involved with framegen. I haven’t seen anyone claim we should pretend framegen is equivalent to the current definition of “fps”, which is what you seem to be arguing against.

Yes, and I'm saying if you are benchmarking two cards, you cannot use qualitatively different rendering methods.

You’re laser focused on iso-quality “fps” and are sort of missing the point. It is perfectly fine for a reviewer to evaluate IQ at different settings. This was extremely common during the AA and aniso “wars”. I don’t see anyone arguing that we should accept 240fps native = 240fps framegen. Everyone knows it’s not the same thing but that doesn’t mean we can’t compare them and draw conclusions.

An example this forum might appreciate is benchmarking FSR Quality vs DLSS Quality: we all know they don't look the same. You are way better off benchmarking at iso-quality levels.

How do you do that when all cards don’t use the same upscaling method and therefore iso-quality is impossible to achieve?
 
All that matters to the end users is what they see.
Not how it was "created".
That fallacy is leading nowhere, but expect to hear it used, until AMD/Intel does the same.
 
You’re laser focused on iso-quality “fps” and are sort of missing the point. It is perfectly fine for a reviewer to evaluate IQ at different settings. This was extremely common during the AA and aniso “wars”. I don’t see anyone arguing that we should accept 240fps native = 240fps framegen. Everyone knows it’s not the same thing but that doesn’t mean we can’t compare them and draw conclusions.
Evaluating is fine but if you are going to compare two cards head to head you kinda have to normalize settings and visuals between them.
How do you do that when all cards don’t use the same upscaling method and therefore iso-quality is impossible to achieve?
For head to head reviews you include a section without upscaling, which is what every reviewer worth their salt does.

When HUB tries to compare two cards, one with FSRQ and one with DLSSQ people usually say it’s worthless because those don’t look the same! Who cares if the frame rate is similar if one is showing a worse picture?

All that matters to the end users is what they see.
Not how it was "created".
That fallacy is leading nowhere, but expect to hear it used, until AMD/Intel does the same.
So some comments are saying I’m arguing against nobody, that nobody says generated frames are equivalent to real frames, yet people are literally saying “it doesn’t matter how a frame is generated”.

AMD and Intel literally do this too lol. Still don’t think using FG magically doubles frame rate.
 
...

So some comments are saying I’m arguing against nobody, that nobody says generated frames are equivalent to real frames, yet people are literally saying “it doesn’t matter how a frame is generated”.

AMD and Intel literally do this too lol. Still don’t think using FG magically doubles frame rate.

It quite literally doubles frame rate. "Normal" frames are generated through a render pipeline. Frame generation uses interpolation to generate frames. They're all frames. It's doubled. In terms of what's going to your display, they're all unique frames. They're just qualitatively different because of the way they were generated.
 
When HUB tries to compare two cards, one with FSRQ and one with DLSSQ people usually say it’s worthless because those don’t look the same! Who cares if the frame rate is similar if one is showing a worse picture?
And yet again, repeating myself: framerate is not the same as quality, and it isn't the same thing as performance. This goes right back to @trinibwoy's comment around the "AA and aniso wars" of the early and mid 2000s, when certain cards had pretty crap implementations of both, resulting in potentially increased framerate at the expense of visibly decreased image quality -- but it was still a higher framerate. Does that mean one was more "performant" than the other? The answer lies in how we decide to define performance.

So some comments are saying I’m arguing against nobody, that nobody says generated frames are equivalent to real frames, yet people are literally saying “it doesn’t matter how a frame is generated”.
As has been described repeatedly and you continually seem to ignore it or side-step it: a frame is a frame is a frame, whether it was rastered, AI interpreted, or a sequence of lossy JPEG files rapidly displayed like a digital rendition of a child's paper flipbook. So long as the individual frames are unique from each other, then those frames count towards framerate.

And finally, yet again, you ask whether it matters how each frame was generated. It certainly can matter when speaking about image quality, which is unlinked from framerate. And it can be unlinked from performance, depending on how we decide we want to define performance.

Edit: You know, I have an idea. When one of us has a 50-series card at our disposal, we need to do a video capture with MFG fully enabled and grab a series of about two or three dozen frames. We can splat them all out individually, and the people with remarkably strong opinions about how FAKE frames will be lower quality and "wrong" can then point out the ones which are so obviously fake. We can do it double-blind, so the person who does the capture can tell the answers to another unrelated person beforehand, and a second unrelated person can be given the frames to post without any knowledge of which ones are "real" vs "fake".

I think it would be REALLY interesting to see how many of the frames can be accurately determined to be AI generated vs not.
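Here's a minimal sketch of how the scoring side of that test could work. The filenames, the 4x MFG pattern and the answer key are hypothetical placeholders; in practice the key would come from whoever does the capture, and the judges would only ever see the shuffled, unlabeled frames:

```python
# Minimal sketch of scoring the proposed double-blind test. Filenames, the 4x MFG
# pattern and the answer key are all placeholders.
import random

# Answer key known only to the capturer: filename -> True if the frame is AI-generated.
answer_key = {f"frame_{i:03d}.png": (i % 4 != 0) for i in range(24)}

shuffled = list(answer_key)
random.shuffle(shuffled)  # the order shown to judges carries no information

def score(guesses):
    """guesses: filename -> True if the judge believes the frame is AI-generated."""
    correct = sum(guesses[name] == answer_key[name] for name in shuffled)
    return correct / len(shuffled)

# A judge flipping a coin should land near 50%; accuracy well above that would mean
# the generated frames really are distinguishable.
coin_flipper = {name: random.random() < 0.5 for name in shuffled}
print(f"coin-flip accuracy: {score(coin_flipper):.0%}")
```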
 
Evaluating is fine but if you are going to compare two cards head to head you kinda have to normalize settings and visuals between them.

Depends on what you're comparing. You seem only interested in comparing framerate and are ignoring everything else that people are trying to discuss.

So some comments are saying I’m arguing against nobody, that nobody says generated frames are equivalent to real frames, yet people are literally saying “it doesn’t matter how a frame is generated”.

You're misrepresenting what's being said. Nobody is saying that the frames are equivalent qualitatively - they are saying the framerate is equivalent which is literally just the number of unique frames sent to a display.
 
From my perspective, whether you put 1 frame in between 2 rendered ones or 3 frames, the time between rendered frames should be constant.

I just see the 5000 series put 3 frames into the same time interval in which the 4000 series could only put 1.

Maybe the 6000 series inserts 7 more frames, but it would have to do it in the same time intervals as above. Otherwise interpolation is not really useful if we’re increasing the time between rendered frames just to jam in more interpolated frames.

I may be misunderstanding something here, but surely in the case where 3 generated frames are inserted vs 1, the 2 real frames on either side would be displayed for a much shorter period than in the standard frame gen scenario to maintain proper frame pacing? So the time-on-screen ratio between real and generated frames would still be less favourable?
 
So, let's expand iroboto's statement there to make sure we're all talking about the same thing.

Without FG enabled:
Frame 1 is rastered and displayed at 0.0000 seconds (the clock starts here.)
Frame 2 is rastered and displayed at 0.0167 seconds.
Frame 3 is rastered and displayed at 0.0333 seconds.

With 40-series FG enabled:
Frame 1 is rastered and displayed at 0.0000 seconds (the clock starts here.)
Frame 2 is AI-generated and displayed at 0.0083 seconds.
Frame 3 is rastered and displayed at 0.0167 seconds. (the wall-clock time between rastered frames remains constant)
Frame 4 is AI-generated and displayed at 0.025 seconds.
Frame 5 is rastered and displayed at 0.0333 seconds (again, wall-clock time between rastered frames remains constant)

Ostensibly, with 50-series MFG enabled:
Frame 1 is rastered and displayed at 0.0000 seconds (the clock starts here.)
Frame 2 is AI-generated and displayed at 0.0041 seconds.
Frame 3 is AI-generated and displayed at 0.0083 seconds.
Frame 4 is AI-generated and displayed at 0.0125 seconds.
Frame 5 is rastered and displayed at 0.0167 seconds. (the wall-clock time between rastered frames remains constant)
Frame 6 is AI-generated and displayed at 0.0208 seconds.
Frame 7 is AI-generated and displayed at 0.025 seconds.
Frame 8 is AI-generated and displayed at 0.0291 seconds.
Frame 9 is rastered and displayed at 0.0333 seconds (again, wall-clock time between rastered frames remains constant)

Now, the question you've raised, which is a good one, is how do we know frame pacing actually comes out as nicely as in the example I typed above? And honestly, I don't know the answer to that. Since the DLSS algorithm has previous reference frames, it has at least some sense of rastered frame timing in the last dozen milliseconds, so it should have a good way to estimate the necessary frame pacing for the AI-generated frames. Remember that DLSS and FG also both have motion vector data from the rasterized geometry, so the algorithm will use this motion data along with prior rastered frame pacing data to AI-generate frames with a reasonable continuity of object motion as well as overall viewport motion.

I'm quite certain MFG frame pacing is best when the application is locked to a specific FPS value, obviously below the display device's true refresh rate. The backlog of frame data will show a consistent pacing, which means estimation of future frames and object motion therein gets a LOT easier to keep smooth.
 
I may be misunderstanding something here, but surely in the case where 3 generated frames are inserted vs 1, the 2 real frames on either side would be displayed for a much shorter period than in the standard frame gen scenario to maintain proper frame pacing? So the time-on-screen ratio between real and generated frames would still be less favourable?

Yea you got it.
If we assume 60fps is the original update without FG, then you're looking at 16.6ms for each rendered frame to arrive.
With MFG inserting 3 frames in between the 16.6ms, you're now displaying each frame for only ~4.2ms. Diagram below.
Yellow and green are the real rendered frames. You would see the real frames less, yeah, but the eye is likely unable to discern this because as the framerate goes up, it's just going to be a blur.
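Quick sketch of that math, assuming a 60fps rendered base and perfectly even flip pacing (the ideal case, not a measurement):

```python
# Ideal-case math for a 60 fps rendered base with 4x MFG (even flip pacing assumed).
base_fps = 60
mfg_factor = 4                            # 1 rendered + 3 generated frames per interval

rendered_interval_ms = 1000 / base_fps    # ~16.7 ms between rendered frames
per_frame_ms = rendered_interval_ms / mfg_factor
rendered_share = 1 / mfg_factor           # fraction of on-screen time showing rendered frames

print(f"each displayed frame lasts ~{per_frame_ms:.1f} ms")          # ~4.2 ms
print(f"rendered frames fill {rendered_share:.0%} of screen time")   # 25%
```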

The reason why frame pacing will be acceptable is because of the flip metering hardware in the display chipset for the 5000 series. This is why MFG is restricted to 5000 series and not available for 4000 series.


[Attached diagram: frame timing with MFG; yellow and green mark the real rendered frames]

Flip Metering is shown in their marketing videos.
 
Since the DLSS algorithm has previous reference frames, it has at least some sense of rastered frame timing in the last dozen milliseconds, so it should have a good way to estimate the necessary frame pacing for the AI-generated frames.
FG works by interpolating between two rendered frames. Their frame times are known in advance, so it is easy to calculate the point(s) between them where the generated frame(s) should be inserted.
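A minimal sketch of that calculation, assuming the generated frames are spaced evenly across the gap (an assumption on my part, not a claim about what the driver actually does):

```python
# Minimal sketch: with both rendered frames' times known up front, the generated
# frames can be slotted at even subdivisions of the gap between them.
def insertion_times(t_prev_ms, t_next_ms, num_generated):
    """Presentation times (ms) for generated frames between two rendered frames."""
    gap = t_next_ms - t_prev_ms
    return [t_prev_ms + k * gap / (num_generated + 1) for k in range(1, num_generated + 1)]

print(insertion_times(0.0, 16.7, 1))  # 2x FG  -> [8.35]
print(insertion_times(0.0, 16.7, 3))  # 4x MFG -> [4.175, 8.35, 12.525]
```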
 
Awesome Excel chart :) Also, just wanted to put a very fine tip on your statement here:
You would see the real frames less, yeah, but the eye is likely unable to discern this because as the framerate goes up, it's just going to be a blur.
What @iroboto is explaining here, in the bolded comment, is a blur to your human eye. This isn't to suggest or even insinuate that the frames themselves are blurry; in fact, they'll be just as detailed as the past two rasterized frames.

That detail also doesn't suggest or insinuate they'll be exactly correct in terms of object permanence, viewport-edge object visibility, or similar artifacts stemming from motion estimation and projection of content that bleeds either into or out of the viewable frustum. Still, the generated frames will absolutely have detail.
 