CES 2025 Thread (AMD, Intel, Nvidia, and others!)

@Cappuccino where did “iso-quality” come from in the context of PC hardware reviews? ISO is an actual standards body with certifications. I don’t really like pretending that’s even remotely what PC hardware reviewers do.
I don't mean ISO the organization, I mean the prefix iso-, meaning "same." When we benchmark two cards, we benchmark them at the same quality settings, on the same level.

In terms of a TV only accepting a 60 Hz signal but interpolating and displaying 120 frames: it’s definitely a 120 Hz panel and it’s displaying 120 frames. Half of the frames are just really shitty. Maybe you’d say it’s not a 120 Hz TV because it’s not 120 Hz end-to-end, but technically it is displaying 120 frames per second.
Technically sure, but nobody actually argued this.

In terms of comparisons, I think things should be compared in terms of how people use them. They may want to see 4x frame gen on one vendor compared to 2x on another to decide whether the image quality differences are reasonable. That could be one of several factors that decide a purchase in either direction. I think the idea that comparisons always have to be at matched settings could limit information for consumers, though matched-settings testing has a ton of value.
You need to have matched settings for at least a portion to determine which card is actually faster, though.
 
You need to have matched settings for at least a portion to determine which card is actually faster, though.
You’re defining “faster” as the speed at which a frame is produced in a _specific_ way prescribed by you. Although this way was the only way we knew how to generate a frame in the past, that’s no longer the case.

Open your mind. Frames can be produced in other ways (with their own pros and cons). Your narrower definition of “faster” isn’t _wrong_, but going forward it’s not going to be that useful, because we are going to get more and more creative about how we generate frames. FG is just the beginning.

Think about the raster/RT rendering pipeline in a different way. Its job isn’t to produce frames (not even a subset). Its job is to produce input data for an AI model to ingest and produce the frames that we see. Today that input data is a subset of frames (+ other metadata). Tomorrow it could be some other data structure that makes no sense to us visually, but is a much more efficient way to deliver the sparse information to the NN.

In this future world there will be no way for you to measure performance using your limited metric. You have to compare the quantity and quality of the end result.
 
You’re defining “faster” as the speed at which a frame is produced in a _specific_ way prescribed by you. Although this way was the only way we knew how to generate a frame in the past, that’s no longer the case.
We’ve always had motion interpolation, it just wasn’t very good. We’ve never considered it equivalent to real actual performance.

My mind is plenty open; in almost every comment I leave about framegen I say that it’s good tech and is one of many reasons people buy Nvidia. However it’s not performance in the traditional sense, it’s very good motion smoothing, as we have attempted on TVs for decades.
 
@Cappuccino Frame gen is both smoothing and motion blur reduction.

I think maybe it’s time that reviewers define performance. I guess the standard to look for is something like average smoothness, latency, frame pacing, power consumption and motion blur, controlled for image quality, which includes in-game settings, resolution and upscaling.

That’s the best I can come up with to articulate the expectation to this point. So if you improve a number of the performance metrics without regression to the others you have increased overall performance. You can lower image quality to increase performance but when comparing performance image quality should be very close to equal.

I think the point of defining performance this way is it’s not reliant on frames. We now have frame gen which increases frame rate but causes a latency regression which would betray the definition. It will be better to measure latency and all of the other performance metrics directly. In some cases that will still be frame time, like average smoothness and motion blur.

I do think it’s tricky still. If I run uncapped and my GPU hits 100%, that decreases frame times but increases latency. If I frame cap I can increase frame times slightly but lower latency and improve frame pacing. Is that better or worse performance? Same with Reflex: lower latency but a tiny bit higher average frame time and worse frame pacing.
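
To make that concrete, here's a minimal sketch of how a frame-time capture could be boiled down into a couple of those metrics. The log format and metric definitions are just my assumptions for illustration, not any reviewer's actual methodology, and latency and power would still need separate instrumentation:

```python
# Minimal sketch: summarize a frame-time capture (milliseconds per frame)
# into a few of the metrics discussed above. Log format and thresholds are
# illustrative assumptions, not any reviewer's actual methodology.

def summarize(frame_times_ms):
    n = len(frame_times_ms)
    avg_fps = 1000.0 * n / sum(frame_times_ms)

    # "1% low": average FPS over the slowest 1% of frames, a common smoothness proxy.
    worst = sorted(frame_times_ms, reverse=True)[:max(1, n // 100)]
    one_percent_low_fps = 1000.0 / (sum(worst) / len(worst))

    # Frame pacing: average deviation between consecutive frame times.
    deltas = [abs(a - b) for a, b in zip(frame_times_ms, frame_times_ms[1:])]
    pacing_jitter_ms = sum(deltas) / len(deltas)

    return {
        "avg_fps": round(avg_fps, 1),
        "1%_low_fps": round(one_percent_low_fps, 1),
        "pacing_jitter_ms": round(pacing_jitter_ms, 2),
    }

# Example: a mostly 8.3 ms (~120 fps) capture with a few spikes mixed in.
log = [8.3] * 200 + [16.6, 25.0, 8.3, 12.0] * 5
print(summarize(log))

# Input latency and power draw can't be derived from frame times alone;
# they need their own measurement tools.
```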
 
We’ve always had motion interpolation, it just wasn’t very good. We’ve never considered it equivalent to real actual performance.

My mind is plenty open; in almost every comment I leave about framegen I say that it’s good tech and is one of many reasons people buy Nvidia. However it’s not performance in the traditional sense, it’s very good motion smoothing, as we have attempted on TVs for decades.
My point wasn't about whether framegen's output was good or bad. It's about whether generated frames are a measure of performance.

Did you read the second half of my post? What if in a future GPU the traditional/RT pipeline is only used to generate intermediate metadata that a neural network then ingests to generate *all* frames? Would you measure such a GPU's performance as 0? You can, if you insist on "performance" being defined by brute-force generated frames, but it's not a particularly useful metric.
 
@Cappuccino Frame gen is both smoothing and motion blur reduction.

I think maybe it’s time that reviewers define performance. I guess the standard to look for is something like average smoothness, latency, frame pacing, power consumption and motion blur, controlled for image quality, which includes in-game settings, resolution and upscaling.

That’s the best I can come up with to articulate the expectation to this point. So if you improve a number of the performance metrics without regression to the others you have increased overall performance. You can lower image quality to increase performance but when comparing performance image quality should be very close to equal.

I think the point of defining performance this way is it’s not reliant on frames. We now have frame gen which increases frame rate but causes a latency regression which would betray the definition. It will be better to measure latency and all of the other performance metrics directly. In some cases that will still be frame time, like average smoothness and motion blur.

I do think it’s tricky still. If I run uncapped and my GPU hits 100%, that decreases frame times but increases latency. If I frame cap I can increase frame times slightly but lower latency and improve frame pacing. Is that better or worse performance? Same with Reflex: lower latency but a tiny bit higher average frame time and worse frame pacing.
Sure yeah motion smoothing and blur reduction. I agree.
 
My point wasn't about whether framegen's output was good or bad. It's about whether generated frames are a measure of performance.
Okay, but that wasn't what my point was about really. It's about whether or not you can compare FG to MFG without a giant caveat that the only reason 'performance' 'doubled' is due to increased interpolation.
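
To put rough numbers on that caveat (illustrative arithmetic only, ignoring frame-gen overhead and pacing details):

```python
# Illustrative only: at a fixed rendered rate, the jump from 2x FG to 4x MFG
# comes entirely from extra interpolated frames, not from rendering faster.

rendered_fps = 60  # assumed base render rate, identical in every case

for factor in (1, 2, 4):  # native, 2x frame gen, 4x multi frame gen
    displayed_fps = rendered_fps * factor
    interpolated_share = 1 - 1 / factor
    print(f"{factor}x: {displayed_fps} fps displayed, "
          f"{interpolated_share:.0%} of displayed frames interpolated")

# 1x: 60 fps displayed, 0% of displayed frames interpolated
# 2x: 120 fps displayed, 50% of displayed frames interpolated
# 4x: 240 fps displayed, 75% of displayed frames interpolated
```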

Did you read the second half of my post? What if in a future GPU the traditional/RT pipeline is only used to generate intermediate metadata that a neural network then ingests to generate *all* frames? Would you measure such a GPU's performance as 0? You can, if you insist on "performance" being defined by brute-force generated frames, but it's not a particularly useful metric.
I have literally no idea how we would judge GPUs in this incredibly theoretical and out there scenario. I find it funny that you call traditional performance a 'not particularly useful metric' but we're talking about a future where entire games are generated with AI? This is fun speculation but not really relevant to the current industry.
 
I have literally no idea how we would judge GPUs in this incredibly theoretical and out there scenario. I find it funny that you call traditional performance a 'not particularly useful metric' but we're talking about a future where entire games are generated with AI? This is fun speculation but not really relevant to the current industry.
I think you misunderstood. I wasn't talking about an "entire game" being AI-generated -- the entire game engine + rendering pipeline works as it does today, but it would output some intermediate data that the AI composes into RGB pixels. Even today with SR + 4xMFG only a tiny fraction (3%) of pixels are generated with brute force. Everything else is synthesized. And actually, because DLSS-SR aggregates pixels from multiple frames to produce the final frame, you could argue that 100% of the image is already AI-generated.

I'm suggesting a future in which those 3% aren't actually RGB pixels, but maybe intermediate G-buffer + MV data or something. It probably doesn't make sense to do that with today's algorithms, but it could in the future. So this insistence on a "frame" being something sacrosanct that's generated using non-AI methods just doesn't make sense, not even today.
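
For what it's worth, here's a rough back-of-envelope for where a figure in that ballpark can come from (the render scale and frame-gen factor are my own assumptions, not an official number):

```python
# Back-of-envelope only: the exact share depends on the upscaling ratio and
# frame-gen factor assumed; these numbers are illustrative, not official.

def traditional_pixel_share(render_scale_per_axis, frame_gen_factor):
    # Fraction of output pixels in each rendered frame, times the fraction
    # of displayed frames that are traditionally rendered at all.
    return (render_scale_per_axis ** 2) / frame_gen_factor

# Half-resolution-per-axis upscaling (Performance-style) with 4x frame gen:
print(f"{traditional_pixel_share(0.5, 4):.1%}")    # 6.2%
# A more aggressive 1/3-per-axis upscale with 4x frame gen:
print(f"{traditional_pixel_share(1/3, 4):.1%}")    # ~2.8%, i.e. the ~3% ballpark
```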
 
I have seen multiple comments over the past week stating we should explicitly re-define what performance means as we transition away from node shrinks as the primary mode of generational improvement.

What I’ve seen is people debating how to quantify or qualify the trade-offs involved with framegen. I haven’t seen anyone claim we should pretend framegen is equivalent to the current definition of “fps”, which is what you seem to be arguing against.

Yes, and I'm saying if you are benchmarking two cards, you cannot use qualitatively different rendering methods.

You’re laser focused on iso-quality “fps” and are sort of missing the point. It is perfectly fine for a reviewer to evaluate IQ at different settings. This was extremely common during the AA and aniso “wars”. I don’t see anyone arguing that we should accept 240fps native = 240fps framegen. Everyone knows it’s not the same thing but that doesn’t mean we can’t compare them and draw conclusions.

An example this forum might get is benchmarking FSR quality vs DLSS quality: we all know they don't look the same. You are way better off benchmarking at iso-quality levels.

How do you do that when all cards don’t use the same upscaling method and therefore iso-quality is impossible to achieve?
 