CES 2025 Thread (AMD, Intel, Nvidia, and others!)

So 4K DLSS Performance is around 90 fps and 25ms of latency on a 5090, and with max frame gen it jumps to 280 fps and 45ms of latency. Really interesting trade-off. For an esports game I would lower settings to reduce latency first. For Cyberpunk it's less clear. With a 120Hz display, I'd just skip frame gen. With ~180Hz, 240Hz, 360Hz, or 480Hz displays I'd be progressively more interested in using frame gen.
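For reference, here's a back-of-the-envelope sketch of that trade using the numbers quoted above (the frame-time math is just illustrative, not measured data):

```python
# Rough frame-time vs latency math for the 5090 figures quoted above.
# The fps and latency numbers are the ones from the post; nothing here is measured.

def frame_time_ms(fps):
    """Milliseconds between displayed frames at a given frame rate."""
    return 1000.0 / fps

base_fps, base_latency_ms = 90, 25    # 4K DLSS Performance, no frame gen
mfg_fps, mfg_latency_ms = 280, 45     # with max (4x) frame gen

print(f"no FG : {frame_time_ms(base_fps):.1f} ms between frames, {base_latency_ms} ms latency")
print(f"4x MFG: {frame_time_ms(mfg_fps):.1f} ms between frames, {mfg_latency_ms} ms latency")
# no FG : 11.1 ms between frames, 25 ms latency
# 4x MFG: 3.6 ms between frames, 45 ms latency
# Smoother presentation, but roughly 20 ms more input latency.
```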

I’m shooting for a sweet spot of 120fps DLDSR + DLSSQ + 2x framegen @ 240Hz. There’s no way my 5800X3D is doing much more than that anyway.

seems to suggest that comparing 40 series 2x to 50 series 4x is apples to apples and I thought it would be useful to define how exactly that could be strictly true or not. Maybe I misread what comparison was meant.

It’s apples to apples if you assume the same raw framerate so that the inputs are the same.
 
Temporal and spatial upscaling basically decoupled sharpness from output resolution. Now we have to distinguish between "native" and upscaled. Frame gen is now decoupling smoothness from responsiveness. Essentially these are all different levers now.
And software like Reflex has decoupled responsiveness from smoothness.
 
...seems to suggest that comparing 40 series 2x to 50 series 4x is apples to apples and I thought it would be useful to define how exactly that could be strictly true or not. Maybe I misread what comparison was meant.

It’s apples to apples if you assume the same raw framerate so that the inputs are the same.

My point was to compare NVIDIA's "old" framegen versus "new" framegen. They're telling us that DLSS4:FG creates more and better frames, so how do we quantify and qualify these claims against the prior tech? As has been described by several members (I very much like the thought @Scott_Arm has put into it), we need to figure out which variables are the important ones, and further figure out how to enable data collection and evaluation of those variables.
 
seems to suggest that comparing 40 series 2x to 50 series 4x is apples to apples and I thought it would be useful to define how exactly that could be strictly true or not. Maybe I misread what comparison was meant.
It’s literally doubling the amount of interpolation, how on earth is that apples to apples?
 
Or perhaps a little more specifically, the 5000 series driver :)
Yea, we would need to see some CUDA benchmarks to determine whether and how much faster the 5000 series tensor cores are. But I agree, it’s not going to be that much faster unless they solved a bandwidth bottleneck problem.
 
The ratio of rendered frames to interpolated frames halves. This is not the same as doubling framerate.
Framerate is how many unique frames are shown on a display device per time period (typically we would measure those in frames per second.) That's it, that's the whole definition. There's no specificity in the definition to require every one of those frames to be created in the same way, only that they're unique from one another.
It would be apples to apples if both products were compared using 2x FG.
Just as antialiasing comparisons between 2x, 4x, 6x, 8x, TAA, MSAA, SSAA, and RGSSAA are all "comparable", and DLSS modes between performance, balanced, and quality, so too are varying levels of frame generation when talking about how many frames are generated. It's literally EXACTLY the same as comparing DLSS modes, as the frame data is being generated from an algorithm starting from a smaller dataset. In the end, a reasonable comparison can and should be made when determining if "more generated frames are better", just like asking if more and different AA samples and DLSS modes are better.

This whole hangup on "fake frames" really needs to stop; it's a statement borne out of ignorance and emotion and nothing else. Frame generation is no more "fake" than any other upscaling method, yet somehow temporal upscaling seems to hurt more people's feelings because FPS has somehow become some sort of religion.
 
Framerate is how many unique frames are shown on a display device per time period (typically we would measure those in frames per second.) That's it, that's the whole definition. There's no specificity in the definition to require every one of those frames to be created in the same way, only that they're unique from one another.
My 60Hz TV didn't suddenly become 120Hz because it could enable interpolation. Until DLSS 3 everyone agreed on this. Interpolation has its uses (and I'd argue Nvidia did a good job with it) but I feel we have been convinced a little too much by Nvidia marketing here. It is a good feature, but you cannot convince most people that, for example, you can just infinitely dilute the number of 'real' frames with interpolated frames and it's just as good.

Just as antialiasing comparisons between 2x, 4x, 6x, 8x, TAA, MSAA, SSAA, and RGSSAA are all "comparable", and DLSS modes between performance, balanced, and quality, so too are varying levels of frame generation when talking about how many frames are generated. It's literally EXACTLY the same as comparing DLSS modes, as the frame data is being generated from an algorithm starting from a smaller dataset. In the end, a reasonable comparison can and should be made when determining if "more generated frames are better", just like asking if more and different AA samples and DLSS modes are better.
Uh yeah if you compared say a 3090 and a 4090 and ran the former on DLSS performance and the latter on DLSS quality I'd say those absolutely are not apples to apples comparisons. Ditto for all of the other graphical options you mentioned. We explicitly normalize for settings when benchmarking, otherwise what's the point?

This whole hangup on "fake frames" really needs to stop; it's a statement borne out of ignorance and emotion and nothing else. Frame generation is no more "fake" than any other upscaling method, yet somehow temporal upscaling seems to hurt more people's feelings because FPS has somehow become some sort of religion.
I can't speak to religion (lol) but there are two extremes to this situation and it's strange. No, FG isn't useless, and Nvidia ought to market it (and consumers also ought to weigh it heavily in their purchase decisions, since it genuinely is a nice feature). But I also don't get why we have to re-define what 'performance' is to suit this feature. It's motion interpolation. Motion interpolation makes things smoother, but it doesn't make a 5070 into a 4090.

If I download Lossless Scaling and use their 4x interpolation feature does my 3080ti suddenly quadruple in performance?
 
The ratio of rendered frames to interpolated frames halves. This is not the same as doubling framerate.

It would be apples to apples if both products were compared using 2x FG.
From my perspective, whether you put 1 frame in between 2 rendered ones or 3 frames, the time between rendered frames should be constant.

I just see the 5000 series put 3 frames into the same time interval where the 4000 series could only put 1.

Maybe the 6000 series inserts 7 more frames, but it would have to do it in the same time intervals as above. Otherwise interpolation is not really useful if we’re increasing the time between rendered frames just to jam in more interpolated frames.
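A minimal sketch of that idea, assuming the rendered-frame cadence really does stay fixed while generated frames are slotted in between (the 70fps base is hypothetical):

```python
# Displayed frame rate when N generated frames are inserted per rendered frame,
# assuming the rendered-frame interval itself does not change.

def displayed_fps(rendered_fps, generated_per_rendered):
    return rendered_fps * (1 + generated_per_rendered)

rendered_fps = 70  # hypothetical base frame rate
for n in (1, 3, 7):  # 2x, 4x, 8x frame gen
    print(f"{n} generated per rendered -> {displayed_fps(rendered_fps, n)} fps displayed, "
          f"rendered interval still {1000 / rendered_fps:.1f} ms")
# 1 generated per rendered -> 140 fps displayed, rendered interval still 14.3 ms
# 3 generated per rendered -> 280 fps displayed, rendered interval still 14.3 ms
# 7 generated per rendered -> 560 fps displayed, rendered interval still 14.3 ms
```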
 
My 60Hz TV didn't suddenly become 120Hz because it could enable interpolation.
Actually, it did (Edit: assuming your TV has a 120Hz or higher capable panel and is using it at that refresh rate.) You don't like it, I'm sure, but by definition if it is showing 120 unique frames per second then it's a 120FPS display now, even if it's interpolating from a 60FPS source. 120FPS is the framerate. You can be unhappy with the quality of those 120 frames per second, yet that's a different argument.
Until DLSS 3 everyone agreed on this. Interpolation has its uses (and I'd argue Nvidia did a good job with it) but I feel we have been convinced a little too much by Nvidia marketing here. It is a good feature, but you cannot convince most people that, for example, you can just infinitely dilute the number of 'real' frames with interpolated frames and it's just as good.
Yet again, that's a quality complaint, not a framerate complaint.
Uh yeah if you compared say a 3090 and a 4090 and ran the former on DLSS performance and the latter on DLSS quality I'd say those absolutely are not apples to apples comparisons.
In your contrived example, DLSS performance would not have existed before and now it does -- so then how does it stack up? DLSS is still DLSS, just like FG is still FG. We didn't introduce a completely new technology, and we're still using the same source data. Yet again, framerate is not the same thing as quality.
If I download Lossless Scaling and use their 4x interpolation feature does my 3080ti suddenly quadruple in performance?
It quadruples in framerate. How would you like to measure performance, exactly? There are literally a thousand variables we could come up with to determine what "performance" means, so you'll need to be specific.

If your only performance metric is frames per second, then yeah... You just made your graphics card four times as powerful (/s) if we can only ever use FPS as the singular way of measuring performance. This is why there have been a LOT of moves to stop using FPS as a singular indicator of performance. 1% lows? 0.1% lows? Frame pacing? Input latency? Output latency? All of these should sound familiar to anyone in this forum.
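For anyone who wants the concrete version of those metrics, here's a minimal sketch with a made-up frame-time trace (the spike values are hypothetical; the point is that averages hide them):

```python
import statistics

# Hypothetical frame-time trace in milliseconds: mostly smooth, a few spikes.
frame_times_ms = [8.3] * 950 + [12.0] * 45 + [33.0] * 4 + [50.0]

def low_fps(frame_times, pct):
    """'1% low' style metric: average fps over the worst pct of frames."""
    worst = sorted(frame_times, reverse=True)
    n = max(1, int(len(worst) * pct / 100))
    return 1000.0 / statistics.mean(worst[:n])

print(f"average fps : {1000.0 / statistics.mean(frame_times_ms):.0f}")  # ~116
print(f"1% low fps  : {low_fps(frame_times_ms, 1):.0f}")                # ~41
print(f"0.1% low fps: {low_fps(frame_times_ms, 0.1):.0f}")              # ~20
# The average looks great; the lows are what expose the stutter.
```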
 
My 60Hz TV didn't suddenly become 120Hz because it could enable interpolation. Until DLSS 3 everyone agreed on this. Interpolation has its uses (and I'd argue Nvidia did a good job with it) but I feel we have been convinced a little too much by Nvidia marketing here. It is a good feature, but you cannot convince most people that, for example, you can just infinitely dilute the number of 'real' frames with interpolated frames and it's just as good.

Nobody is trying to convince you that it's "just as good". The point is that you can compare 2x framegen to 4x framegen to 1000x framegen and draw conclusions on performance/quality etc. You can decide for yourself if it's "just as good". It's no different to how opinions have evolved on DLSS over time.

By the way TV interpolation doesn't turn a 60Hz TV into a 120Hz TV. It upscales lower framerate content to match the higher framerate of your TV - exactly like framegen.

Uh yeah if you compared say a 3090 and a 4090 and ran the former on DLSS performance and the latter on DLSS quality I'd say those absolutely are not apples to apples comparisons. Ditto for all of the other graphical options you mentioned. We explicitly normalize for settings when benchmarking, otherwise what's the point?

The point is to compare the resulting image quality just as we do for AA comparisons.

If I download Lossless Scaling and use their 4x interpolation feature does my 3080ti suddenly quadruple in performance?

It quadruples the number of unique frames sent to your monitor. Nobody here is proposing that we should redefine performance.
 
@Cappuccino If your tv interpolates 60fps to 120fps then it would have to be 120Hz refresh rate to display it.

Frame gen creates frames. If you generate 3 frames for every 1 rendered and output at 480 fps, then every second it’s 480 frames. There’s no real distinction. Your display doesn’t care how they were generated.

Not all frames are equal in terms of image quality, but that can vary in a lot of ways.

In terms of latency, the frames do not change the latency themselves. That’s an issue of queuing before display and any parts of the pipeline that increase latency. Frame gen is one of those things. It adds one extra frame delay plus some other latency for generation. It kind of happens independent of frame rate, because it’s always one extra queued frame plus some other amount of latency, regardless of the base frame rate.

So yes there are differences from rendered frames but they are frames.

Edit: this is essentially a qualitative difference, not a quantitative one. 480 rendered vs 480 generated are both 480 frames quantitatively. But qualitatively they are different. In terms of latency they are quantitatively different.
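A rough model of that latency point, assuming the extra delay is roughly one rendered-frame interval of queuing plus a fixed generation cost (the 3ms overhead figure is an assumption, not a measured number):

```python
# Extra latency from frame gen under the assumption above: one extra rendered
# frame held back for interpolation, plus a fixed per-frame generation cost.

def added_latency_ms(rendered_fps, generation_overhead_ms=3.0):
    return 1000.0 / rendered_fps + generation_overhead_ms

for fps in (30, 60, 120):
    print(f"{fps} fps base -> ~{added_latency_ms(fps):.1f} ms added latency")
# 30 fps base -> ~36.3 ms added latency
# 60 fps base -> ~19.7 ms added latency
# 120 fps base -> ~11.3 ms added latency
# The "one extra queued frame" structure is constant; its cost in ms scales
# with the base frame time.
```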
 
It's also worth reminding folks who may have forgotten: DLSS:FG isn't the same naive implementation as "Lossless Scaling" or AMD's FSR-based framegen tech. DLSS:FG has access to depth and motion vectors as well, which means it can (to a moderate degree) solve for parallax artifacts, some amount of surface occlusion, and object motion that isn't necessarily "visible" to the viewport at the time of raster. That doesn't make it perfect, but it makes it sufficiently different from something like LS to provide a better result.

And this isn't a slant at Lossless Scaling, because it absolutely has its uses too, especially since it's display-tech agnostic, which means it's awesome for 2D retro games. I'm still really impressed with how well LS seems to work for my 1070MQ in the last half-day I've had to play with it since spending $7 on it :D
 
Actually, it did (Edit: assuming your TV has a 120Hz or higher capable panel and is using it at that refresh rate.) You don't like it, I'm sure, but by definition if it is showing 120 unique frames per second then it's a 120FPS display now, even if it's interpolating from a 60FPS source. 120FPS is the framerate. You can be unhappy with the quality of those 120 frames per second, yet that's a different argument.
Older HDMI 2.0 TVs were incapable of receiving 120Hz signals but still allowed 120Hz with interpolation because it was a 120Hz panel. Nobody called these TVs 120Hz, and if they did they were corrected (at least in a gaming context, internal apps could display a theoretical 120Hz movie without interpolation but those don't exist lol). This is what I meant. HDMI 2.1 TVs with 120Hz panels accept 120Hz signals so that's different.

In your contrived example, DLSS performance would not have existed before and now it does -- so then how does it stack up? DLSS is still DLSS, just like FG is still FG. We didn't introduce a completely new technology, and we're still using the same source data. Yet again, framerate is not the same thing as quality.
This isn't contrived at all, they aren't the same thing lol. We don't benchmark one card on 4x AA and another on 8x AA, we create the same scenario with the same level of quality.

I don't think this will be the case but if MFG was actually significantly worse looking than 2x framegen, do you think this is still a good comparison? When have we ever benchmarked two cards and used different visual quality levels within the same comparison?

It quadruples in framerate. How would you like to measure performance, exactly? There are literally a thousand variables we could come up with to determine what "performance" means, so you'll need to be specific.
Well, first off, a good way to measure performance is to literally compare them rendering at a specific native render resolution. No upscaling or interpolation, set the same graphical settings and see how the actual silicon compares. This isn't a particularly useful benchmark for real use (nobody is using native rendering anymore, at least not for AAA) but it allows us to compare two cards at iso-quality. On top of that, yeah, I have no problem showing off features like this in additional benchmarks, but my biggest gripe is with 'the 5070 is basically a 4090' (nobody here is arguing that, but it was the tagline for Nvidia's presentation).

It quadruples the number of unique frames sent to your monitor. Nobody here is proposing that we should redefine performance.
I have seen multiple comments over the past week stating we should explicitly re-define what performance means as we transition away from node shrinks as the primary mode of generational improvement.
Edit: this is essentially a qualitative difference, not a quantitative one. 480 rendered vs 480 generated are both 480 frames quantitatively. But qualitatively they are different. In terms of latency they are quantitatively different.
Yes, and I'm saying if you are benchmarking two cards, you cannot use qualitatively different rendering methods.

An example this forum might get is benchmarking FSR quality vs DLSS quality: we all know they don't look the same. You are way better off benchmarking at iso-quality levels.
 
but my biggest gripe is with 'the 5070 is basically a 4090' (nobody here is arguing that, but it was the tagline for Nvidia's presentation).
Well, there's definitely some fine print around that statement.
One thing is for sure: it can only be equivalent to a 4090 in specific ranges while DLSS4 is on.
DLSS4 FG4x is up to 3 additional frames.

_up to_

As the base rendering speed keeps going up, the time available between rendered frames keeps shrinking. i.e., you can't necessarily take something at 200fps and have FG4x make it 800fps. There may not be enough computational time to do it.

Which is an interesting discussion in itself, because a 5090 might be able to take 200fps and make it 800fps, but a 5070 cannot take 200fps and make it 800fps.

So you would be able to benchmark the cards with FG4x on and there would be a difference between a 5070 and a 5090.
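To make the time-budget argument concrete, here's a sketch under the simplifying assumption that all three generated frames have to fit inside one rendered-frame interval; the per-frame generation costs are made up, just to show where the ceiling appears:

```python
# At 4x, three generated frames need to fit in one rendered-frame interval
# (a simplifying assumption; real pipelines overlap work, but the budget idea holds).

def can_hit_4x(base_fps, gen_cost_ms_per_frame):
    budget_ms = 1000.0 / base_fps
    return 3 * gen_cost_ms_per_frame <= budget_ms

for label, gen_cost in (("faster GPU, hypothetical 1.0 ms per generated frame", 1.0),
                        ("slower GPU, hypothetical 2.0 ms per generated frame", 2.0)):
    verdict = "4x is feasible" if can_hit_4x(200, gen_cost) else "not enough time"
    print(f"{label}, at 200 fps base: {verdict}")
# faster GPU, hypothetical 1.0 ms per generated frame, at 200 fps base: 4x is feasible
# slower GPU, hypothetical 2.0 ms per generated frame, at 200 fps base: not enough time
```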
 
Or perhaps a little more specifically, the 5000 series driver :)
Hmm, just reviewing the marketing materials here.
It might not just be a driver.

It might be related to flip metering, but I'm not sold on the computational power of the 5th generation tensor cores. We definitely need to see some benchmarks for the latter, as in my experience the bottleneck for tensor performance is actually getting data into the cores for processing. The actual computation is apparently 1 cycle.
 
@Cappuccino where did “iso-quality” come from in the context of pc hardware reviews? ISO is an actual standard with certification. I don’t really like pretending that’s even remotely what pc hardware reviewers do.

In terms of a tv only accepting a 60Hz signal but interpolating and displaying 120 frames, it’s definitely a 120Hz panel and it’s displaying 120 frames. Half of the frames are just really shitty. Maybe you’d say it’s not a 120 Hz tv, because it’s not 120 Hz end-to-end. Technically it is displaying 120 frames per second.

I don’t disagree that frame gen is different than typical rendering, so it should always have context that explains the quantitative and qualitative differences like latency and artifacts. I just think technical answers are better, and we don’t need to say the frames are “fake” or not frames.

In terms of comparisons, I think things should be compared in terms of how people use them. They may want to see 4x frame gen on one vendor to 2x on another to decide if the image quality differences are reasonable. That could be one of several factors that decide a purchase in either direction. I think the idea that comparison always has to be matched settings could limit information for consumers, though matched settings has a ton of value.
 