CES 2025 Thread (AMD, Intel, Nvidia, and others!)

So 4K DLSS Performance is around 90 fps and 25 ms of latency on the 5090, and with max frame gen it jumps to 280 fps and 45 ms of latency. Really interesting trade-off. For an esports game I would lower settings to reduce latency first. For Cyberpunk it's less clear. With a 120Hz display I'd just skip frame gen; at ~180Hz, 240Hz, 360Hz, or 480Hz I'd be progressively more interested in using it.
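For anyone who wants to plug in their own numbers, here's a quick back-of-the-envelope sketch of that trade using the figures above; the mode labels and the little helper are just my own illustration.

```python
# 90 fps / 25 ms without frame gen vs 280 fps / 45 ms with max frame gen,
# converted into time between displayed frames.

def frame_time_ms(fps: float) -> float:
    """Average time between displayed frames, in milliseconds."""
    return 1000.0 / fps

modes = {
    "4K DLSS Performance, no FG":  {"fps": 90,  "latency_ms": 25},
    "4K DLSS Performance, max FG": {"fps": 280, "latency_ms": 45},
}

for name, m in modes.items():
    print(f"{name}: {frame_time_ms(m['fps']):.1f} ms between frames, "
          f"{m['latency_ms']} ms input latency")

# Prints roughly:
#   4K DLSS Performance, no FG:  11.1 ms between frames, 25 ms input latency
#   4K DLSS Performance, max FG: 3.6 ms between frames, 45 ms input latency
```

So frames arrive roughly 3x as often, while each input takes roughly 1.8x as long to show up on screen, which is exactly why the display's refresh rate decides whether the trade is worth it.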

I’m shooting for a sweet spot of 120 fps with DLDSR + DLSS Quality + 2x frame gen at 240Hz. There’s no way my 5800X3D is doing much more than that anyway.
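Worth noting the arithmetic behind that kind of target is just base framerate x frame gen factor vs. the panel's refresh rate; `pick_fg_multiplier` below is a hypothetical helper of mine, not anything exposed by a driver or SDK.

```python
# Pick the largest frame gen factor whose output still fits under the refresh rate.

def pick_fg_multiplier(base_fps: float, display_hz: int,
                       available=(1, 2, 3, 4)) -> int:
    """Largest factor m such that base_fps * m does not exceed display_hz."""
    fitting = [m for m in available if base_fps * m <= display_hz]
    return max(fitting) if fitting else min(available)

print(pick_fg_multiplier(120, 240))  # 2 -> 120 fps x 2 = 240 fps on a 240Hz panel
print(pick_fg_multiplier(90, 360))   # 4 -> 90 fps x 4 = 360 fps on a 360Hz panel
print(pick_fg_multiplier(120, 144))  # 1 -> even 2x would overshoot a 144Hz panel
```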

Temporal and spatial upscaling basically decoupled sharpness from output resolution. Now we have to distinguish between "native" and upscaled. Frame gen is now decoupling smoothness from responsiveness. Essentially these are all different levers now.
And software like Reflex has decoupled responsiveness from smoothness.
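To put the same point in numbers (the values below are illustrative, not measurements): with upscaling, frame gen, and Reflex in play, what the image looks like, how smooth it looks, and how it feels are three separate quantities.

```python
render_res    = (1920, 1080)              # what the GPU renders; sharpness comes from this plus the upscaler
output_res    = (3840, 2160)              # what is shown on screen
rendered_fps  = 90                        # rate inputs are actually sampled at (responsiveness, helped by Reflex)
fg_factor     = 4                         # frames displayed per frame rendered
displayed_fps = rendered_fps * fg_factor  # what the motion looks like (smoothness)

print(f"renders {render_res[0]}x{render_res[1]} at {rendered_fps} fps; "
      f"shows {output_res[0]}x{output_res[1]} at {displayed_fps} fps; "
      f"still feels like a {rendered_fps} fps game")
# -> renders 1920x1080 at 90 fps; shows 3840x2160 at 360 fps; still feels like a 90 fps game
```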
 
...seems to suggest that comparing 40-series 2x to 50-series 4x is apples to apples, and I thought it would be useful to define how exactly that could be strictly true or not. Maybe I misread what comparison was meant.

It’s apples to apples if you assume the same raw framerate so that the inputs are the same.

My point was to compare NVIDIA's "old" frame gen versus its "new" frame gen. They're telling us that DLSS 4 frame generation creates more and better frames, so how do we quantify and qualify those claims against the prior tech? As several members have described (I very much like the thought @Scott_Arm has put into it), we need to figure out which variables are the important ones, and then figure out how to enable data collection and evaluation of those variables.
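As a starting point, here's a minimal sketch of what a per-run record could look like if we hold the rendered framerate constant so 2x vs 4x stays apples to apples. The field names and the derived metric are my own suggestion, the 2x latency value is a placeholder, and the 4x numbers are the ones quoted earlier in the thread.

```python
from dataclasses import dataclass

@dataclass
class FrameGenRun:
    label: str            # e.g. "40-series 2x FG" or "50-series 4x FG"
    rendered_fps: float   # real frames per second; should match across compared runs
    displayed_fps: float  # frames per second actually sent to the display
    latency_ms: float     # click-to-photon latency

    @property
    def generated_fraction(self) -> float:
        """Share of displayed frames that were generated rather than rendered."""
        return 1.0 - self.rendered_fps / self.displayed_fps

runs = [
    FrameGenRun("40-series 2x FG", rendered_fps=90, displayed_fps=180, latency_ms=40.0),  # placeholder latency
    FrameGenRun("50-series 4x FG", rendered_fps=90, displayed_fps=280, latency_ms=45.0),
]

for run in runs:
    print(f"{run.label}: {run.generated_fraction:.0%} of displayed frames generated, "
          f"{run.latency_ms:.0f} ms latency")
```

The column this can't capture is the quality of the generated frames themselves; that still needs captures and per-frame comparisons rather than a single number.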
 