CES 2025 Thread (AMD, Intel, Nvidia, and others!)

Reflex 2 Frame Warp is going to muddy the waters around frame rate, input latency, and game state update loop performance even more. It allows the latest mouse input to be translated into a camera position and handed to the GPU, which warps the most recent frame to match that new camera position without waiting on the game engine's state update loop.
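To make that concrete, here's a minimal sketch of a rotation-only reprojection warp, assuming a pinhole camera model. The homography approach, the function name, and the nearest-neighbor sampling are my simplifications, not Nvidia's implementation, and the real Frame Warp also has to inpaint the pixels that rotate in from off-screen.

```python
import numpy as np

# Minimal sketch (not Nvidia's implementation): for a pure camera
# *rotation* delta, a rendered frame can be re-warped without depth via
# the homography H = K @ R @ inv(K). Pixels rotating in from off-screen
# have no source data; Reflex 2 inpaints those, here they stay black.

def rotation_warp(color: np.ndarray, K: np.ndarray, R: np.ndarray) -> np.ndarray:
    """color: (H, W, 3) frame, K: 3x3 intrinsics, R: 3x3 camera rotation delta."""
    h, w, _ = color.shape
    ys, xs = np.mgrid[0:h, 0:w]
    dst = np.stack([xs, ys, np.ones_like(xs)], axis=-1).astype(np.float64)
    # Backward warp: for each output pixel, find where it came from in the
    # old frame by applying the *inverse* camera rotation.
    H_inv = K @ np.linalg.inv(R) @ np.linalg.inv(K)
    src = dst @ H_inv.T
    sx = src[..., 0] / src[..., 2]
    sy = src[..., 1] / src[..., 2]
    out = np.zeros_like(color)
    valid = (sx >= 0) & (sx < w - 1) & (sy >= 0) & (sy < h - 1)
    out[valid] = color[sy[valid].round().astype(int), sx[valid].round().astype(int)]
    return out
```

With an identity R this returns the frame unchanged; a small yaw matrix shifts the image opposite to the camera turn, which is exactly the "peek at the newest mouse input" effect without re-running the game loop.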
 

DF Direct Weekly #196 - CES 2025 Special! - Nvidia, AMD, Intel Highlights + More!


Nice sections on Mega Geometry, ReSTIR PT vs Lumen GI, and the insane Black State demo. Their overall feeling is that Nvidia is five years ahead of AMD in gaming AI software (hard to argue with that).
 
Wow, sounds like Nvidia hit another one out of the park. I don’t see anyone catching up soon, they are just killing it with technology.
 
Show me anyone in this thread, or even in this forum, who you believe disagrees with you. I'm not sure why this statement keeps coming up, as if somehow there are folks here evangelizing for it?
Likely not on this forum but it's been a core part of Nvidia's marketing since 2022.
 
Wow, sounds like Nvidia hit another one out of the park. I don’t see anyone catching up soon, they are just killing it with technology.
It will be interesting to see what happens with the next round of consoles. I doubt AMD will catch up in time for whatever is going to go in the PS6. Are we just going to see a huge bifurcation between the tech that PCs and consoles run? I suppose that's always been kinda the case but in recent gens the gap closed significantly.
 
I doubt AMD will catch up in time for whatever is going to go in the PS6.
They are already catching up and will likely do so completely with UDNA if we're talking about h/w.
And as for s/w, that's less an AMD issue wrt consoles and more an issue for the console companies themselves.
 
Likely not on this forum but it's been a core part of Nvidia's marketing since 2022.
In every performance comparison for the 5000 series, FG output was compared only to FG output. So, while it was certainly true in prior revisions, it isn't true in this one.
 
Yeah, like he said: frame gen to frame gen.
If we accept that 'generated frames' aren't equivalent to 'real frames', then how is this comparison valid at all? 4x frames will have the same problems as 2x frames, only amplified. Perhaps only slightly amplified, but I can't imagine the latency penalty won't increase (and you certainly wouldn't see a proportional decrease in latency given the framerate is 'doubling').
 
but I can't imagine the latency penalty won't increase
The latency change is minimal enough to be invisible, while you will certainly notice 2-3x more frames.
There are some concerns about image stability though, as a 1:1 mix of rendered and generated frames is one thing, but 1:3, where the three generated frames may all carry artifacts, is another.
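A toy latency model (the buffering assumption and the per-frame generation cost are mine, not measured) shows why the 2x-to-4x step can be nearly free on latency: interpolation already holds back one rendered frame regardless of multiplier, and each extra generated frame only adds a small fixed generation cost.

```python
# Toy model of interpolation-based frame generation latency.
# Assumptions (mine, not measured): FG holds back one rendered frame so it
# can interpolate toward it, adding roughly one rendered-frame interval,
# and each generated frame costs ~1 ms of GPU time.

def fg_added_latency_ms(rendered_fps: float, multiplier: int,
                        gen_cost_ms: float = 1.0) -> float:
    """Extra latency vs. no FG: one buffered rendered frame plus generation work."""
    rendered_frame_ms = 1000.0 / rendered_fps
    return rendered_frame_ms + (multiplier - 1) * gen_cost_ms

for m in (2, 3, 4):
    extra = fg_added_latency_ms(rendered_fps=60, multiplier=m)
    print(f"{m}x FG at 60 rendered fps: ~{extra:.1f} ms added, {60 * m} fps displayed")
```

Under these assumptions, going from 2x to 4x adds about 2 ms while doubling displayed frames, which is consistent with the "minimal latency change" claim, whereas the initial step of turning FG on at all costs a full rendered-frame interval.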
 
If we accept that 'generated frames' aren't equivalent to 'real frames', then how is this comparison valid at all? 4x frames will have the same problems as 2x frames, only amplified. Perhaps only slightly amplified, but I can't imagine the latency penalty won't increase (and you certainly wouldn't see a proportional decrease in latency given the framerate is 'doubling').

If we accept that framegen improves motion clarity then we can say that 4x "looks" better than 2x. That's the comparison. Based on the DF analysis it's very unlikely the average person would notice the latency increase from 2x to 4x, but it's worth highlighting.
 
There are arguments both ways. Comparing single frame gen for latency and quality is more apples to apples. From an actual user perspective comparing one frame gen to three frame gen is valid because it’s a choice a user would make depending on their display. GPU reviewing is just going to get harder.
 
And again, it's an improvement of the FG technology, which means we would be comparing to prior versions of the FG technology. I'm not sure how else we'd expect to do any comparison or A/B testing if not new vs old...

I'm VERY interested in seeing what sorts of technology the more technically astute reviewers deploy for these comparisons, in order to assess the frames these FG mechanisms produce.
 
Maybe it's useful to talk about this in signal-processing terms to come up with a minimal, objectively true statement. "Check my work" on this though. With "traditional rendering", some (if not all) of the equations of the scene are solved for each display pixel on each displayed frame. Thus, by Nyquist, the display resolution and frame rate are twice the spatial and temporal bandwidth of the signal the GPU is capable of generating. With DLSS-style upscaling, the spatial bandwidth can (in some cases) match that of traditional rendering thanks to sub-pixel jitter and temporal accumulation, but always at the cost of reduced temporal bandwidth. With frame gen, no new information about the scene is solved for, so the bandwidth is unchanged from that of the rendered frames. A 5070 at some resolution and display rate with 4x frame gen has a smaller bandwidth than a 4090 with 2x frame gen at that same resolution and display rate.

Since not every signal will need the full bandwidth of the 4090, there may be no perceptible difference when that signal is produced with a 5070, even with a different frame-gen multiple. However, games will have signals with high-frequency discontinuities (edges, disocclusions, "light switch" style lighting changes) and those signals will always be better resolved and suffer fewer artifacts on a GPU that can generate a signal with a larger bandwidth.
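As a toy illustration of the definition above (the 240 Hz display and the 4x/2x split are assumed for the example, not measured):

```python
# Toy Nyquist-style comparison (my numbers, just illustrating the
# definitions above): the temporal bandwidth of the generated signal is
# bounded by half the *rendered* frame rate, regardless of FG multiplier,
# because generated frames add no new scene information.

def temporal_bandwidth_hz(display_fps: float, fg_multiplier: int) -> float:
    """Nyquist bound: rendered rate / 2."""
    rendered_fps = display_fps / fg_multiplier
    return rendered_fps / 2

# Hypothetical scenario: both cards driving the same 240 Hz display.
print("5070, 4x FG:", temporal_bandwidth_hz(240, 4), "Hz")  # 30 Hz
print("4090, 2x FG:", temporal_bandwidth_hz(240, 2), "Hz")  # 60 Hz
```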
 
Temporal and spatial upscaling basically decoupled sharpness from output resolution. Now we have to distinguish between "native" and upscaled. Frame gen is now decoupling smoothness from responsiveness. Essentially these are all different levers now.

Edit: Actually it's even more complicated than that, because resolution and frame rate both contribute to sharpness or clarity. Higher frame rates reduce (sample-and-hold) motion blur, which improves clarity in motion, and frame rate also impacts smoothness.
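Rough numbers on the motion-blur point, assuming a sample-and-hold display and a hypothetical 2000 px/s pan (my figures, purely illustrative):

```python
# Back-of-envelope (my numbers): on a sample-and-hold display, perceived
# motion blur scales with how long each frame persists while the eye
# tracks a moving object.

def smear_px(pan_speed_px_per_s: float, display_fps: float) -> float:
    """Pixels of eye-tracking smear per frame on a sample-and-hold display."""
    return pan_speed_px_per_s / display_fps

for fps in (60, 120, 240, 480):
    print(f"{fps:>3} fps: ~{smear_px(2000, fps):.1f} px of smear at 2000 px/s pan")
```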
 
Since not every signal will need the full bandwidth of the 4090, there may be no perceptible difference when that signal is produced with a 5070, even with a different frame-gen multiple. However, games will have signals with high-frequency discontinuities (edges, disocclusions, "light switch" style lighting changes) and those signals will always be better resolved and suffer fewer artifacts on a GPU that can generate a signal with a larger bandwidth.

Yes, however someone shopping for a 5070 isn’t considering a 4090 or 5090. The decision is running your 5070 at 60fps “native” vs 240fps “fake” where both have the same baseline bandwidth but the fake option has the potential to increase the signal via guesswork.
 

So 4K DLSS Performance is around 90 fps and 25 ms of latency on the 5090, and with max frame gen it jumps to 280 fps and 45 ms of latency. Really interesting trade. For an esports game I would lower settings to reduce latency first. For Cyberpunk it's less clear. With a 120 Hz display I'd just skip frame gen; with a ~180 Hz, 240 Hz, 360 Hz, or 480 Hz display I'd be progressively more interested in using it.
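Back-of-envelope on those figures, assuming the 280 fps number is 4x frame gen (the 90 fps / 25 ms and 280 fps / 45 ms values are from above; the rest is my arithmetic):

```python
# Quick arithmetic on the Cyberpunk numbers quoted above (4x FG assumed;
# the fps and latency figures come from the post, the rest is
# back-of-envelope math).

base_fps, base_latency_ms = 90, 25
fg_fps, fg_latency_ms = 280, 45
multiplier = 4

rendered_fps = fg_fps / multiplier  # frames actually rendered per second
print(f"Rendered rate under 4x FG: {rendered_fps:.0f} fps "
      f"(down from {base_fps} fps without FG)")
print(f"Display frame time: {1000 / fg_fps:.1f} ms vs {1000 / base_fps:.1f} ms")
print(f"Latency cost: +{fg_latency_ms - base_latency_ms} ms "
      f"for {fg_fps / base_fps:.1f}x the displayed frames")
```

Which is the smoothness-vs-responsiveness lever in one picture: the rendered rate actually drops from 90 to 70 fps to pay for generation, yet the displayed rate triples while latency rises 20 ms.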
 
Yes, however someone shopping for a 5070 isn’t considering a 4090 or 5090. The decision is running your 5070 at 60fps “native” vs 240fps “fake” where both have the same baseline bandwidth but the fake option has the potential to increase the signal via guesswork.

I agree with that. My comment was scoped around objectively describing the technology itself, free from those subjective considerations. The discussion above that starts with
In every performance comparison for the 5000 series, FG output was compared only to FG output. So, while it was certainly true in prior revisions, it isn't true in this one.

seems to suggest that comparing 40-series 2x to 50-series 4x is apples to apples, and I thought it would be useful to define how exactly that could be strictly true or not. Maybe I misread what comparison was meant.

I definitely think the most interesting aspect of the new tech is exploring how 240 fps at 4x compares to 60 fps native, and I think the increase in motion clarity and smoothness could outweigh the additional lag. I hope someone does blinded studies on it.
 