MOD MODE: I love the conversation about technically evaluating AI upscaling performance, so I spun it off into this thread here:
https://forum.beyond3d.com/threads/spinoff-technical-evaluation-of-upscaler-performance.63915/ It was hard to decide where to split it, so I left all the conversation about "what is the better use of transistors" in here.
ALSO MOD MODE: This is to nobody in particular, but also to everybody: please keep the conversation rooted in your perspectives on the technology rather than your differences with the people in the thread. It's getting a little warm in here, and honestly I'd expect vigorous discussion on where to spend transistors, because there will never be one right and true answer. Just stick to the topic and keep away from getting angry at anyone; they have a right to their opinion just as you have a right to yours.
Back to just me posting: I suspect the raw RT power budget will continue to increase, if only because I doubt anyone thinks "we have this solved now." At the same time, I have no doubt the move toward more AI will continue in earnest for the foreseeable future. The whole reason DLSS and FG came about was the general lack of underlying brute force to get "full frames" onto our screens quickly; in transistor terms, it was cheaper to upscale and frame-generate than to actually brute-force it.
As for this whole "fake frames" thing -- I don't buy that stance, for the same reasons I don't have a problem with DLSS. Let's start with DLSS though: everyone should remember how early DLSS versions were plagued with artifacts. Severe object ghosting under motion is the prime one in my memory, but others included an unstable, swimming effect on otherwise regular patterns and ringing artifacts along very thin high-contrast edges or point lights. A few of those still exist depending on settings, but in my opinion DLSS 3 and later have largely resolved them. I'm very happy with modern DLSS, because it lets even my 4090 deliver more frames per second at the absolute highest graphical settings, at measurably lower power, with (what I perceive as) excellent image quality.
Now let's bring it back to frame generation: with DLSS enabled, even at the Quality preset, you're still "generating fake frames", just in fractional form. DLSS Quality renders at roughly 67% of native resolution per axis, which works out to only about 44% of the pixels, so more than half of every displayed frame is already reconstructed rather than rendered. Balanced drops to 58% per axis (about a third of the pixels), and Performance is 50% per axis, meaning three quarters of the pixels on screen are "fake" at that point. Those reconstructed pixels are still subject to the same possibilities for artifacts on items coming on screen (e.g. the "real" raster misses the new object by a pixel or two, but the upscaler generates those pixels without knowing there should be an object present). Funny thing is, NVIDIA's frame generation works from the same underlying data as DLSS in terms of motion vectors, so it still understands how things are moving in a depth-parallax sense. Is it prone to more artifacts, like DLSS 1 was? You bet. Is it going to keep getting tuned over the next five years, like DLSS 1 was? Yup, it certainly is.
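If you want to sanity-check that arithmetic, here's a quick back-of-the-envelope sketch in Python. The per-axis scale factors are the commonly documented DLSS preset values; the 2x frame-generation ratio is just an assumption for illustration, not anything pulled from NVIDIA's docs:

[CODE]
# Back-of-the-envelope: what fraction of displayed pixels is actually rendered
# vs. reconstructed, per DLSS preset, with and without assumed 2x frame generation.

PRESETS = {
    "Quality":           0.667,  # per-axis render scale
    "Balanced":          0.580,
    "Performance":       0.500,
    "Ultra Performance": 0.333,
}

FG_MULTIPLIER = 2  # assumption: 2x frame gen, one generated frame per rendered frame

for name, axis_scale in PRESETS.items():
    rendered = axis_scale ** 2        # pixel count scales with the square of the per-axis scale
    reconstructed = 1.0 - rendered    # share of each upscaled frame that is reconstructed
    # With 2x FG, only every other displayed frame contains rendered pixels at all.
    with_fg = 1.0 - rendered / FG_MULTIPLIER
    print(f"{name:18s} renders {rendered:5.1%} of native pixels; "
          f"{reconstructed:5.1%} of each frame reconstructed; "
          f"{with_fg:5.1%} of displayed pixels 'fake' with 2x FG")
[/CODE]

Even at Quality that prints a reconstructed share north of 50% per frame, which is the whole point of the paragraph above.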
So, if you think you hate "fake frames", please at least remain logically consistent and disable DLSS too. Thanks.