Digital Foundry Article Technical Discussion [2025]

It just takes control away from the artists/developers and gives it to a black box pre-trained ML model.

Yeah there needs to be a very clear line between neural models trained by developers and packaged with their game versus 3rd party overrides. Neural materials and neural textures seem perfectly harmless as they’re just replacing shader code and traditionally compressed textures. All developer controlled.
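To make "just replacing shader code and traditionally compressed textures" concrete, here's a minimal, purely illustrative sketch of the neural texture idea (not Nvidia's actual implementation; the grid size, layer sizes, and weights are all made up): a small MLP trained by the developer decodes a learned latent grid into RGB at sample time, standing in where a traditional block-compressed texture fetch would be.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative latent grid: each texel stores a small learned feature vector
# instead of an RGBA color. All shapes here are made up for the example.
LATENT_DIM, HIDDEN = 8, 16
latent_grid = rng.standard_normal((256, 256, LATENT_DIM)).astype(np.float32)

# A tiny "trained" MLP; in reality the weights come from the developer's
# offline training run against their own ground-truth textures.
W1 = rng.standard_normal((LATENT_DIM, HIDDEN)).astype(np.float32)
b1 = np.zeros(HIDDEN, dtype=np.float32)
W2 = rng.standard_normal((HIDDEN, 3)).astype(np.float32)
b2 = np.zeros(3, dtype=np.float32)

def sample_neural_texture(u: float, v: float) -> np.ndarray:
    """Decode an RGB color at (u, v) in [0, 1): fetch features, run the MLP."""
    x = int(u * 256) % 256
    y = int(v * 256) % 256
    feat = latent_grid[y, x]                      # the "texture fetch"
    h = np.maximum(feat @ W1 + b1, 0.0)           # ReLU hidden layer
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))   # sigmoid -> RGB in [0, 1]

print(sample_neural_texture(0.5, 0.25))
```

The point being: the model ships with the game, the developer trained it against their own ground truth, and every user decodes the same pixels.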
 
Are you saying there will be better products of the same tech available from other companies?
Huh? I just mean in general, including Nvidia.

We're still years away from all that stuff being implemented in games in any kind of common way. Just saying, people still focusing on specs makes sense. Until all these other things actually take root and become standard practice (which will require support from other hardware as well, consoles in particular), raw power is still critical.
 
Imagine Nvidia just makes their own graphics engine team to release an open-source graphics engine that companies can use to make their games.
I think us folks at Beyond3D have a natural inclination to vastly overestimate how much of a modern game engine is related to graphics... mainly because that's what we like to think about and that gets the lion's share of the marketing. But it has always been a relatively small part of engines; if you recall, in the UE3 days it was quite common for licensees to replace all the rendering parts of Unreal themselves. That has obviously become a bit more difficult as rendering has progressed, but the other areas of engines have progressed as well.

It's still completely possible for a single person to make a hobby renderer with reasonably modern features, assuming a more limited scope. See Tiny Glade. But a game engine that is useful for lots of different games is a whole lot more than a renderer, and I'm not really sure why NVIDIA would currently want to get into all the rest of that stuff unless they felt blocked by not doing it. I think the current situation where they can just plop some of their rendering tech on top of the various engines suits them very nicely. The fact that there are fewer engines in the wild these days, so they can amortize that integration work a lot more, is great for them.
 
Yeah there needs to be a very clear line between neural models trained by developers and packaged with their game versus 3rd party overrides. Neural materials and neural textures seem perfectly harmless as they’re just replacing shader code and traditionally compressed textures. All developer controlled.
Right... I must admit I'm confused as to why people are conflating unrelated discussions here. Did NVIDIA talk about wanting to train these things on the fly on end user machines or something? I didn't see any discussions of that. None of the neural stuff I saw related to rendering would vary on different users' machines... it's just another way to get the same pixels out, potentially a bit faster.

Modding is a separate topic that seems completely unrelated to neural rendering. Sure you could mod neural shaders, just like you mod regular ones... potentially the barrier to entry is higher given the training costs. Why are we discussing this all of a sudden?
 
Right... I must admit I'm confused as to why people are conflating unrelated discussions here. Did NVIDIA talk about wanting to train these things on the fly on end user machines or something? I didn't see any discussions of that. None of the neural stuff I saw related to rendering would vary on different users' machines... it's just another way to get the same pixels out, potentially a bit faster.
With RTX Faces on, each user would see the same thing, but the difference between RTX Faces off and on is drastic, to the extent that it calls into question whether RTX Faces preserves artist/developer intent. What I'd like to know is whether developers are able to control the output of RTX Faces, or whether they just have to accept whatever Nvidia's SDK outputs and surrender control over character facial appearance and animation to Nvidia's model.
 
In Oliver and Alex's CES video (and the Black State short on the clips channel) you can see 4x FG at like ~30ms latency in Black State. I'm not sure if that's with Reflex 2 as well (thinking it probably was), so let's see if I can phrase this without triggering people: that latency at the smoothness and clarity of 380Hz would feel really good, from my experience.
I would love to see some A/B blind tests where people play a game on the same hardware at different settings, both graphics settings and frame generation/Reflex/AI upscaling, to see what they prefer, and whether they could feel or see the negative effects of frame generation. People fixate on the negatives of frame generation, and there are some, for sure, but I really want to know if those same people could identify those negatives in practice.

Also, and I think this gets a bit lost in the conversation partly because of how Nvidia markets DLSS FG, the question really shouldn't be "real" 120fps vs 120fps using frame generation. It should be whatever you can achieve without frame generation vs what you can achieve with it. If you can hit 60fps native and 120/240 with 2x/4x frame gen, the question for the feature should be: what is better, having the feature on, or off? Frame generation needs its Windows Mojave moment, for science, and also to satisfy my own curiosity.
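To put rough numbers on the on-vs-off framing, here's a toy back-of-the-envelope model (my own simplification, not measured data; real pipelines and Reflex change the picture): interpolation-style frame gen waits roughly one extra base frame so it has both endpoints to interpolate between, plus some fixed generation overhead.

```python
def fg_summary(base_fps: float, multiplier: int, overhead_ms: float = 3.0):
    """Toy frame-gen model: displayed fps scales with the multiplier, while
    interpolation adds roughly one base frame of delay plus a fixed cost.
    The 3 ms overhead is a made-up placeholder, not a measurement."""
    base_ms = 1000.0 / base_fps
    displayed_fps = base_fps * multiplier
    added_latency_ms = base_ms + overhead_ms  # hold one frame + generation cost
    return displayed_fps, added_latency_ms

for mult in (2, 4):
    fps, lat = fg_summary(60.0, mult)
    print(f"60fps base with {mult}x FG -> {fps:.0f}fps shown, "
          f"~{lat:.0f}ms added latency")
```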
 
but I really want to know if those same people could identify those negatives in practice
I'm pretty confident most gamers heavily overestimate their sensitivity to input lag in general, or rather, underestimate their tolerance for it.

Arguing over like 5ms or 10ms here and there when games can vary wildly from like 40ms to 90+ms without most anybody really noticing just seems a bit silly. I'm sure it matters to the more hardcore competitive types, but let's be real here - that's not you. Even if you like competitive games, 5-10ms isn't changing your game. It's not gonna make up for your lack of map or strategy knowledge, nor will it help you beat people who sit and play these games for 8+ hours a day.
 
I'm pretty confident most gamers heavily overestimate their sensitivity to input lag in general, or rather, underestimate their tolerance for it.

Arguing over like 5ms or 10ms here and there when games can vary wildly from like 40ms to 90+ms without most anybody really noticing just seems a bit silly. I'm sure it matters to the more hardcore competitive types, but let's be real here - that's not you. Even if you like competitive games, 5-10ms isn't changing your game. It's not gonna make up for your lack of map or strategy knowledge, nor will it help you beat people who sit and play these games for 8+ hours a day.
That's exactly right. Most people don't even notice slightly higher input lag. In single-player games, a certain amount of input lag is perfectly acceptable. In fact, I play multiplayer games on a console at 60 FPS and occasionally switch between the two game modes of my mini-LED TV, which vary between 15ms and 40ms. The thing is, the difference is barely noticeable when using a joypad. I suspect that 90% of people don't even know what input lag is until they read about it; they're just happy with the framerate increase provided by frame generation techniques and play happily.

I think frame generation will soon be standard in all games. I wouldn't be surprised if MS included it as an option in DirectX itself, and if next-gen consoles used the technique as standard.
 
I would love to see some A/B blind tests where people play a game on the same hardware at different settings, both graphics settings and frame generation/Reflex/AI upscaling, to see what they prefer, and whether they could feel or see the negative effects of frame generation. People fixate on the negatives of frame generation, and there are some, for sure, but I really want to know if those same people could identify those negatives in practice.

Also, and I think this gets a bit lost in the conversation partly because of how Nvidia markets DLSS FG, the question really shouldn't be "real" 120fps vs 120fps using frame generation. It should be whatever you can achieve without frame generation vs what you can achieve with it. If you can hit 60fps native and 120/240 with 2x/4x frame gen, the question for the feature should be: what is better, having the feature on, or off? Frame generation needs its Windows Mojave moment, for science, and also to satisfy my own curiosity.
I actually set up a friend last night, kind of by accident. He's been strongly against frame gen, banging on about latency and artifacts and how he would never use it (still rocking his 1080 Ti). He popped in last night when I was playing with some Cyberpunk mods; I had Afterburner on to see CPU load, but had the fps counter on as I don't change the RivaTuner stats overlay per game. Because the overlay was only reporting the base framerate, as it does, he didn't know frame gen was on and figured it was just 60fps. Long story short, he was playing it mouse and keyboard for about 30 mins with frame gen on and talking about how smooth it felt for 60fps, and the minor artifacts he did notice were actually DLSS ones (can't wait for the new DLSS to fix some of this). I asked about input lag and he replied, "feels fine, why?"

When I said you've been playing with frame gen on, he didn't believe me lol. I said, you're the only one who's been in front of that keyboard for the last 30 mins; go to options and scroll down and look at the frame gen setting. He turned it off and then was saying, this feels worse, so this is native ~60, and I was like, yep. He wouldn't concede more than saying, well, it works well enough to be usable in that game at least.

Not really sure how to market it though; it's very much like VRR in that respect. Really hard to show any way other than hands-on.
 
Not really sure how to market it though; it's very much like VRR in that respect. Really hard to show any way other than hands-on.

It will probably be similar to upscaling. It will be widely adopted in games, lots of people will try it and like it, and there will be vocal naysayers who won't be representative of the broader community. But that assumes framegen receives positive reporting like the more recent versions of upscaling. If DF shows framegen is a stuttering, artifacting mess, people will be turned off even if they can't really detect the problems on their own.
 
@GhostofWar my framegen journey was pretty similar. I didn't think I would like it, but they updated Forspoken to have FSR framegen, and apparently they updated the demo as well. So I wanted to check it out, and it's free, so I downloaded the demo and... I could never get it to launch. I'm sure lots of you have experienced the thing where you want something more because it's just out of reach. I fought with that in my spare time for about a month; it just wouldn't launch. Steam would pop up the "launching" window, then a black window would appear and disappear, and the game is closed.

But about that time the DLSS3-to-FSR3 framegen mod came out for Cyberpunk, so I downloaded that, and WOW. Totally changed my perspective. I was running Cyberpunk with path tracing, modded for fewer bounces because I have a 2070 Super, with aggressive upscaling and some other settings fairly low because I liked the way the lighting looked with path tracing over other effects, getting 40ish fps. And in all the videos I watched, people are saying you have to hit 60 for this to feel good, so I'm assuming I'm going to have to change some settings. But nope: it just took the game I had been playing at a fairly low framerate to something more in line with what I want. Cyberpunk was never the most responsive game, so I assumed any added latency would feel really bad, but instead, the motion smoothness made it feel better.

And then @Cyan starts posting his experiences with Lossless Scaling. I had already watched some youtube videos about it and thought it looked a bit hacky, but he's on here saying how great it is. And I have an entry-level gaming laptop with a 1660 Ti and a 120Hz screen that I don't really game on, except when I'm out of town or whatever, and Lossless Scaling breathes new life into that. And now, I'm a believer. That laptop wasn't hitting 120fps in any modern game, but I could do 60 in some at last-gen settings. Now I can get the motion smoothness of my monitor's refresh rate in almost any game. Hell, with Lossless's newest update, I think I can say that I can hit 120Hz in every game, though I don't know how playable that would be.

I played a couple hours of Black Ops 6's campaign with FSR3 FG on, 60ish fps boosted to 100-120, and found it to be responsive and much better than with FG off. I played almost 200 hours of Stalker 2 with FSR3 FG on, 40-60fps boosted to 80-100 (and sometimes 4fps, the game still has some optimization to go), and it feels so much better with it on. I can't believe how well it works, how little the artifacts are noticeable, and how insignificant the input latency is, because I would describe myself as a person who would notice these things. But I've found that I don't. So that either means I'm not a person who notices these things, or frame generation is good, or maybe both.
 
I have a question with a seemingly obvious answer but I wanna make sure I'm understanding this. With Reflex 2, it reduces input latency when you're moving the mouse and looking around. But it wouldn't make any difference when you click to shoot or interact with something, right?
 
I have a question with a seemingly obvious answer but I wanna make sure I'm understanding this. With Reflex 2, it reduces input latency when you're moving the mouse and looking around. But it wouldn't make any difference when you click to shoot or interact with something, right?

Nope, it wouldn't. I think it just samples the mouse/camera control late to warp the image to the new camera position before display.
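For anyone curious, here's a toy sketch of that late-warp idea (my own illustration, not Nvidia's implementation, which also fills in the disoccluded edges): re-sample the mouse after the frame is rendered, then shift the finished image toward the newest camera pose just before present.

```python
import numpy as np

def late_warp(frame: np.ndarray, yaw_delta: float, pitch_delta: float,
              px_per_radian: float) -> np.ndarray:
    """Shift the finished frame to approximate the camera pose at present
    time, using mouse input sampled after rendering. Revealed edges are
    left black here; the real technique inpaints them."""
    dx = int(round(yaw_delta * px_per_radian))
    dy = int(round(pitch_delta * px_per_radian))
    h, w = frame.shape[:2]
    warped = np.zeros_like(frame)
    # Plain shift with empty borders (np.roll would wrap around).
    src_x = slice(max(0, -dx), min(w, w - dx))
    dst_x = slice(max(0, dx), min(w, w + dx))
    src_y = slice(max(0, -dy), min(h, h - dy))
    dst_y = slice(max(0, dy), min(h, h + dy))
    warped[dst_y, dst_x] = frame[src_y, src_x]
    return warped

# Usage: after rendering `frame`, re-read the mouse, compute the rotation
# delta since the frame's camera was latched, then warp just before present.
frame = np.zeros((1080, 1920, 3), dtype=np.uint8)
warped = late_warp(frame, yaw_delta=0.01, pitch_delta=0.0, px_per_radian=1200.0)
```

Which is also why it only helps aiming: a click still has to wait for a real frame to show its result.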
 
I actually set up a friend last night, kind of by accident. He's been strongly against frame gen, banging on about latency and artifacts and how he would never use it (still rocking his 1080 Ti). He popped in last night when I was playing with some Cyberpunk mods; I had Afterburner on to see CPU load, but had the fps counter on as I don't change the RivaTuner stats overlay per game. Because the overlay was only reporting the base framerate, as it does, he didn't know frame gen was on and figured it was just 60fps. Long story short, he was playing it mouse and keyboard for about 30 mins with frame gen on and talking about how smooth it felt for 60fps, and the minor artifacts he did notice were actually DLSS ones (can't wait for the new DLSS to fix some of this). I asked about input lag and he replied, "feels fine, why?"

When I said you've been playing with frame gen on, he didn't believe me lol. I said, you're the only one who's been in front of that keyboard for the last 30 mins; go to options and scroll down and look at the frame gen setting. He turned it off and then was saying, this feels worse, so this is native ~60, and I was like, yep. He wouldn't concede more than saying, well, it works well enough to be usable in that game at least.

Not really sure how to market it though; it's very much like VRR in that respect. Really hard to show any way other than hands-on.
These technologies are going to be so important in the future, for TVs and so on, once we see them implemented on consoles. Motion clarity for the win!

Of course it'd be great if hardware were powerful enough to run games at 360fps or 480fps natively, but watching my GPU and CPU suffer and heat up is too much to handle.

Not to mention the electricity bill, which you have to pay too.

Now that DLSS4 and Lossless Scaling are a thing, I'd rather play at 360, 480, or 540fps than at 4K (it's OK but not that great).

Motion clarity is more important than 4K. And I have my 4K TV, I've done my duty. Yet I play more on my 1440p 165Hz display... :)

So 360Hz or more at 1080p native on a 24" or 25" display is going to be the next step for me. Great for my electricity bill too (my 4K TV is a power hog, and the computer could run more leisurely).
 
Adding AI to the pipeline in the form of RTX Faces and similar generative tech doesn't make games easier to mod. It just takes control away from the artists/developers and gives it to a black box pre-trained ML model. DLSS enhances the input provided by the developer-coded traditional pipeline. Neural Materials is trained on developer-provided ground truth. But RTX Faces and other generative tech replace the artist/developer's vision with their own generated visuals, or at least that's what they appear to do. Hopefully RTX Faces allows developers to fine-tune the output somehow.

We might see a neural rendering equivalent to ReShade/SpecialK that allows gamers to neurally mod their game, or neural filters that don't require the game to be modified. But that's a separate issue from generative techniques in the rendering pipeline taking control away from developers.

Just to clarify, I'm really asking this from an abstract, future-looking perspective rather than based on anything actually revealed recently or in the pipeline. I understand putting this here suddenly might give the wrong impression.

So it's more along the lines of your second paragraph, which, outside of the technical side, I find interesting from, I guess, an "ethical" (this isn't really the right term here) perspective: what direction would we go with this in the future? In the abstract, should some future iteration of RTX Faces be limited to the developer's vision? Or should Nvidia (this is really IHV-neutral, but let's just stick with them for now) have agency here? Should the end user have agency? If, say, Nvidia launched something that allowed you to "beautify" (take that however you want) characters regardless of the developer's vision, I wonder what the reception should be.
 
So it's more along the lines of your second paragraph, which, outside of the technical side, I find interesting from, I guess, an "ethical" (this isn't really the right term here) perspective: what direction would we go with this in the future? In the abstract, should some future iteration of RTX Faces be limited to the developer's vision? Or should Nvidia (this is really IHV-neutral, but let's just stick with them for now) have agency here? Should the end user have agency? If, say, Nvidia launched something that allowed you to "beautify" (take that however you want) characters regardless of the developer's vision, I wonder what the reception should be.

On the surface it's not so different from downloading a texture or character overhaul from Nexus Mods. Granted, not all games are easily moddable this way, so most developers aren't dealing with people butchering their artwork.

The real difference is the "randomness" factor of neural models. If it produces consistently good results, I can see lots of gamers going for it, especially since character models are still pretty atrocious in open-world games.
 
Just to clarify, I'm really asking this from an abstract, future-looking perspective rather than based on anything actually revealed recently or in the pipeline. I understand putting this here suddenly might give the wrong impression.

So it's more along the lines of your second paragraph, which, outside of the technical side, I find interesting from, I guess, an "ethical" (this isn't really the right term here) perspective: what direction would we go with this in the future? In the abstract, should some future iteration of RTX Faces be limited to the developer's vision? Or should Nvidia (this is really IHV-neutral, but let's just stick with them for now) have agency here? Should the end user have agency? If, say, Nvidia launched something that allowed you to "beautify" (take that however you want) characters regardless of the developer's vision, I wonder what the reception should be.
I don't think end users changing the look of a single-player game with some Nvidia AI program or ReShade or SpecialK or some mod they downloaded or by personally rewriting every shader by hand is problematic at all.

I do think character face appearance and animations in an unmodified game being generated by an Nvidia neural model instead of the developers is potentially problematic. People are already complaining that Unreal Engine is homogenizing games and making them look and feel the same. RTX Faces could be far more homogenizing. Developers using Unreal Engine can always choose to tweak settings instead of leaving them as default, avoid excessive usage of common marketplace assets, or modify the UE source code to make their game look and feel unique. With RTX Faces, developers' ability to customize the output to provide a unique look and feel could be more limited.
 
Reminder: this thread is for discussing Digital Foundry articles. That ML video from months ago was already talked about in the AI/ML tech thread.


I'll move these latest posts over to that thread.
 