Digital Foundry Article Technical Discussion [2025]

It just takes control away from the artists/developers and gives it to a black box pre-trained ML model.

Yeah there needs to be a very clear line between neural models trained by developers and packaged with their game versus 3rd party overrides. Neural materials and neural textures seem perfectly harmless as they’re just replacing shader code and traditionally compressed textures. All developer controlled.
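To make the "just replacing shader code and compressed textures" point concrete: a neural texture is essentially a small latent feature grid plus a tiny MLP decoder, with the weights trained offline by the developer and shipped like any other asset. A toy sketch of the idea (every name, shape, and weight here is made up for illustration; this is not any actual NVIDIA API):

```python
import numpy as np

# Toy "neural texture": a latent feature grid decoded to RGB by a tiny MLP.
# In practice the grid and weights would be trained offline against the
# source texture and shipped with the game; here they are random stand-ins.

rng = np.random.default_rng(0)
FEATURES = rng.standard_normal((64, 64, 8)).astype(np.float32)  # latent grid
W1 = rng.standard_normal((8, 16)).astype(np.float32) * 0.1
B1 = np.zeros(16, dtype=np.float32)
W2 = rng.standard_normal((16, 3)).astype(np.float32) * 0.1
B2 = np.zeros(3, dtype=np.float32)

def sample_neural_texture(u: float, v: float) -> np.ndarray:
    """Nearest-neighbour latent fetch, decoded to [0,1] RGB by a 2-layer MLP."""
    h, w, _ = FEATURES.shape
    x = FEATURES[int(v * (h - 1)), int(u * (w - 1))]
    hidden = np.maximum(x @ W1 + B1, 0.0)             # ReLU
    return 1.0 / (1.0 + np.exp(-(hidden @ W2 + B2)))  # sigmoid -> RGB

print(sample_neural_texture(0.5, 0.5))
```

The decode is deterministic: same weights, same pixels, on every machine. Nothing trains or changes on the end user's side.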
 
Are you saying there will be better products of the same tech available by other companies?
Huh? I just mean in general, including Nvidia.

We're still years away from all that stuff being implemented in games in any kind of common way. Just saying, it makes sense that people still focus on specs. Until all these other things actually take root and become standard practice (which will require support from other hardware as well, consoles in particular), raw power is still critical.
 
Imagine Nvidia just spins up its own engine team to release an open-source graphics engine that companies can use to make their games.
I think us folks at Beyond3D have a natural inclination to vastly overestimate how much of a modern game engine is related to graphics... mainly because that's what we like to think about, and it gets the lion's share of the marketing. But it has always been a relatively small part of engines; if you recall, in the UE3 days it was quite common for licensees to replace all the rendering parts of Unreal themselves. That has obviously become a bit more difficult as rendering has progressed, but the other areas of engines have progressed as well.

It's still completely possible for a single person to make a hobby renderer with reasonably modern features, assuming a more limited scope. See Tiny Glade. But a game engine that is useful for lots of different games is a whole lot more than a renderer, and I'm not really sure why NVIDIA would currently want to get into all the rest of that unless they felt blocked by not doing it. I think the current situation, where they can just plop some of their rendering tech on top of the various engines, suits them very nicely. The fact that there are fewer engines in the wild these days, so they can amortize that integration work a lot more, is great for them.
 
Yeah there needs to be a very clear line between neural models trained by developers and packaged with their game versus 3rd party overrides. Neural materials and neural textures seem perfectly harmless as they’re just replacing shader code and traditionally compressed textures. All developer controlled.
Right... I must admit I'm confused as to why people are conflating unrelated discussions here. Did NVIDIA talk about wanting to train these things on the fly on end user machines or something? I didn't see any discussion of that. None of the neural stuff I saw related to rendering would vary on different users' machines... it's just another way to get the same pixels out, potentially a bit faster.

Modding is a separate topic that seems completely unrelated to neural rendering. Sure you could mod neural shaders, just like you mod regular ones... potentially the barrier to entry is higher given the training costs. Why are we discussing this all of a sudden?
 
Right... I must admit I'm confused as to why people are conflating unrelated discussions here. Did NVIDIA talk about wanting to train these things on the fly on end user machines or something? I didn't see any discussion of that. None of the neural stuff I saw related to rendering would vary on different users' machines... it's just another way to get the same pixels out, potentially a bit faster.
With RTX Faces on, each user would see the same thing, but the difference between RTX Faces off and on is drastic, to the extent that it calls into question whether RTX Faces preserves artist/developer intent. What I'd like to know is whether developers are able to control the output of RTX Faces, or whether they just have to accept whatever Nvidia's SDK outputs and surrender control over character facial appearance and animation to Nvidia's model.
 
In Oliver and Alex's CES video (and the Black State short on the clips channel) you can see 4x FG at roughly ~30 ms latency in Black State. I'm not sure if that's with Reflex 2 as well (I'm thinking it probably was). So, let's see if I can phrase this without triggering people: that latency at the smoothness and clarity of 380 Hz would feel really good, in my experience.
I would love to see some blind A/B tests where people play a game on the same hardware at different settings, both graphics settings and frame generation/Reflex/AI upscaling, to see what they prefer and whether they can actually feel or see the negative effects of frame generation. People fixate on the negatives of frame generation, and there are some, for sure, but I really want to know if those same people could identify those negatives in practice.

Also, and I think this gets a bit lost in the conversation partly because of how NVIDIA markets DLSS FG, the question really shouldn't be "real" 120fps vs 120fps using frame generation. It should be whatever you can achieve without frame generation vs what you can achieve with it. If you can hit 60fps native and 120/240 with 2x/4x frame gen, the question for the feature should be which is better: having it on, or off. Frame generation needs its Windows Mojave moment, for science, and also to satisfy my own curiosity.
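For rough intuition on the quoted ~30 ms number (a back-of-envelope sketch with assumed values, not measurements from Black State): with 4x interpolation-based FG the game only renders at a quarter of the output rate, and the newest rendered frame is typically held back so there is something to interpolate toward, so latency tracks the base render rate rather than the 380 Hz output.

```python
# Back-of-envelope latency for 4x frame generation. All numbers are
# illustrative assumptions, not measurements.

output_hz = 380                      # displayed frame rate with 4x FG
fg_factor = 4
base_fps = output_hz / fg_factor     # game actually renders ~95 fps
base_frame_ms = 1000 / base_fps      # ~10.5 ms per rendered frame

# Interpolation-based FG holds the newest rendered frame back so it has
# two frames to interpolate between: roughly one extra base frame of delay.
fg_hold_ms = base_frame_ms           # ~10.5 ms
overhead_ms = 9.0                    # assumed input sampling + scanout etc.

total_ms = base_frame_ms + fg_hold_ms + overhead_ms
print(f"{total_ms:.1f} ms")          # ~30 ms, in the ballpark quoted above
```

That's also why Reflex-style latency reduction matters here: it attacks the base-rate terms, which the generated frames do nothing for.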
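And if anyone wanted to actually run that blind test, the core of it is just randomized, unlabeled pairings and a tally. A minimal sketch (condition names and trial count are my own assumptions, not a real protocol):

```python
import random

# Minimal sketch of a blind A/B preference test for frame generation.
# In a real test, each condition would be pre-configured on identical
# hardware and the tester would not know which setup is which.

CONDITIONS = ["native_60fps", "fg_2x_120fps", "fg_4x_240fps"]

def run_blind_trials(n_trials: int = 20) -> dict:
    """Present randomized, unlabeled pairs and tally the tester's preferences."""
    tally = {c: 0 for c in CONDITIONS}
    for _ in range(n_trials):
        a, b = random.sample(CONDITIONS, 2)  # tester plays both, blind
        choice = ""
        while choice not in ("A", "B"):
            choice = input("Which setup did you prefer, A or B? ").strip().upper()
        tally[a if choice == "A" else b] += 1
    return tally

if __name__ == "__main__":
    print(run_blind_trials())
```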
 