Realtime AI/ML game technologies *spawn

Well, I guess we could be there in ten or so years' time.
At this point I think the debate is whether these AI models can extend to the intended outcomes, or whether they are fundamentally limited and will never achieve correctness, requiring AI to be thought of and applied differently. To date, all the showcases I've seen have shown 'more of the same': extending the quantity of content but not really the quality. I'd rather see one video without the blobbing, hallucinations and AI-isms than dozens more dream-like takes on different games. One thing severely lacking so far is interaction. Recreated games are largely videos of someone traversing an environment. They also completely lack HUDs! ;)
 
Tom Petersen from Intel is hinting that they are developing a technology that uses AI to reduce frame latency. He also hints strongly at using AI to render the entire game from a series of rendered frames.
 
I think AI will be an excellent complement to standard-built games. I don't think I'm wrong in saying it will be an essential component of games in the near future, and that it will stay that way. But I don't think AI alone will ever be sufficient to create a base game: it'll handle specific aspects within engine-built games, such as interactions, procedurally generated content, and even graphical features like volumetric simulations, final improvement of IQ and framerate, etc.

EDIT: I'm talking about realtime AI-generated video, not about AI generating 3D content that one can later access, use or play, which can definitely be a thing as well.
 
Nvidia released a paper on an alternative to Gaussian splatting that leverages the HWRT capability of existing GPUs.


It's interesting that HWRT capability can still be useful even in a rendering paradigm that isn't based on triangle meshes.
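
Very roughly, as I understand it, the trick is to wrap each volumetric particle in a proxy primitive that the RT hardware can intersect, then alpha-composite the hits front to back. Here's a minimal CPU-side sketch of that compositing idea in Python; the particle data is made up, and a real implementation would trace against a hardware BVH rather than brute-forcing every particle:

```python
import numpy as np

# Hypothetical particle set: centers, radii, colors (RGB), peak opacities.
# A real renderer builds a BVH over per-particle proxy geometry so the RT
# cores do the intersection work; here we brute-force it for clarity.
rng = np.random.default_rng(0)
N = 64
centers = rng.uniform(-1, 1, (N, 3))
radii = rng.uniform(0.05, 0.2, N)
colors = rng.uniform(0, 1, (N, 3))
opacities = rng.uniform(0.2, 0.8, N)

def shade_ray(origin, direction):
    """Front-to-back alpha compositing of Gaussian-ish particles along a ray."""
    direction = direction / np.linalg.norm(direction)
    hits = []
    for i in range(N):
        oc = centers[i] - origin
        t = np.dot(oc, direction)            # closest approach along the ray
        if t <= 0:
            continue
        d2 = np.dot(oc, oc) - t * t          # squared distance at closest approach
        if d2 < (3 * radii[i]) ** 2:         # inside the proxy bound
            hits.append((t, i, d2))
    hits.sort()                              # front to back, like sorted splats
    color = np.zeros(3)
    transmittance = 1.0
    for t, i, d2 in hits:
        alpha = opacities[i] * np.exp(-0.5 * d2 / radii[i] ** 2)
        color += transmittance * alpha * colors[i]
        transmittance *= 1.0 - alpha
        if transmittance < 1e-3:             # early ray termination
            break
    return color

print(shade_ray(np.array([0.0, 0.0, -3.0]), np.array([0.0, 0.0, 1.0])))
```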
Oh man, Nvidia are the best and will be for a long while yet. I'm not an Nvidia owner, but I can't help but admire how many years of advantage they have over the competition.
 
Not really convinced by the new transformer model for DLSS. 4 times the cost versus the CNN model? Don't transformer models tend to "lose focus"?

Also that would mean something like 2 milliseconds on a 4090, which is acceptable, but on a 4060? I guess we'll see.
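
Back-of-the-envelope on that, with baseline numbers I'm assuming purely for illustration (Nvidia hasn't published per-GPU costs): if the CNN model costs ~0.5 ms at 4K on a 4090, 4x puts the transformer at ~2 ms, and scaling down by a rough tensor-throughput ratio gives something much worse on a 4060:

```python
# Rough cost scaling; all numbers are assumed, for illustration only.
cnn_ms_4090 = 0.5          # hypothetical CNN-model DLSS cost at 4K on a 4090
transformer_factor = 4.0   # "4 times the cost" from the discussion above

# Very crude relative tensor throughput (4090 ~= 4x a 4060 is an assumption).
relative_throughput = {"4090": 1.0, "4060": 0.25}

for gpu, tp in relative_throughput.items():
    cost = cnn_ms_4090 * transformer_factor / tp
    print(f"{gpu}: ~{cost:.1f} ms per frame for the transformer model")
# 4090: ~2.0 ms, 4060: ~8.0 ms -- at which point upscaling eats a big
# chunk of a 16.7 ms (60 fps) frame budget.
```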
 
The first application of in-game AI enhancement: RTX Neural Faces, explained below ...

It kinda falls down in the intro when they claim the tech generates photorealistic faces, and yet I'm clearly looking at a computer game. ;) It does point to how ML can be used alongside traditional rendering, and it does improve on the base version, particularly in subtle mouth detail, but their examples are also full of wonk, as keeps happening with generative AI. The multi-face examples are full of instabilities, particularly on the teeth, and presumably this is a current best-case showing.
 
Not really convinced by the new transformer model for DLSS.

All the modern video-gen models are transformer-based AFAIK, and especially the Chinese ones are capable of some amazing realism in detail (it just breaks down in composition and behaviour, but that's not so important for interpolation/extrapolation).

The solution space in AI is so scarily huge, and training so expensive, that as soon as people find something that works they tend to just iterate on it (and of course in Nvidia's case it has to leverage the tensor cores too). That's a large part of why the CNN lasted so long, but I don't think it will stand the test of time.
 
This guy did a video a few months ago about AI being used like RTX Faces; he actually shows it for other effects as well. I'm kind of more curious to see it used for volumetrics than faces, but we can have both ;)


He seriously needs to release that as a mod for Starfield! It would vastly elevate that game's immersiveness, IMO.
 

For the fluid/particle simulation stuff, it's heresy to call these filters a physics simulation of any kind. It is the opposite of a physics simulation. 😆

While it does look amazing, it is unrelated to physics. Physics is 100% rules and this has no rules.
 

The water in the glasses looked like shit, and so did the pouring. At times it looked like it was showing two different levels of water in the glass, and when he was pouring, the water didn't level out to the rim of the glass. The level in the glass didn't make sense, and you could see the water wasn't really going into the glass that was filling, etc. It honestly looks way shittier than just some basic animation. It has elements of realism, but it's awful.
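
For contrast, even a toy rules-based sim gets the "water levels out" behaviour right by construction. A minimal column-equalisation sketch (nothing to do with any shipped engine, just to show what "physics is 100% rules" buys you):

```python
# Toy "shallow water" relaxation: neighbouring columns exchange water until
# the surface is level. A learned video filter has no such invariant, which
# is why the glass never settles to a consistent level.
def relax(columns, iterations=200, rate=0.25):
    cols = list(columns)
    for _ in range(iterations):
        for i in range(len(cols) - 1):
            flow = (cols[i] - cols[i + 1]) * rate   # water flows downhill
            cols[i] -= flow
            cols[i + 1] += flow
    return cols

print(relax([1.0, 0.0, 0.0, 0.0]))  # -> ~[0.25, 0.25, 0.25, 0.25]
# Total volume is conserved exactly; the rule guarantees it.
```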
 
Yea I guess "amazing" is not the correct assessment. I was thinking about the flaming hand which did look cool IMO. The water cups were goofy as hell.
 

Yah, it's really cool that you can add fire to a video in real time like that. The dryer-ball stuff looked alright for just messing around. It's not nearly VFX quality, but maybe there are better AI models. My question is: why, though? For film and video you have things like EmberGen that give artists a lot of control, and it's a real simulation, so it will have all of the desired effects.


For games, I just don't think what was presented actually looks better than what you can do in-game. The Starfield NPCs look pretty shitty by default, so it's easy to improve on them, but the face used as a replacement didn't match the rest of the scene at all. Just doing scans of people and then lighting them correctly will look better. If you have a ton of NPCs and want them to look different without scanning hundreds of people, then sure, maybe there's some AI thingy you could do. But I think we're a long way off. Nvidia's RTX Faces looked terrible. Maybe they'll blow me away in a year or two, but for now what's shown isn't impressive.
 
There's a video up for Patreon supporters where Alex did an interview with Bryan Catanzaro about DLSS 4 and AI in gaming, and it's predictably great! Bryan said a couple of things which immediately clicked with me when speaking about real-time graphics and the potential future of graphics with neural rendering. First, he said that he thinks about real-time graphics as three dimensions, each being a pillar: Smoothness (framerate), Image Quality (resolution), and Responsiveness (latency), and DLSS' goal is to improve all of those things. Which I think is the right way to look at it, and how we need to think about things going forward. Each game's requirements will be different, but it's essentially giving developers more tools in the toolbox to reach the goals for their games.

The other thing he said was regarding rendering and the future potential of neural rendering. Paraphrasing here: "In 3D graphics, everything is approximated at the surface level. We're not simulating rays actually entering inside models and bouncing around; it's a 2D approximation, which is fine for opaque objects, but not good enough for semi-translucent objects/materials." He says that neural rendering could take it a large step further, due to being trained on far more real-world data than could ever run in real-time, for a much more accurate representation. He then gives an analogy: "When a painter paints a scene, they're not simulating the lighting, they're not passing photons through the geometry of their painting. They simply know what it's supposed to look like." And when you think about it, that's exactly right. A painter, through experience, study, and practice, can paint an image which accurately reflects observed reality, despite not being able to truly simulate it. And that's basically what neural rendering allows on a much, much deeper level in 3D graphics.

It's going to get a lot better and we're just scratching the surface on this stuff.
 
@Remij The thing is, a painter is a human being with intention, so they paint things the way they want them to look. Generative AI is like asking a painter to make a painting for you. I want games to be made by painters, not by people commissioning painters, if that makes sense. I think DLSS, neural radiance caching, neural materials and neural compression look good because they're being trained to a specific ground truth of sorts, they have a limited context, and I'm not sure they totally fit in the generative-AI category. So any artist can design a complex material with many layers, and then it runs in some efficient model that behaves how they'd expect it to. That's a lot different to me than RTX Faces, which kind of just tries to make a better face, or the water simulation stuff above that looks objectively worse than running a physics simulator, which games can already do.
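
That distinction maps pretty directly onto how these things are trained. Here's a toy sketch of the "fit a small network to a ground-truth material" idea, using an arbitrary analytic function as a stand-in ground truth; all the details are invented for illustration, not Nvidia's actual method:

```python
import torch
import torch.nn as nn

# Stand-in "ground truth": some expensive layered material evaluated offline.
# Inputs: (view-angle cosine, light-angle cosine, roughness) -> reflectance.
def ground_truth(x):
    return torch.sigmoid(4 * x[:, 0:1] * x[:, 1:2]) * (1 - 0.5 * x[:, 2:3])

# Tiny MLP: the kind of compact model that could plausibly run per-pixel.
net = nn.Sequential(nn.Linear(3, 32), nn.ReLU(),
                    nn.Linear(32, 32), nn.ReLU(),
                    nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    x = torch.rand(512, 3)             # sample the material's input domain
    loss = nn.functional.mse_loss(net(x), ground_truth(x))
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"final MSE vs ground truth: {loss.item():.2e}")
# The network has a fixed, narrow job: reproduce one known function.
# There is no open-ended generation, which is why it behaves predictably.
```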
 
This shows what's currently state of the art in terms of offline ML application: https://bertrand-benoit.com/blog/ai-finishing/?lid=nsim3dtmd7qs


As I understand it, this source render:
1737228945594.png

...is turned into this image:

1737228966228.png

This one impresses in how it takes an already good-looking render and makes it more realistic.

Image1.jpg

Image2.jpg

That said, the cakes are actually changed from their intended look. The madeleines are much less holey than in the ML interpretation, which renders them as more bread-like.

1737229570448.jpeg

The artist matched the source and the ML just ignored it and did its own thing.
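
For anyone curious how such a finishing pass works mechanically: it's broadly an image-to-image diffusion run over the render at low denoising strength, so most of the composition survives while surface detail gets repainted. A minimal sketch with the diffusers library; the model choice, prompt and strength here are my assumptions, not the artist's actual pipeline:

```python
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

# Load a stock img2img pipeline; the artist's actual model/workflow is unknown.
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

render = Image.open("source_render.png").convert("RGB")  # hypothetical input

# Low strength = light touch: the model repaints texture and lighting detail
# but mostly preserves the render's composition. Push it higher and you get
# the "did its own thing" failure described above (e.g. the madeleines).
result = pipe(
    prompt="photorealistic food photography, natural light",
    image=render,
    strength=0.35,
    guidance_scale=7.0,
).images[0]
result.save("finished.png")
```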
 