Realtime AI/ML game technologies *spawn

@Remij The thing is, a painter is a human being with intention, so they paint things the way they want them to look. Generative AI is like asking a painter to make a painting for you. I want games to be made by painters, not by people commissioning painters, if that makes sense. I think DLSS, neural radiance caching, neural materials and neural compression look good because they're trained against a ground truth of sorts and have a limited context, and I'm not sure they totally fit in the generative AI category. Any artist can design a complex material with many layers, and then it runs in some efficient model that behaves how they'd expect it to. That's a lot different to me than RTX faces, which kind of just tries to make a better face, or this water simulation stuff above that looks objectively worse than running a physics simulator, which games can already do.
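The "trained to a ground truth with limited context" distinction can be sketched in miniature: treat an (entirely made-up) layered material as the ground truth, fit a compact stand-in to it offline, and evaluate only the cheap stand-in at runtime. A polynomial fit stands in here for the small network a real neural-material system would use; every function and number below is illustrative, not from any actual system:

```python
import numpy as np

# Hypothetical "expensive" layered material: the ground truth we can always query.
def layered_material(cos_theta):
    # Two analytic layers blended by a Fresnel-like term (purely illustrative).
    fresnel = 0.04 + 0.96 * (1.0 - cos_theta) ** 5
    coat = fresnel
    base = 0.8 * cos_theta
    return coat + (1.0 - fresnel) * base

# "Training": fit a cheap polynomial to the ground truth over the input domain.
x = np.linspace(0.0, 1.0, 256)
y = layered_material(x)
coeffs = np.polyfit(x, y, deg=5)   # the compact "model" that ships with the game
approx = np.polyval(coeffs, x)     # runtime evaluation is a handful of FMAs

max_err = float(np.max(np.abs(approx - y)))
print(f"max abs error vs ground truth: {max_err:.5f}")
```

The point of the toy: the model never invents anything, it only reproduces the artist-authored target over a bounded input range, which is why errors stay small and predictable.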
Exactly this. There's a big difference between using AI to achieve better efficiency/performance in hitting a predetermined goal, and using AI to straight-up generate an entire asset type all on its own based on its guesses about context. The latter is just asking for a bunch of incoherent garbage in actual practice. And think about how finicky generative AI can be, often producing drastically different results from only small input differences. There's no real consistency to it that could deliver the near-perfect reliability game visuals require, especially in a dynamic, endlessly shifting 3D environment.

I think the first step to even thinking about this is going to be offline generation: studios with access to extremely fast, powerful AI hardware that can be rapidly trained to produce the specific sort of models, surfaces and animations required for a specific game, based on developer samples. The results would still need a human hand to go through them, pick out which ones really fit and which ones don't, and recalibrate the training accordingly. Devs then still manually import these things into the game as they would any hand-made assets.

If we can get this right to where it becomes standard practice, then maybe we start thinking about letting AI off the leash and straight up just generating the visuals itself, on the fly during gameplay.
 
If we can get this right to where it becomes standard practice, then maybe we start thinking about letting AI off the leash and straight up just generating the visuals itself, on the fly during gameplay.
Reality is overtaking you:

You will have a hard time as a gamer if you oppose A.I. from now on.
Same with the people opposing RT, upscaling, FG etc.

I do not care who/what did my graphics as long as the graphics do the job for me.
 
Last edited by a moderator:
"How many shells do I have? 16. No 26. No 25, or is that 26? Whatever, let's take this dude on!!"

At 30 seconds, random dead body appears on the floor.

At 1:20, a random enemy just materialises.

Show me a generative AI that can actually preserve full continuity without random blobbing of content and I'll start to believe it might be able to replace image-generation methods. Until then, it's a terrible compromise, requiring more power to produce lower-quality results. Every improvement in ML image generation I've seen dials up the art but doesn't tackle the fundamental limitation that there's no actual 'ground truth' being drawn.
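One hypothetical way to make "continuity" concrete: a classical engine keeps explicit game state, so entities can only appear via spawn events and the invariant holds by construction; a frame-hallucinating model has no such state. The toy checker below (invented for illustration, not from any real tool) flags entities that appear between frames with no spawn event, i.e. exactly the "random dead body" and "enemy materialises" failures described above:

```python
# Toy continuity check: each "frame" is the set of entity IDs visible in it.
# Anything that appears with no corresponding spawn event is a continuity break.
def continuity_breaks(frames, spawn_events=()):
    breaks = []
    for i in range(1, len(frames)):
        appeared = frames[i] - frames[i - 1]        # entities new this frame
        unexplained = appeared - set(spawn_events)  # ...with no spawn to justify them
        if unexplained:
            breaks.append((i, sorted(unexplained)))
    return breaks

# Hypothetical trace: a corpse pops in with no event, an imp spawns legitimately.
frames = [{"player"}, {"player"}, {"player", "corpse"}, {"player", "corpse", "imp"}]
print(continuity_breaks(frames, spawn_events=["imp"]))
```

A renderer driven by real game state would trivially pass this check every frame; the videos above show current generative models failing it within seconds.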
 
Last edited:
"How many shells do I have? 16. No 26. No 25, or is that 26? Whatever, let's take this dude on!!"

View attachment 12919
At 30 seconds, random dead body appears on the floor.

At 1:20, a random enemy just materialises.

Show me a generative AI that can actually preserve full continuity without random blobbing of content and I'll start to believe it might be able to replace image-generation methods. Until then, it's a terrible compromise, requiring more power to produce lower-quality results. Every improvement in ML image generation I've seen dials up the art but doesn't tackle the fundamental limitation that there's no actual 'ground truth' being drawn.

Mod edit: Non-technical rhetoric removed.

It is coming, and sooner than most people think.
 
Last edited by a moderator:
This is a B3D technical forum. Can you please elevate your discussion to a technical look at AI's application instead of just using propaganda-style rhetoric. We want to discuss what ML may or may not bring in what timelines, and not talk about people of differing minds 'crying' etc.

E.g. expound on your 'it's coming' with reference to the work and how it'll manifest. Address the concerns of the sceptics by talking about how the algorithms are changing or will change, and/or address questions about what level of hardware will be needed, how scalable it is, etc.
 
I do not care who/what did my graphics as long as the graphics do the job for me.
I will never understand this sentiment. What is the point of art if it’s entirely detached from humanity?
"How many shells do I have? 16. No 26. No 25, or is that 26? Whatever, let's take this dude on!!"

View attachment 12919
At 30 seconds, random dead body appears on the floor.

At 1:20, a random enemy just materialises.

Show me a generative AI that can actually preserve full continuity without random blobbing of content and I'll start to believe it might be able to replace image-generation methods. Until then, it's a terrible compromise, requiring more power to produce lower-quality results. Every improvement in ML image generation I've seen dials up the art but doesn't tackle the fundamental limitation that there's no actual 'ground truth' being drawn.
I agree with all of this. Beyond being creatively bankrupt it also looks weird. It’s a fun experiment but this isn’t art.
 
I will never understand this sentiment. What is the point of art if it’s entirely detached from humanity?
We don’t need to take everything and drag it to the extremes here.

We baked a lot of assets and lighting back in the day to get good-looking graphics while largely preserving performance across a wide range of GPUs. But to be clear, those are just approximations of what is happening.

And to continue pushing the graphical bar forward, we need to approximate once again, but the simulation will be much denser. If we want a revolution in graphics, we have to get a lot denser, and unfortunately that kind of raw power isn't on hand. ML fits that slot perfectly: it's an extremely powerful approximation tool that can get you to nearly 99% of some really complex calculations at a fraction of the power required to compute them directly.
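The trade described here, near-correct answers for a fraction of the arithmetic, can be shown with a toy stand-in. An "expensive" per-query numerical integration plays the role of dense simulation, and a cheap surrogate answers the same query in one operation (here the surrogate is simply the closed form; a real system would fit a small network to samples of the expensive path offline). All names and numbers are illustrative:

```python
import math

calls = {"expensive": 0, "cheap": 0}

# Stand-in for dense simulation: numerically integrate sin over [0, x] per query.
def expensive(x, steps=1000):
    calls["expensive"] += steps          # count arithmetic "work"
    h = x / steps
    return sum(math.sin(i * h) * h for i in range(steps))  # ≈ 1 - cos(x)

# "Trained" surrogate: one evaluation instead of a thousand integration steps.
def cheap(x):
    calls["cheap"] += 1
    return 1.0 - math.cos(x)

x = 1.3
err = abs(expensive(x) - cheap(x))
print(f"error={err:.5f}, expensive ops={calls['expensive']}, cheap ops={calls['cheap']}")
```

The surrogate lands within a fraction of a percent of the brute-force answer while doing three orders of magnitude less work, which is the shape of the argument for ML approximation in rendering.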

All the other tools are still at your disposal. But IMO, if you want to drive fidelity forward, then ML-based technologies will begin supplanting what we have today. And that applies to just about everything, not just video games.

You don't have to hallucinate entire games; it's a nice exercise in theory, and perhaps there are practical applications down the line. But today, using ML to drive performance gains where brute force is struggling is the ideal application.
 
We don’t need to take everything and drag it to the extremes here.

We baked a lot of assets and lighting back in the day to get good looking graphics while being able to largely preserve performance over a wide range of various GPUS. But to be clear these are just approximations of what is happening.
This isn’t an extreme? It’s a video of AI completely hallucinating an entire game. Baking assets is still humans creating art, prompting an AI to do so is not. What’s the point of all this performance if we’re just going to run slop on it?

Perhaps I subscribe to a more romantic vision of games as an actual art form but this just seems corrosive, similar to those horrible AI generated pictures and ‘paintings’.
 
To be clear, this isn't really a "should AI happen discussion". This is a technical discussion on an evolving technology.

Regardless about how one thinks about 'art' and human endeavour, etc., the thing that matters here is what appears on screen as a rendition of a game state. The impact of ML belongs at best in the Industry discussion although it's a pretty philosophical discussion.

You don't have to hallucinate entire games; it's a nice exercise in theory, and perhaps there are practical applications down the line. But today, using ML to drive performance gains where brute force is struggling is the ideal application.
I agree. But using a hallucinated and clearly broken game as reference for "AI is the inevitable future" doesn't particularly support the argument for ML in games, where it'll be applied in completely different ways with more obvious benefits. I don't know what the specs are for a 'single TPU', but that's what it took to produce this DOOM video at 20 fps. What was the equivalent amount of silicon needed in the early 90s to achieve the same (only a perfect rendition, without all the shortcomings)? ML-generated games use more resources to achieve less. Unless it can be shown that this will scale up, it seems a dead end for research beyond thought experiments. That's in stark contrast to things like ML upscaling, frame generation and image enhancement, which achieve more with less than other methods.
 
This isn’t an extreme? It’s a video of AI completely hallucinating an entire game. Baking assets is still humans creating art, prompting an AI to do so is not. What’s the point of all this performance if we’re just going to run slop on it?

ML-generated games use more resources to achieve less. Unless it can be shown that this will scale up, it seems a dead end for research beyond thought experiments. That's in stark contrast to things like ML upscaling, frame generation and image enhancement, which achieve more with less than other methods.

So ML-generated games use the same amount of resources regardless of what they're trying to render. If a model could create a game that looked like Flight Simulator 2030, using that same model to recreate Doom would cost the same resources.

So what we want to do is use it for games where it's clear that standard programming would be unable to handle the task.

Where I'm going with this is that there are limitations to memory and bandwidth such that you cannot bake everything. With hallucinated assets trained so that the closer you get, the more detail there is, you can rely on computational speed to create that asset at a fixed bandwidth cost, versus doing a lot of VRAM shuffling to get assets there just in time. Couple this with the size of video games: if we want to keep increasing visual fidelity, our game sizes will just keep ballooning over things that perhaps very few people will actually take the time to look at.

Why not have a model that generates the look and feel at super-high fidelity instead? It's a hallucination for sure, but it can still be based on the artist's intent for the asset.
 
With the inevitability of quantum computers in the future, AI and ML will evolve beyond our wildest dreams. We might be getting extremely good visuals with AI doing most of the job, or who knows, even all of it, in less than 10 years.
That said, this is still very scary. The results might be so good that we, or at least the newer generations, might barely prefer or give any chance to real human creativity. We see a glimpse of that already, with the internet flooding with AI content and people self-proclaiming as "AI artists". In the past, when we had less technology, we had to put our minds more into action. But from generation to generation there is an increasing perceptual and mental processing gap; everything we do is becoming more and more streamlined and simplified. Yes, a huge portion of my post is not technical, but this is such a serious subject for me that I don't feel like talking about the technical expectations without mentioning the social, developmental and existential consequences for human experience and expression.

It seems inevitable that tiny inputs will be creating hugely detailed, interactive 3D worlds that approach perfection in the future. There is no limit to where technology can and will go.
 