Exactly this. There's a big difference between using AI to achieve better efficiency or performance in hitting a predetermined goal, and using AI to straight up generate an entire asset type all on its own based on its guesses about context. In actual practice that's just asking for a bunch of incoherent garbage. And just think about how finicky generative AI can be, often producing drastically different results from only small input differences. There's no real consistency to it that could deliver the near-perfect reliability game visuals will require, especially in a dynamic and endlessly shifting 3D environment.

@Remij The thing is, a painter is a human being with intention, so they paint things the way they want them to look. Generative AI is like you asking a painter to make a painting for you. I want games to be made by painters, not by people commissioning painters, if that makes sense.

I think DLSS, neural radiance caching, neural materials and neural compression look good because they're being trained specifically against a ground truth of sorts, and they have a limited context, and I'm not sure they totally fit in the generative-AI category. Any artist can design a complex material with many layers and then it runs in some efficient model that behaves how they'd expect it to. That's a lot different to me than RTX faces, which kind of just tries to make a better face, or this water simulation stuff above, which looks objectively worse than running a physics simulator, something games can already do.
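The "trained against a ground truth" distinction above can be sketched in miniature: fit a tiny network to a known target function under direct supervision, so there is always a correct answer to measure against. Everything here is a hypothetical stand-in, not any real neural-materials implementation; the "material" is just a smooth analytic function of two inputs.

```python
import numpy as np

# Hypothetical stand-in for a complex layered material: a smooth analytic
# function of two inputs that we treat as the ground truth to learn.
def ground_truth(x):
    return np.sin(3 * x[:, :1]) * np.exp(-x[:, 1:]) + 0.5 * x[:, :1] * x[:, 1:]

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(512, 2))
Y = ground_truth(X)

# Tiny MLP, 2 -> 32 -> 1, trained with plain full-batch gradient descent.
W1 = rng.normal(0, 0.5, (2, 32)); b1 = np.zeros(32)
W2 = rng.normal(0, 0.5, (32, 1)); b2 = np.zeros(1)
lr = 0.1

for step in range(3000):
    H = np.tanh(X @ W1 + b1)   # hidden layer
    pred = H @ W2 + b2         # network output
    err = pred - Y             # supervised error against the ground truth
    # Backpropagation by hand.
    gW2 = H.T @ err / len(X); gb2 = err.mean(0)
    dH = (err @ W2.T) * (1 - H ** 2)
    gW1 = X.T @ dH / len(X); gb1 = dH.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

mse = float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - Y) ** 2))
print(f"final MSE vs ground truth: {mse:.5f}")
```

Because the error is measured against a known correct answer every step, the model's behavior stays anchored; a generative model producing whole frames has no such per-output reference to converge toward.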
Reality is overtaking you:

"If we can get this right to where it becomes standard practice, then maybe we start thinking about letting AI off the leash and straight up just generating the visuals itself, on the fly during gameplay."
"How many shells do I have? 16. No 26. No 25, or is that 26? Whatever, let's take this dude on!!"
View attachment 12919
At 30 seconds, random dead body appears on the floor.
At 1:20, a random enemy just materialises.
Show me a generative AI that can actually preserve full continuity without randomly blobbing content and I'll start to believe it might be able to replace image generation methods. Until then, it's such a terrible compromise, requiring more power to produce lower-quality results. Every improvement in ML image generation I've seen dials up the artistry but doesn't tackle the fundamental limitation: there's no actual 'ground truth' being drawn.
I will never understand this sentiment. What is the point of art if it's entirely detached from humanity?

"I do not care who/what did my graphics as long as the graphics do the job for me."
I agree with all of this. Beyond being creatively bankrupt, it also looks weird. It's a fun experiment, but this isn't art.
We don't need to take everything and drag it to the extremes here.

"I will never understand this sentiment. What is the point of art if it's entirely detached from humanity?"
This isn't an extreme? It's a video of AI completely hallucinating an entire game. Baking assets is still humans creating art; prompting an AI to do so is not. What's the point of all this performance if we're just going to run slop on it?

"We don't need to take everything and drag it to the extremes here."
We baked a lot of assets and lighting back in the day to get good-looking graphics while largely preserving performance across a wide range of GPUs. But to be clear, those bakes are just approximations of what is actually happening.
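For what it's worth, the bake-then-lookup idea can be sketched in a few lines: sample an expensive function once offline, then do cheap interpolated lookups at runtime. The "expensive" lighting function and the 1D lightmap here are hypothetical stand-ins for a real offline lighting solve and a lightmap texture.

```python
import math

# Hypothetical stand-in for an expensive offline lighting term (e.g. many
# samples of an area light); stubbed as a smooth analytic function of position.
def expensive_lighting(x):
    return 0.5 + 0.5 * math.sin(2 * math.pi * x) * math.exp(-x)

# Bake step: sample the expensive function into a small 1D lightmap, once.
RES = 64
lightmap = [expensive_lighting(i / (RES - 1)) for i in range(RES)]

# Runtime step: cheap linear interpolation into the baked table. The result
# is an approximation of the true value, which is exactly the trade-off.
def sample_lightmap(x):
    t = min(max(x, 0.0), 1.0) * (RES - 1)
    i = min(int(t), RES - 2)
    f = t - i
    return lightmap[i] * (1 - f) + lightmap[i + 1] * f

x = 0.37
approx = sample_lightmap(x)
true = expensive_lighting(x)
print(f"baked={approx:.4f} true={true:.4f} error={abs(approx - true):.4f}")
```

The key point for the thread: the baked result is deterministic and bounded in error by the sampling resolution, unlike a generative model whose output has no fixed reference.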
I agree. But using a hallucinated and clearly broken game as reference for "AI is the inevitable future" doesn't particularly support the argument for ML in games, where it'll be applied in completely different ways with more obvious benefits. I don't know what the specs are for a 'single TPU', but that's what it took to produce this DOOM video at 20 fps. What was the equivalent amount of silicon needed in the early 90s to achieve the same (only a perfect rendition, without all the shortcomings)? ML-generated games use more resources to achieve less. Unless it can be shown that it'll scale up, it seems a dead end for research beyond thought experiments. That's in stark contrast to things like ML upscaling, frame generation and image enhancement, which achieve more with less than other methods.

"You don't have to hallucinate entire games, it's a nice exercise in theory, and perhaps there are practical applications down the line. But today, using them to drive performance gains where brute force is having issues is ideal."