Realtime AI content generation

The voice is terrible, 0 stars, the responses aspire to the heights of GPT-2, no serious gamedev would use this with a straight face. It's a great example of Nvidia experiencing too much success and starting to throw money into a dumpster fire, just like all tech companies that get big enough eventually do. (That being said, I wouldn't be surprised if Ubisoft makes an "experimental" game with this, because they'll throw money at literally anything new.)
For non-important/secondary NPCs (pedestrians, common enemies) in open world games, this tech is miles ahead of the voice-over trash we have in current open world games; right now the best you can hope for when talking to a pedestrian is a couple of soulless voiced-over lines that repeat forever. With this tech you will have vastly more immersive worlds that are more dynamic and responsive. It's still early days for the tech too (it's a demo after all); it's going to vastly improve with time, just like ChatGPT did.
 
That doesn't matter, they sound like trash and have nothing to say. Just imagine playing God of War 3: after some AAA, super highly polished cinematic delivered by pro voice actors, some random NPC starts spewing garbage at you in a robo voice. You'd assume there's some bizarre bug.

Besides, writing isn't the bottleneck for lines of dialogue in a game currently; older, text-only games with a twentieth of the budget had vastly more writing than today's games, because the bottleneck is voice acting. And that you can do pretty well with offline tools already: Luke Skywalker has had an AI voice in a multi-million-dollar streaming show. Some modder managed this by themselves to decent results (and vastly better than this Nvidia demo); just give AAA-budgeted games the same tools and watch your exact dream scenario come to life, but with well-written lines and solid voice acting instead of stuttery robo garbage.

 
This is a hilariously bad demo, the kind of thing that never comes out; it went out of style as a tech demo over a decade ago, after even casual observers started to twig that this stuff wasn't coming out anytime soon. The voice is terrible, 0 stars, the responses aspire to the heights of GPT-2, no serious gamedev would use this with a straight face.
I agree; surely this has to be staged, or is it a joke?
 
Why would it be staged? All of that is doable: speech-to-text from the human input (the lady playing the demo), text generation with an LLM using her input, and then speech synthesis based on the LLM output.
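
Rough sketch of what that pipeline could look like, using openai-whisper for the speech-to-text, llama-cpp-python with some local model for the text generation, and pyttsx3 for the speech synthesis. These are stand-ins I picked for illustration, not whatever the demo actually runs; the model file name and persona prompt are made up:

    # speech-to-text -> LLM -> speech synthesis, with placeholder local models
    import whisper               # pip install openai-whisper
    import pyttsx3               # pip install pyttsx3
    from llama_cpp import Llama  # pip install llama-cpp-python

    stt = whisper.load_model("base")
    llm = Llama(model_path="npc-model.gguf")   # hypothetical local model file
    tts = pyttsx3.init()

    def npc_reply(wav_path, persona):
        player_text = stt.transcribe(wav_path)["text"]        # 1) transcribe the player
        prompt = f"{persona}\nPlayer: {player_text}\nNPC:"
        out = llm(prompt, max_tokens=64, stop=["Player:"])    # 2) generate the NPC line
        reply = out["choices"][0]["text"].strip()
        tts.say(reply)                                        # 3) speak it
        tts.runAndWait()
        return reply

    npc_reply("mic_capture.wav", "You are a street vendor in an open-world city.")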

Maybe it's dual GPU, or the AI models are running in the cloud? I'd guess otherwise you'd see some severe fps dips while the GPU is doing AI inference.
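
If it is two GPUs, pinning the dialogue model to the second one so the renderer keeps the first is simple enough. Rough PyTorch/transformers sketch with GPT-2 as a tiny stand-in model (again, not what the demo does, just illustrating the idea):

    # Put the dialogue model on "cuda:1" so the renderer keeps "cuda:0" to itself.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    device = torch.device("cuda:1" if torch.cuda.device_count() > 1 else "cpu")
    tok = AutoTokenizer.from_pretrained("gpt2")          # stand-in model
    model = AutoModelForCausalLM.from_pretrained("gpt2").to(device)

    def npc_line(prompt):
        inputs = tok(prompt, return_tensors="pt").to(device)
        out = model.generate(**inputs, max_new_tokens=40, do_sample=True)
        # strip the prompt tokens and return only the generated continuation
        return tok.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)

    print(npc_line("Player: Do you know the way to the docks?\nNPC:"))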

I think it's a neat tech demo as stuff like this is definitely quite new and different. Not everyone's cup of tea, I'm sure.
 
I mean, it's definitely not a joke. Someone worked hard on it, and from a purely "this is a tech demo" standpoint it's neat. My only criticism is that they are somehow trying to sell this as a thing developers could use in major games today. Like with all tech demos, things tend to take quite a while to go from "tech demo" to "something you'd actually want to use in a major product".
 
Just imagine playing God of War 3: after some AAA, super highly polished cinematic delivered by pro voice actors, some random NPC starts spewing garbage at you in a robo voice. You'd assume there's some bizarre bug.
Well, the tools are in their early stages; later they will add emotion to the AI-generated lines. Some AI tools can already modify songs by imitating famous singers, and some AI models are very good as narrators and are widely used in TikTok and Facebook reels. Movies and ads are already using AI-generated voices with emotion. Things will improve; the demo is just a quick proof of concept.

writing isn't the bottleneck for lines of dialogue in a game currently
Unfortunately, current games are limited by both writing and voice-overs. The writing of current games is generally weak even in short games, as studios dedicate smaller and smaller budgets to writing amidst the ballooning costs of making games in general.
 
Next-gen CPUs will have about 45 TOPS available for this kind of stuff; do you think that will be enough for this kind of interaction? And if not, what's necessary to play untethered from the cloud?
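
Very rough back-of-envelope, using the common approximation of ~2 x parameter-count operations per generated token for a transformer, and ignoring memory bandwidth (usually the real limit) plus the speech-to-text and speech-synthesis overhead; the 7B model size is just an assumption:

    npu_tops = 45e12                       # 45 TOPS, as quoted above
    params = 7e9                           # assume a 7B-parameter dialogue model
    ops_per_token = 2 * params             # ~14 GOPs per generated token (rule of thumb)
    tokens_per_sec = npu_tops / ops_per_token
    print(f"compute-bound ceiling: ~{tokens_per_sec:,.0f} tokens/s")      # ~3,200 tokens/s
    print(f"a 30-token NPC line: ~{30 / tokens_per_sec * 1000:.0f} ms")   # ~9 ms of compute

So on paper 45 TOPS is plenty for short NPC lines; in practice memory bandwidth and the rest of the pipeline (speech-to-text, synthesis, the game itself) will eat most of that headroom.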
 
The early days of OpenAI still carry too much weight, and they are too coy about what they are doing now.

All the x-shot shit is almost certainly a lark at this point; modern ChatGPT is more likely a triumph of good old annotated data, many, many thousands of man-hours of annotation. This is where the future will need to be for roleplaying bots: someone will have to pony up the money for massive amounts of annotated and specially crafted roleplaying data (i.e. dialogue, but including an annotation for the context of the scene and character descriptions for all the participants).
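
To make that concrete, here's a guess at what one annotated roleplaying sample could look like; the schema and field names are invented for illustration, not taken from any real dataset:

    # Hypothetical shape of one annotated roleplaying training sample:
    # scene context + per-character descriptions + annotated dialogue turns.
    sample = {
        "scene": "A rain-soaked noodle stall in a back alley, just after a gang shakedown.",
        "characters": {
            "Jin": "Stall owner, tired, distrustful of strangers, secretly in debt to the gang.",
            "Player": "Off-duty cop trying to get information without revealing their badge.",
        },
        "dialogue": [
            {"speaker": "Player", "line": "Rough night? Your sign's smashed.",
             "intent": "probe", "emotion": "casual"},
            {"speaker": "Jin", "line": "Wind did it. You ordering or just standing there?",
             "intent": "deflect", "emotion": "wary"},
        ],
    }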
 