Using AI to improve games

You can build a house, marry, have a child, etc., and NPCs go about their daily routines and errands... it might work.
AI could produce a story for you to listen to or discuss, sure. But it won't produce gameplay that lets you live that experience without systems built into a game (built either by developers or modders).

I get it, 3D Elder Scrolls games are rather empty. I'm sure there's an audience for a game with unbounded amounts of narration, but I doubt that's a majority stance if there's no substantive gameplay to back it up. I'd be happy to be proven wrong. Although I'd prefer companies paid writers instead of paying for a cloud service that's bound to cost absurd amounts of money in the future.
 
Your concerns are shared by many people. I sent the video to some friends, and a few of them laughed a lot while others shared your view.

The video made me laugh.

I gotta agree 1000% with you on the gameplay issue. That being said, such a vast game would be impossible to fill with interesting NPCs and dialogue past a certain point. You expect a new comment from someone and then you hear the same thing: "I used to be an adventurer like you, then I took an arrow in the knee".

https://en.wikipedia.org/wiki/Arrow_in_the_knee

Sometimes the vanilla AI of Skyrim looks at you and says "You must leave". Two seconds later: "You must leave!". One second later: "You must leave!". Then again a second later: "You must leave!".

The game's default AI draws from a very limited pool of responses and dialogue lines; past a certain point, non-repetitive dialogue becomes impossible.

When I found that video I subscribed to the channel and started to watch some of the guy's previous videos, which are basically monologues of him roleplaying in a rudimentary way, using his imagination as best he can. They're still okay, but having the AI interact with him made the gameplay a bit better.

I mean more interesting. AI at the service of the gameplay; that's how I prefer to think of it.
 
I think this depends on what kind of game you are making. For some games you want tighter control over how NPCs interact with players. On the other hand, for games such as sandbox games it'd be interesting to have NPCs behave more freely. There are already people chatting with GPT-like chatbots, so this is already doable.

However, LLMs are still quite unpredictable, mainly because we still don't really understand how to control them. For now, trying to constrain a broad model into doing specific things does not work really well (e.g. making an NPC in Skyrim with a simple prompt like "You are a grumpy blacksmith living in Skyrim of the Elder Scrolls universe. Talk as one and interact with players as your potential clients."). So maybe the alternative is to have a "basic" LLM with a basic understanding of English and the world (which can be hard to define, by the way) and add the specific knowledge you want each NPC to have, but that can be much more expensive to train, and it can be infeasible to give each NPC its own LLM.

On the size of LLMs: there are many smaller language models that work pretty well as chatbots (not necessarily as problem solvers), such as Microsoft's Phi-3 or Meta's Llama 3.2. They can be as small as 1B parameters, which with 4-bit quantization takes only about 0.5 GB of VRAM to run.
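The 0.5 GB figure follows directly from the arithmetic (weights only, ignoring activation and KV-cache overhead). A quick sketch of the estimate:

```python
# Rough VRAM estimate for the weights of a quantized language model.
# A 1B-parameter model at 4 bits per weight needs about
# 1e9 params * 4 bits / 8 bits-per-byte = 0.5e9 bytes = 0.5 GB.

def model_weight_gb(params_billions: float, bits_per_weight: int) -> float:
    """Gigabytes needed for the weights alone (no runtime overhead)."""
    bytes_total = params_billions * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9

print(model_weight_gb(1.0, 4))  # 0.5 GB for a 1B model at 4-bit
print(model_weight_gb(3.0, 4))  # 1.5 GB for a 3B model at 4-bit
```

Real deployments need some headroom on top of this for the KV cache and activations, but it shows why a small quantized model is plausible on consumer hardware.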
 
Re: the concept of generative AI for NPC behaviours, I think the solution is not too complex. You just write a prompt for the LLM that describes the character, describes anything in the environment you might want that character to reference (events, people, etc.), and then say that it should steer the conversation towards asking the player to complete a certain task, which you then describe. The character will try to engage the player with their request after a few responses to open-ended inquiries.
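As a minimal sketch of that prompt-assembly idea (all names and the example facts below are hypothetical; the resulting string would go into the "system" message of whatever chat-completion API the game uses):

```python
# Assemble a system prompt from a character description, a list of
# environment facts, and a quest the NPC should steer towards.

def build_npc_prompt(character: str, environment: list[str], quest: str) -> str:
    facts = "\n".join(f"- {fact}" for fact in environment)
    return (
        f"You are {character}.\n"
        f"Facts about your surroundings you may reference:\n{facts}\n"
        "Stay in character at all times. After a few exchanges, steer the "
        f"conversation toward asking the player to: {quest}.\n"
        "Refuse to discuss anything outside the game world."
    )

prompt = build_npc_prompt(
    "a blunt, hard-working blacksmith in Whiterun",
    ["The Gildergreen tree in the market was recently struck by lightning",
     "Bandits have been raiding caravans on the road to Riverwood"],
    "deliver a finished sword to the Jarl's steward",
)
print(prompt)
```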

Alternatively of course the NPC could discuss topics more broadly, but you can constrain it quite effectively with a prompt to a limited number of topics or subjects that would trigger a quest or similar in-game objective. It would almost certainly be exploitable to some degree of course.

You would not need to train an LLM. You could probably even get a decent result without fine-tuning an LLM (which is very cheap but does involve some training cost and setup). You really could get a decent result out of a complex prompt with any number of modern instruction-tuned LLMs.

I think processing off-device would work best for anything that needs to run on a memory-constrained platform. The nice thing about this application is that 100-200ms of extra lag will not really matter in the context of a conversation.

Something like GPT-4o voice mode would be best for a native audio model, though it's potentially costly and has fairly strict API conditions, I believe.
 
@pcchen @oliemack Limiting the AI might be the best solution. I mean, some concepts are too broad and out of context, and those should be totally avoided by the AI. Maybe you could go as far as asking the AI what's beyond the stars, but not things like asking an NPC to imitate modern (or not so modern) politicians, celebrities, etc., as hilarious as that might be.
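The "totally avoided" part could start as a simple input screen before the player's text ever reaches the model. A sketch, assuming a keyword blocklist (the word list is illustrative; a production filter would be much broader, and probably a small classifier rather than keywords):

```python
# Screen player input for out-of-universe topics before sending it
# to the NPC's LLM. Blocked input gets a canned in-character deflection.

OUT_OF_LORE = {"president", "twitter", "iphone", "elvis", "bitcoin"}

def is_in_lore(player_text: str) -> bool:
    words = {word.strip(".,!?").lower() for word in player_text.split()}
    return not (words & OUT_OF_LORE)

print(is_in_lore("What lies beyond the stars?"))        # True
print(is_in_lore("Do an impression of the president"))  # False
```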

It could be fine for an AI NPC to talk about a book within the lore of Skyrim; those exist (shorter and longer ones).

It could also be interesting if the AI didn't always react as expected. I mean, if you asked the AI to talk about a certain book in Skyrim, it could just reply like a human: "I didn't read that book", or "wish I read more books, but my time is limited, and you only live once", or "I can't stand you pretentious know-it-alls", etc.

Also, said AI should remember how it reacted before, to avoid inconsistencies. Thus if the player asks the same question later, the AI should reply in a similar manner, or say something like: "I didn't read that book before, but you piqued my interest and I started reading the adventures of (whatever); it's a funny/boring book", or "thanks to something you told me a while ago, I just got into reading, and that's one of the books I began with", etc.
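That consistency could be approximated with a per-NPC memory log that gets appended to the prompt on every exchange, so later answers can't silently contradict earlier ones. A minimal sketch, assuming hypothetical names throughout:

```python
# Record each claim the NPC makes about itself and feed the log back
# into the next prompt as a "stay consistent" section.

class NpcMemory:
    def __init__(self) -> None:
        self.claims: list[str] = []

    def remember(self, claim: str) -> None:
        self.claims.append(claim)

    def as_prompt_section(self) -> str:
        if not self.claims:
            return ""
        lines = "\n".join(f"- {c}" for c in self.claims)
        return ("Things you have already told the player "
                f"(stay consistent with these):\n{lines}")

memory = NpcMemory()
memory.remember("I have not read that book yet.")
memory.remember("The player's recommendation got me interested in reading.")
print(memory.as_prompt_section())
```

Keeping the log as short natural-language claims (rather than full transcripts) also keeps the token cost per exchange down.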
 
You just write a prompt for the LLM that describes the character, describes anything in the environment you might want that character to reference (events, people, etc.), and then say that it should steer the conversation towards asking the player to complete a certain task, which you then describe.
Just is doing a lot of heavy lifting. ;) LLMs are pretty bad at producing something sensible that is this context-dependent. It's much easier to get coherent dialogue about "fantasy events in a town ruled by a troll", but the more context you add (which is absolutely needed here), the more lost LLMs get. They draw from previous experiences (ok, stolen experiences ;P), so the moment you hit on something that wasn't in the training dataset, you've got a problem. You may end up with something that works sometimes, but more often than not it will be hilariously out of place or plain wrong.
 
LLMs are quite controllable these days and I'm sure you could create something pretty compelling. With a GPT-3 level LLM, probably not, but with something more modern and heavily instruction-tuned I think you could get a good enough result.

I've certainly played around with this many times over the past 3 years or so and had some interesting results. In the context of a game that players are actively trying to break and incentivized to mess with, maybe it's not good enough, but there are interactive applications where it certainly is good enough.
 
There are usage issues with LLMs, unfortunately. Developers are going to eat significant costs firing queries at their LLM for every response, and then there is the localization issue. ChatGPT and other LLMs are largely English-based.

While it would be possible to use LLMs, they aren't effective for run-time usage. I would use them to create the dialogue and story, perhaps flesh out the dialogue with player choices etc., but we can't use them in a run-time manner.
 

ChatGPT already speaks many languages, and smaller models such as Llama 3.2 officially support 8 languages and speak more. So I think language is not a huge problem here.
I think the bigger problem is QA. For a user mod it's fine if the NPC says something unexpected; people would just laugh and even find it fascinating. However, a game from a large publisher does not have such luxury, and it's impossible to test all possible scenarios when you have an LLM with unpredictable outputs. I believe there's been brief discussion about this in another thread.

So I agree that initially the LLM is most likely to be used to generate slightly different responses quickly, and it's likely that we'll need voice synthesizers to voice all of them (which can be seriously problematic for voice actors and actresses, but that's another topic). But obviously it's going to take some time.
 
I guess the best solution is to limit what the player can say/do. These RPGs already have finite options: good, bad, indifferent, etc. Rather than giving players free text entry and requiring a NN to process it, player choices could be generated within the rules of the game universe. These options could be adapted, Peter Molyneux style, towards 'good' and 'evil' (or factions, etc.) as the player makes choices, providing a novel experience but also constraining everything to a workable language set. It would also work on console without needing speech-to-text.

In short, using AI in a Fable-style diverging-player-path way, as opposed to a completely open-ended one, would eliminate a lot of potential headaches without drastically compromising the player's freedom to play a role.
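The constrained-choice idea could work by asking the model for a fixed set of reply options, each tagged with an alignment, and letting the player just pick one. A sketch, with a canned string standing in for the model's reply (there is no real API call here; `option_request` just builds the instruction the game would send):

```python
# Ask the LLM for N tagged dialogue options as JSON, then validate
# them before showing anything to the player.
import json

def option_request(scene: str, n: int = 3) -> str:
    return (
        f"Scene: {scene}\n"
        f"Produce exactly {n} short player dialogue options as a JSON list "
        'of objects with keys "text" and "alignment" '
        '(one of "good", "evil", "neutral"). Output JSON only.'
    )

def parse_options(llm_reply: str) -> list[dict]:
    options = json.loads(llm_reply)
    allowed = {"good", "evil", "neutral"}
    # Drop anything malformed or with an unknown alignment tag.
    return [o for o in options if o.get("alignment") in allowed]

canned = ('[{"text": "I will help you.", "alignment": "good"},'
          ' {"text": "Pay me first.", "alignment": "neutral"}]')
print(parse_options(canned))
```

Since the player only ever selects from validated options, the exploit surface of free text entry disappears, and the same flow works on a console controller.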
 
There is potential in AI for that. Every single game is limited to what the developers create for you. They craft missions, side missions, procedurally generated content, etc.; you are in their universe. The game itself won't generate any more content than that.

With AI, however, either the player or the AI can even dictate new missions or situations. E.g., the guy in the video I shared persuaded the AI into creating a mission (which the game didn't activate as an actual mission, since that's impossible) where he is a secret agent working for the Imperials while also being a secret agent working for the Stormcloaks: a counter-spy double agent.

Both AI-run factions gave him a code name, and the AI acts according to the rules of that "mission", which is reflected in the dialogue.
 