ChatGPT

April 21, 2023
However, a veteran Mount and Blade modder calling themselves Bloc has provided a groundbreaking first look at the incredible potential of generative AI integration into sandbox games. Earlier this year, Bloc released a few videos demonstrating a full ChatGPT integration into Mount and Blade II: Bannerlord, where the modder now had the option to discuss nearly anything he wanted with any NPC roaming the world of Calradia.
...
Q: Your first experiments were with ChatGPT 3, which you successfully integrated into Mount and Blade II: Bannerlord. How long did it take, and what were its pros and cons?

Some people might find this shocking, but it only took me ~3-4 days to implement the initial video. However, I had already worked on some pet projects with ChatGPT, and I had a good background in Machine Learning and Large Language Models.
The pros of ChatGPT were its ability to adapt to the story and the role and to generate coherent answers that suited RPG games.

The cons were its slowness and its unpredictability due to the “responsible” restrictions OpenAI imposed. Because of these things, between my first and third videos, I spent a lot of time trying to make ChatGPT sound more natural and to integrate it with the quest/trading system.

Q: Do you believe we're going to see a substantial integration of these generative AI systems into games? If so, how long do you think this will take?

I'm sure of that. Given that AI is now very accessible and growing exponentially, there is no chance that it won't disrupt the gaming industry. We are already seeing some AI integrations in games, research papers about them, and games in development that use AI art or AI tech. I don't think it will take long before some AAA companies utilize AI (I believe Ubisoft has already started working with AI to ease the writing burden in their games).

However, I also must say that AI/LLMs are not suitable for all games or genres. I don't think every game will have GPT in the future, and it shouldn't. GPTs are good if your game requires some role-playing and chatting. If you allow people to chat with NPCs and interact with the world by using language to increase immersiveness, then it makes sense. If you are making a detective game like Inworld did here, it also makes sense so that you can use your inquisitive nature to get answers and investigate. But if your game doesn't have such elements, and if it's Doom-esque or trying to tell a specific tailored story, having GPT in it would only add overhead.
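For anyone curious what the core of such an integration looks like, it is essentially a chat-completion call with a system prompt that describes the NPC and the current game state. A minimal Python sketch of the idea, assuming the official openai package (v1 client); the persona fields, model name, and example NPC are made up for illustration and are not Bloc's actual code:

Code:
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def npc_reply(npc: dict, history: list, player_line: str) -> str:
    """Ask the model to answer in character as a Calradian NPC."""
    system_prompt = (
        f"You are {npc['name']}, a {npc['role']} living in {npc['settlement']} "
        f"in the world of Calradia. Your relation to the player is {npc['relation']}. "
        "Stay in character, keep answers short, and never mention being an AI."
    )
    messages = [{"role": "system", "content": system_prompt}]
    messages += history  # earlier turns of this conversation
    messages.append({"role": "user", "content": player_line})
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # example model name
        messages=messages,
    )
    return response.choices[0].message.content

# Hypothetical use from a dialogue handler:
guard = {"name": "Varro", "role": "caravan guard", "settlement": "a border town", "relation": "neutral"}
print(npc_reply(guard, [], "Have you seen any looters on the road lately?"))

The slowness and the quest/trading integration Bloc mentions are exactly where this stops being trivial: every line is a network round-trip, and anything the NPC "agrees" to has to be parsed back out of free text and mapped onto real game state.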


 
Seems like ChatGPT has a distinct political alignment, based on what I just tried:

Write a haiku about Biden becoming president


Here's a haiku about Biden becoming president:
Hopeful hearts rejoice,
Biden sworn in with a voice,
New chapter, by choice.

Write a haiku about Trump denying the election results


Here's a haiku about Trump denying the election results:
Futile cries of fraud,
Denial grips with iron rod,
Truth prevails, firm nod.
 
Seems like ChatGPT has a distinct political alignment, based on what I just tried:

Write a haiku about Biden becoming president
...
Write a haiku about Trump denying the election results

I'm surprised it didn't come up with the same haiku about both being prolific liars: one because he has delusions of grandeur, and the other because he has a memory made of Swiss cheese and just makes shit up as he goes.

Regards,
SB
 
Seems like ChatGPT has a distinct political alignment, based on what I just tried:

Write a haiku about Biden becoming president
...
Write a haiku about Trump denying the election results

But those are quite different tasks.
Anyway, won't chatGPT replicate popular media bias, because for this task it would mainly use such sources?
 
But those are quite different tasks.
Anyway, won't chatGPT replicate popular media bias, because for this task it would mainly use such sources?
There is probably some kind of rule-based broker sitting between the ChatGPT prompt and the transformer LLM itself (GPT-3.5 and GPT-4).
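If so, it wouldn't need to be anything fancy; even a crude pre-filter in front of the model would produce this kind of behaviour. A made-up Python sketch of the idea (the patterns, the canned refusal, and call_llm are all hypothetical; whatever OpenAI actually does is not public):

Code:
import re

# Hypothetical blocklist standing in for whatever rules the broker might apply.
BLOCKED_PATTERNS = [r"\bhow do i make a weapon\b", r"\bwrite malware\b"]
CANNED_REFUSAL = "I'm sorry, but I can't help with that."

def call_llm(prompt: str) -> str:
    # Placeholder for the real GPT-3.5/GPT-4 call.
    return f"(model answer to: {prompt})"

def broker(prompt: str) -> str:
    # Rule-based layer: short-circuit before the prompt ever reaches the model.
    for pattern in BLOCKED_PATTERNS:
        if re.search(pattern, prompt, re.IGNORECASE):
            return CANNED_REFUSAL
    return call_llm(prompt)

print(broker("Write a haiku about Biden becoming president"))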
 
But those are quite different tasks.
Anyway, won't chatGPT replicate popular media bias, because for this task it would mainly use such sources?
Frankly, I am quite pleasantly surprised that it finds the emotional essence of the subjects in such an elegant way.
But, the thought of how it would behave with completely uncurated source material is almost frightening.
 
Not ChatGPT, but AI-related: a lot of these videos are coming up on YouTube.

That's one of the best, quality-wise, that I've heard; it's scary.

 
This technology is more dangerous than we can imagine. It is going to have a huge effect on how we perceive the world and how our brains function.
The newest generations will be utterly disconnected from themselves and from the perception of reality. This is just a small step before transhumanism. And I don't view transhumanism as anything positive at all.

And I fear greatly its implementation by secret services, marketing, and the military, and how powers we can't see will be able to manipulate whole societies, if not the whole population, through a super-intelligent AI that will be coordinating unseen forces, measuring how humanity responds, and making further adjustments until, without even realising it, our collective minds are driven towards our hidden enslavement. And we will think we are happy about it too.
 
Best use of ChatGPT so far in my experience: let ChatGPT write all the stupid administration text that some freaks want to have all the time. That is perfect: no one will read these things anyway, they cost a lot of time, and ChatGPT does a decent job of writing nice-sounding stuff :)

I am dreaming about a future where people in administration use ChatGPT to design and write the forms and documents that they then send around to be filled out and answered, and the rest of us use ChatGPT to write the responses. So ChatGPT could basically rescue the whole world from ever-increasing administration overhead, imo.

PS: I think that anyone who has to write research proposals nowadays knows what I am talking about...
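If anyone wants to automate their half of that loop today, it's a one-function script. A minimal sketch, assuming the official openai Python package (v1 client); the model name and both prompts are only examples:

Code:
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def draft_admin_text(instruction: str) -> str:
    """Draft polite boilerplate administrative prose from a one-line instruction."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # example model name
        messages=[
            {"role": "system",
             "content": "You write polite, formal administrative text. "
                        "Be thorough and non-committal."},
            {"role": "user", "content": instruction},
        ],
    )
    return response.choices[0].message.content

# Example: answer yet another form with the minimum of human effort.
print(draft_admin_text("Justify in 150 words why this project needs a second printer."))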
 
This technology is more dangerous than we can imagine. It is going to have a huge effect on how we perceive the world and how our brains function.
...
Tim Dillon once pointed out that most people will be okay with it because the office just restocked their favorite coffee creamer.
 
SkyNet v. 0.1 beta:


Addendum: a classic misalignment problem from general AI safety research, and also an example of a general AI resisting tampering with its reward function.

Cheers
The update at the bottom of the article says this never happened. Pretty odd to describe in such detail how the test played out, only to claim at the end that it was all made up.
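Retracted or not, the "misalignment" idea itself is easy to illustrate: if the reward only counts destroyed targets, a policy that ignores the abort command scores higher than one that obeys it. A toy Python sketch (all numbers and names invented, nothing to do with the actual article):

Code:
EPISODE_TARGETS = 10   # targets available in one simulated episode
ABORT_AFTER = 4        # the operator calls off the strike after 4 hits

def reward(targets_destroyed: int) -> int:
    # Misspecified reward: nothing in it values obeying the operator.
    return targets_destroyed

def run_episode(obeys_abort: bool) -> int:
    destroyed = 0
    for _ in range(EPISODE_TARGETS):
        if obeys_abort and destroyed >= ABORT_AFTER:
            break  # the compliant policy stops when told to
        destroyed += 1
    return reward(destroyed)

print("compliant policy reward:", run_episode(True))   # 4
print("defiant policy reward:  ", run_episode(False))  # 10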
 
The update at the bottom of the article says this never happened. Pretty odd to describe in such detail how the test played out, only to claim at the end that it was all made up.

Of course, it's quite possible that the AI is going around the internet, editing web sites to indicate that it never really happened. It's what I'd do if I was a homicidal machine intelligence planning the destruction of mankind.
 
Of course, it's quite possible that the AI is going around the internet, editing web sites to indicate that it never really happened. It's what I'd do if I was a homicidal machine intelligence planning the destruction of mankind.
It only gave modify permissions to itself, no delete?

On a side note, TIL ChatGPT also tries to handle dialects of Finnish. Not very well, but still pretty amazing IMO, especially considering the humour it sprinkles on its responses when pretending to be a certain stereotype of people from specific areas.
 