Using AI to improve games

Recently I read the news about this Resident Evil HD Remaster mod, which uses AI to enhance the game's 2,500 backgrounds and is already a must-have for fans.


This new mod, essential for all fans of the series, uses AI to improve the game's 2,500 backgrounds and gives the game outrageously beautiful visuals.

The modder, nicknamed 'Arcturium', is the author of a high-definition texture pack for the remastered version of the classic horror adventure created by Shinji Mikami.

The project is called 'Rescale' and it greatly improves the Resident Evil remaster released in 2015, reviving the Resident Evil experience with an essential mod.

In detail, 'Rescale' is a background-restoration mod that aims to bring the fidelity of the game's original backgrounds to the current PC version.

He describes version 1.0 of the project as "a spiritual successor to Shiryu's REUpscale project", and notes that it may be improved further over time.

The modder says that the 2,500 backgrounds have been replaced with a restored and refurbished version that should be of noticeably higher quality.

On the mod page you can see many more details of the project, real-time comparison images and other comments from its creator, including plans for future versions of the mod.

So, are you returning to Resident Evil? And which other games have similar mods that use AI to improve them, whether through changes to behaviour, graphics, physics, etc.?
 
This comment got my attention

"Some rooms in the PC release were made fully 3D for some reason. Not sure why but these rooms have not been altered in any way. I don't know if it's possible to upscale the textures in these rooms, but I didn't find them when extracting the game files for this project."

Is this the same on consoles? I wonder which rooms these are.

Edit:
It appears that at some point the game was going to be fully 3D on GameCube? Someone says the reason they went with pre-rendered backgrounds in the end was to reduce the size to fit the GameCube's mini-disc. Unsure if true, but if so, I wonder how close it would have looked to the final version.

Maybe it's those 3D environments they reused for the PC version
 
Sorry, wasn't sure where to post this, but I just watched this video on Meta 3D Gen (text to 3D modelling and texturing) from July 2nd. OK, it takes over a minute to render these scenes, but honestly, what tech are we going to have at our disposal in years to come? Pretty mind-blowing to me. Surely it's just a few steps away from being able to type in (or speak) 'hey, create me a golf game somewhere in space' and have a fully playable game ready to go. Fascinating.

 


Papers like these are popping up a lot at conferences, and I'm excited. I've wanted this ever since I saw it as a kid in Star Trek: Voyager; there were some episodes towards the end showing them creating a holodeck program, and this was basically it, with voice control. "Give me a mid-18th-century Irish villager, taller, more handsome... make him a bit scruffy."

This is way more exciting to me than large language models, either as chatbots or attached to NPCs. Those seem like tech demos without practical use, whereas creating assets out of nothing is super cool. Even before the assets get good, it'd be great to see this running in one of those "user-generated content" games kids today enjoy so much. Being able to just ask for something is exactly the sort of interface missing in them.
 
There's been a mod for Resi Evil (and loads of other old games that use pre-rendered backgrounds) for a few years now; the Resi mod is called 'Seamless HD Project', IIRC, and is pretty much final.
 
Creating a full 3D scene is a complex, time-consuming task. Artists must support their hero asset with plenty of background objects to create a rich scene, then find an appropriate background and an environment map to light it. Due to time constraints, they’ve often had to make a trade-off between rapid results and creative exploration.

With the support of AI agents, creative teams can achieve both goals: quickly bring concepts to life and continue iterating to achieve the right look.

In the Real-Time Live demo, the researchers used an AI agent to instruct an NVIDIA Edify-powered model to generate dozens of 3D assets, including cacti, rocks and the skull of a bull — with previews produced in just seconds.
 
Seems kinda similar to those Stable Diffusion Unreal Engine plugin thingies, but in a much more polished and integrated manner.

Then yeah, it boosts prototyping speed and design drafts considerably.

Then polish them again with yet another AI, then just a bit of manual touch-up.

At least according to some people who have shared their experience using this kind of tool in Unreal Engine.
 
Some people are doing incredible things with mods, like adding AI to Skyrim VR (VR makes it even better because of the gestures). In this case, the mod Mantella lets you play the game with AI-generated dialogue and have natural conversations with the AI. The result is otherworldly. I admit it, I cried -out of laughter-. And I was also very impressed.

Some dialogues are super smart -like when he asks the AI to recite a well-known ode in medieval-style language-, others are surprisingly intelligent -like when the guy wonders what kind of life could exist beyond the stars, and the AI tells him: honey, there are enough strange things in this world without searching for them beyond the stars- :ROFLMAO:, and some other dialogue is downright hilarious.

Skyrim just becomes an even better game with this.

In this case the guy plays the game as the father of Rufus -his son with Lydia-; Lydia is his wife and calls him "honey", and he is a bit of a rustic guy who speaks with a kind of redneck accent. This video in English would be truly golden. :D Maybe you can catch a glimpse with subtitles on -they are quite good.


P.S. The video is in Spanish; not my native language, but I live in Spain and I understand everything.

Some very fun dialogue lines all over the place. There is a point where the guy tells Lydia to say "Mesopotamian chauvinist!" every time he says something sexist to her.

And when he says something sexist, the AI responds amazingly well, replying "Mesopotamian chauvinist!". When his wife Lydia replies like that, accurately recognizing that he said something sexist, the guy, totally amazed by the AI, says:

Bravo! Bravo you! Bravo your family! Bravo your lineage!

Blessed be the day I married you. And blessed be the day I die by your side. Ole!!!!!!!!
:ROFLMAO::ROFLMAO::ROFLMAO::ROFLMAO::ROFLMAO::ROFLMAO::ROFLMAO:

The AI is even capable of creating rhymes. At some point the guy says: "Lydia, where is Calaverin?" (Calaverin -little skull- is a nickname for his son Rufus), and Lydia replies via AI: "In my willy, which is teeny" -sorry, that's the best translation I can manage. He originally said: "¿Dónde está Calaverín?", to which Lydia replies: "En mi pito, que es pequeñín". :ROFLMAO:

Too many hilarious situations -Skyrim without AI has quite a few, but this is a whole different level!-

This is the mod.

 
It seems like it could really change open world games. I'm sure it is thoroughly being explored.

However, this tech could also give single-player games a convenient excuse to require a monthly fee, be permanently connected, and take user data collection to a new level.
 
To me, 90% of this stuff is making games worse, not better. Upscaled backgrounds or textures in older games are probably a solid use case, but even then you need a lot of manual labour to fix some of the stuff. Images which originally had some text or logos are turned into an absolute mess by AI. This was the case with GTA and is also true for RESCALE from the OP's post. You can absolutely use AI to update some graphics, but there has to be a skilled professional in the loop who catches and fixes problems.

As for 3D: most of the stuff shown to date is, IMO, useless garbage. AI can't generate content that looks consistent, and the quality of the meshes is abysmal. And yeah, you can render any dense crap with tech like Nanite (not that you should, but you can), but good luck finding someone who's willing to tweak anything on a mesh generated by AI, whether the geo itself or the textures/materials. There are valid use cases for AI in the 3D space, but, as always, tech bros with zero experience in content pipelines build stuff they think has value. And equally clueless people on the sidelines eat it up.

Improve the quality of photogrammetric output with your model. Or, better yet, simplify some tedious task like UV unwrapping. It would save so much time and money if UVs were unwrapped without the need for manual seams or semi-automatic straightening of edges. It would be even better to have a model that figures out which parts of a model need trim sheets and which need unique textures. But to build something like that you have to know something about content creation, and that's much more work than `import openai`. ;) Or some smarter remesher that understands how you want quads to flow; that would be nice too.
 
It seems like it could really change open world games. I'm sure it is thoroughly being explored.

However, this tech could also give single-player games a convenient excuse to require a monthly fee, be permanently connected, and take user data collection to a new level.

A locally run LLM could be a solution to the monthly fee problem, and I think it's not impossible, as the LLM does not need to be that large for a game (e.g. you don't need the LLM to be able to write a quicksort function in Python).
On the other hand, training an LLM is expensive. I don't know how difficult it is to train, for example, an LLM for the Elder Scrolls universe.
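One way to sidestep expensive training is to ground a small local model with lore at prompt time instead of fine-tuning it. A minimal Python sketch of assembling such a prompt; the persona fields, lore snippets, and function name are all hypothetical, and the actual model call is left out:

```python
def build_npc_prompt(persona, lore_snippets, history, player_line, max_turns=6):
    """Assemble a lore-grounded prompt for a small local model.

    Only the most recent turns are kept so the prompt stays within
    the limited context window of a small model.
    """
    recent = history[-max_turns:]
    lines = [
        f"You are {persona['name']}, {persona['role']}. Stay in character.",
        "Relevant lore:",
    ]
    lines += [f"- {fact}" for fact in lore_snippets]
    lines += [f"{speaker}: {text}" for speaker, text in recent]
    lines.append(f"Player: {player_line}")
    lines.append(f"{persona['name']}:")  # the model completes from here
    return "\n".join(lines)

persona = {"name": "Lydia", "role": "housecarl of Whiterun"}
lore = ["Lydia is sworn to protect the Dragonborn."]
prompt = build_npc_prompt(persona, lore, [("Player", "Hello")], "Where is Rufus?")
```

The game would then feed `prompt` to whatever small local model it ships with; the lore retrieval step (picking which snippets are relevant) is the part that would need real engineering.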
 
I noticed on Steam that some newer games have an "AI Generated Content Disclosure" section. I'm not sure when that popped up. I don't think I've played anything where I noticed content that looked fishy, but there must be some examples.
 
A locally run LLM could be a solution to the monthly fee problem, and I think it's not impossible, as the LLM does not need to be that large for a game (e.g. you don't need the LLM to be able to write a quicksort function in Python).
On the other hand, training an LLM is expensive. I don't know how difficult it is to train, for example, an LLM for the Elder Scrolls universe.
Something like ChatGPT is so large it would only fit into the combined memory of linked GPUs.

So think A100s. It can't fit into a single GPU, which is why we don't see local use of ChatGPT.

I'm not sure if cutting down an LLM would be sufficient, even if you provide a data source, to do some of the things we are discussing, and you'd have to cut it all the way down to a micro LLM, because you need to fit everything else in the game in VRAM as well.
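The sizes being thrown around here follow from simple arithmetic: VRAM needed just to hold the weights is roughly parameter count times bytes per weight. A quick sketch (the model sizes are public estimates, not confirmed figures):

```python
def inference_vram_gb(n_params, bytes_per_weight=2):
    """Rough VRAM needed to hold the weights (fp16 = 2 bytes per weight).

    Ignores activations and the KV cache, which add more on top.
    """
    return n_params * bytes_per_weight / 1e9

# A 175B-parameter model (GPT-3-class) in fp16:
# 175e9 params * 2 bytes ~= 350 GB -- far beyond any single consumer GPU.
big = inference_vram_gb(175e9)       # 350.0

# A 7B model quantized to 4 bits (0.5 bytes per weight):
small = inference_vram_gb(7e9, 0.5)  # 3.5 -- could sit alongside game assets
```

This is why the "cut it down to a micro LLM" point matters: quantization and smaller parameter counts shrink the footprint by two orders of magnitude.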
 
Surely a micro LLM is just an LM?!

Wouldn't that be possible on a predetermined subset of language options, limiting the variety of responses but drastically reducing size?
The idea of an LLM is to comprehend language in general, not just to understand a predetermined set of commands we issue to it.

We can train an LLM and provide it memory, in this case a source file, and we can ask it to do things with that source file like any other person would. That's what they are doing: they understand the context of language well enough to answer your questions and even interpret them.

Something like GPT-1 can be really small: maybe 2 GB of VRAM for inference.

ChatGPT 3.5 is about 350 GB of VRAM for inference, IIRC.

So the more it is taught, the more comprehension ability it can provide.

With the 4o versions of ChatGPT, and Claude and Gemini, you can provide a large number of source documents and ask PhD-level problems, and after a few minutes it can come up with an answer.

That’s where they are at with these LLMs today, we are sort of on the tip of a major revolution.
 
Character dialogue as a service is a more likely scenario than embedding models on client machines (for now at least), but, again, I don't think it's something worth investing in. What's the point of paying for an experience that wasn't designed and curated, and instead relegates narrative responsibility (wholesale or partially) to a dumb chatbot? Why would anyone want this?
 
To me, 90% of this stuff is making games worse, not better. Upscaled backgrounds or textures in older games are probably a solid use case, but even then you need a lot of manual labour to fix some of the stuff. Images which originally had some text or logos are turned into an absolute mess by AI. This was the case with GTA and is also true for RESCALE from the OP's post. You can absolutely use AI to update some graphics, but there has to be a skilled professional in the loop who catches and fixes problems.

As for 3D: most of the stuff shown to date is, IMO, useless garbage. AI can't generate content that looks consistent, and the quality of the meshes is abysmal. And yeah, you can render any dense crap with tech like Nanite (not that you should, but you can), but good luck finding someone who's willing to tweak anything on a mesh generated by AI, whether the geo itself or the textures/materials. There are valid use cases for AI in the 3D space, but, as always, tech bros with zero experience in content pipelines build stuff they think has value. And equally clueless people on the sidelines eat it up.

Improve the quality of photogrammetric output with your model. Or, better yet, simplify some tedious task like UV unwrapping. It would save so much time and money if UVs were unwrapped without the need for manual seams or semi-automatic straightening of edges. It would be even better to have a model that figures out which parts of a model need trim sheets and which need unique textures. But to build something like that you have to know something about content creation, and that's much more work than `import openai`. ;) Or some smarter remesher that understands how you want quads to flow; that would be nice too.
Having heard some of those opinions, I can quite understand the concerns. Plus, those who like a more streamlined experience wouldn't like that. A carefully crafted experience is what most people want, myself included. That being said...

In a game like Skyrim, where roleplaying and imagination play a huge role -I used to create stories around my characters with my best friend at the time, a British person who also liked that-, and where you can build a house, marry, have a child, etc., and NPCs go about their daily routines and errands, it might work.

Just have a few real-life voice actors record the default lines, like vanilla Skyrim, and use those to seed AI-enhanced dialogue related to the lore as the game progresses, to keep conversations going. (Or keep a counter of how many times a sentence has been said, to add more variety at some point.)
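The counter idea could be sketched very simply: track how often each stock line has been played, and fall back to an AI-generated variant once it grows stale. A hypothetical Python sketch; the threshold and the `generate_variant` hook (which would wrap a local model) are assumptions, not any existing mod's API:

```python
from collections import Counter

class DialoguePicker:
    """Serve a recorded stock line until it has been heard too often,
    then hand off to an AI variant generator for variety."""

    def __init__(self, generate_variant, stale_after=3):
        self.counts = Counter()
        self.generate_variant = generate_variant  # e.g. a local LLM call
        self.stale_after = stale_after

    def pick(self, stock_line, context):
        self.counts[stock_line] += 1
        if self.counts[stock_line] <= self.stale_after:
            return stock_line  # play the recorded voice line as-is
        return self.generate_variant(stock_line, context)

picker = DialoguePicker(lambda line, ctx: f"[AI variant of: {line}]")
for _ in range(3):
    said = picker.pick("I used to be an adventurer like you.", {})
# The fourth time, the line is stale and an AI variant comes back instead.
said = picker.pick("I used to be an adventurer like you.", {})
```

In practice the generated variant would also need to be voiced (text-to-speech, as Mantella does), which is where the real complexity lives.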

Many mods have been built around creating more interesting followers and NPCs, without the AI. Sophie -iirc- was one of them, among others.

https://www.nexusmods.com/skyrimspecialedition/mods/2180

Edit: I remembered wrong, it was Sofia, not Sophie.
 