Could future games integrate cloud-based AI such as OpenAI?

Rather than having a static dialogue tree for each character in, let's say, Elder Scrolls 6, you could have dynamic conversations, and NPCs' daily routines would be based on changing events within the game world itself, unique to each running instance of the game.

Something like this.
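To make that concrete, here's a rough sketch of what the loop could look like: fold live world state into a prompt and hand it to a cloud model. Everything here (the NPC fields, the stubbed model call) is hypothetical, purely to illustrate the idea.

```python
# Hypothetical sketch: build an NPC dialogue prompt from live world state,
# then send it to a cloud model. The model call is stubbed out here; a real
# game would hit an actual inference API (e.g. a chat-completions endpoint).

def build_npc_prompt(npc, world_events):
    """Fold the NPC's persona and recent world events into one prompt."""
    events = "; ".join(world_events[-3:])  # only the most recent events
    return (
        f"You are {npc['name']}, a {npc['role']} in a fantasy town. "
        f"Recent events: {events}. "
        f"Reply in character, in one or two sentences."
    )

def cloud_model_reply(prompt):
    # Placeholder for a real cloud inference call.
    return f"[model reply to: {prompt[:40]}...]"

npc = {"name": "Ralof", "role": "blacksmith"}  # made-up example NPC
world_events = ["dragon sighted near the mill", "harvest festival began"]

prompt = build_npc_prompt(npc, world_events)
reply = cloud_model_reply(prompt)
```

Because the prompt is rebuilt from the current world state every time, two players (or two playthroughs) with different event histories would get different conversations from the same NPC.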

 
I apologise for the redundancy in the thread title.
Maybe a little better now? If you think of a more refined title, just let us know what you'd like it to be.
 
Relying on the cloud was something Microsoft wanted to do more of with the Xbox One, but the downside of a game relying on servers outside of traditional multiplayer or co-op experiences is the extra development for that second (server) platform, and the ongoing resourcing and cost of support for as long as anybody might want to play the game.

You need to account for the worst possible connection anybody might have, and that might include people playing on the go or from very remote places, where they accept that multiplayer and co-op are not viable but may not anticipate single-player games presenting issues as well.
 
AI assisted content creation is almost a given. It does not have to be done in real time though. For example, it could generate thousands of different conversations and randomly assign them to millions of players. Most people won't see the same conversation twice.
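That offline approach could look something like this rough sketch: pre-generate a pool of line variants (templated stand-ins here for real AI output) and assign one per player with a stable hash, so no runtime server is needed and the assignment survives across sessions. All names are made up for illustration.

```python
# Illustrative sketch of the offline approach: generate a pool of dialogue
# variants ahead of time, then deterministically pick one per player so most
# players never see the same line. No server is needed at play time.
import hashlib

def generate_variants(base_line, n):
    # Stand-in for an offline AI pass that writes n rewordings of base_line.
    return [f"{base_line} (variant {i})" for i in range(n)]

def variant_for_player(player_id, variants):
    # Hash the player id so the same player always gets the same line,
    # with no network round-trip required during play.
    digest = hashlib.sha256(player_id.encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

pool = generate_variants("Fine weather for the festival.", 1000)
line = variant_for_player("player-12345", pool)
```

With a pool of thousands of variants per line and millions of players, any two players sharing the exact same conversation becomes rare, which is the effect described above.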
 
I agree; what is less clear is the longevity of the need for server-side processing from the point of release.

I think this is part of the challenge vis-à-vis longevity: what may be tough to do today on local hardware may be trivial in a few years, but for the publisher, how long are you going to keep those servers running? If more and more games demand server-side resources, the pinch point in terms of cost will be server capacity. This could be mitigated if devs patch games and shift what was previously only achievable on the server to local hardware, but I think the need to support games on one platform or another (servers, or patching server-side code into the local client) is what makes it less desirable.

With some exceptions, most single-player games get released, supported with bug fixes for a while, and then, if there is no DLC, quickly left behind as the developers move on to the next project.
 
FS2020 is one good example of an online/offline approach.
Yeah, and this is a tentpole title that I think most would agree both requires and benefits from that server-side aspect.

In terms of dynamic AI-driven conversations, I'm not quite sure what the complexity is that would preclude this happening in real time - noting that you can download really good AI bots that consume next to no CPU at all, and given you're in a conversation with NPCs, you're probably not also fighting three dragons and running around, so the game probably has some excess resources available.

I do think server side will play a bigger part in games in time, but there remains the issue of how long games remain playable when they rely on perpetual server-side support and runtime, both of which cost money. If Oblivion used this tech, would it still be playable now? Would Bethesda, then Microsoft, still be reserving server instances for those who want to play 16 years after release? ¯\_(ツ)_/¯
 
There is a myriad of IP/copyright/legal issues and questions at the moment which may steer implementations to be cloud based.
 
What kind of IP/copyright/legal issues don't apply to software running on servers versus PCs and consoles?
 
AI-related laws are currently very unsettled and will likely take some time to work out; as such, I see (and you currently do see) many companies choosing the lowest-risk approach.

Just some examples:

There could very well be a different interpretation of legal ownership depending on whether the content is generated in the cloud (essentially not involving the user in any way) versus locally, and therefore involving the user's resources to some degree.

There are also open questions with respect to the data sets used to train these AI models, and companies may want to keep those as black-boxed as possible to avoid scrutiny, which would mean not having them client-side. Not to mention the potentially high value of the IP in the model itself.

Also, from a developer perspective, are game developers actually all going to develop and train their own models, or use a third-party middleware provider? If the latter, then the implementation will likely depend on that provider's terms as well.

Just offhand, my impression is that inertia currently seems to favor cloud implementations for commercial/proprietary implementations of analogous products.
 
@arandomguy I didn't follow any of that. If companies are using stolen code or IP, it will not make any difference where that code is used; there will still be copyright or intellectual property violations. They will be a target for legal action in regions where they are established and those laws exist. This is why TPB operates for particular territories, and everywhere else, regional mirrors pop up and get closed down like a game of whack-a-mole.

But using your example, I cannot see Microsoft (Bethesda) doing this.
 
IMHO that's actually one of the reasons why game developers aren't going to use AI to generate content in real time yet. It's not about technology but legal reasons.
In the short term I think we'll see AI-generated content being used but still handled by actual people. For example, AI can help write thousands of dialogues, but you still want people to look at them before releasing them in your game. The same goes for images and 3D models. It's still a win because filtering content is easier (and probably cheaper) than creating content.

One thing we are already seeing in games is computer-generated voices. It's not very controversial and it's approaching a "good enough" level.
 
I will not pretend to know the ins and outs of the legal arguments around AI, where claims that content produced by machine learning is 'original' are countered by the fact that the algorithms were trained by analysing content that itself carries protections, alongside arguments about 'derivative works'. But in the majority of territories (Europe, the Americas, many parts of Asia) there already exists legislation that makes companies liable for 'making available' content; the where and how are immaterial.

It's definitely an interesting discussion, and the law is woefully behind the times. Given the way copyright and other work-protective law has developed, I'm not convinced legislation will be sensible; it will just create more work for the lawyers - funny how that always seems to be the case.
 
I want cloud augmented physics and particles more than anything else right now.

Just give me a game where everything has high fidelity physics. Realistic clothing, hair, and kinematics, as well as ultra realistic fluid simulations all working together.. as well as destruction amped up to ridiculous degrees.

For me, it's that micro detail you get with particle systems that really elevates a next-generation visual showcase from the previous generation.

AI improvements for the future like speech, pathing, and decision making are going to be incredibly interesting going forward.. but kinda scary. Imagine a game like Detroit where you're put in these situations and have to make all these choices, and the game has already built a profile of you based on your real-world actions from real-world databases.. such as products you've purchased, the food you buy, what your job is... and then AI characters alter their behaviors and speech based on all those factors, making the game much more personal and intimate..

Kinda scary.. but intriguing.. lol
 
Latency will probably be the biggest challenge at the moment. If some of the visuals are rendered locally but some of the physics are calculated on a server, then at 60fps that's a tall order within 16ms, i.e. the maximum time (and bandwidth) to send data to the server, have it calculated, and get it returned in time for rendering.

Speech, by comparison, has natural pauses, so it's something you can get away with.
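The frame-budget arithmetic is easy to sketch. The round-trip figures below are illustrative assumptions, not measurements, but even an optimistic round trip blows through a single 60fps frame:

```python
# Back-of-the-envelope check of the frame budget argument.
# Network and server figures are illustrative assumptions, not measurements.
frame_budget_ms = 1000 / 60    # ~16.7 ms per frame at 60 fps

network_rtt_ms = 20            # optimistic round trip to a nearby server
server_compute_ms = 5          # time to run the physics step remotely

total_ms = network_rtt_ms + server_compute_ms
fits_in_one_frame = total_ms <= frame_budget_ms  # does not fit
```

Even ignoring server compute entirely, a 20ms round trip alone already exceeds the 16.7ms frame, which is why same-frame server-side physics is such a tall order.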
 
You might not have to calculate them every frame. They could also run the simulation ahead of rendering in some cases.

I'm sure there's lots of little things they could do to mitigate the latency issue... but yes that's a challenge.
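One such mitigation, sketched roughly: have the server send physics snapshots at a low rate and let the client interpolate between the last two each rendered frame, so rendering never waits on the network. The numbers are illustrative.

```python
# Sketch of client-side interpolation between sparse server snapshots:
# the server sends positions at, say, 10 Hz, and the client blends between
# the last two snapshots every rendered frame instead of waiting on the net.

def interpolate(prev, curr, alpha):
    """Blend two positional snapshots; alpha in [0, 1] is how far render
    time has progressed between the snapshots' timestamps."""
    return tuple(p + (c - p) * alpha for p, c in zip(prev, curr))

prev_snapshot = (0.0, 0.0, 0.0)   # position received one snapshot ago
curr_snapshot = (1.0, 2.0, 0.0)   # most recent position from the server
render_pos = interpolate(prev_snapshot, curr_snapshot, 0.5)  # halfway between
```

The trade-off is that the client always renders slightly in the past (by up to one snapshot interval), which is fine for debris and cloth but, as noted below, gets ugly when packets go missing.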
 
Using Minority Report-style precognition by three psychics in a pool? :runaway: That is the sign of good net code: you lose a few packets here and there, and objects continue travelling along their pre-existing and otherwise pre-determined path.

But if you want good clothes and hair physics on a person moving, rolling, diving, jumping, fighting, shooting, etc., then that would be a challenge to pre-calculate in advance and still be super accurate. Losing packets/info will probably result in amazing bad-hair-day moments and characters looking like they have ferrets under their clothes!

It's worth doing just for comedy value!
 