Server-based game augmentations. The transition to cloud. Really possible?

There are internal projects at MS that use EC2 instead of Azure because of the cost.
And if a publisher rolls their own, they can use it on both platforms.

The cost point is probably valid in some context but for this discussion I think we just need to take Microsoft's word at face value and assume "it will be paid for." The bigger question to me is whether they really believe that transistors 'in the cloud' can have a meaningful impact on console gaming.

WRT developers "rolling their own," I assume there needs to be some level of hardware- or OS-level support for the type of enhancements that MS is touting though? Otherwise the 360 and PS3 could equally have benefitted from server-side compute resources?

I think the bigger issue is going to be convincing anyone doing a cross-platform title to use it at all.
Any use of a remote resource like this is going to take significant planning, plus implementations in multiple environments.
Given that to do the same on the competitor's platform I have to host the servers myself, why would I go to the effort?
There is also cost: Live as it is right now certainly won't pay for the cloud if it is broadly used, so the cost probably falls on the publishers, which is another reason not to use it.

Are server-based AI routines an easy target for multiplatform titles, though? I think AI could benefit greatly from something like this, as the NPCs could "learn" over time based on thousands of people playing the game, rather than just the code that's baked on the disc. I.e. game 1 uses local code only, game 2 updates with server code if available...
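
To make that "server code if available" fallback concrete, here is a minimal C++ sketch; the AiTuning fields, the FetchCloudTuning call and the endpoint name are all hypothetical, invented for illustration, not anything MS has announced:

```cpp
#include <optional>
#include <string>

// Hypothetical tuning data that a service could refine from aggregate
// gameplay stats; the defaults are the values "baked on the disc".
struct AiTuning {
    float aggression = 0.5f;
    float flankBias  = 0.2f;
};

// Stub for a hypothetical service call. A real implementation would do an
// authenticated fetch; this one just reports "offline".
std::optional<AiTuning> FetchCloudTuning(const std::string& endpoint) {
    (void)endpoint;
    return std::nullopt;
}

// "Game 1 uses local code only, game 2 updates with server code if
// available": prefer the server-refined values, else fall back to disc.
AiTuning LoadAiTuning() {
    if (auto cloud = FetchCloudTuning("ai/tuning/v2"))  // hypothetical endpoint
        return *cloud;
    return AiTuning{};  // offline or failed: disc defaults still work
}
```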
 
One thing to note is that developers often worry about making their AI too good, so that may not be as compelling.

Using inter-instance cloud functionality could also get complicated: it can introduce scalability problems and transient failures that wreak havoc on the shared functions.
Even a well-deployed cloud service can't whisk that away.
A dev might face a second round of game validation for the cloud-side server on top of the client code.
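
One standard way to contain those transient failures is to give every remote call a hard deadline and a local fallback, so a cloud hiccup degrades quality instead of stalling the game. A minimal C++ sketch; nothing here is a real platform API:

```cpp
#include <chrono>
#include <functional>
#include <future>
#include <memory>
#include <optional>
#include <thread>

// Run a remote call with a deadline; on timeout or failure, return a
// locally computed result instead of blocking the frame.
template <typename T>
T CallWithFallback(std::function<std::optional<T>()> remote,
                   std::function<T()> localFallback,
                   std::chrono::milliseconds deadline) {
    auto prom = std::make_shared<std::promise<std::optional<T>>>();
    auto fut = prom->get_future();
    // Detached so a slow or dead server can't hold the caller past the
    // deadline; the shared_ptr keeps the promise alive until it's set.
    std::thread([prom, remote] { prom->set_value(remote()); }).detach();

    if (fut.wait_for(deadline) == std::future_status::ready) {
        if (auto result = fut.get())
            return *result;        // cloud answered in time
    }
    return localFallback();        // timed out or failed: degrade gracefully
}
```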
 
> Are server-based AI routines an easy target for multiplatform titles, though? I think AI could benefit greatly from something like this, as the NPCs could "learn" over time based on thousands of people playing the game [...]

In a multiplatform title you want as little code divergence as you can practically get, and where you have divergence you want the cut between that and the rest of the code to be as clean as possible.

The reason for this is the cost of maintaining and, possibly more importantly, testing multiple code paths.

I'll give an example: Sony ships a library designed for optimizing disk IO in the PS3 SDK. There is no cost to use it, and it's used extensively by first parties, but very few third parties use it. Not because it's not good technology, but because if you have to solve the problem on one console anyway, you'd rather run the same code on both. Then if something does not behave as expected on one platform, you have fewer variables.

Learning AI in the cloud seems like a major investment in technology to me. If I thought that was the way forward anyway, I might invest in it, but only being able to leverage that investment on one of the platforms would severely dampen my enthusiasm.

It's possible that MS has some good, easy-to-implement components that games can leverage, in which case I think you'd see more prolific usage.
 
> The cost point is probably valid in some context but for this discussion I think we just need to take Microsoft's word at face value and assume "it will be paid for." The bigger question to me is whether they really believe that transistors 'in the cloud' can have a meaningful impact on console gaming.
>
> WRT developers "rolling their own," I assume there needs to be some level of hardware- or OS-level support for the type of enhancements that MS is touting though? Otherwise the 360 and PS3 could equally have benefitted from server-side compute resources?

I think that would be the point... because they can use the tech for Windows phones and tablets too.

For consoles and PCs, I can't really see the difference vis-à-vis MMOs. They should already split their client/server logic the way they see fit.
 
> In a multiplatform title you want as little code divergence as you can practically get, and where you have divergence you want the cut between that and the rest of the code to be as clean as possible. [...] It's possible that MS has some good, easy-to-implement components that games can leverage, in which case I think you'd see more prolific usage.

Wouldn't the cloud "power" be exposed to developers like any other hardware feature in the API? That seems like the most logical way to start without letting all manner of code run amok on the Azure servers.

Granted, I suppose it will come with its share of warnings and limitations (similar to how the SmartGlass docs on vgleaks warn against latency-dependent functions), but other than that I would think they would start with an 80/20 rule of things you should/could process there, cap the FLOPS you can expect to tap per frame, and have all of this be a moving target as the software/servers/API mature?
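
If it did work that way, the per-frame cap could be as simple as a budget the runtime enforces. A hypothetical C++ sketch; the work-unit notion and the class are invented for illustration:

```cpp
#include <cstdint>

// Hypothetical per-frame budget for cloud offload, in abstract work units
// (a real cap would presumably be set by the platform, not the title).
class CloudBudget {
public:
    explicit CloudBudget(uint64_t unitsPerFrame) : cap_(unitsPerFrame) {}

    void BeginFrame() { used_ = 0; }

    // Returns false when the request would exceed this frame's cap, so the
    // caller falls back to local compute instead of queuing more work.
    bool TryReserve(uint64_t units) {
        if (used_ + units > cap_) return false;
        used_ += units;
        return true;
    }

private:
    uint64_t cap_;
    uint64_t used_ = 0;
};
```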
 
> Wouldn't the cloud "power" be exposed to developers like any other hardware feature in the API? That seems like the most logical way to start without letting all manner of code run amok on the Azure servers. [...]

And what would the API look like?
I'm sure there'll be something that deals with auth, establishing connections, making requests, etc.
At some point, though, you have to run code and move data to the cloud. Generally with "cloud" solutions that's done when you create the service; you then just make requests of that "installed" service.
You still need to write code that runs on the servers, in the server OS.
I guess you could have something where you pass code to the cloud to be executed, but that seems a bit dumb when you can just keep it in the "cloud".
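
To put a concrete shape on that "deploy the service once, then just make requests of it" model, here is a hypothetical client-side sketch in C++; none of these names come from any real SDK:

```cpp
#include <cstdint>
#include <string>
#include <vector>

// Hypothetical handle to a service that was deployed to the cloud ahead of
// time: the title never ships code at runtime, it only authenticates,
// connects, and issues requests.
struct CloudSession { /* opaque connection state */ };

// Auth + connection setup, the kind of thing any such API would need.
CloudSession* CloudConnect(const std::string& serviceId,
                           const std::string& authToken);

// Fire a request at the pre-installed service; the reply is an opaque blob
// the game deserializes itself.
std::vector<uint8_t> CloudRequest(CloudSession* session,
                                  const std::string& method,
                                  const std::vector<uint8_t>& payload);

// Usage sketch (all names hypothetical):
//   CloudSession* s = CloudConnect("mygame.worldsim", token);
//   auto reply = CloudRequest(s, "Simulate", snapshot);
```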
 
Injecting code from the game side to be run on the cloud sounds potentially risky, and it wouldn't serve as well as an anti-piracy measure if the disc contained the game's cloud payload.
 
I could see video streaming from the cloud into textures that get shown in-game. Animated billboards in the next Burnout game, etc.

Google Earth style geography streaming could be great for flight sims and such.

Voice recognition is being done in the cloud now; presumably it could be done with acceptable latency for a lot of uses in games.
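
The billboard idea maps naturally onto a "latest frame wins" texture update: drop late frames rather than stall. A minimal C++ sketch with hypothetical stand-ins for engine types:

```cpp
#include <cstdint>
#include <vector>

// Hypothetical stand-ins for engine types.
struct VideoFrame { std::vector<uint8_t> pixels; int width = 0, height = 0; };
struct StreamDecoder {
    // Stub: a real decoder would return the newest fully decoded frame,
    // or nullptr if nothing new has arrived. Non-blocking by design.
    const VideoFrame* TryGetLatestFrame() { return nullptr; }
};
struct Texture { void Upload(const std::vector<uint8_t>&, int, int) {} };  // stub

// Newest decoded frame wins; if the stream stalls, the last frame simply
// stays on screen, so a bad connection degrades to a static billboard
// rather than a frame hitch.
void UpdateBillboard(StreamDecoder& stream, Texture& billboard) {
    if (const VideoFrame* f = stream.TryGetLatestFrame())
        billboard.Upload(f->pixels, f->width, f->height);
}
```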
 
I really don't see the cloud as at all useful for offloading processing. The cost to get cloud servers to match the processing capabilities of the consoles would be huge. Consider that there are going to be only 300 thousand cloud servers, and likely a million consoles sold (and being played) towards the end of this year; it just doesn't make any sense. There isn't enough processing power in the cloud. (Yes, I know the cloud servers will have more than one CPU with more than one core, but it's still going to have issues.)
 
> I could see video streaming from the cloud into textures that get shown in-game. Animated billboards in the next Burnout game, etc.
>
> Google Earth style geography streaming could be great for flight sims and such.
>
> Voice recognition is being done in the cloud now; presumably it could be done with acceptable latency for a lot of uses in games.

Yes, but I think it works best for MMO-type games. For SP titles, the network may not be available.

It should be possible to do speech recognition locally, and then send data to the servers later for improving the model, subject to a privacy agreement. The consoles are a lot more powerful than cellphones.
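
That split, recognize locally and upload training data later, is easy to express. A hypothetical C++ sketch; the class, the local model, and the opt-in flag are all invented for illustration:

```cpp
#include <deque>
#include <string>
#include <vector>

// Hypothetical pipeline: recognition runs on the console (works offline,
// no round-trip latency); samples are queued and shipped later to improve
// the server-side model, gated on the user's privacy opt-in.
class SpeechPipeline {
public:
    std::string Recognize(const std::vector<float>& features) {
        std::string text = RunLocalModel(features);
        if (userOptedIn_)                      // privacy agreement gate
            pending_.push_back({features, text});
        return text;
    }

    // Called at a quiet moment (e.g. overnight) to upload queued samples.
    void FlushToServer() { pending_.clear(); }  // stub: real code would POST

private:
    struct Utterance { std::vector<float> features; std::string text; };

    std::string RunLocalModel(const std::vector<float>&) { return {}; }  // stub
    bool userOptedIn_ = false;
    std::deque<Utterance> pending_;
};
```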
 
> And what would the API look like? [...] At some point, though, you have to run code and move data to the cloud. Generally with "cloud" solutions that's done when you create the service; you then just make requests of that "installed" service. You still need to write code that runs on the servers, in the server OS. [...]

I'm not a developer so I don't know, but how do you currently plug into something like Havok for physics or SpeedTree for... trees? :) Latency considerations aside, is this something you could develop for and have computed in real time on a server farm?
 
There will always be frame latency; you have to compute lighting each frame.

Physics interactions using this will be delayed by however long the results take to come back (and, as he said, 4 frames).

You cannot magic the latency away.

Latency can be significantly reduced depending on how many servers you have distributed around the globe. And there could very likely be lots of physics interactions that don't need to be updated more than every several frames. For instance, using physics within world simulation to drive animations of various objects beyond the immediate surroundings of the player. There may also be memory gains from being able to replace canned animations with physics-based animations in the cloud.

Honestly I would imagine that a lot of the game world can be processed in the cloud, from physics based animations to world simulations to baked lighting perhaps and day/night cycles, weather simulations, global AI, etc. So long as this stuff is happening outside the realm of player interaction, I don't see why not?

The trick would be seamlessly transitioning that processing to the local box once the player's bounding box or whatever changes on the fly.
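
A minimal C++ sketch of that handoff idea: pick the simulation owner by distance to the player, so staleness only ever applies where it can't be seen. Everything here, including the radius, is invented for illustration:

```cpp
#include <cmath>

// Hypothetical split: objects near the player are simulated locally, while
// distant ones can live off occasional (several-frames-old) cloud snapshots,
// since any staleness out there is invisible.
struct Vec3 { float x, y, z; };

enum class SimOwner { Local, Cloud };

constexpr float kHandoffRadius = 200.0f;  // metres, purely illustrative

SimOwner ChooseOwner(const Vec3& player, const Vec3& object) {
    const float dx = player.x - object.x;
    const float dy = player.y - object.y;
    const float dz = player.z - object.z;
    const bool isNear = std::sqrt(dx*dx + dy*dy + dz*dz) < kHandoffRadius;
    return isNear ? SimOwner::Local : SimOwner::Cloud;
}
```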
 
> Latency can be significantly reduced depending on how many servers you have distributed around the globe. And there could very likely be lots of physics interactions that don't need to be updated more than every several frames. [...] The trick would be seamlessly transitioning that processing to the local box once the player's bounding box or whatever changes on the fly.

The estimate of 4 frames of latency was a rather low ball; it's probably going to be much higher than that in certain places (at 30 fps, 4 frames is only about 133 ms round trip). The thing about frame latency is that it's how long it takes for a response to get back to you, so even if you only need one frame of work from it, it will take 4 frames to get back.
 
> The estimate of 4 frames of latency was a rather low ball; it's probably going to be much higher than that in certain places. [...] even if you only need one frame of work from it, it will take 4 frames to get back.

Yeah, that's fine though. We aren't talking about stuff that would need to be updated often at all.
 
Imagine Skyrim. Think about how amazing the game would be if, instead of everything AI-wise being done on an Xbox 360 or even an Xbox One, it was done in the cloud. You could have thousands of CPUs just updating your game world and giving life to all those NPCs.

You could have an actual living, breathing world out there that goes on even when you go to bed. The world can be updated and changed, all on much more powerful kit.
 
Skyrim sold over 10 million units. There won't be 10 billion CPUs out there to service it. That's expecting too much out of the service.
 
One question: do I understand you correctly, ERP, that Azure (the 300,000-server cloud?) is used for lots of other services at the moment as well, not only by MS but by others too?

Associated questions:

- So how many of those 300,000 servers are available and allocated for the Xbox?

- The load of the Xbox services, especially the gaming-related stuff, will be highly dynamic, depending e.g. on the time of day. How do you manage the dynamic load balance? How do you manage load across countries and continents? Lag?

- Do you allocate a fixed amount of resources for Xbox? Then you risk and accept that lots of resources sit idle while other paying users are potentially moaning about not getting compute time.

- How do you cope with peaks? A Halo releases... in the first week 5 million players on board, all trying it out... cloud breaks? Two weeks after, only the core of 100,000 remains: constant load, a constant part of the resources dedicated from now on to Halo, with occasional peaks when DLC releases.

- ...
 
> Skyrim sold over 10 million units. There won't be 10 billion CPUs out there to service it. That's expecting too much out of the service.

Lol, I am thinking more towards the future. Aside from that, you wouldn't need 10 billion CPUs to do what I'm talking about, and of course, if they are on Azure, when all those companies shut down for the day and aren't using CPU power for Office and other things, it can be used for the games.

Intel Xeon chips are much more powerful than the 8-core Jaguar chips found in the Xbox One, and as the years progress the differences will only grow, as will the number of cloud-based CPUs out there.
 
> Latency can be significantly reduced depending on how many servers you have distributed around the globe.

Right. Except in the real world, buying/renting and running servers costs money, so servers are going to be concentrated in as few locations as possible to reduce running costs. Also, they will be distributed according to population density, so if you live out in the boondocks there won't be anything close to you.

> There may also be memory gains from being able to replace canned animations with physics-based animations in the cloud.

Wow, memory gains. How much? Lol. I can't imagine skeletal animation data being very weighty, and there's 8 gigs in these consoles; I think they'll manage. Anyway, is it worth risking screwing up your entire gaming experience if you have a laggy connection or the internet is having conniptions that day? All it takes is one bad router along the way for you to not be able to reach your cloud server. Yes, the internet is supposed to route around trouble spots, but guess what, it doesn't (always) do that. And if the trouble is at the endpoint, there's no possibility of routing around it anyway.

> Honestly I would imagine that a lot of the game world can be processed in the cloud, from physics based animations to world simulations to baked lighting perhaps and day/night cycles, weather simulations, global AI, etc. So long as this stuff is happening outside the realm of player interaction, I don't see why not?

I'd rather ask WHY. What's the gain here? Nothing. All of that can be done ON THE CONSOLE, with no appreciable latency and without being entirely dependent on a constant, uninterrupted, high-performance internet connection. The more processing you move into some nebulous cloud, the more hardware has to be kept running somewhere else, costing loads of money. That has to be recouped somehow, meaning YOU are going to end up paying for it in some manner, most likely via fairly hefty subscriptions. Does that appeal to you? I can't say the thought makes ME particularly thrilled.
 
Siri does need an internet connection to work.

What about Xbox voice recognition? Is that what the cloud is needed for as well? Always-on voice recognition?
 