Server-based game augmentations. The transition to cloud. Really possible?

"Game augmentation" would always be a client-server model.
As discussed in this thread, the client is doing some of the computation and the server is doing some of the computation. If it's all happening on the server, it's just a dedicated game server and nothing relevant to the ideas of this thread.

I suppose the discussion has somewhat mutated into 'what is it possible to compute and stream over the net?' whether augmenting a game or creating it outright, as the issues are mostly the same. If the physics data is mesh and materials, the challenge is streaming that into the clients. But being completely server side does simplify things versus previously discussed augmentations.
 
There will be long stretches where full skyscrapers are not crashing down, so the resources can be scaled as needed.
Is the billing model that precise, e.g. are you (the dev/publisher) only billed per millisecond, or are you billed per day or month? Or is it free?
 
If it's like AzureML, you are billed by processing time (multiplied by core usage) and bandwidth. Using fewer cores would save money, IIRC, unless scaling were perfect, which it wasn't when I was messing with the sliders.
 
When I set up a virtual machine with Azure, it was billed by the minute. Seeing that Microsoft owns Azure, I'm sure they can set up the billing to be favorable if it is a service they believe in. Obviously they do, or they wouldn't be pursuing this. Also, the fees associated with a proof-of-concept game such as Crackdown are inconsequential to Microsoft. Prove that it works, and if it costs a little extra to run initially, consider it an investment. The server-side CPUs are only going to get faster and more efficient. They already have a subscription service to play games online, so it can be rolled into that cost in the future when cloud compute / streaming really takes off.
 
Customers are making a one-time purchase of $60 (MS gets a $35 payment after the retail cut, and $60 on digital sales) for an average of 120 hours of Azure service across the customer base.
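Back-of-envelope, that's roughly $0.29 of compute budget per hour of play per retail customer. A minimal sketch of the arithmetic, with a hypothetical $0.10/hour small-instance rate thrown in purely for comparison (not an actual Azure price):

```python
# Back-of-envelope per-customer compute budget, using the figures from the post above.
# The $0.10/hour "small instance" rate at the end is an assumed number for illustration.
revenue_per_copy = 35.0        # $ to MS per retail copy after the retail cut
azure_hours_per_copy = 120     # average hours of Azure-backed play per customer

budget_per_player_hour = revenue_per_copy / azure_hours_per_copy
print(f"~${budget_per_player_hour:.2f} of compute budget per player-hour")   # ~$0.29

assumed_small_instance_rate = 0.10   # $/hour, hypothetical
print(f"That buys ~{budget_per_player_hour / assumed_small_instance_rate:.1f} "
      f"small-instance hours per hour of play")                              # ~2.9
```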
 

Looking at the original post, I don't see a distinction between a single workload being split between the console and the cloud versus an entire workload being shifted to the cloud. Is that what augmentation is meant to be? Either way, it's processing that doesn't have to be done locally. Is this thread about the feasibility of dedicating cloud resources per user? My memory of this thread was that it discussed issues like handling different types of workloads that would be affected by latency, and the bandwidth consumed.
 
I don't think it was properly defined - but augmentation as a means of improving graphical fidelity is (though my memory is fuzzy) something MS had somehow implied, or the term was used incorrectly in marketing, suggesting to gamers that it was possible. It would appear that the general audience of gamers associates anything to do with graphics and performance solely with the GPU, and that alone may be the cause of the misconception. For instance, how many times do we have to read on NeoGAF, Reddit, and sometimes here that someone would love to lower the resolution from 1080p to 720p and gain double the frames? I see it all the time.

Just reading off NeoGAF, there appears to be an interview online about how servers are leveraged: http://www.gamepro.de/xbox/spiele/xbox-one/crackdown-3-/videos/51200,84163.html

and the interview is awesome ;)
 

Oh yah. I forgot about the graphical augmentation thing.
 

That is not nearly as straightforward as it looks. I wonder why they chose to break up the world into regions that run on different servers vs having one large server that can handle the entire world. I know it's all virtualized resources. Copying data for a physics-based object as it transitions from one region to another must be expensive in terms of performance.
 
Perhaps it's segmented this way so that when there are 16+ players destroying the world simultaneously, the load is consistent for each server region. It could draw similarities to an MMORPG, where each zone/town/area is handled by a separate blade/server.
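For what it's worth, here's a minimal sketch of what that kind of per-region split with object handoff could look like. This isn't from the interview; the class names, bounds handling, and handoff rule are all invented for illustration.

```python
# Hypothetical per-region physics servers handing off debris that crosses a boundary.
# Purely illustrative; names and structure are invented, not Crackdown 3's implementation.
from dataclasses import dataclass

@dataclass
class Debris:
    id: int
    position: list    # [x, y, z]
    velocity: list    # [vx, vy, vz]

class RegionServer:
    def __init__(self, name, x_range, z_range):
        self.name = name
        self.x_range, self.z_range = x_range, z_range   # (min, max) in world units
        self.objects = {}                               # id -> Debris owned by this region
        self.neighbours = []                            # adjacent RegionServer instances

    def owns(self, pos):
        return (self.x_range[0] <= pos[0] < self.x_range[1] and
                self.z_range[0] <= pos[2] < self.z_range[1])

    def step(self, dt):
        for obj in list(self.objects.values()):
            obj.position = [p + v * dt for p, v in zip(obj.position, obj.velocity)]
            if not self.owns(obj.position):
                self.hand_off(obj)          # the costly part the posts above worry about

    def hand_off(self, obj):
        # Serialise the object's state and transfer ownership to the neighbour
        # whose bounds now contain it.
        for other in self.neighbours:
            if other.owns(obj.position):
                other.objects[obj.id] = self.objects.pop(obj.id)
                return

# Two adjacent city regions; a chunk of rubble drifts from one into the other.
a = RegionServer("downtown", (0, 100), (0, 100))
b = RegionServer("docks", (100, 200), (0, 100))
a.neighbours, b.neighbours = [b], [a]
a.objects[1] = Debris(1, [99.0, 10.0, 50.0], [5.0, 0.0, 0.0])
a.step(dt=1.0)
print(len(a.objects), len(b.objects))   # 0 1 -- ownership moved to the docks server
```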
 
Better version of the same talk

This is amazing! He also qualified the "power" as an increase relative to the CPU time used just for physics on the console. So if they only use 30% (no idea) of the CPU locally, then each power increase is based on that. A 10-fold physics increase would only require the equivalent of 2-3 Jaguar CPUs. I'm curious what percentage of local CPU time you guys think would go towards physics?
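The arithmetic behind that estimate, with the 30% share explicitly a guess:

```python
# Rough numbers behind the "2-3 Jaguar CPUs" figure above; the 30% share is a guess.
physics_share_of_console_cpu = 0.30   # assumed fraction of local CPU time spent on physics
physics_multiplier = 10               # "10x the physics", measured against that local share

extra_cpus = physics_share_of_console_cpu * physics_multiplier
print(f"~{extra_cpus:.0f} Jaguar CPUs' worth of server-side physics work")   # ~3
```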
 

In the MMORPG I play, FFXIV, each server represents one world in which thousands of players reside. However, I suspect that one region, i.e. the USA, consists of a couple of servers in that part of the world, each hosting a couple of thousand players.

They don't split the zones in the game up onto different servers.
They divide the whole world up into instances. So you have one server for the game world (all players on your server reside here) and one server for dungeon instances (only for 4/8/24-player dungeons). The dungeon instances can collect players from different servers in the same region.

So what I think they've done for Crackdown 3 is this:

Each area/zone of the city is split up into server instances.
So the game world online (I'm guessing it's limited to co-op online only) isn't on one big server.
Each server in each zone of the game world has a certain computational limit, and I'm guessing whenever destruction from one zone enters another zone, it takes some of that load off.

In the final game, they may use the same structure, i.e. each zone in the city has its own server; however, it will be able to create multiple instances for other players who will be in their own sessions with friends, etc.
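A hedged sketch of that structure, just to make it concrete. The zone list and the idea of cloning the zone servers per co-op session are my reading of the post above, not anything confirmed:

```python
# Hypothetical: one virtual server instance per city zone, cloned per co-op session.
# Zone names and counts are invented for illustration.
ZONES = ["downtown", "docks", "industrial", "island"]

class CoopSession:
    """One online session of friends, with its own set of zone instances."""
    def __init__(self, session_id: int):
        self.session_id = session_id
        self.zone_instances = {z: f"vm-{session_id}-{z}" for z in ZONES}

    def server_for(self, zone: str) -> str:
        return self.zone_instances[zone]

sessions = [CoopSession(i) for i in range(3)]
print(sessions[0].server_for("docks"))                  # vm-0-docks
print(sum(len(s.zone_instances) for s in sessions))     # 3 sessions x 4 zones = 12
```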
 
Watching the video, it seems like there are about 4 buildings to a server. A fairly small region of play space. Could be wrong. Mind you, these aren't physical servers. They're virtual servers. They could be running on one physical server for all we know.
 
Right, I wasn't thinking of a regular MMORPG - the last MMO I played was EVE Online, which only has... well, one instance for all players to play in, so they divided the galaxy up by zone. But you're right, ugg, how could I forget, lol. I think for WoW they have one server with multiple blades handling a whole world; that way, when there is a huge battle at Crossroads or a town, only that area would suffer from immense lag, while everywhere else ran smoothly.
 
I was wondering if each (virtual) server could be assigned to a specific area. It's responsible for calculating every instance in its region. If something is going to leave its region, it sends the instructions to the next region's server so that server can be ready for the item to arrive and can calculate its interactions with things already in that region. It seems they stress the idea of items being physics-based. So each server has the ability to handle a certain amount of destruction in its area, and it scales for instances where more power is needed.

Going back to the asteroid belt demo, they say an Xbox One can handle 40,000 and the adjustments needed. With a server it jumps to 240,000. Their claim was 800,000 adjustments a second, IIRC. With each server handling a region once destruction starts, it seems 200,000 extra calculations a second would handle most situations.
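Taking those recalled figures at face value:

```python
# Using the recalled asteroid-demo figures from the paragraph above; treat them as approximate.
local_only   = 40_000     # objects/adjustments the Xbox One alone was said to handle
with_server  = 240_000    # the figure quoted once a server was added
server_share = with_server - local_only

print(server_share)                    # 200,000 extra per second handled remotely
print(with_server / local_only)        # 6.0 -- six times the local figure
```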

The other thing with the regions passing items on would fit in with some of Microsoft's Azure marketing information. They often speak of items passing seamlessly between servers in their QoS talk. Mainly about server failure, but why not on purpose.

But however this works, isn't this going to be so much better as we move away from current microarchitectures and towards HSA and things like VISC?
 
Only during the development period. Not for live.

I thought there was some token allocation allowed, something like an Azure small instance for x concurrent users, and then scaling upwards depending on users.

From what I remember reading, there was no mention of dev vs. live, but it seemed very specific to dev, when ultimately, at data-centre scale, dev work usage is effectively nil.
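If that's right, the allocation rule could be as simple as something like this (the 100-players-per-small-instance capacity is an assumption, not a documented Azure or Xbox Live figure):

```python
# Hypothetical allocation: one Azure "small" instance per block of concurrent players,
# scaling up and down with the live player count. The capacity figure is assumed.
import math

PLAYERS_PER_SMALL_INSTANCE = 100

def small_instances_needed(concurrent_players: int) -> int:
    return max(1, math.ceil(concurrent_players / PLAYERS_PER_SMALL_INSTANCE))

for players in (50, 250, 10_000):
    print(players, "->", small_instances_needed(players), "small instances")
```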
 