Server-based game augmentations. The transition to cloud. Really possible?

It's a great, optimistic demo, but sadly not all that useful until we know the specifics. I suppose the data details can be kept secret if part of the development is finding ways to pack massive amounts of particulate data (Vec3 position + rotation for each block, assuming each particle piece is pre-modelled and not computed in realtime, which would require mesh data to be included), and this software solution will be part of MS's cloud advantage. 30,000 particles, with 32-bit floats per value, would be 192 bits * 30k ≈ 5.8 Mbit (~0.72 MB) of data per frame, or roughly a 170 Mbps connection at 30 fps. MS would need a way to condense that data to less than a tenth of that to be viable for real users.
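
For anyone who wants to sanity-check that arithmetic, here's the same calculation as a quick Python snippet; the particle count, value count, and frame rate are just the assumptions from this post, not anything MS has confirmed:

```python
# Back-of-the-envelope check of the numbers above; all constants are the
# assumptions from this post, not confirmed figures.
BITS_PER_VALUE = 32        # one 32-bit float per component
VALUES_PER_PARTICLE = 6    # Vec3 position + Vec3 rotation
PARTICLES = 30_000
FPS = 30

bits_per_frame = BITS_PER_VALUE * VALUES_PER_PARTICLE * PARTICLES  # 5,760,000 bits
mb_per_frame = bits_per_frame / 8 / 1_000_000                      # ~0.72 MB
mbps_needed = bits_per_frame * FPS / 1_000_000                     # ~172.8 Mbps

print(f"{mb_per_frame:.2f} MB per frame, ~{mbps_needed:.0f} Mbps at {FPS} fps")
```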

With velocity vectors they don't need to send all the data every frame, just make some corrections over time. And since the debris are a visual effect, and not other players whose interactions need to be accounted for, if by any chance the user decides to interact with any of the pieces, the client can just ignore the cloud updates for that particular object and compute it locally.

And then there's prediction. They can tell where the missile is going to hit the building way before it actually does, so you can even have a head start there.
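
As a purely illustrative sketch of that velocity-vector idea (dead reckoning, essentially), something like the following could run on the client; all names and fields here are made up for illustration, not any real engine API:

```python
# Illustrative dead-reckoning sketch: extrapolate debris from the last cloud
# update using its velocity, and only correct when a fresh update arrives.
# Names and fields are assumptions, not a known MS/Xbox API.
from dataclasses import dataclass

@dataclass
class DebrisState:
    pos: tuple            # (x, y, z) from the last cloud update
    vel: tuple            # (vx, vy, vz) linear velocity from that update
    age: float = 0.0      # seconds since that update arrived

def extrapolate(state: DebrisState, dt: float) -> tuple:
    """Predict the current position between cloud corrections."""
    state.age += dt
    return tuple(p + v * state.age for p, v in zip(state.pos, state.vel))

def apply_cloud_correction(state: DebrisState, pos: tuple, vel: tuple) -> None:
    """When a (less frequent) cloud packet arrives, reset the baseline."""
    state.pos, state.vel, state.age = pos, vel, 0.0
```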
 
With velocity vectors they don't need to send all the data every frame, just make some corrections over time. And since the debris are a visual effect, and not other players whose interactions need to be accounted for, if by any chance the user decides to interact with any of the pieces, the client can just ignore the cloud updates for that particular object and compute it locally.

And then there's prediction. They can tell where the missile is going to hit the building way before it actually does, so you can even have a head start there.

But the challenge of physics is that the majority of those objects are likely interacting with each other on most frames and so will need updates passed to the renderer every frame. If the objects were widely dispersed and couldn't interact with each other I'd agree with you but this is quite clearly a building collapse so all of the objects are well within collision range of each other.

If this is what it seems to be perhaps they are performing some kind of cull on the physics data before sending to the client? Such that the cloud calculates all of the physics correctly but only sends a simplified vector for objects it knows will be invisible to the player. Of course I'm instantly reminded that would require the Z-buffer to be sent to the cloud and for the cloud to do a huge amount of culling, either of which could sink my vague idea of how this might work. I do hope MS offers more detail before Build ends.
 
If this is what it seems to be perhaps they are performing some kind of cull on the physics data before sending to the client? Such that the cloud calculates all of the physics correctly but only sends a simplified vector for objects it knows will be invisible to the player.
That's a good point, that the data only needs to be for visible particles. 1,000 particles on screen may be all that's needed. And if we add LightHeaven's idea about sending velocity vectors instead of actual placement, we can get the data right down. Let's say 16-bit floats for linear and rotational velocities (don't know if that's viable!!)*. That'd be 16*6 bits per object, or 96 bits, which works out to ~2.9 Mbps for each 1,000 objects updated at 30 fps. In the case of the exploding building with 20,000 particles on screen, you could show only the major 1,000-2,000 pieces, as the rest would be occluded. Culled objects would still become relevant as the player moves, though.

I'm sure some of the devs here have much better, more informed ideas how such particle physics data could be optimised for this demo.

* Maybe 16-bit positional data would be accurate enough as long as the computations were full resolution on the server?
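
Purely to illustrate the packing side of that idea, here's a hypothetical per-object update of six 16-bit values and the bandwidth it implies; the field layout, the int16-scaling stand-in for half floats, and the counts are all my assumptions:

```python
# Hypothetical compact update: 6 x 16-bit quantised values (linear + angular
# velocity) per visible debris piece. Layout and counts are assumptions only.
import struct

BITS_PER_VALUE = 16
VALUES_PER_OBJECT = 6            # vx, vy, vz, wx, wy, wz
VISIBLE_OBJECTS = 1000
FPS = 30

bits_per_object = BITS_PER_VALUE * VALUES_PER_OBJECT          # 96 bits
mbps = bits_per_object * VISIBLE_OBJECTS * FPS / 1_000_000    # ~2.9 Mbps

def pack_update(vel, ang_vel, scale=100):
    """Pack one object's update as six int16s scaled by 100 (a stand-in for
    16-bit half floats, to keep the example dependency-free)."""
    return struct.pack("<6h", *(int(v * scale) for v in (*vel, *ang_vel)))

packet = pack_update((1.25, -0.5, 3.0), (0.1, 0.0, -2.0))
print(len(packet), "bytes per object,", round(mbps, 2), "Mbps for", VISIBLE_OBJECTS, "objects")
```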
 
But the challenge of physics is that the majority of those objects are likely interacting with each other on most frames and so will need updates passed to the renderer every frame. If the objects were widely dispersed and couldn't interact with each other I'd agree with you but this is quite clearly a building collapse so all of the objects are well within collision range of each other.
I agree, initially there's a lot of potential for collisions, but as time passes each piece of debris is less likely to collide with another one. If they track the likelihood of a collision over the next few frames, they could also prioritize sending data first for the pieces that are about to collide.

They can also have quite a bit of headroom for buffering data... They can start calculating as soon as the missile is launched, and even once the explosion happens you could still be playing back buffered data at the right time while receiving data for the next few frames... There are tons of potential ways they can get the data per frame down without compromising the simulation (at least in theory; I have no idea how well they would actually work XD).
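
A minimal sketch of that buffering idea, assuming the cloud timestamps each simulation frame for the moment it should appear on screen; the class and field names are invented for illustration:

```python
# Hypothetical client-side buffer for pre-computed cloud physics frames.
# The cloud starts simulating at missile launch and sends frames ahead of
# time; the client plays them back on its own clock. Assumes frames arrive
# in play-time order. Names and fields are illustrative only.
import collections

class CloudFrameBuffer:
    def __init__(self):
        self.frames = collections.deque()   # (play_time, debris_states) pairs

    def push(self, play_time, debris_states):
        """Called whenever a packet of future frames arrives from the cloud."""
        self.frames.append((play_time, debris_states))

    def pop_due(self, now):
        """Return the latest frame whose scheduled time has passed, if any."""
        due = None
        while self.frames and self.frames[0][0] <= now:
            due = self.frames.popleft()[1]
        return due   # None: keep showing the last frame / extrapolate locally
```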

If this is what it seems to be perhaps they are performing some kind of cull on the physics data before sending to the client? Such that the cloud calculates all of the physics correctly but only sends a simplified vector for objects it knows will be invisible to the player. Of course I'm instantly reminded that would require the Z-buffer to be sent to the cloud and for the cloud to do a huge amount of culling, either of which could sink my vague idea of how this might work. I do hope MS offers more detail before Build ends.
Perhaps. The culling could be done client-side too. Once the cloud sends the first info describing the number of debris pieces and their initial positions, the client can predict where the user will be looking over the next few frames and request updated data only for those pieces.
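
As a rough sketch of that request pattern, with a simple cone test standing in for real visibility culling and a hypothetical request_updates() call to the cloud service:

```python
# Illustrative client-side culling: given the initial debris positions from the
# cloud, ask for updates only for pieces the player is likely to see soon.
# request_updates() and the visibility test are placeholders, not a real API.
import math

def likely_visible(piece_pos, cam_pos, cam_forward, fov_cos=0.5, max_dist=150.0):
    """Crude cone test; cam_forward is assumed to be a unit vector."""
    to_piece = [p - c for p, c in zip(piece_pos, cam_pos)]
    dist = math.sqrt(sum(d * d for d in to_piece))
    if dist == 0.0 or dist > max_dist:
        return False
    dot = sum(d / dist * f for d, f in zip(to_piece, cam_forward))
    return dot > fov_cos          # roughly inside a ~120 degree cone

def pieces_to_request(initial_positions, cam_pos, cam_forward):
    return [pid for pid, pos in initial_positions.items()
            if likely_visible(pos, cam_pos, cam_forward)]

# ids = pieces_to_request(initial_positions, cam_pos, cam_forward)
# request_updates(ids)   # hypothetical call to the cloud service
```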
 
Article on why Titanfall is the big test of Microsoft's original cloud vision.
So the original vision was to use dynamically allocatable computing resources to run instances of multiplayer FPS servers which are not substantially different from the ones Carmack developed nearly two decades ago? :) (Apart from a little more "prediction" so the damned HPBs can shoot me long after I cleared a corner.)
 
Pretty cool demo by Microsoft of using the cloud for physics calculations! https://www.youtube.com/watch?v=QxHdUDhOMyw&feature=youtu.be
I wonder what kind of connection you need to run that smoothly and I wonder how optimised/un-optimised the code is for the CPU to have such a hard time in the offline test, but if that's one of the ways the cloud can improve games that's fine with me.

In addition, I wonder whether it's real physics or something somewhat more unpredictable but still scripted to some extent, because realtime physics over the cloud doesn't seem feasible.

So the original vision was to use dynamically allocatable computing resources to run instances of multiplayer FPS servers which are not substantially different from the ones Carmack developed nearly two decades ago? :) (Apart from a little more "prediction" so the damned HPBs can shoot me long after I cleared a corner.)
I'm still wondering which Carmack development you're talking about; you've made me very curious about it, actually.

For now, I know that Drivatars use the cloud, Powerstar Golf does too, and other than that there are the bots in Titanfall, but we have seen very little that's actually tangible about it, tbh.

However, Spencer said this on the matter.

p3.png
 
There was this slide in that Build presentation. I guess it's notable for narrowing things down a bit to something tangible, such as again specifying CPU work. And also "more of a preview at this point".

BkaqTo7CQAA5yYu.png


Somebody also said this on Twitter along with the picture; I'm assuming it was said in the presentation.

Xbox Cloud Computing - From Build 2014. Titanfall logic, even the tutorial, runs on cloud, and not the box itself.
 
My first thought was: how will that run when millions are destroying buildings all over the world? Even the cloud isn't endlessly powerful.

Easy, we solved this in the 1950s... time-sharing! You get a ticket in every box that allows you to register for cloud gameplay time. I hope I don't get 3 am on a weeknight! ;)

In all honesty I would expect a lot of the same abstractions and tricks used on local physics today to wind up being used in the cloud as well. I'd imagine each user is going to get a set performance slice, and it will be up to the developers to ensure that their code doesn't consume more than allocated (likely with financial penalties, like a mobile data plan).
 
My first thought was: how will that run when millions are destroying buildings all over the world? Even the cloud isn't endlessly powerful.

In a multiplayer game, physics results for a single detonation could be used by all players. So for something such as leaving explosives anywhere on a map, i.e. blowing up huge areas like bridges, skyscrapers, etc. in Battlefield, one set of calculated results could be shared by all players. If you have 64 players on a server, it would amount to very little CPU time per person playing to make that happen!

You have to admit... the pre-canned detonation events in Battlefield are pretty limited by comparison to what this method could accomplish.
 
Although in that example, it's nothing a dedicated game 'server' (potentially using distributed computing across the cloud) couldn't achieve, so it's not quite cloud augmentation. The alternative would be a console hosting the game but using the cloud for some computations, which seems a bit odd to me.
 
Although in that example, it's nothing a dedicated game 'server' (potentially using distributed computing across the cloud) couldn't achieve, so it's not quite cloud augmentation. The alternative would be a console hosting the game but using the cloud for some computations, which seems a bit odd to me.

I thought this was exactly what we have been describing the cloud to be: dedicated servers that can spawn new instances on demand. We are looking for situations that are latency tolerant. A 64-player Battlefield game with "massive" CPU resources available on demand for brief periods of time to accommodate these detonation events is a good use.
 
I thought this was exactly what we have been describing the cloud to be: dedicated servers that can spawn new instances on demand.
Nope. That's just conventional dedicated servers upgraded to distributed computing, which is far more efficient and cost effective. The idea of this thread is exploring what cloud computing can bring to the local game. The moment you shift game mechanics to a server, you change the game's structure.

Cloud augmentation is up against a couple of alternative internet solutions, those being dedicated game servers and game streaming. To compare, take the example of a massive 'living world' game like Fable:

1) Dedicated server. You log on and play online, with the server calculating world activities and updating your local copy accordingly. e.g. World of Warcraft, with or without the multiplayer (though if you're going online, you'd no doubt rather serve lots of people with your server).

2) Game streaming. The game is played and rendered on the server and only the video is streamed to your display.

3) Cloud augmentations. You play the game on your local games console, but when you connect it to the interwebs, it receives game content, like updated world happenings, maybe new terrain that was calculated in the cloud simulating environmental physics.

The applicable uses and restrictions for option 3 are what this thread is all about (there being another thread for talking about game streaming, IIRC). Some ideas for shifting workload to the cloud end up making more sense just running on an online server. e.g. a racing game that computes car physics online and sends detailed data to the console to include in its local physics solver: it'd be quicker and easier to calculate the entire car physics/gameplay online and just send car positions to the console for rendering. If online is too slow and laggy to support a good racing experience like that, it'd be far too slow and laggy to support physics augmentations.
 
I still fail to see the difference between 1 and 3. In both cases the game is running locally and receiving inputs from servers on the web. I'm trying to understand the distinction between the inputs being server-calculated "world activities" vs receiving "world happenings", or blowing up buildings vs deforming terrain.
 
In a multiplayer game, physics results for a single detonation could be used by all players. So for something such as leaving explosives anywhere on a map, i.e. blowing up huge areas like bridges, skyscrapers, etc. in Battlefield, one set of calculated results could be shared by all players. If you have 64 players on a server, it would amount to very little CPU time per person playing to make that happen!

You have to admit... the pre-canned detonation events in Battlefield are pretty limited by comparison to what this method could accomplish.

Now we are back to multiplayer games, which may be perfectly valid. And in any case that would be easier to solve and really not what I would consider groundbreaking.

It's in single-player games that I want to see the difference.
 
Besides the cloud compute ratio per user, I still don't see why the same principles can't be applied regardless of whether it's multiplayer or single-player.

And even still, the same server-side inputs could be sent to different players without them knowing about each other or being in the same context.
 
I still fail to see the difference between 1 and 3. In both cases the game is running locally and receiving inputs from servers on the web. I'm trying to understand the distinction between the inputs being server-calculated "world activities" vs receiving "world happenings", or blowing up buildings vs deforming terrain.

OK I can see how these can be confusing so let me try and elucidate the differences between scenarios 1 & 3.

Scenario 1 (Dedicated Server): The game state is held remotely and the client sends updates to the server that may be accepted or rejected. A good example is the 'rubber banding' issue with BF4: in that case the server is rejecting the user's movement and resetting the player a few feet back. It will do this because of packet loss, because the server load is too high and it perceives the player as having too high a movement speed, or because you're EA and you cheaped out on your server infrastructure.

Scenario 3 (Cloud Offload/Compute): The game state is held locally and only portions of the game systems are offloaded to the server. So while I may pass physics over to the remote server, more of the systems are held locally and they are usually authoritative (i.e. when server and client disagree, the server loses). This means that for certain tasks the immediacy and consistency of local play are preserved, as crappy network performance shouldn't result in the kind of issues seen with online play on dedicated servers. Where things get messy is how much I rely on the remote resource: the more I rely on it, the more vulnerable I am to the vicissitudes of WAN performance.

Of course I look forward to being corrected, but for me the main difference is that with dedicated servers the client is a slave to the server, while with cloud compute the client is king, with assistance from the server.
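
To make that "client is king" distinction a bit more concrete, here is a minimal sketch assuming a hypothetical per-piece update loop; the staleness threshold and all names are my own assumptions, not a description of any real title:

```python
# Hypothetical client-authoritative offload: adopt the cloud's physics result
# when it is fresh, otherwise fall back to cheap local integration so the game
# never stalls. Names, thresholds, and structure are illustrative only.
from dataclasses import dataclass

STALE_AFTER = 0.25   # seconds; beyond this, treat the cloud result as stale

@dataclass
class Piece:
    pos: tuple
    vel: tuple

@dataclass
class CloudResult:
    pos: tuple
    vel: tuple
    timestamp: float

def step_debris(piece, cloud_result, now, dt):
    if cloud_result is not None and (now - cloud_result.timestamp) < STALE_AFTER:
        # Server assists: adopt its richer simulation result for this piece.
        piece.pos, piece.vel = cloud_result.pos, cloud_result.vel
    else:
        # Client is king: keep things moving with simple local gravity + drift.
        vx, vy, vz = piece.vel
        piece.vel = (vx, vy - 9.81 * dt, vz)
        piece.pos = tuple(p + v * dt for p, v in zip(piece.pos, piece.vel))
    return piece
```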
 
3) Cloud augmentations. You play the game on your local games console, but when you connect it to the interwebs, it receives game content, like updated world happenings, maybe new terrain that was calculated in the cloud simulating environmental physics.

The applicable uses and restrictions for option 3 are what this thread is all about (there being another thread for talking about game streaming, IIRC). Some ideas for shifting workload to the cloud end up making more sense just running on an online server. e.g. a racing game that computes car physics online and sends detailed data to the console to include in its local physics solver: it'd be quicker and easier to calculate the entire car physics/gameplay online and just send car positions to the console for rendering. If online is too slow and laggy to support a good racing experience like that, it'd be far too slow and laggy to support physics augmentations.

Just a preamble that I'm not trying to argue and I'm ready to learn/discuss.

The exact same examples could be used to augment a single-player game. The only difference I see is that it is the multiplayer server managing the forwarding of the cloud results and giving all the clients a time at which it would like the augmentation implemented (i.e. the time of detonation).

Option number 3 seems to be satisfied by my example, unless someone can point me to where today's servers are actually pushing intricate physics calculations back for environment damage at any kind of scale. We have Titanfall doing the AI in the cloud using a distributed computing model. Why does it matter that it is the same calculation being pushed to multiple players' consoles at one time? It just seems very efficient to me. If it were a single-player game, the only difference would be that the client gets the physics result directly.

It really doesn't seem to make a difference whether the client is "hypothetically a master or slave, etc." as Lalaland was describing. Just as a simple thought exercise: if the physics drove an effect where syncing issues could be ignored, it wouldn't matter whether the client was relying on a server or on its own, because the sync doesn't exist anyway, e.g. background planet explosions or something. It just runs once the client receives all the information, no questions asked. Not a great example, but the point is that the client being master/slave to the server makes no difference to the fundamental example. Everyone will eventually see the background environment change.

The physics results represent something never before possible during a live multiplayer match with typical "dedicated servers", and they satisfy the lag/bandwidth/cost-effectiveness requirements... I think that should be talked about! It gives you infinite map reconstruction possibilities during a single match! I'm pretty bored of having the exact same pre-canned "Levolution" event taking place with no user input in Battlefield.

Why not take advantage of having multiple users that will see these physics results? I'm not aware of any games that hypothetically could request an additional 80 cores for physics for brief periods on demand.

I do get what you are saying about dedicated servers already doing a lot of calculations server-side and about distributed computing allowing this to go much further. If the tech demos and current games like Titanfall are examples, then that is where they are heading. Microsoft's asteroid demo was the same idea, requiring even less frequent updates before the asteroids would go off course. Game economies, environments, NPCs, non-time-sensitive physics... they all appear to be perfect examples of cloud augmentation, whether single-player or multiplayer.
 