Server-based game augmentations. The transition to cloud. Really possible?

The problem is that they are trying to simulate the flow of a city (good idea) with agents no smarter than ants (bad call) and an overly simplified model (i.e. the sims don't live/work/travel with a routine, they just go to the closest POI), so they did a bunch of adjustments to tune the AI's performance (including reducing the number of cars during rush hour). They were claiming that a lot of the computation is offloaded onto the cloud; I guess that's just an outright lie.
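To make that concrete, here is a purely hypothetical sketch of the kind of "nearest POI" behaviour being complained about (not actual Glassbox code; the names and data are invented):

```python
import math

# Hypothetical sketch of the "nearest POI" behaviour described above --
# not actual Glassbox code. A routine-based sim would remember a home,
# a workplace, and a schedule; this agent just greedily picks whatever
# matching point of interest is closest right now.

def nearest_poi(agent_pos, pois, wanted_kind):
    """Return the closest POI of the wanted kind, ignoring any routine."""
    candidates = [p for p in pois if p["kind"] == wanted_kind]
    return min(
        candidates,
        key=lambda p: math.dist(agent_pos, p["pos"]),
        default=None,
    )

pois = [
    {"kind": "job", "pos": (2.0, 1.0)},
    {"kind": "job", "pos": (9.0, 9.0)},
    {"kind": "home", "pos": (0.0, 5.0)},
]

# Every morning the agent takes the nearest open job, not "its" job,
# which is why workers and residents end up effectively interchangeable.
print(nearest_poi((1.0, 1.0), pois, "job"))  # {'kind': 'job', 'pos': (2.0, 1.0)}
```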

The game has its charms, but it's simply broken.

Probably a lot of the offloaded computation was "inter city", which was a catastrophe.
The cloud updates really slowly, too. If you've tried to build a superstructure that required cooperation (actually easily doable by one player who knows how to exploit income, but anyway), you know it takes forever for it to change states and complete.


Plus
Who puts a T intersection and traffic signals on a highway???
Maxis :rolleyes:

(warning, large file)
https://drive.google.com/file/d/0B0DB02tYXggEaFRLVEY1bDhfZnM/edit?usp=sharing
 
The problem is that they are trying to simulate the flow of a city (good idea) with agents no smarter than ants (bad call) and an overly simplified model (i.e. the sims don't live/work/travel with a routine, they just go to the closest POI), so they did a bunch of adjustments to tune the AI's performance (including reducing the number of cars during rush hour).
Let's not go insulting ants now. They are marvelously evolved creatures with specializations and innate behaviors that make them effective as individuals and resilient as a collective. They've also had millions of years for bug-fixing.

My original concerns about the Glassbox engine's reliance on generic agents for everything seem to have been validated.
For so many functions, even the stripped-down agents were excessive and injected unwanted behaviors into the simulation: see everything related to power, water, and sewage.
At the other end, agents were too generic to really produce sane behaviors, and the looming problem for any game that relies on emergent behavior is that tweaking things by fiddling with rules of poorly limited scope is indirect, usually unsatisfactory, and incredibly fragile.

There were mentions of exporting Glassbox to other games, which probably meant the city-simulation agents' design had no space for city-specific traits, since those would interfere with the portability of said engine.

They were claiming that a lot of the computation is offloaded onto the cloud; I guess that's just an outright lie.
For things relevant to the player, or the furtherance of their enjoyment, EA was almost certainly spewing crap.
Analysis of the game's network traffic broke the broadcast city data down into a limited set of values corresponding to general totals for the city's various resources and utilities, as well as values serving as a snapshot of the known set of things that can be traded.
For that functionality, the server served primarily as a message box for neighboring players to (eventually) download and then synthesize into simulation behavior wholly locally.
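As a purely illustrative sketch of how little that implies, a per-city broadcast along those lines could be as small as the following (field names and values are invented, not taken from the actual traffic analysis):

```python
# Hypothetical shape of the per-city broadcast described above; the field
# names and values are invented for illustration and are not from the
# actual protocol. The point is the tiny data volume: aggregate totals
# plus a snapshot of tradeable quantities, dropped into a server-side
# "message box" that neighbors poll and fold into their own locally-run
# simulation.

region_message = {
    "city_id": "example-city-1",      # assumed identifier
    "totals": {                        # general running totals
        "population": 48210,
        "money": 1250000,
        "power_mwh": 320,
        "water_kgal": 150,
        "sewage_kgal": 140,
    },
    "tradeables": {                    # snapshot of offered trades
        "power": 40,
        "water": 0,
        "garbage_capacity": 10,
    },
}

# A handful of numbers every few minutes -- nothing that needs "the cloud"
# to compute, only to relay.
```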
There are global elements that should synthesize more general data for market prices, leaderboards, and the like, but the server side's general dysfunction kept those off or broken for a very long time. Their impact on the game is somewhat limited, or the game code explicitly overrode them to keep them from wrecking the actual simulation (floors for various prices, for example), or they behaved in ways so nonsensical that no style of play could use them.
I did see complaints that in some cases the poorly thought-out design did let the global data influence things, normally when commodity prices and the frequently prohibitive space or specialist building costs rendered whole swaths of city-building choices game-breaking.

The more in-depth speculation is that the servers spent a lot of their effort validating user inputs and the checkpoint saves -- just frequently doing it badly. Hence the problems with cities being declared invalid or rolled back to arbitrary or nonsensical points in time. In that case, a vast amount of computation was there to make sure the player played the game in a way Maxis and EA declared acceptable, and then that code and their horrendous cloud implementation crapped the bed.

Given how poorly the server side has done, one wonders whether the trouble is in extracting the actually relevant functionality out of that mess. Perhaps that effort would make it more functional, since so much of this disaster is entirely the result of the unnecessary dislocation of data to a remote system.

Probably a lot of the offloaded computation was "intra city", which was a catastrophe.
Intra city would mean computation within a plot, where the cloud does almost nothing except inject spurious values that make things worse.

Intercity computation would be where the multiplayer aspect comes in, most of which has the data content and computational complexity of a few spreadsheet cells put into an email sent every few minutes, except slower.

The likely bigger load was the mass-scale save-file management and player action/DLC validation.
 
Article on why Titanfall is the big test of Microsoft's original cloud vision.
Maybe I misunderstood Titanfall, but aren't the AI bots, which I'm guessing are the key code running on Azure's complete platform, just cannon fodder? What other compute is done in the cloud? Excepting the usual server hosting for communication between users.
 
Maybe I misunderstood Titanfall, but aren't the AI bots, which I'm guessing are the key code running on Azure's complete platform, just cannon fodder? What other compute is done in the cloud? Excepting the usual server hosting for communication between users.

It's a good test of Azure's stability and scaling for a fairly large multiplayer launch. You're right in that the AI bots are pretty primitive. The Titan AI is pretty good when you have them in guard or follow mode, but the grunts are incredibly simple.

A good explanation of the grunts is that they're like the creeps in DOTA2. Instead of farming gold from them, you farm seconds off of your titan timer.
 
Being able to scale dynamically across multiple data centers around the globe based on live demand, w/o the complexity of capacity planning and logistics, is what the cloud's all about.
 
Being able to scale dynamically across multiple data centers around the globe based on live demand, w/o the complexity of capacity planning and logistics, is what the cloud's all about.

That would be the old cloud, the cloud we have been using for a long time.

The new thing for games, with Titanfall, is the dynamic server adaptation (though I actually thought that was how Live already worked). More players spawn more servers. No need to plan ahead, but I doubt that they didn't plan ahead. Either Azure isn't busy enough, or it is so big that Titanfall hardly takes any effort.
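A minimal sketch of what that kind of demand-driven scaling boils down to (the thresholds, capacities, and function names are my own assumptions, not how Azure or Respawn actually implement it):

```python
import math

# Minimal demand-driven scaling sketch. Thresholds and helper names are
# invented for illustration; this is not how Azure or Respawn actually
# provision Titanfall's servers.

MAX_PLAYERS_PER_INSTANCE = 60   # assumed capacity of one game-server instance
HEADROOM = 1.2                  # keep ~20% spare capacity for join spikes

def instances_needed(active_players: int) -> int:
    """How many server instances the current player count calls for."""
    return max(1, math.ceil(active_players * HEADROOM / MAX_PLAYERS_PER_INSTANCE))

def rescale(current_instances: int, active_players: int) -> int:
    """Return the new instance count; the platform does the actual spin-up."""
    needed = instances_needed(active_players)
    if needed > current_instances:
        return needed              # scale out as demand rises
    if needed < current_instances // 2:
        return needed              # scale in lazily once demand roughly halves
    return current_instances       # otherwise leave it alone

print(rescale(current_instances=100, active_players=9000))  # -> 180
```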

And it's far from the cloud power we hoped for.
 
That would be the old cloud, the cloud we have been using for a long time.

The new thing for games, with Titanfall, is the dynamic server adaptation (though I actually thought that was how Live already worked). More players spawn more servers. No need to plan ahead, but I doubt that they didn't plan ahead. Either Azure isn't busy enough, or it is so big that Titanfall hardly takes any effort.

...and that's different from what I said?
 
Very interesting. Did any of the side sessions go into detail about how this was done? Real-time physics is something I thought was too latency-sensitive to offload to the cloud.

Does anyone know which session this was from? I'd like to watch more to see if they elaborate. The YT channel has no info on which talk it is and I don't recognise the speakers.
 
It's a great, optimistic demo, but sadly not all that useful until we know the specifics. I suppose the data details can be kept secret if part of the development is finding ways to pack massive amounts of particle data (Vec3 position + rotation for each block, assuming each particle piece is premodelled and not computed in real time, which would require mesh data to be included), and this software solution will be part of MS's cloud advantage. 30,000 particles, with 32-bit floats per value, would be 192 bits * 30k = 5.76 Mb (about 0.7 MB) of data per frame, or roughly a 170 Mbps connection at 30 fps. MS would need a way to condense that data to less than 1/10th of that to be viable for real users.
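Redoing that arithmetic explicitly (the 30 fps update rate is my assumption; the particle count and per-particle layout are from the estimate above):

```python
# Back-of-envelope bandwidth estimate for streaming the particle state,
# using the numbers above. The 30 fps update rate is an assumption.

PARTICLES = 30_000
FLOATS_PER_PARTICLE = 6            # Vec3 position + Vec3 rotation
BITS_PER_FLOAT = 32
FPS = 30                           # assumed update rate

bits_per_frame = PARTICLES * FLOATS_PER_PARTICLE * BITS_PER_FLOAT
mb_per_frame = bits_per_frame / 8 / 1_000_000        # megabytes per frame
mbps_required = bits_per_frame * FPS / 1_000_000     # megabits per second

print(f"{bits_per_frame / 1_000_000:.2f} Mbit per frame")  # 5.76 Mbit
print(f"{mb_per_frame:.2f} MB per frame")                  # 0.72 MB
print(f"{mbps_required:.0f} Mbps at {FPS} fps")            # 173 Mbps
```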

Two problems with this demo and understanding its application in real scenarios are the data connection in use (were these computers communicating over the internet, or in another room connected via Ethernet?), and that it wasn't a like-for-like comparison. Stationary particles won't need to be sent over the connection, so the BW requirement is much reduced. They should have kept it a fair test.

What Phil Spencer said:
Absolutely, but that doesn't tell us the timeline. This cloud computing is going to happen, but in 1 year, or 3, or 5, or 10, or 20? Pretty demos fail to address the very obvious technical limitations, and MS continues not to explain how they solve them. Given that this was just an aside, I don't think the tech is at all ready, and MS aren't approaching devs to use this. It's just a glimpse of what the cloud can do on some idealised connection. MS will need to explain how this is applicable to real-world internet connections before such demos can be considered realistic.
 