The cloud is just over the horizon [2018-2019]

I think Shifty's point or question is "If Crackdown is using Unreal Engine by Epic and Epic acquired Cloudgine, won't it be built into Unreal Engine and thus still usable for the game?"
It obviously would if it were part of the engine, but as of today it still isn't implemented in it. MS does have its own internal branch of UE4 for its first-party studios, but Crackdown is an outlier here: it has always been developed by a third-party studio (well, several studios now, given the mess that it has been), and the Cloudgine tech used in the game wasn't owned by MS (they own the game's IP and not much else). Cloudgine (via Reagent, which was a subsidiary of Cloudgine) was supposedly only tasked with consulting and implementing their cloud tech into the game. A dev postmortem once the game is out would be fascinating.
 
I don't see the logic here, though. MS were using Cloudgine for Crackdown on Unreal. That was presumably contracted. Epic buy Cloudgine. Why would MS lose anything on account of this acquisition? So Epic pulled Cloudgine from under MS's feet, and despite the prominent collaboration, with MS and Epic thanking each other for their partnership, MS were left unable to use their netcode and had to rewrite it without conflicting with the Cloudgine tech already in the game that now wasn't licensed?
 
Yea, that part wasn't clear to me either. But I'm pretty sure that's what I read in an interview. Will need to look around for it.
 
And the fact MS hired the Cloudgine guys as a one-off job for this one title and let them be bought up by Epic goes to show the actual level of commitment to "Teh Cloud". They were more than happy to have just one title use cloud-augmented physics for the sake of proving their marketing BS wasn't BS. Kind of proves not even they believed in the value of that tech.
 
So what I could find on the topic:
So this is probably where I read that it was 'rebuilt'.

Honestly though, looking into it, there could have been a variety of reasons they needed to restart, somewhat but not directly related to the buyout of Cloudgine. Given how Matt Booty sort of sidesteps the questions there on whether they are still using Cloudgine or went with their own tech, I'm willing to believe there is more to the story.

Stone noted how Microsoft's on-going investments in datacenters and Azure hardware upgrades have helped mitigate some of the potential issues.

There are some problems that we would have had in the past, that we don't have anymore. There used to be regions where we just had unacceptable ping times. It doesn't happen anymore. We were worried about population density, from an Xbox install base perspective, so we had to think about transferring server control from one data center to another. Investment in data centers has solved that.
So perhaps the reduction in physics may also have to do with meeting the lowest common denominator server-side, and the original demos were probably using more than what would be available to players.

https://www.onmsft.com/news/xbox-ones-crackdown-3-loses-co-developer-and-original-creator
Speaking with Polygon at E3 this year, head of Microsoft Studios Matt Booty didn’t provide a clear answer on whether Microsoft had to abandon Cloudgine’s technology in favor of an in-house alternative. “You know, I’m not going to get into the actual technical breakdown,” the exec said. “Let’s just say that we’ve got access to a great infrastructure, and the game’s got some great tech in it, and we’re going to put those two together in the way that makes the most sense.”

https://wccftech.com/crackdown-3-sumo-digital-multiplayer/
This led to the shutdown of Reagent as Jones opted to dedicate his full time to the Epic-owned Cloudgine. Jones/Reagent and Cloudgine were working on an innovative multiplayer mode for Crackdown 3, that was said to use cloud processing to track real-time, persistent explosions and destruction. So, is the cloud-powered multiplayer mode dead? Not necessarily says Booty…
“[Multiplayer] is still part of [Crackdown 3]. You know, we’re super lucky as part of Microsoft that we get to work so closely with the Xbox platform team, that the cloud shows up in all of our games in pretty exciting ways. […] We’ve got access to a great infrastructure, and the game’s got some great tech in it, and we’re going to put those two together in the way that makes the most sense.”
 
And the fact MS hired the Cloudgine guys as a one-off job for this one title and let them be bought up by Epic goes to show the actual level of commitment to "Teh Cloud". They were more than happy to have just one title use cloud-augmented physics for the sake of proving their marketing BS wasn't BS. Kind of proves not even they believed in the value of that tech.
I think it's fair to say they built a solution to a problem that didn't exist and then wanted to showcase it somewhere. But it was also a different time; MS was still finding its footing as a whole as Satya took over and made changes. He's only been in charge since 2014, so it's already been a dramatic shift. It's entirely possible that the desire to own an in-house solution for this had larger implications beyond games, and I imagine there was a degree of wondering why they didn't buy out Cloudgine themselves (I imagine their strategy wasn't sorted yet). But as some articles write, there wasn't enough Azure power to serve absolutely everyone with Crackdown 3, and I think that's where they ran into a lot of issues. But the constant investments in cloud are making this more of a reality than it was before.
Still curious to see the end product before I judge it.
 
So perhaps the reduction in physics may also have to do with meeting the lowest common denominator server-side, and the original demos were probably using more than what would be available to players.
Well sure. The demos were only ever running on LAN, and we criticised at the time that this wasn't at all representative of what could be achieved over WAN. The would-be reduction in physics bringing the game in line with other games just means there's nothing on MS's end pushing the cloud beyond what was already happening. The incredible 'power of the cloud' is and always was severely hampered by the internet infrastructure, and where Crackdown 3 suggested clever solutions to this (we discussed at length alternative representations of that dense data to make it possible), it looks like that hasn't happened.

Eventually there'll be super-fast 100+ megabit connections to very local servers, and MS will be at the forefront of that. There are probably some places in Asia where the Crackdown demos could be realised at the same sort of scale. That's still years away from being any sort of norm and isn't happening this generation, despite MS's claims to the contrary.
 
I get where you are going, but I'm not sure if it's bandwidth or available processing power that they need to pull this off. I'm thinking it's the latter, because we're going to get a range of connections, but online processing power has got to be somewhat localized, and even though Azure is global, there are bound to be some areas where they have significantly fewer resources than others.
 
It's bandwidth, latency and power. Only one of these is within the cloud provider's control.
 
Power is easy to fix; for MS it's just a matter of throwing money at it.

Bandwidth is harder, because the people buying a streaming console probably have price as a major reason. I.e. they will probably not pay to upgrade bandwidth. And it has diminishing returns.

Latency, with some network knowledge, can be coped with to a certain degree.

But if there is 10 ms latency between you and the server, well, you cannot do much about it. It's just a question of what kind of trade-off you are willing to make. 10 ms is about 1/3 of a frame at 30 fps, or 2/3 at 60 fps, for just moving data. You can predict more / further ahead, which increases the probability of errors in predictions. You can bulk up more data in each shipment, which adds latency. You can drop the amount of data in each shipment, which means less processing can be offloaded.
These are not unknowns in MP games, but now you are trying to offload added compute, and the price is latency.
Not much you can do about time, unless next-generation consoles include time warps or black holes or…
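To put rough numbers on the 10 ms example, here's a quick sketch (Python; the one-way latency is the figure from above, the frame rates are just the usual 30/60 fps):

```python
# Rough latency-budget arithmetic for offloading compute to a nearby server.
# The 10 ms one-way figure is from the post above.
FRAME_MS_30 = 1000 / 30   # ~33.3 ms per frame at 30 fps
FRAME_MS_60 = 1000 / 60   # ~16.7 ms per frame at 60 fps
ONE_WAY_MS = 10           # one-way network latency to the server

print(f"10 ms is {ONE_WAY_MS / FRAME_MS_30:.0%} of a 30 fps frame, "
      f"{ONE_WAY_MS / FRAME_MS_60:.0%} of a 60 fps frame")
# A full round trip (out and back, ignoring server compute time):
print(f"Round trip = {2 * ONE_WAY_MS / FRAME_MS_60:.1f} frames at 60 fps")
```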
 
I get where you are going, but I'm not sure if it's bandwidth...
It's worth noting that the first rule of net data is to keep updates to a minimum: as little data as possible, as infrequently as possible. Apparently the most intensive games at the moment peak at around 100 MB per hour, so ~30 kB/second, or ~240 kbps. The average internet connection, even the slowest internet connection, is far, far faster than 240 kbps, but that's still the budget games are built to.
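A quick sanity check on that arithmetic (the 100 MB/hour is the figure quoted above):

```python
# 100 MB per hour, converted to per-second rates.
mb_per_hour = 100
bytes_per_sec = mb_per_hour * 1_000_000 / 3600
print(f"{bytes_per_sec / 1000:.1f} kB/s = {bytes_per_sec * 8 / 1000:.0f} kbps")
# -> 27.8 kB/s = 222 kbps, so ~30 kB/s and ~240 kbps are the right ballpark.
```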
 
I suspect wherever they can serve Project xCloud, they should be able to serve this. I can't see the requirements of this being higher than a pure streaming solution.

Once again, I don't know what 'improvements to infrastructure' entails. If Crackdown's cloud servers require multiple servers to serve a single instance, say 6 servers for 6 zones, I think you're looking at an infrastructure issue more than a bandwidth/latency issue. I'm not saying that bandwidth and latency aren't at play here, but looking at the solution they devised and how many players they intend to support (most Game Pass games net 1 million players at launch), you're going to need a metric ton of 6-server clusters to support that many instances of multiplayer globally (rough numbers sketched below).
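Hedging heavily on the unknowns, a back-of-envelope version of that claim (the 1M launch figure and the 6-servers-per-instance are from this post; peak concurrency and match size are pure guesses for illustration):

```python
# Back-of-envelope server count. launch_players and servers_per_match come
# from the post above; peak_concurrency and players_per_match are guesses.
launch_players = 1_000_000
peak_concurrency = 0.10        # assume 10% online at the same time
players_per_match = 10
servers_per_match = 6          # one server per destruction zone

matches = launch_players * peak_concurrency / players_per_match
servers = matches * servers_per_match
print(f"{matches:,.0f} concurrent matches -> {servers:,.0f} physics servers")
# -> 10,000 matches -> 60,000 servers, before any regional redundancy.
```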

Just to reiterate the quote:
There are some problems that we would have had in the past, that we don't have anymore. There used to be regions where we just had unacceptable ping times. It doesn't happen anymore. We were worried about population density, from an Xbox install base perspective, so we had to think about transferring server control from one data center to another. Investment in data centers has solved that.
 
Processing power is the easiest thing to fix, and I read that quote as saying that's what they did. And they helped with latency by adding more datacenter locations closer to the end user.

I am working on a SaaS project, and even for our low-key and very resource-limited project (compared to Xbox, at least), we can throw hardware at anything that needs extra power for processing/storage.
The hard bit for us is the network, and we are network engineers :D
 
Scale is what makes it challenging; I don't think it's just an issue of having more power. Having more power at the scale MS needs, to supply both games like this and their standard Azure clients, is what makes it difficult, right?
 
You scale to get enough power. You can scale horizontally or vertically, but to get enough power you need to scale horizontally, imo. We scale horizontally by adding more servers to the farm, and we run a big-ass load balancer setup in front to make sure every drop of RAM and CPU is available :) And if we need more, we chuck on more servers and add them to our load-balancing pool.

The load balancer and the network to support it are then what we need to deal with. Which is a big enough headache for the measly 1M-ish devices we manage on it, and we can't push this stuff to Amazon, Cloudflare, Google etc. due to privacy restrictions.

Things like orchestration of the servers and so forth are of course a challenge, but that is software.
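For anyone who hasn't run one of these, the horizontal-scaling idea boils down to something like this toy sketch (Python; server names made up, and a real balancer adds health checks, weights, sticky sessions, etc.):

```python
import itertools

class RoundRobinBalancer:
    """Toy load balancer: spread requests evenly over a growable pool."""
    def __init__(self, servers):
        self._pool = list(servers)
        self._cycle = itertools.cycle(self._pool)

    def add_server(self, server):
        # "Chuck on more servers": grow the pool and rebuild the rotation.
        self._pool.append(server)
        self._cycle = itertools.cycle(self._pool)

    def route(self, request):
        # Hand each incoming request to the next server in the rotation.
        return next(self._cycle), request

lb = RoundRobinBalancer(["srv-1", "srv-2"])
lb.add_server("srv-3")                 # horizontal scale-out
for i in range(4):
    print(lb.route(f"req-{i}"))
```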
 
lol, well yeah, I'm sure it's not trivial for your company either. As much as it is the most straightforward route, there are real-estate challenges, power challenges (or you're exceeding the cooling provided by the data centre), financial challenges, etc. For MS, even at the amount they invest, $1B per month on data centres, it's still taking them 3+ years to build out to where they want to go.
 
That is sort of the point: unless MS puts Azure hardware on my LAN, they have to overcome bandwidth and latency challenges. They can overcome compute resources by adding more hardware, but it's probably not cost-effective to put Azure hardware in every home.

That's why, imo, streaming has very hard limitations.

This is OLD, but here is what Carmack wrote about it a lifetime ago

http://fabiensanglard.net/quakeSource/johnc-log.aug.htm

more fun stuff here

http://fabiensanglard.net/quakeSource/quakeSourcePrediction.php

His numbers/targets are not the same today as then, i.e. 200 ms latency and 15 fps, etc. But you still need to do everything he writes about, and in a very short time period.
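The core technique in those notes, client-side prediction, still looks roughly like this (a toy sketch, not Carmack's actual code; real engines also replay buffered player inputs and handle far more edge cases):

```python
# Toy dead reckoning: extrapolate from the last server state, then blend
# toward the authoritative answer when it finally arrives.
def predict(pos, vel, elapsed_s):
    # Linear extrapolation of position from the last known velocity.
    return tuple(p + v * elapsed_s for p, v in zip(pos, vel))

def reconcile(predicted, authoritative, blend=0.5):
    # Blend instead of snapping, to hide small prediction errors.
    return tuple(p + (a - p) * blend for p, a in zip(predicted, authoritative))

last_pos, last_vel = (0.0, 0.0, 0.0), (5.0, 0.0, 0.0)  # last server update
guess = predict(last_pos, last_vel, 0.2)   # 200 ms with no word from server
truth = (0.9, 0.1, 0.0)                    # late-arriving authoritative state
print(guess, "->", reconcile(guess, truth))
```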
 
That is sort of the point: unless MS puts Azure hardware on my LAN, they have to overcome bandwidth and latency challenges.

For a second, or 60 x 16.67 ms, let's imagine Microsoft could put an Azure server on your LAN. Fundamentally your engine and renderer still have to manage some things being computed locally, with controller lag from the player, local engine/simulation/compute lag, rendering lag, plus everything you've offloaded, which is x milliseconds behind because you still need to package up data, throw it to the TCP/IP stack, have it race across the network, reach the Azure server, have it compute whatever it's computing, package up that data, and throw it back across the network to the engine.

Fundamentally, and as has been covered many times in the thread, I think the biggest challenge is identifying what kind of high-compute problems you can even offload: where it doesn't matter if the result is x number of frames behind the local game engine, or where it's so predictable that the lag can be masked because you're asking for things to be calculated in advance. For example, when firing off a relatively slow-moving explosive device at a distant structure, you know the device's speed, trajectory and likelihood of intercept, so you can start calculating the destruction before it hits.
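That last idea is easy to put numbers on; a sketch with entirely hypothetical figures, just to show the window that makes pre-computation viable:

```python
# If the projectile's flight time exceeds the network round trip plus the
# server's compute time, the destruction result can be ready before impact.
def can_precompute(distance_m, speed_ms, rtt_ms, server_compute_ms):
    flight_ms = distance_m / speed_ms * 1000
    return flight_ms >= rtt_ms + server_compute_ms

# A rocket fired at a building 120 m away at 40 m/s flies for 3000 ms,
# easily covering a 60 ms round trip plus 500 ms of destruction solving.
print(can_precompute(120, 40, rtt_ms=60, server_compute_ms=500))   # True
```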
 
LAN performance of the cloud has already been demoed with the early Crackdown 3 examples. IIRC they were just running the physics on a PC and streaming. Was it an i7? Did we ever get details? Performance isn't an issue - here's an interview talking about the challenge of getting physics calculations distributed across servers. It's data that's the problem...

"Much has been made about the power of Microsoft’s Azure servers but we’ve only seen small benefits from the same (as seen in Forza 5’s Drivatar AI and Titanfall’s reliance on Azure). Crackdown 3 takes things to a completely different level though in terms of compute scale. How challenging was it to deliver on the demands of Crackdown 3?

With Crackdown 3, we focused on the hardest problem first: the distribution of a very complex physics simulation. Physics distribution comes with a long list of challenges: How to split the cost of a single physics simulation across multiple servers? How to minimize the inevitable latency introduced by the distribution? How to scale the system to use compute power on demand? And more importantly — once we solved the problem of simulating a huge number of physical objects in our cloud platform, how to send their state to an Xbox One through a reasonably low bandwidth (2Mbps – 4Mbps) internet connection?"
They were targeting 8 kB per frame at 60 fps back then. The latest trailer suggests this limit meant using far simpler data: predetermined pieces, fewer of them, and limited persistence. That's not at all a problem with processing power (save the economics of giving millions of users powerful processors to run their games). Real gameplay with maximised destruction might scale that up; we'll have to see.
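For scale, that 8 kB-per-frame budget lines up with the 2-4 Mbps quoted in the interview. A quick check, with a made-up per-object byte layout purely to show the order of magnitude:

```python
# 8 kB per frame at 60 fps, expressed as a connection speed.
budget_bytes = 8 * 1024
mbps = budget_bytes * 60 * 8 / 1_000_000
print(f"{mbps:.2f} Mbps")          # ~3.93 Mbps, the top of the 2-4 Mbps range

# Hypothetical quantized rigid-body state: id + position + orientation.
state_bytes = 26
print(budget_bytes // state_bytes, "object updates per frame")   # 315
```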
 