The cloud is just over the horizon [2018-2019]

Discussion in 'Console Technology' started by Lalaland, Jul 16, 2018.

  1. Ike Turner

    Veteran Regular

    Joined:
    Jul 30, 2005
    Messages:
    1,607
    Likes Received:
    1,161
    It obviously would if it were part of the engine, but as of today it still isn't implemented in it. MS does have its own internal branch of UE4 for its 1st party studios, but Crackdown is an outlier here, as it has always been developed by a third-party studio (well, several studios now, given the mess it has been), and the Cloudgine tech used in the game wasn't owned by MS (they own the game's IP and not much else). Cloudgine (via Reagent, which was a sub of Cloudgine) was supposedly only tasked to do consulting and implement their cloud tech into the game. A dev postmortem once the game is out would be fascinating.
     
    #41 Ike Turner, Dec 22, 2018
    Last edited: Dec 22, 2018
    egoless likes this.
  2. Shifty Geezer

    Shifty Geezer uber-Troll!
    Moderator Legend

    Joined:
    Dec 7, 2004
    Messages:
    39,581
    Likes Received:
    9,605
    Location:
    Under my bridge
    I don't see the logic here, though. MS were using Cloudgine for Crackdown, on Unreal, presumably under contract. Epic buy Cloudgine. Why would MS lose anything on account of that acquisition? Did Epic pull Cloudgine from under MS's feet, and, despite the prominent collaboration and the two companies thanking each other for their partnership, leave MS unable to use the netcode, having to rewrite it without conflicting with the Cloudgine tech already in the game that now wasn't licensed?
     
    milk likes this.
  3. iroboto

    iroboto Daft Funk
    Legend Regular Subscriber

    Joined:
    Mar 6, 2014
    Messages:
    6,937
    Likes Received:
    5,221
    Yeah, that part wasn't clear to me either, but I'm pretty sure that's what I read in an interview. Will need to look around for it.
     
    egoless likes this.
  4. milk

    Veteran Regular

    Joined:
    Jun 6, 2012
    Messages:
    2,605
    Likes Received:
    2,051
    This game's development was so messy they couldn't even get a contract written right!
     
    egoless likes this.
  5. milk

    Veteran Regular

    Joined:
    Jun 6, 2012
    Messages:
    2,605
    Likes Received:
    2,051
    And the fact that MS hired the Cloudgine guys as a one-off job for this one title, and let them be bought up by Epic, goes to show the actual level of commitment to "Teh Cloud". They were more than happy to have just one title use cloud-augmented physics for the sake of proving their marketing BS wasn't BS. Kind of proves not even they believed in the value of that tech.
     
    lefantome and egoless like this.
  6. iroboto

    iroboto Daft Funk
    Legend Regular Subscriber

    Joined:
    Mar 6, 2014
    Messages:
    6,937
    Likes Received:
    5,221
    So, what I could find on the topic: this is probably where I read that it was 'rebuilt'.


    Honestly though, looking into it, there could have been a variety of reasons they needed to restart after the buyout, somewhat but not directly related to the buyout of Cloudgine itself. Given how Matt Booty sort of sidesteps the questions there on whether they are still using Cloudgine or went with their own solution, I'm willing to believe there is more to the story.

    So perhaps the reduction in physics also has to do with meeting the lowest common denominator server-side; the original demos were probably using more than what would be available to players.

    https://www.onmsft.com/news/xbox-ones-crackdown-3-loses-co-developer-and-original-creator
    https://wccftech.com/crackdown-3-sumo-digital-multiplayer/
     
    BRiT likes this.
  7. iroboto

    iroboto Daft Funk
    Legend Regular Subscriber

    Joined:
    Mar 6, 2014
    Messages:
    6,937
    Likes Received:
    5,221
    I think it's fair to say they built a solution to a problem that didn't exist and then wanted to showcase it somewhere. But it was also a different time: MS was still finding its footing as a whole as Satya took over and made changes. He's only been in charge since 2014, so it's already been a dramatic shift. It's entirely possible that the desire to own an in-house solution for this had larger implications beyond games, and I imagine there was a degree of wondering why they didn't buy out Cloudgine themselves (I imagine their strategy wasn't sorted yet). But as some articles write, there wasn't enough Azure capacity to serve absolutely everyone with Crackdown 3, and I think that's where they ran into a lot of issues. The constant investment in cloud is making this more of a reality than it was before.
    Still curious to see the end product before I judge it.
     
  8. Shifty Geezer

    Shifty Geezer uber-Troll!
    Moderator Legend

    Joined:
    Dec 7, 2004
    Messages:
    39,581
    Likes Received:
    9,605
    Location:
    Under my bridge
    Well, sure. The demos were only ever running on LAN, and we criticised at the time that this wasn't at all representative of what could be achieved over WAN. The would-be reduction in physics, bringing the game in line with other games, just means there's nothing on MS's end pushing the cloud beyond what was already happening. The incredible 'power of the cloud' is and always was severely hampered by internet infrastructure, and where Crackdown 3 suggested clever solutions to this (we discussed at length alternative representations of that dense data to make it possible), it looks like that hasn't happened.

    Eventually there'll be super-fast 100+ megabit connections to very local servers, and MS will be at the forefront of that. There are probably some places in Asia where the Crackdown demos could be realised at the same sort of scale. That's still years away from being any sort of norm, and it isn't happening this generation despite MS's claims to the contrary.
     
  9. iroboto

    iroboto Daft Funk
    Legend Regular Subscriber

    Joined:
    Mar 6, 2014
    Messages:
    6,937
    Likes Received:
    5,221
    I get where you are going, but I'm not sure if it's bandwidth or available processing power that they need to pull this off. I'm thinking it's the latter, because we're going to get a range of connections, but online processing power has got to be somewhat localized, and even though Azure is global, there's bound to be some areas where they have significantly fewer resources than others.
     
  10. DSoup

    DSoup meh
    Legend Veteran Subscriber

    Joined:
    Nov 23, 2007
    Messages:
    10,431
    Likes Received:
    5,190
    Location:
    London, UK
    It's bandwidth, latency and power. Only one of these is within the cloud provider's control.
     
    JPT likes this.
  11. JPT

    JPT
    Veteran

    Joined:
    Apr 15, 2007
    Messages:
    1,881
    Likes Received:
    261
    Location:
    Oslo, Norway
    Power is easy to fix; for MS it's just a matter of throwing money at it.

    Bandwidth is harder, because the people buying a streaming console probably have price as a major reason, i.e. they will probably not pay to upgrade their bandwidth. And it has diminishing returns.

    Latency can be coped with to a certain degree, given some network knowledge.

    But if there is 10 ms of latency between you and the server, well, you can't do much about it; it's just a question of what trade-offs you are willing to make. 10 ms is about 1/3 of a frame at 30 fps, or 2/3 at 60 fps, spent just moving data. You can predict more / further ahead, which increases the probability of prediction errors. You can bulk more data into each shipment, which adds latency. You can drop the amount of data in each shipment, which means less processing can be offloaded.
    These are not unknowns in MP games, but now you are trying to offload added compute, and the price is latency.
    Not much you can do about time, unless next-generation consoles include time warps or black holes...
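
    A back-of-envelope sketch of that budget in Python (the 10 ms one-way figure is from above; the server compute time is an illustrative assumption):

    # Round-trip budget for offloaded compute. 10 ms one-way is the
    # figure from the post; server compute time is an assumption.
    ONE_WAY_MS = 10.0
    SERVER_COMPUTE_MS = 5.0          # assumed time for the offloaded work

    round_trip_ms = 2 * ONE_WAY_MS + SERVER_COMPUTE_MS

    for fps in (30, 60):
        frame_ms = 1000.0 / fps
        print(f"{fps} fps: one-way transit is {ONE_WAY_MS / frame_ms:.2f} frames; "
              f"full round trip is {round_trip_ms / frame_ms:.2f} frames")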
     
    #51 JPT, Dec 23, 2018
    Last edited: Dec 23, 2018
    egoless likes this.
  12. Shifty Geezer

    Shifty Geezer uber-Troll!
    Moderator Legend

    Joined:
    Dec 7, 2004
    Messages:
    39,581
    Likes Received:
    9,605
    Location:
    Under my bridge
    It's worth noting that the first rule of net data is to keep updates to a minimum - as little data as possible, as infrequently as possible. Apparently the most data-intensive games at the moment peak at around 100 MB per hour, so ~30 kB/second, or 240 kbps. The average internet connection, even the slowest internet connection, is far, far faster than 240 kbps, but that's still the budget games are built to.
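
    For reference, the arithmetic behind those numbers (a trivial sketch; the 100 MB/hour figure is the one quoted above, rounded up slightly in the post):

    MB_PER_HOUR = 100
    bytes_per_sec = MB_PER_HOUR * 1_000_000 / 3600     # ~27.8 kB/s
    kbps = bytes_per_sec * 8 / 1_000                   # ~222 kbps
    print(f"~{bytes_per_sec / 1000:.0f} kB/s, ~{kbps:.0f} kbps")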
     
  13. iroboto

    iroboto Daft Funk
    Legend Regular Subscriber

    Joined:
    Mar 6, 2014
    Messages:
    6,937
    Likes Received:
    5,221
    I suspect wherever they can serve Project xCloud they should be able to serve this. I can't see the requirements of this being higher than a pure streaming solution.

    Once again, I don't know what 'improvements to infrastructure' entails. If the Crackdown cloud servers require multiple servers to serve a single instance, say 6 servers for 6 zones, I think you're looking at more of an infrastructure issue than a bandwidth or latency issue. I'm not saying that bandwidth and latency aren't at play here, but looking at the solution they devised, and at how many players they intend to support (most Game Pass games net 1 million at launch), you're going to need a metric ton of 6-server clusters to support that many instances of multiplayer globally.
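
    A purely hypothetical sizing exercise with those numbers (6 servers per instance and ~1M players at launch are from the post; peak concurrency and instance size are my assumptions):

    PLAYERS_AT_LAUNCH = 1_000_000
    PEAK_CONCURRENT_FRACTION = 0.10     # assume 10% online at once
    PLAYERS_PER_INSTANCE = 10           # assumed match size
    SERVERS_PER_INSTANCE = 6            # one per zone, per the post

    concurrent = PLAYERS_AT_LAUNCH * PEAK_CONCURRENT_FRACTION
    instances = concurrent / PLAYERS_PER_INSTANCE
    print(f"{instances:,.0f} instances -> "
          f"{instances * SERVERS_PER_INSTANCE:,.0f} physics servers at peak")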

    Just to reiterate the quote:
     
    #53 iroboto, Dec 23, 2018
    Last edited: Dec 23, 2018
  14. JPT

    JPT
    Veteran

    Joined:
    Apr 15, 2007
    Messages:
    1,881
    Likes Received:
    261
    Location:
    Oslo, Norway
    Processing power is the easiest thing to fix, and I read that quote as saying that's what they did. And they helped with latency by adding more datacenter locations closer to the end user.

    I am working on a SaaS project, and even for our low-key and very resource-limited project (compared to Xbox at least), we can throw hardware at anything that needs extra power for processing/storage.
    The hard bit for us is the network, and we are network engineers :D
     
    egoless likes this.
  15. iroboto

    iroboto Daft Funk
    Legend Regular Subscriber

    Joined:
    Mar 6, 2014
    Messages:
    6,937
    Likes Received:
    5,221
    Scale is what makes it challenging; I don't think it's just an issue of having more power. Having enough power at the scale MS needs, to supply both games like this and their standard Azure clients, is what makes it difficult, right?
     
    milk likes this.
  16. JPT

    JPT
    Veteran

    Joined:
    Apr 15, 2007
    Messages:
    1,881
    Likes Received:
    261
    Location:
    Oslo, Norway
    You scale to achieve having enough power. You can scale horizontally or vertically, but to get enough power you need to scale horizontally, imo. We scale horizontally by adding more servers to the farm, and we run a big-ass load balancer setup in front to make sure every drop of RAM and CPU is available :) And if we need more, we chuck on more servers and add them to our load-balancing pool.

    The load balancer, and the network to support it, are then what we need to deal with. That's a big enough headache for the measly 1M-ish devices we manage on it, and we can't push this stuff to Amazon, Cloudflare, Google etc. due to privacy restrictions.

    Things like orchestration of the servers and so forth are of course a challenge, but that is software.
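
    A minimal sketch of that horizontal-scaling pattern (a toy round-robin pool; a real setup uses a dedicated load balancer with health checks, as described above):

    import itertools

    class ServerPool:
        def __init__(self, servers):
            self._servers = list(servers)
            self._rotation = itertools.cycle(self._servers)

        def add(self, server):
            # "Chuck on more servers": extend the pool, restart the rotation.
            self._servers.append(server)
            self._rotation = itertools.cycle(self._servers)

        def next_server(self):
            return next(self._rotation)

    pool = ServerPool(["app-01", "app-02"])
    pool.add("app-03")                       # need more power? add a box
    print([pool.next_server() for _ in range(6)])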
     
    egoless likes this.
  17. iroboto

    iroboto Daft Funk
    Legend Regular Subscriber

    Joined:
    Mar 6, 2014
    Messages:
    6,937
    Likes Received:
    5,221
    lol, well yeah, I'm sure it's not trivial for your company either. As much as it is the most straightforward route, there are real-estate challenges, power challenges (or you're exceeding the cooling provided by the data centre), financial challenges, etc. I think for MS, given the amount they invest - $1B per month on data centres - even at that rate it's still taking them 3+ years to build out to where they want to go.
     
  18. JPT

    JPT
    Veteran

    Joined:
    Apr 15, 2007
    Messages:
    1,881
    Likes Received:
    261
    Location:
    Oslo, Norway
    That is sort of the point: unless MS puts Azure hardware on my LAN, they have to overcome bandwidth and latency challenges. They can overcome compute limits by adding more hardware, but it's probably not cost-effective to put Azure hardware in every home.

    That's why, imo, streaming has very hard limitations.

    This is OLD, but here is what Carmack wrote about it a lifetime ago:

    http://fabiensanglard.net/quakeSource/johnc-log.aug.htm

    more fun stuff here

    http://fabiensanglard.net/quakeSource/quakeSourcePrediction.php

    His numbers/targets are not the same today as they were then - i.e. 200 ms latency, 15 fps, etc. - but you still need to do everything he writes about, and in a very short time period.
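
    A toy version of the prediction/reconciliation loop Carmack describes (names and structure are illustrative, not from Quake or any engine): the client simulates its own inputs immediately, then rewinds and replays any unacknowledged inputs whenever an authoritative server state arrives.

    def apply_input(pos, inp):
        return pos + inp                   # stand-in for real movement code

    class PredictingClient:
        def __init__(self):
            self.pos = 0.0
            self.pending = []              # (sequence, input) not yet acked
            self.seq = 0

        def local_input(self, inp):
            self.seq += 1
            self.pending.append((self.seq, inp))
            self.pos = apply_input(self.pos, inp)   # predict immediately

        def server_update(self, acked_seq, server_pos):
            # Rewind to the authoritative state, drop acked inputs,
            # then replay everything still in flight.
            self.pending = [(s, i) for (s, i) in self.pending if s > acked_seq]
            self.pos = server_pos
            for _, inp in self.pending:
                self.pos = apply_input(self.pos, inp)

    c = PredictingClient()
    c.local_input(1.0); c.local_input(1.0); c.local_input(1.0)
    c.server_update(acked_seq=1, server_pos=0.9)   # server disagrees slightly
    print(c.pos)                                   # 2.9: corrected, then replayed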
     
    #58 JPT, Dec 24, 2018
    Last edited: Dec 24, 2018
    DSoup, iroboto and egoless like this.
  19. DSoup

    DSoup meh
    Legend Veteran Subscriber

    Joined:
    Nov 23, 2007
    Messages:
    10,431
    Likes Received:
    5,190
    Location:
    London, UK
    For a second, or 60 x 16.67 ms, let's imagine Microsoft could put an Azure server on your LAN. Fundamentally, your engine and renderer still have to manage some things being computed locally - controller lag from the player, local engine / simulation / compute lag, rendering lag - plus everything you've offloaded, which is x milliseconds behind because you still need to package up the data, throw it at the TCP/IP stack, have it race across the network, reach the Azure server, have it compute whatever it's computing, package up that data, and throw it back across the network to the engine.

    Fundamentally, and this has been covered many times in the thread, I think the biggest challenge is identifying what kinds of high-compute problems you can even offload: cases where it doesn't matter if the result is x frames behind the local game engine, or where it's so predictable that the lag can be masked because you're asking for things to be calculated in advance. For example, when firing a relatively slow-moving explosive device at a distant structure, because you know the device's speed, trajectory and likelihood of intercept, you can start calculating the destruction before it hits.
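
    A sketch of that masking idea (all numbers and names here are hypothetical): the time-to-impact is known at launch, so the cloud request can be fired early enough that the result arrives before the projectile does.

    def time_to_impact_ms(distance_m, speed_m_s):
        return distance_m / speed_m_s * 1000.0

    CLOUD_ROUND_TRIP_MS = 80.0    # assumed send + compute + receive time

    impact_ms = time_to_impact_ms(distance_m=120.0, speed_m_s=40.0)  # 3000 ms
    if impact_ms > CLOUD_ROUND_TRIP_MS:
        print(f"request destruction at launch; result lands "
              f"{impact_ms - CLOUD_ROUND_TRIP_MS:.0f} ms before impact")
    else:
        print("too fast to precompute; fall back to a local approximation")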
     
  20. Shifty Geezer

    Shifty Geezer uber-Troll!
    Moderator Legend

    Joined:
    Dec 7, 2004
    Messages:
    39,581
    Likes Received:
    9,605
    Location:
    Under my bridge
    LAN performance of the cloud has already been demoed with the early Crackdown 3 examples. IIRC they were just running the physics on a PC and streaming. Was it an i7? Did we ever get details? Performance isn't an issue - here's an interview talking about the challenge of getting physics calculations distributed across servers. It's data that's the problem...

    "Much has been made about the power of Microsoft’s Azure servers but we’ve only seen small benefits from the same (as seen in Forza 5’s Drivatar AI and Titanfall’s reliance on Azure). Crackdown 3 takes things to a completely different level though in terms of compute scale. How challenging was it to deliver on the demands of Crackdown 3?

    With Crackdown 3, we focused on the hardest problem first: the distribution of a very complex physics simulation. Physics distribution comes with a long list of challenges: How to split the cost of a single physics simulation across multiple servers? How to minimize the inevitable latency introduced by the distribution? How to scale the system to use compute power on demand? And more importantly — once we solved the problem of simulating a huge number of physical objects in our cloud platform, how to send their state to an Xbox One through a reasonably low bandwidth (2Mbps – 4Mbps) internet connection?"
    They were targeting 8 kB per frame at 60 fps back then. The latest trailer suggests this limit meant using far simpler data - predetermined pieces, fewer of them, and limited persistence - which is not at all a problem of processing power (save the economics of giving millions of users powerful processors to run their games). Real gameplay with maximised destruction might scale that up; we'll have to see.
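
    A quick sanity check on that budget (8 kB/frame at 60 fps is from the interview; the bytes-per-object values are assumptions, just to show why the state has to be so compact):

    FRAME_BYTES = 8 * 1024
    FPS = 60

    mbps = FRAME_BYTES * FPS * 8 / 1_000_000
    print(f"{mbps:.2f} Mbps")                 # ~3.93 Mbps: top of the quoted 2-4 Mbps range

    for bytes_per_object in (64, 16, 6):      # full transform vs heavily quantized
        print(f"{bytes_per_object} B/object -> "
              f"{FRAME_BYTES // bytes_per_object} objects per frame")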
     