Server-based game augmentations. The transition to the cloud. Really possible?

How this all fits together is still a bit of a mystery, but just to keep what little is left of my mind clear on the subject, we have two "demos": the Build one and the Crackdown trailer. The first one shows projectiles hitting specific spots and the camera ranging around the game world, such as it is, while continuing to shoot stuff. I am under the assumption that all of the physics in the first demo is done in the Cloud and that it could be doing what Spencer describes here:

just having the local box [the player's console] get the positional data and the render, so, 'Okay I need to render this piece at this particular location. I don't know why.' The local box doesn't know why it's going to be at this location or where it's going to be in the next frame. That's all held in the cloud. You have this going back and forth between the two

and that something like that is potentially what was seen in the Crackdown trailer. Looking at the trailer, one could suppose that a Cloud destruction event could be triggered by, say, shooting some mega bazooka at specific points on the building, creating a collapse specific to that pattern of shots, and that the resulting collapse would be rendered based on the stream of "location and render" data. But we don't know the extent of the game world that would be affected by it: the building could collapse, but how much would the debris affect the rest of the world near it?

One question is whether there will be restrictions on where the player can roam while shooting (say, on rails) or whether it will be free-form like in the original demo. Will certain buildings be destructible and others not, and will there be other restrictions?

Last but not least, based on the idea of a stream of "positional data and renders", I take it this all has to be processed through the CPU, since it is generating the draw calls to be sent to the GPU. So the CPU will be receiving this stream of Cloud data, generating draw calls from it, and combining those with the calls for the rest of the world that the CPU is responsible for. Maybe throw in some GPGPU action just because. :)
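
To make that concrete, here is a rough sketch of what that per-frame merge might look like. Everything below is hypothetical (the packet layout, the object fields, the function names); it just illustrates the idea of the CPU turning a cloud stream of "mesh id + transform" entries into draw calls alongside the locally simulated objects.

[code]
import socket, struct

# Hypothetical packet layout: each entry is a mesh id (uint32) plus a
# position (3 floats) and an orientation quaternion (4 floats) = 32 bytes.
ENTRY = struct.Struct("<I3f4f")

def read_cloud_entries(sock):
    """Drain whatever positional data the cloud sent this frame.
    `sock` is assumed to be a non-blocking UDP socket."""
    entries = []
    try:
        data, _ = sock.recvfrom(65536)
    except BlockingIOError:
        return entries                      # nothing arrived this frame
    for off in range(0, len(data) - ENTRY.size + 1, ENTRY.size):
        mesh_id, x, y, z, qx, qy, qz, qw = ENTRY.unpack_from(data, off)
        entries.append((mesh_id, (x, y, z), (qx, qy, qz, qw)))
    return entries

def build_frame(sock, local_objects, submit_draw_call):
    """CPU-side frame step: local sim objects plus cloud-driven debris."""
    draw_list = []
    # Objects the local CPU simulates itself (player, AI, vehicles...).
    for obj in local_objects:
        draw_list.append((obj["mesh_id"], obj["position"], obj["orientation"]))
    # Debris the cloud simulates: we only get "render this mesh here".
    for mesh_id, pos, orient in read_cloud_entries(sock):
        draw_list.append((mesh_id, pos, orient))
    # One pass over the merged list to issue draw calls to the GPU.
    for mesh_id, pos, orient in draw_list:
        submit_draw_call(mesh_id, pos, orient)
[/code]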
 
The first one shows projectiles hitting specific spots and the camera ranging around the game world, such as it is, while continuing to shoot stuff.

The bazooka/rocket launcher that is controlled by the presenter is hitting random locations. I.e., any location on that building is fair game and can be shot with the gun, thus triggering random explosions and new physics calculations.

Later, in order to put a large amount of stress on the physics simulation, they triggered pre-placed rigged explosions so that there would be more simultaneous debris and physics calculations than was possible by just shooting a rocket launcher, at least the slow-firing one they were using.

Regards,
SB
 
Would it make sense for a game like No Man's Sky to use cloud based procedural generation of worlds and creatures?

Some of the devs were on Giant Bomb's late night live show this week and they talked some about spot-checking the results of the procedural generation. It was put in terms of visiting worlds in-game, getting three blue skies in a row and then going to their big database of planets to get a sky-high view of the galaxy and check there aren't in fact too many blue-sky worlds being generated. They have this visualized on a big screen with animated thumbnails of every visited planet. Unfortunately, from the discussion it wasn't clear if the planet generation occurs server side or on demand within the client. Either way there is an implication of needing to communicate with a server, either to retrieve the planet generation's seed or to upload the same. I suspect the procedural generation happens client side, so the central server's only responsibility is storing and passing on new systems and planets to other players. That way it should scale quickly when the game goes live with minimal server stress.
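
A minimal sketch of that split, assuming the generation is client side and the server is little more than a lookup table. All of the names and the property set are made up for illustration; the point is that a shared seed plus coordinates deterministically yields the same planet on every machine, so only discovery records need to cross the network.

[code]
import hashlib

GALAXY_SEED = 42  # assumed global seed shared by every client

def planet_properties(galaxy_seed, coords):
    """Deterministically derive planet properties from seed + coordinates.
    Every client computes the identical result, so no world data needs
    to travel over the network."""
    digest = hashlib.sha256(f"{galaxy_seed}:{coords}".encode()).digest()
    return {"sky_hue": digest[0] / 255.0,
            "ocean_fraction": digest[1] / 255.0,
            "species_count": 1 + digest[2] % 16}

class DiscoveryServer:
    """The only server-side state: which planets have been visited and
    by whom. No geometry or creatures are stored or simulated here."""
    def __init__(self):
        self.discoveries = {}                # coords -> discoverer name

    def report_discovery(self, coords, player):
        self.discoveries.setdefault(coords, player)

    def who_discovered(self, coords):
        return self.discoveries.get(coords)

# Two "clients" generating the same planet independently:
a = planet_properties(GALAXY_SEED, (12, -3, 7))
b = planet_properties(GALAXY_SEED, (12, -3, 7))
assert a == b                                # deterministic, no sync needed
[/code]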
 
Would it make sense for a game like No Man's Sky to use cloud based procedural generation of worlds and creatures?

Considering that the whole universe is a persistent online world, I think they would be using a well-controlled deterministic algorithm to generate everything. They will at least need to store an encyclopedia of who discovered which species in the world, but beyond that I think the local machines are more than capable of generating the world from that base set of data and algorithms, and integrating that tightly into your graphics pipeline could be far more efficient.

As for Crackdown, I think the economic viability argument is very important, and I think such a game could only work for Live Gold members, and preferably for online-only games, as those gamers can be expected to be online anyway - though it doesn't have to be online-only, just a new category of online that users need to get used to. And Gold subscriptions give you a guaranteed income that could be enough to pay for this.

The same fixed fee that pays for the online resources gets more potent as time goes on and servers upgrade - you always get more power for your money over time, so that games don't necessarily have to be end-of-lifed. Even if costs don't come down/capacity doesn't go up, as gamers have a fixed, private set of server resources and will only be playing one game at a time, it shouldn't matter.

In short, I am certain it can work, and it will be most interesting for managing an online persistent world with physics. You could have a universe that can be interacted with, altered, destroyed, built up, much like Minecraft, but taking that a lot further. But if that world needs to evolve on its own, like trees and plants growing, the overhead becomes independent of the player base, and that could be a danger to the game's longevity - not enough players means those resources could eventually become too expensive.
 
As for Crackdown, I think the economic viability argument is very important...
It is, but in this thread we should just look at the technical requirements to get it working. Devs being able to afford to implement it in the cloud, once a technical solution has been found, can be discussed in the non-technical Console forum.
 
What I found a bit light is the network side of it.

Yes, packet loss is not that high anymore, because we buffer the hell out of the network, but that means we are adding jitter and increasing RTT.

Now as for wifi: I do not care how great your wifi is, unless you're alone where you live, you are having issues. But wifi also buffers like hell, so you might not experience the packet drops, because they are retransmitted between the AP and the STA without your knowledge, adding jitter and more RTT.

This extra jitter and RTT might not be an issue, and it may have been accounted for, but if it's not, well.
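
To put rough numbers on how much delay buffering alone can add (the figures below are illustrative, not measurements): the worst-case queuing delay of a full buffer is simply its size divided by the link rate.

[code]
def queue_delay_ms(buffer_bytes, link_bits_per_s):
    """Worst-case time to drain a full FIFO buffer onto the link."""
    return buffer_bytes * 8 / link_bits_per_s * 1000

# A 256 KB buffer sitting in front of an 8 Mbit/s uplink:
print(queue_delay_ms(256 * 1024, 8_000_000))    # ~262 ms of added RTT
# The same buffer in front of a 100 Mbit/s link:
print(queue_delay_ms(256 * 1024, 100_000_000))  # ~21 ms
[/code]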
 
The effect of the network connection on physics can't be any worse than its effect on multiplayer. Cloud physics should thus be of similar quality relative to the amount of data, with the physics affected by the occasional glitch. Although if the physics are entirely server-based, there won't be any local correction or jumps, so the worst you'll get is a stutter in the falling debris while waiting for the most recent frame of data.
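
A sketch of that "stutter instead of jump" behaviour, under the assumption that the console simply renders the newest cloud snapshot it has and holds the previous one when nothing new has arrived (no extrapolation, no correction). The class and its methods are invented for illustration.

[code]
class CloudDebrisView:
    """Render-side buffer for server-authoritative debris transforms."""
    def __init__(self):
        self.latest = None        # newest snapshot received from the cloud
        self.latest_tick = -1

    def on_snapshot(self, tick, transforms):
        # Keep only the most recent server tick; late or out-of-order
        # snapshots are ignored rather than corrected for.
        if tick > self.latest_tick:
            self.latest_tick = tick
            self.latest = transforms

    def transforms_for_frame(self):
        # If the network hiccups, we just keep showing the last snapshot:
        # the debris appears to freeze (stutter) but never snaps or pops.
        return self.latest
[/code]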
 
What I found a bit light is the network side of it.

Yes, packet loss is not that high anymore, because we buffer the hell out of the network, but that means we are adding jitter and increasing RTT.

Now as for wifi: I do not care how great your wifi is, unless you're alone where you live, you are having issues. But wifi also buffers like hell, so you might not experience the packet drops, because they are retransmitted between the AP and the STA without your knowledge, adding jitter and more RTT.

This extra jitter and RTT might not be an issue, and it may have been accounted for, but if it's not, well.

So very, very this. The rise of ridiculous buffers in ISP-class switching gear has been a major issue in delivering smooth remote computing (RDP, ICA, PCoIP), particularly over WWAN (HSPA, LTE). Hell, even using my own wifi network at home for Steam streaming shows the challenges, and both endpoints are within 50 feet of each other.

The effect of the network connection on physics can't be any worse than its effect on multiplayer. Cloud physics should thus be of similar quality relative to the amount of data, with the physics affected by the occasional glitch. Although if the physics are entirely server-based, there won't be any local correction or jumps, so the worst you'll get is a stutter in the falling debris while waiting for the most recent frame of data.

I'd see it as more impactful, to be honest. In BF4 the 'levolution' events, or even just collapsing buildings in previous games, were always prone to killing you without warning as the client caught up with the server physics. Imagine dealing with that while racing away from bad guys as they knock chunks out of the walls. It's not impossible, but I can see it restricting the tech to very limited, controlled circumstances, such as restricting building collapse to certain classes of explosive (i.e. C4 collapses structural supports and calls on the cloud, while grenades just break glass locally).
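
Something like that routing rule is easy to sketch. The classes, thresholds and object names here are invented purely to illustrate the "C4 goes to the cloud, grenades stay local" idea:

[code]
from enum import Enum

class Explosive(Enum):
    GRENADE = "grenade"   # cosmetic damage only, simulated locally
    ROCKET = "rocket"     # medium debris, still local
    C4 = "c4"             # can collapse structure, handed to the cloud

CLOUD_CLASSES = {Explosive.C4}

def handle_detonation(explosive, position, local_sim, cloud_client):
    """Route a detonation to local physics or the cloud simulation."""
    if explosive in CLOUD_CLASSES:
        # Only structural collapses pay the network round trip.
        cloud_client.request_collapse(position)
    else:
        # Glass, small chunks, decals: resolved on the console this frame.
        local_sim.spawn_local_damage(explosive, position)
[/code]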
 
So very, very this. The rise of ridiculous buffers in ISP-class switching gear has been a major issue in delivering smooth remote computing (RDP, ICA, PCoIP), particularly over WWAN (HSPA, LTE). Hell, even using my own wifi network at home for Steam streaming shows the challenges, and both endpoints are within 50 feet of each other.

No there isn't. Buffer size vs throughput on high-end switches has remained pretty consistent all the way from 100Mbps to 100Gbps, and you only hit the buffers on congestion. Buffers are good; TCP is inherently micro-bursty. You don't want to be dropping packets for 500us on a link that is otherwise operating at 70% utilization over a 5-second sample, and that's what would happen without buffers.
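
Rough numbers for why that matters (illustrative, not vendor specs): a 500 us burst arriving at line rate has to sit somewhere, and the amount of buffer needed to absorb it scales with the link speed.

[code]
def burst_bytes(link_bits_per_s, burst_seconds):
    """Bytes arriving at line rate for the duration of a micro-burst."""
    return link_bits_per_s * burst_seconds / 8

# A 500 us burst at line rate:
print(burst_bytes(100e6, 500e-6))   # 100 Mbps ->  ~6.25 KB
print(burst_bytes(10e9, 500e-6))    # 10 Gbps  ->  ~625 KB
# Without at least that much buffer on the egress port, those packets
# are dropped even though 5-second average utilization looks fine.
[/code]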
 
No there isn't. Buffer size vs throughput on high-end switches has remained pretty consistent all the way from 100Mbps to 100Gbps, and you only hit the buffers on congestion. Buffers are good; TCP is inherently micro-bursty. You don't want to be dropping packets for 500us on a link that is otherwise operating at 70% utilization over a 5-second sample, and that's what would happen without buffers.

Buffers are fine on switches at customer sites, as they're usually appropriately sized (the switches, not the buffers), but when traffic has to wander the wider reaches things get messy, particularly for real-time services such as VoIP within a VDI session or the like, where stuttery delivery has sunk more than a few VDI projects I've worked on. Blaming switch buffers as opposed to simple congestion may have been oversimplifying things, but additional latency is no friend to real-time services.
 
Buffers are fine on switches at customer sites, as they're usually appropriately sized (the switches, not the buffers), but when traffic has to wander the wider reaches things get messy, particularly for real-time services such as VoIP within a VDI session or the like, where stuttery delivery has sunk more than a few VDI projects I've worked on. Blaming switch buffers as opposed to simple congestion may have been oversimplifying things, but additional latency is no friend to real-time services.

No, sorry, this is wrong. If you're hitting congestion on a consistent basis and it's breaking your applications, then your network is designed wrong. Generally speaking, buffers and QoS are about which traffic you sacrifice in the name of other traffic; that's why queuing methods like deficit weighted round robin, LLQ, PQ etc. exist. On the internet you're not doing QoS; buffers just provide a FIFO queue to store packets for, as I said, handling the inherently bursty nature of TCP. You then have to consider the nature of distributed switch architectures, which can lead to massive oversubscription that lands in the egress queue. Would you rather just drop those packets on the backplane, or do you want, as a consumer, to be paying for ~20-30 times more expensive hardware for centrally processed network devices (rough price per port between a DC switch like Arista/Cisco and a Cisco ASR equivalent port)?

If you look across switches and routers from 100Mb to 100Gb interfaces, you will see a standard buffering target of 10-15ms.


There isn't some magical customer/carrier boundary here. If we are just talking about the internet, there is no intelligent QoS, only FIFO, so if your flow is being buffered significantly it is also likely being dropped. If you didn't have buffers, you would just be dropped more.
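
For reference, here is a toy version of the deficit round robin idea mentioned above (no real QoS classes, just two FIFO queues and a quantum), to show how a scheduler decides which traffic to serve first without unbounded buffering. This is a generic textbook sketch, not how any particular vendor implements it.

[code]
from collections import deque

class DeficitRoundRobin:
    """Toy DRR scheduler: each queue earns `quantum` bytes of credit per
    round and may send packets while it has enough credit left."""
    def __init__(self, quantum=1500):
        self.quantum = quantum
        self.queues = {}        # name -> deque of packet sizes (bytes)
        self.deficit = {}       # name -> accumulated credit

    def enqueue(self, name, packet_size):
        self.queues.setdefault(name, deque()).append(packet_size)
        self.deficit.setdefault(name, 0)

    def schedule_round(self):
        """One DRR round; returns a list of (queue, packet_size) sent."""
        sent = []
        for name, q in self.queues.items():
            if not q:
                continue
            self.deficit[name] += self.quantum
            while q and q[0] <= self.deficit[name]:
                size = q.popleft()
                self.deficit[name] -= size
                sent.append((name, size))
            if not q:
                self.deficit[name] = 0   # empty queues don't hoard credit
        return sent

# Example: bulk TCP competing with small game/VoIP packets.
drr = DeficitRoundRobin(quantum=1500)
for _ in range(4):
    drr.enqueue("bulk", 1500)
for _ in range(4):
    drr.enqueue("game", 200)
print(drr.schedule_round())   # both classes get served each round
[/code]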
 
No, sorry, this is wrong. If you're hitting congestion on a consistent basis and it's breaking your applications, then your network is designed wrong. Generally speaking, buffers and QoS are about which traffic you sacrifice in the name of other traffic; that's why queuing methods like deficit weighted round robin, LLQ, PQ etc. exist. On the internet you're not doing QoS; buffers just provide a FIFO queue to store packets for, as I said, handling the inherently bursty nature of TCP. You then have to consider the nature of distributed switch architectures, which can lead to massive oversubscription that lands in the egress queue. Would you rather just drop those packets on the backplane, or do you want, as a consumer, to be paying for ~20-30 times more expensive hardware for centrally processed network devices (rough price per port between a DC switch like Arista/Cisco and a Cisco ASR equivalent port)?

If you look across switches and routers from 100Mb to 100Gb interfaces, you will see a standard buffering target of 10-15ms.

There isn't some magical customer/carrier boundary here. If we are just talking about the internet, there is no intelligent QoS, only FIFO, so if your flow is being buffered significantly it is also likely being dropped. If you didn't have buffers, you would just be dropped more.

I'm not making my point very well. I agree that if we're talking about a customer site then network latency and all that joy is controlled by the customer and can be dealt with by swapping kit for the right solution. Buffering is also an absolutely essential part of any switching system, and it would be utterly daft to drop it.

My concern around buffering is not that it's bad or a negative, but that it increases latency in unpredictable ways when you are sending data across the internet. For most tasks (particularly streaming) that doesn't matter, or even improves performance, but it does impact real-time services in ways that are unpredictable and hard to model, especially as the endpoints are standard consumer internet connections which enjoy no SLAs or special ISP attention in the way that corporate connections do.

How much any of this matters to Crackdown is impossible to answer, as they haven't described what is offloaded to the cloud (is it all explosions, a subset that destroy buildings, or something else?). It will be fun to see, as I want this to work; I just dislike the 'BUILD demo works = works on your home ADSL' idea that has been waved around since the announcement that the BUILD demo was part of the Crackdown research.
 
The bazooka/rocket launcher that is controlled by the presenter is hitting random locations.

Yes, sorry, I didn't mean specific in terms of predetermined shots; a better phrase might have been user-controlled shots. Whether or not there is such freedom in Crackdown is another story, although I would assume that is what MS is shooting for, so to speak.
 
Would it make sense for a game like No Man's Sky to use cloud based procedural generation of worlds and creatures?
No, you could see the game struggle to generate parts of it fast enough in the video, and that's on the local machine! Imagine if it was:
1. send position to server
2. generate scene(*)
3. send back result over the internet // lag

OK, 1 & 2 can be alleviated somewhat, because I assume the machines, being very powerful, would be generating a larger area than what's possible on a single PC/console, but #3 would kill the game: things would be popping into the scene (unless you had that Google superfast internet connection).

It would be possible to use something like Gaikai, but then you have a low-quality picture.

(*) Near impossible to cache the results fully since the data is immense:
Earth = 51 PB of storage at a resolution of 10x10cm (1 byte per pixel), just for the colour.
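
That 51 PB figure checks out, roughly (back-of-the-envelope, treating the Earth's surface as flat 10 cm pixels):

[code]
EARTH_SURFACE_M2 = 5.1e14          # ~510 million km^2
PIXELS_PER_M2 = 10 * 10            # 10 cm x 10 cm pixels
BYTES_PER_PIXEL = 1                # one byte of colour

total_bytes = EARTH_SURFACE_M2 * PIXELS_PER_M2 * BYTES_PER_PIXEL
print(total_bytes / 1e15)          # ~51 petabytes
[/code]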
 
What if, instead of sending a mesh with the data for every single vertex for every piece of debris, they define objects that are stored locally (the smallest unit of an item that can be destroyed), and send only the two pieces of data that describe its position and orientation? You could shoot a wall with a gun that fires bullets and break off very small pieces, which would be processed locally because you wouldn't be collapsing structures. But if you were to shoot a wall with a rocket, you're going to destroy a sizeable piece. You could define larger pieces of debris as collections of the smallest unit. That would minimize the amount of data you'd have to send. I don't know how Battlefield describes debris when you're destroying structures. It seems fairly primitive.
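
A rough idea of how small such an update could be, assuming a locally stored object library and a made-up wire format (object id, position, quantized orientation):

[code]
import struct

# Hypothetical per-object update: uint32 object id, three float32s for
# position, four int16s for a quantized unit quaternion = 24 bytes.
DEBRIS_UPDATE = struct.Struct("<I3f4h")

def pack_update(object_id, position, quaternion):
    """Quantize the orientation and pack one debris update."""
    qx, qy, qz, qw = (int(c * 32767) for c in quaternion)
    return DEBRIS_UPDATE.pack(object_id, *position, qx, qy, qz, qw)

msg = pack_update(1042, (12.5, 3.0, -7.25), (0.0, 0.0, 0.7071, 0.7071))
print(len(msg))   # 24 bytes per piece of debris, versus kilobytes of mesh
[/code]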
 
I remember reading about how RTS games handle hundreds of units being synced in a multiplayer session, and how the math would actually behave differently on different CPUs; it boils down to making the simulation totally deterministic so that the session participants stay in sync. I'd imagine the synchronization of cloud physics would be similar to this.
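
That's the classic lockstep model: all machines run the same deterministic simulation and exchange only the inputs, then compare a checksum to detect drift. A minimal sketch (invented state layout, integer fixed-point math so different CPUs and compilers can't diverge on floating point):

[code]
import hashlib

SCALE = 1000  # fixed-point: 1 unit = 1/1000th, avoids float divergence

def step(state, inputs):
    """Advance every unit by its velocity plus this tick's input, using
    integer math only so the result is identical on any machine."""
    new_state = []
    for (x, y, vx, vy), (ix, iy) in zip(state, inputs):
        vx, vy = vx + ix, vy + iy
        new_state.append((x + vx, y + vy, vx, vy))
    return new_state

def checksum(state):
    """Hash the simulation state; peers compare these each tick."""
    return hashlib.md5(repr(state).encode()).hexdigest()

# Two "machines" fed the same inputs stay bit-identical:
state_a = state_b = [(0, 0, 1 * SCALE, 0), (5 * SCALE, 5 * SCALE, 0, 0)]
inputs = [(0, 2), (1, 0)]
state_a, state_b = step(state_a, inputs), step(state_b, inputs)
assert checksum(state_a) == checksum(state_b)   # still in sync
[/code]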
 
What if, instead of sending a mesh with the data for every single vertex for every piece of debris, they define objects that are stored locally (the smallest unit of an item that can be destroyed), and send only the two pieces of data that describe its position and orientation?
Already suggested in this thread. ;)
 
So like, we went from "cloud physics is impossible" to "it'll be too expensive for the developers" to "it'll be too expensive for MSFT".

I'd argue it's not, otherwise why would they give it for free?

what's next? "cloud physics will cause [strike]global warming[/strike] climate change?"

Honestly this slippery slope is getting way too slippery.

Excuse me. Where did I say it was too expensive? I was simply exploring the costs involved. Please don't be the kind of poster that tries to distort the post he's responding to.
 
Not that it will affect me much, since I seldom play really old games, but what happens to old games that need the offload capability the cloud gives, 2-3-4-5 years down the road? Publishers close online multiplayer servers today; what about single-player games that need to offload computation?

If it's just cooler destruction of a building, fine, you can use a canned offline version, but are there things that would break the game if the cloud were not available?

I am quite sure that there is a clause in the cloud provider's contract that says it will not run the service for the developer for free for all eternity.
 