Server-based game augmentations. The transition to cloud. Really possible?

I'm looking at it from the opposite side: the player. Doesn't the power of the cloud depend on both server and client? How can devs count on the client's connection?
Edit: So you're playing your shiny new game, and for some reason your internet connection drops or slows down, and all of a sudden features start dropping off? Or, because your internet speed fluctuates, the game pauses and a message comes up saying "please wait while we buffer the cloud"?

That's something devs have been faced with for decades. It's why we have mip maps and LODs, and now things like tessellation. Devs are typically quite skilled at transitioning gracefully across thresholds of fidelity. That said, there will likely someday be games requiring a connection at all times due to cloud integration, in which case the game can just hop into a suspended state the moment it loses the connection.
 

I think if people realistically manage their expectations about what is possible, it could have some benefits. But no matter how good devs are, there is only so much that data over a network can compensate for versus actual hardware in the machine, if that's what people are hoping for. I just can't see devs making key features of their games that would normally be very hardware-intensive dependent on the cloud at this point.
 
It seems you are making many assumptions. I don't think anyone is implying that every game will be forced to use the cloud; I certainly wasn't implying that. Also, games like Diablo 3 that can be played single-player have shown that you can be a big success even if players are forced to be online in single-player games. I am not saying it is the best or worst choice; that would be up to the devs/publishers and their unique circumstances.
Diablo 3 needs the internet for DRM purposes. It is not an example of server-based augmentation; the game is run and processed on your hardware locally.
You didn't entirely understand the subject of the discussion, which is why you bring up online gaming and unrelated examples like Diablo 3.
Cloud-based game augmentation is something different: a portion of your game is processed on your local hardware, and another portion is processed on an external network source.

The transition to cloud (see thread title) suggests an expanding trend of cloud-based augmentation in games, not just a few rare examples here and there for a niche market. If it's only the latter, there is no trend, we aren't transitioning yet, and the example is off-topic.
I am simply saying that there is evidence consumers will accept it. You may not personally like that but that is irrelevant. Businesses will not make decisions based on what you like. Also remember this is just illustrating a worst case scenario. They could degrade gracefully as well.
Your "evidence" does not apply for the reasons I described earlier.

I said in the worst case scenario. Maybe there won't be any issues at all.
Maybe

Where did I tell you to deal with it? I am not trying to convince you to accept or deal with anything. I am showing that devs/consumers have faced similar problems in the past. Read what I actually said:

"People will not like it but most people will deal with it". There is evidence of this already. Many people deal with lag/disconnecting in online games already. I didn't say that all people would or that you specifically would need to deal with it... I am simply stating that people are already faced with these problems all around the world.

I didn't tell you to deal with it. I was illustrating the worst-case scenario. Where did I say it was a non-problem? You are making a lot of assumptions...

I am simply stating that even if it isn't perfect many customers will accept it because we have evidence that similar situations have been accepted in the past.

Your whole argument is based on the assumption that online gaming and the rest of gaming experiences have evolved in the same manner and are perceived and treated in the same way, and hence that people are automatically willing to accept problems related to network inefficiencies in gaming experiences that previously did not have to deal with them.

Yes, but these consoles will be around for years. Also, basing our conclusions on what is available today is illogical, because the possibilities will improve as the consoles age.

My conclusions relate to the current situation only, not the future situation, which, like you, I also expect to change and improve, and hence to support cloud-based game augmentation as it is meant to be.
 
Let me throw something out there.

There is a broad spectrum between what Gaikai does and running everything locally.
For example, I could theoretically run my entire game on the server minus the graphics, and instead of streaming video, stream object state.
That seems no less feasible to me than the Gaikai model, unless you think streaming video is a lot less data than streaming game state.
Since all the physics/AI etc. is in the cloud, you can use all the local resources to make it look prettier.

I have perhaps 40K a frame at 30fps on a 10Mbit link; that's a lot of state, and the only state I need is the stuff immediately around me. Most online games manage with less than 1/10th of that.
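
To put rough numbers on that (a toy calculation; the 20-byte record layout is my assumption, not any real protocol):

Code:
#include <cstdint>
#include <cstdio>

struct ObjectState {          // hypothetical wire format
    uint32_t id;              // object identifier
    int16_t  pos[3];          // quantized position
    int16_t  orient[4];       // quantized quaternion
    uint16_t flags;           // animation/physics flags
};                            // 20 bytes per object

int main() {
    const double linkBitsPerSec = 10e6;   // 10 Mbit downstream
    const double framesPerSec   = 30.0;
    const double bytesPerFrame  = linkBitsPerSec / framesPerSec / 8.0;
    const double perObject      = sizeof(ObjectState);

    std::printf("budget: %.0f bytes/frame -> ~%.0f full object states\n",
                bytesPerFrame, bytesPerFrame / perObject);
    // ~41667 bytes/frame -> ~2083 objects per frame, before any
    // delta encoding or compression is applied.
    return 0;
}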
Aren't most online games very simple in data, just sending what player is where? Positions and rotations. Actually calculating the game online means far more data needs to be sent, I think, like sending transformed vertices from a skeleton-mapped mesh. Take something like Battlefield's destructible environments - if calculated on the server, the server will have to send copies of the new meshes and textures of the newly created object fragments before it can send updated position and orientation vectors for each piece. The moment we start calculating the whole game on the server, we have to stream complete assets to the client. Gaikai only manages this by lossy compression.

Also, if you're still rendering locally, you're going to need the GPU and RAM and bandwidth, and that's the major cost of the console, bringing into question the value of shifting a small percentage of the work to the cloud instead of keeping the performance and convenience of local processing.

Seems to me that server-based gaming by streaming video is quite a different kettle of fish to server-assisted local gaming. The option to lossily compress user data means the issues of network connections are much reduced. The concept of server game processing is something akin to taking the CPU out of your PC and putting it on the internet, replacing those PCI-E connections with a broadband internet connection. Gaikai on the other hand is putting the whole PC on the internet and connecting just the display out to your monitor over a broadband connection.
 

How does the current Battlefield client-server model handle the destructible environments use case? Is the server not calculating the destruction? Can clients get out of sync with each other on what the building remains look like?
 
Mmmm, I'll start off by saying I have not read the whole thread, so this might have been pointed out before.

I doubt that it's IQ that will be enhanced directly by the cloud; maybe the cloud will free up resources on the X1 so that it will be able to do more stuff that enhances IQ. Local CPU/GPU/memory can be used for calculating IQ-enhancing stuff, and less of it for persistent game-world stuff.

MS keeps saying bigger levels and a more persistent world. Wasn't this what we saw with Skyrim on PS3 vs X360? The PS3 didn't get some updates until long after, due to not having the memory to accommodate all the new things in the expansion?
For instance, if we drop a glove in an area, this can be stored in the cloud when we leave the area and then dropped from local memory. When we get back to this area, we query the cloud for any volatile items that might be there and load them into memory with all their info, but visual assets come from HDD/disc as normal.
Without the cloud storage this data would have to be stored in memory all the time; imagine if you got bullet holes across a whole GTA city that stayed for your complete playthrough.
In addition, you can do cloud calculations based on that info.

For instance, if we have put tons of bullets into a column in area B of the city earlier, and while we are in area C we trigger an earthquake, then the cloud instance could determine whether the damage we did previously to the column made it weak enough to come crashing down during the earthquake or not.
So when we get back to area B, it will be different for each player, depending on what they have done before.
Of course this can be done without the cloud, but on a much smaller scale than with the CPU and 3GB of memory that we got access to.
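
As a rough sketch of what that area-state handoff could look like (the CloudStore interface here is hypothetical and blocking; a real client would be asynchronous):

Code:
#include <cstdint>
#include <map>
#include <string>
#include <vector>

struct WorldDelta {                 // e.g. a dropped glove, a bullet hole
    uint32_t itemId;
    float    pos[3];
    int32_t  state;                 // growth stage, damage level, ...
};

class CloudStore {                  // stand-in for a real networked store
    std::map<std::string, std::vector<WorldDelta>> remote_;
public:
    void put(const std::string& areaKey, std::vector<WorldDelta> deltas) {
        remote_[areaKey] = std::move(deltas);      // upload on area exit
    }
    std::vector<WorldDelta> get(const std::string& areaKey) {
        return remote_[areaKey];                   // download on area entry
    }
};

int main() {
    CloudStore cloud;
    // Leaving area B: persist its volatile state, then free local memory.
    cloud.put("city/areaB", {{42, {10.f, 0.f, 3.f}, /*bullet holes*/ 37}});
    // Returning later: re-query and merge onto the freshly loaded area.
    // Visual assets still come from HDD/disc; only these small records
    // round-trip to the server.
    auto deltas = cloud.get("city/areaB");
    return (int)deltas.size();      // 1 delta restored
}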

Just my $0.02
 
How does the current Battlefield client-server model handle the destructible environments use case? Is the server not calculating the destruction? Can clients get out of sync with each other on what the building remains look like?

That's actually a good question. I'd never thought about it. I can't remember if they had dedicated servers or not, but Xbox Live has a very small bandwidth limit. Either the data necessary is very small, or the data is compressed when transferred.
 

If they can get out of sync, then the actual gameplay implications of BF3's destructible environments have been misleading. If you destroy a wall or part of a wall, the assumption is that you can now see people behind it and can no longer use said wall as cover. If clients can get out of sync, then I'm not sure how this gameplay feature can work.
 

Correct. If it is handled client-side, the server merely provides the client with the data points necessary to perform the physics calcs (explosion center coordinates, power, etc.), and the engine is guaranteed to always generate the same results from that information. I'm mainly just curious which case it is: server-side or client-side destruction calculations.
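
If it is the client-side case, it could look something like this minimal sketch (hypothetical code, not DICE's actual implementation): a tiny event record crosses the network, and a shared seed makes every machine derive identical fragments.

Code:
#include <cstdint>
#include <cstdio>

struct DestructionEvent {     // all a peer needs to reproduce the result
    float    center[3];       // explosion center
    float    power;           // charge strength
    uint32_t seed;            // shared RNG seed for fragment scatter
};

// Simple deterministic PRNG: same seed -> same sequence on every client.
uint32_t xorshift32(uint32_t& s) {
    s ^= s << 13; s ^= s >> 17; s ^= s << 5;
    return s;
}

void spawnFragments(DestructionEvent ev) {
    uint32_t s = ev.seed;
    for (int i = 0; i < 4; ++i) {
        float dx = (xorshift32(s) % 1000) / 1000.f - 0.5f;
        std::printf("fragment %d at %.3f, %.3f (power %.1f)\n",
                    i, ev.center[0] + dx * ev.power, ev.center[1], ev.power);
    }
}

int main() {
    // Every client that runs this with the same event record produces the
    // same fragments -- no mesh data ever needs to be transmitted.
    spawnFragments({{5.f, 0.f, 2.f}, 2.0f, 0xBADA55u});
    return 0;
}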
 
I think that basic idea of persisting world state is also doable on local storage like an HDD. I am not sure it makes sense to waste huge server resources keeping all kinds of game states that the consumer may or may not revisit.
Let's say a million-seller has this kind of cloud feature. Now these million users leave millions of game-related states saved in the cloud.
It could be a glove left somewhere, a growing tree you planted somewhere else, a building you left half destroyed, etc. There could be thousands or millions of state variations. The gamer could finish the game and never revisit it, or he might revisit some areas while others he never will.

Many of these game states will have to be stored and left there just to be on the safe side, in case a consumer revisits what he has left. Otherwise some consumers might be annoyed to go back and discover that things are not as they left them: the glove disappeared, the big tree is gone or growing from the beginning again, the building is intact.

And then there are the gamers who love to revisit old games. It would annoy a lot of gamers to discover that their playthrough disappeared, or that they can't progress the game or access some features because the company has closed the servers.

To be on the safe side, the things calculated through the cloud should relate to experiences that happen instantly and do not persist over time, leaving lasting game-world state for the HDD to handle.
 
Aren't most online games very simple in data, just sending what player is where? Positions and rotations. Actually calculating the game online means far more data needs to be sent, I think, like sending transformed vertices from a skeleton-mapped mesh. Take something like Battlefield's destructible environments - if calculated on the server, the server will have to send copies of the new meshes and textures of the newly created object fragments before it can send updated position and orientation vectors for each piece. The moment we start calculating the whole game on the server, we have to stream complete assets to the client. Gaikai only manages this by lossy compression.

I wasn't suggesting sending transformed meshes.
Just deferring the bulk of the non-graphics work to the cloud, increasing local resources for graphics.
Also many compression mechanisms can be applied here, quantization, interpolation, etc.
Games do not spend the bulk of their time submitting meshes.
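
As a toy example of the quantization point (the world bounds here are assumed for illustration, not from any shipped protocol): positions can go over the wire as 16-bit offsets within a known play volume instead of full 32-bit floats, and the receiver can interpolate between snapshots it actually received.

Code:
#include <cstdint>
#include <cstdio>

const float WORLD_MIN = -512.f, WORLD_MAX = 512.f;   // assumed play volume

uint16_t quantize(float v) {       // 1024 m range / 65535 steps ~ 1.6 cm
    float t = (v - WORLD_MIN) / (WORLD_MAX - WORLD_MIN);
    return (uint16_t)(t * 65535.f + 0.5f);
}

float dequantize(uint16_t q) {
    return WORLD_MIN + (q / 65535.f) * (WORLD_MAX - WORLD_MIN);
}

int main() {
    float x = 123.456f;
    uint16_t wire = quantize(x);                 // 2 bytes on the wire
    std::printf("sent %u, recovered %.3f\n", wire, dequantize(wire));
    return 0;
}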

Also, if you're still rendering locally, you're going to need the GPU and RAM and bandwidth, and that's the major cost of the console, bringing into question the value of shifting a small percentage of the work to the cloud instead of keeping the performance and convenience of local processing.

But you still end up with more; it's pointless to compare local processing to remote processing, because you have both. Doing anything remotely reduces the local cost, even if it's more expensive or less convenient to do it remotely.

Seems to me that server-based gaming by streaming video is quite a different kettle of fish to server-assisted local gaming. The option to lossily compress user data means the issues of network connections are much reduced. The concept of server game processing is something akin to taking the CPU out of your PC and putting it on the internet, replacing those PCI-E connections with a broadband internet connection. Gaikai on the other hand is putting the whole PC on the internet and connecting just the display out to your monitor over a broadband connection.

Not saying they are the same; it was more a mental exercise, pointing out to people concerned about the latency just how much could be moved to the server.

And FWIW I have actually shipped a game where in multiplayer mode, the client had a better experience than the server, because of the reduced CPU load.

I'd be tempted to try it, it's mostly the same work you'd do for multiplayer anyway, though working in two environments would suck. Obviously you'd make player state local in any real system to avoid the input latency.

There is obviously a limit to how much graphical improvement you can get out of something like this. You still have to draw and shade things. But you could probably improve animation, and almost certainly physics.
The question becomes is graphics the place we'll see the bulk of the improvement in games?
 
How does the current Battlefield client-server model handle the destructible environments use case? Is the server not calculating the destruction? Can clients get out of sync with each other on what the building remains look like?
I didn't know we had a test case! AFAIK XB360 games are peer-to-peer, so there must be communication of the destruction, though I don't know how it's managed. Physics computations don't run 100% consistently between runs, so you can't assume that the same impulse applied to a wall on one console will yield the same results on another console in exactly the same state. But the differences should be minimal, so maybe they just run the physics locally, and if one player can see around a wall that another can't, because there's a brick in the way in his version of the universe, no one's really going to notice, especially against the other limitations of online play, where players can be standing in two different places across consoles.
 
Speaking of the "sphere of influence" stuff, does anyone have a link to that paper on networked physics? I believe it came with a demo of little cubes you could roll over, Katamari-style. From reading that paper it was clear this was NOT an easy task, and it was hundreds of times more simplistic than what is being suggested... The sheer amount of nonsense they would have to get around to get this working is mind-boggling.


Edit: For Battlefield, I'd assume a player shoots an obstruction enough times that it breaks, then pushes "terrain000x=0" to the other players, 0 being a broken state, and it breaks on their screen. With a dedicated server I'd assume it works the same way, just with the server handling and pushing the state around.
 
In the Dead Nation talk about networked games, they talked about how they made sure the AI was fully deterministic, so they didn't have to send any AI data over the internet, making lag in that case completely irrelevant.

Not sure how different BF3 is in that regard with physics.
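
As a toy illustration of that Dead Nation-style trick (hypothetical code, not Housemarque's): if the AI update is a pure function of a shared seed, the tick, and the exchanged inputs, every machine can run it locally and stay in sync with zero AI traffic.

Code:
#include <cstdint>
#include <cstdio>

struct Lcg {                         // deterministic RNG, same everywhere
    uint64_t s;
    uint32_t next() {
        s = s * 6364136223846793005ULL + 1442695040888963407ULL;
        return (uint32_t)(s >> 33);
    }
};

int aiDecision(uint64_t sharedSeed, uint32_t tick, uint32_t playerInput) {
    Lcg rng{sharedSeed ^ tick};      // no per-machine state enters here
    return (rng.next() + playerInput) % 4;   // 4 possible enemy moves
}

int main() {
    // Two consoles with the same seed and inputs compute the same move,
    // so only controller input ever needs to be exchanged.
    std::printf("console A: %d, console B: %d\n",
                aiDecision(42, 100, 2), aiDecision(42, 100, 2));
    return 0;
}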
 
Another use for cloud computing that might be valuable is more complicated/reflective AI, where the cloud would calculate new sets of AI behavior guidelines while the local CPU just runs down an extensive decision tree produced by that calculation.

For example, in Forza, after every lap perhaps the behavior of all computer agents and the player is pushed to the cloud. The behavior of every AI and the player can then be calculated into a new overall strategy, individually by each AI agent, according to what they observed on the road. Say Agent 4 saw the player go wide through left-hand turns but stay tight through right-hand turns; the AI would then know where the player's weaknesses are and adapt its strategy to overtake the player or other AIs based on this updated overall strategy, rather than simply adhering to a best-line method of computerized racing. No idea if this is really all that computationally difficult, or whether it's already being done with local processing, but really only the moment-to-moment decisions of the agents need to be on the console; the overall strategy can be continuously recalculated in the cloud to make itself more intelligent.
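
Purely as a hedged sketch of that split (the Strategy/LapObservation types are made up, not Forza's actual data): the cloud rewrites a small per-agent strategy table between laps, with no frame deadline, and the console only does a cheap lookup per decision.

Code:
#include <cstdio>

struct Strategy {                  // downloaded from the cloud per lap
    float attackOnLeftTurns;       // how aggressively to pass on lefts
    float attackOnRightTurns;
};

struct LapObservation {            // uploaded to the cloud per lap
    float playerErrorLeft;         // how wide the player ran left turns
    float playerErrorRight;
};

// This part would run server-side: heavy analysis, no frame deadline.
Strategy cloudRecompute(Strategy s, LapObservation o) {
    s.attackOnLeftTurns  += 0.5f * o.playerErrorLeft;
    s.attackOnRightTurns += 0.5f * o.playerErrorRight;
    return s;
}

// This part runs on the console: a cheap table lookup per decision.
bool shouldAttemptPass(const Strategy& s, bool leftTurn) {
    return (leftTurn ? s.attackOnLeftTurns : s.attackOnRightTurns) > 0.5f;
}

int main() {
    Strategy s{0.2f, 0.2f};
    s = cloudRecompute(s, {0.8f, 0.1f});      // player runs wide on lefts
    std::printf("pass on left? %d, on right? %d\n",
                shouldAttemptPass(s, true), shouldAttemptPass(s, false));
    return 0;
}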
 
Not sure how different BF3 is in that regard with physics.
A long time ago we had devs on this board explain that physics engines are rarely deterministic, which goes against expectations, seeing as a computer is pure logic dealing with the same numbers in the same way. I know from my own simple experiences using physics engines that you get different results every time you run a simulation with exactly the same parameters. Things may have improved since then, but I doubt it, until corrected by someone who knows. ;)
 

I think you're misunderstanding what was said.
For a given input state and a given set of inputs, a well-written physics engine will be deterministic. In fact, if it's not, how do you test it?
It is not easy to guarantee that same input state, though, and moreover the time step is commonly one of the inputs, so as frame rates vary, so will results.

It is mostly uncommon to rely on deterministic anything any more. The simplest network system is to have a game be entirely deterministic based on controller input, and just dispatch the controller input to all instances simultaneously. This is mostly how network games worked in the days of point-to-point modem connections.
That doesn't work on the Internet because of packet loss and out-of-order delivery. There was a great breakdown of how this fails in a TIE Fighter post-mortem, though Madden used a version of it for years by sending redundant copies of the controller data.
Games today usually run at varying frame rates, and minimally time is an input, so the usual approach now is to run a local simulation (which can be approximate) and correct state as often as is practical, interpolating between the local state and the corrected state to avoid discontinuities.
Additionally, today, rather than strict client-server, a client usually holds the master copy of its player's state and is responsible for resolving some of that player's actions.
The amount of correction data that can be sent is limited by the upstream bandwidth of the connections; that's obviously not the case for data from the cloud, where you would hope the downstream connection would be the limit.
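
A minimal sketch of that correct-and-interpolate loop (assumed structure, not any shipped engine's netcode): run an approximate local step every frame, and when an authoritative snapshot arrives, blend toward it instead of snapping.

Code:
#include <cstdio>

struct State { float pos, vel; };

State localStep(State s, float dt) {        // approximate local simulation
    s.pos += s.vel * dt;
    return s;
}

State blendCorrection(State local, State server, float alpha) {
    // Interpolate toward the authoritative state to hide discontinuities.
    local.pos += (server.pos - local.pos) * alpha;
    local.vel += (server.vel - local.vel) * alpha;
    return local;
}

int main() {
    State local{0.f, 10.f};
    for (int frame = 0; frame < 5; ++frame)
        local = localStep(local, 1.f / 30.f);
    State server{1.8f, 9.5f};               // late, authoritative snapshot
    local = blendCorrection(local, server, 0.25f);
    std::printf("pos %.3f vel %.3f\n", local.pos, local.vel);
    return 0;
}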
 
There is obviously a limit to how much graphical improvement you can get out of something like this. You still have to draw and shade things. But you could probably improve animation, and almost certainly physics.
The question becomes is graphics the place we'll see the bulk of the improvement in games?

I feel like at this point, in terms of rendering tech, games are very much at an impasse. By that I mean that, in my view, the amount of power we'd have to throw at making visuals notably better from a graphics-tech standpoint is too high.

I'd argue that with most material shaders in upcoming next-gen game engines, the quality is close enough that most users won't be able to tell whether the lit properties are photorealistic for most objects in a game. As such, the low-hanging fruit that WILL vastly improve how realistic or convincing our brains decide a game world is will be tied to more believable movement: physics and animation, namely, but also the AI that governs realistic NPC behavior and informs those animations.
 

You mean basically updating enemy AI strategies based on what the player has done prior? Interesting. Hadn't thought of that! I'd assumed enemy AI in combat (or racing, etc.) couldn't really be utilizing the cloud, but that's actually a really exciting idea.
 
Maybe console manufacturers should bring back the "PPU" and add an "AIPU".

Why should high-end physics and AI take away resources from overall GPU/CPU performance?

With a PPU and an AIPU you could get away with providing high-end physics and AI without sacrificing graphical shader performance.


What if you connect to the cloud server network with a connection that is NOT an internet connection?

Would that remove the latency problem?
 