Server-based game augmentations. The transition to the cloud. Really possible?

The data being sent would most likely be extremely small in the first place. We aren't talking about HD textures or huge batches of geometry.
You can only send small data, and it's latency sensitive.

Very limited application.



This paragraph literally doesn't make sense. *ANY* resources that would normally have to be used for latency-insensitive computations can (in theory) be moved to the cloud and out of the way, leaving the local hardware vastly more efficient than it otherwise would have been with those operations clogging up the processing.
Yes it does.

Pre-baked lighting. Typically done at load time.

Texture generation. Done at load time.

Etc., etc. Done at load time.

So you've outsourced things that never burden actual gameplay, only the loading screen. Great, still no real effect on graphics.

That is THE point with the cloud. That is the mechanism by which MS contends the local box becomes effectively more powerful over time as devs get more progressive in utilizing those remote resources. And DF didn't even really touch on that.
And neither you nor the MS reps can explain how it would work.
 
What kind of data are we talking about, for what operations, and what's the average size that would be sent?

Depends on the task, but just as an example of something that could add to the visuals in a meaningful way, take animations. In the KZ demo level GG had 75MB of animations. Physics meshes were 5MB and AI was 6MB.

http://www.guerrilla-games.com/presentations/Valient_Killzone_Shadow_Fall_Demo_Postmortem.pdf

Do we have any other data for other titles? Maybe Crytek or DICE have similar memory budgets shown in their numerous ppt's?
 
What about really heavy, persistent AI for all NPCs/creatures throughout some open-world MMO game? They learn, and they live in the cloud.
 
Depends on the task, but just as an example of something that could add to the visuals in a meaningful way, take animations. In the KZ demo level GG had 75MB of animations. Physics meshes were 5MB and AI was 6MB.

http://www.guerrilla-games.com/presentations/Valient_Killzone_Shadow_Fall_Demo_Postmortem.pdf

Do we have any other data for other titles? Maybe Crytek or DICE have similar memory budgets shown in their numerous ppt's?

You could maybe stream in animations when you need them. Those 75MB of animations would probably take over a minute on an everyday connection, so you could stream them in, but you'd have to start very, very early. Physics is latency dependent, so unless the player has zero interaction with it at all and it's just something akin to a cutscene, I don't really see it being doable.
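As a rough back-of-the-envelope check on that figure (the connection speeds below are assumptions for illustration, not measured ones):

```python
# Transfer-time estimate for the 75MB animation figure from the GG postmortem.
# The connection speeds here are assumptions for illustration only.
animation_data_mb = 75
for mbit_per_s in (5, 10, 25):
    seconds = animation_data_mb * 8 / mbit_per_s
    print(f"{mbit_per_s} Mbit/s -> {seconds:.0f} s to pull {animation_data_mb} MB")
```

At around 10 Mbit/s or slower that really is a minute or more, which is why the stream would have to start so early.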

AI, maybe, as long as it's only overarching crap and not actual play interactions, as player interactions are also very latency dependent.
 
warb, I've seen lots of ppl talk about that, but honestly I'm a little unsure about the payoffs of something like that. OK, so you've got stupid amounts of richly detailed data and player stats and whatnot. What can you do with that kinda thing in terms of something the player would find especially compelling? Specifically...?
 
You could maybe stream in animations when you need them. Those 75MB of animations would probably take over a minute on an everyday connection, so you could stream them in, but you'd have to start very, very early. Physics is latency dependent, so unless the player has zero interaction with it at all and it's just something akin to a cutscene, I don't really see it being doable.

AI, maybe, as long as it's only overarching crap and not actual play interactions, as player interactions are also very latency dependent.

It would be weird seeing things react very out of sync: walking, with your feet and the ground moving at different rates, or reacting to being shot differently. Killzone has a very good list of animations; it would not work this way.

Maybe if you walk into a strip club in GTA 5 and you cannot interact with the stripper: a "hands off" policy.

Again, highly limited application.
 
You could maybe stream in animations when you need them. Those 75MB of animations would probably take over a minute on an everyday connection, so you could stream them in, but you'd have to start very, very early. Physics is latency dependent, so unless the player has zero interaction with it at all and it's just something akin to a cutscene, I don't really see it being doable.

AI, maybe, as long as it's only overarching crap and not actual play interactions, as player interactions are also very latency dependent.
There could be less lag than in typical online multiplayer.
 
It would be weird seeing things react very out of sync: walking, with your feet and the ground moving at different rates, or reacting to being shot differently. Killzone has a very good list of animations; it would not work this way.

Maybe if you walk into a strip club in GTA 5 and you cannot interact with the stripper: a "hands off" policy.

Again, highly limited application.

How about AI, physics, and scripts being played out at a distance and becoming interactive if you enter a certain proximity? It could really make large, open-world games feel much more alive. It's a limited application, but in a game portraying a bustling city I can see it making a large impact.
 
You could maybe stream in animations when you need them. Those 75MB of animations would probably take over a minute on an everyday connection, so you could stream them in, but you'd have to start very, very early.

Sure, but so what? I mean, streaming in anims can take several minutes for all the player cares. So long as the player isn't directly and immediately interacting with the objects in the scene playing out these cloud-computed anim sequences, it's not something players will notice. Some could even be streamed in and made 'pseudo interactive' via triggering. I gave an example of an avalanche earlier: as you get close to a mountain, perhaps the cloud computes a large-scale physics-based avalanche/destruction animation that is streamed in and waiting to be triggered by the player.
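A minimal sketch of that prefetch-then-trigger idea (every name here is hypothetical, and the async transfer layer is assumed to exist):

```python
# Start pulling a cloud-computed animation well before the player can reach the
# spot where it might be triggered; only let it fire once the data has arrived.
from dataclasses import dataclass

@dataclass
class CloudAnimTrigger:
    name: str
    prefetch_radius: float   # start the download inside this radius
    trigger_radius: float    # the event can actually fire inside this radius
    requested: bool = False
    ready: bool = False

    def update(self, distance_to_player, request_download, download_finished):
        """request_download/download_finished stand in for an assumed async transfer layer."""
        if distance_to_player < self.prefetch_radius and not self.requested:
            request_download(self.name)      # kick off the transfer early
            self.requested = True
        if self.requested and not self.ready:
            self.ready = download_finished(self.name)
        # playable only if the data arrived before the player got close enough
        return self.ready and distance_to_player < self.trigger_radius
```

The design point is just that the prefetch radius buys you the transfer time, so the animation is sitting locally before the player can possibly set it off.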

Physics is latency dependent, so unless the player has zero interaction with it at all and it's just something akin to a cutscene, I don't really see it being doable.

Not necessarily true. There is a LOT more physics going on in a game than I think you realize. If my character jumps into the air, then while he is in the air the game physics should already know where/how he will land and can compute what impact that could have physically on the game world. For instance:

[animated GIF 8782102157_8df154b868_o.gif: jump/landing physics example]


Note that the player could potentially be in the air for many frames before the effects of the landing need to be displayed. Depending on how long an object (player, car hurtling through the air, etc.) is in the air, the impact animation on whatever it collides with could be there waiting to be displayed by the time the collision takes place.
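The decision of whether a given collision can be farmed out boils down to a time budget. A rough sketch (all the timing numbers are pure assumptions for illustration):

```python
# Offload only if the object stays airborne longer than a round trip to the
# cloud plus server compute time plus a safety margin. All values are assumed.
def can_offload(time_to_impact_s, rtt_s=0.10, server_compute_s=0.20, margin_s=0.10):
    return time_to_impact_s > rtt_s + server_compute_s + margin_s

print(can_offload(1.5))   # long jump or arc: True, the result can arrive in time
print(can_offload(0.2))   # near-instant collision: False, compute it locally
```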

Also, don't underestimate physics-based animations like the one below:

[animated GIF 8782065639_6ea9b6dcec_o.gif: physics-based bridge destruction sequence]


The reason this scene here with the bridge is so amazing visually is the richness of the detailed, realistic, physics-based animation playing out on screen. Sure, something like this wouldn't be interactive, but again, say in the game the player can influence the trajectory of the boat slamming into the bridge, what objects are on the bridge, etc. The boat would take a good chunk of time to actually initiate said collision, so that kind of thing should be doable in the cloud too. This again would be loosely interactive/dynamic. If ya don't like that terminology, you are welcome to call it whatever you find more appropriate.

Hmmm...anyone happen to know how big a typical, fully scripted in-engine cutscene is in terms of memory? That is basically nothing but physics/animation scripting usually afaik, so maybe that can help us get more of an idea specifically on the really complex anims like the scene above.

AI, maybe, as long as it's only overarching crap and not actual play interactions, as player interactions are also very latency dependent.

I originally thought it couldn't directly affect player combat/interactivity either, but someone else here made a good counterpoint to that assumption. You could collect data on player strategic tendencies, compute counter-strategies, stream them into the game to the enemies, and that would directly affect their AI. It wouldn't be instant reactions, but in the real world it wouldn't be instant either, and you would still be doing something dynamic and interactive, just with some small delay, which given the context would be appropriate anyhow.
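As a rough sketch of what "collect tendencies, compute a counter, stream it back" could look like (the event names and the counter table are made up for illustration):

```python
# Aggregate observed player behaviour server-side and hand the game a
# counter-strategy hint on some slow cadence (e.g. between encounters).
from collections import Counter

COUNTERS = {          # hypothetical mapping, purely illustrative
    "sniping": "flank_and_flush",
    "rushing": "hold_chokepoints",
    "camping": "flush_with_grenades",
}

def pick_counter_strategy(observed_events):
    if not observed_events:
        return "default_behaviour"
    most_common, _ = Counter(observed_events).most_common(1)[0]
    return COUNTERS.get(most_common, "default_behaviour")

print(pick_counter_strategy(["sniping", "sniping", "rushing"]))  # flank_and_flush
```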
 
So besides shortening loading times, it won't be much use.

Well, that is what I am curious about... depending on the amount of data involved, on your internet bandwidth, and on which services you run besides the game on your X1 or in your household that eat some of that bandwidth, I wonder if you may end up needing even longer loading screens to e.g. get your pre-baked lighting data.
 
How about AI, physics, and scripts being played out at a distance and becoming interactive if you enter a certain proximity? It could really make large, open-world games feel much more alive. It's a limited application, but in a game portraying a bustling city I can see it making a large impact.

This was what I was thinking too. Even in a game like Battlefield... say you place C4 on the pillars of a building. The time it takes to explode can potentially allow for the results to be calculated in the cloud and streamed back to the console. Shoot a rocket at some static geometry: that projectile takes time to get there. If it is far enough away, we could be doing the impact and accompanying destruction in the cloud (it would need to be pretty far away, but still something to think about). Call in an air strike? Again, doable in the cloud.

There are LOADS of interactive scenarios where complex, deterministic events are triggered and yet there is a delay involved.

I think DICE's stuff is especially interesting to consider. Maybe we should look at their numerous ppt's on their tech to see if we can find animation info and especially physics/destruction info.
 
What about really heavy, persistent AI for all NPCs/creatures throughout some open-world MMO game? They learn, and they live in the cloud.

This is an interesting idea, and one of the first things that comes to mind.

My question is: who pays for this? For this, you allocate cloud resources for a game 24/7. Running 24/7 must be expensive if the task is really so complex that it needs substantial resources. If it does not need substantial resources, then there seems to be nothing special about it, and using the cloud for it seems moot, except that MS offers a particularly easy-to-use system... otherwise it might be cheaper and simpler for the publisher to host their own server.
 
This is an interesting idea, and one of the first things that comes to mind.

My question is: who pays for this? For this, you allocate cloud resources for a game 24/7. Running 24/7 must be expensive if the task is really so complex that it needs substantial resources. If it does not need substantial resources, then there seems to be nothing special about it, and using the cloud for it seems moot, except that MS offers a particularly easy-to-use system... otherwise it might be cheaper and simpler for the publisher to host their own server.

I think it would be overkill. I really don't need AI running on an NPC patrolling a cave on the other side of the virtual world. And for the more important NPCs, simple algorithms that accommodate real-world time changes can be used to make up for the lapses between play sessions.
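A small sketch of that kind of "simple algorithm", assuming a fixed patrol loop (the NPC representation and the 300-second loop are made up for illustration):

```python
# Instead of simulating an off-screen NPC every tick, advance its state once
# from however much wall-clock time has passed since the player last saw it.
import time

def catch_up_npc(npc, last_seen_ts, patrol_loop_s=300.0):
    elapsed = time.time() - last_seen_ts
    # fraction of the way around an assumed fixed patrol loop
    npc["patrol_progress"] = (npc.get("patrol_progress", 0.0) + elapsed / patrol_loop_s) % 1.0
    return npc

print(catch_up_npc({"name": "cave_guard"}, last_seen_ts=time.time() - 450))
```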
 
I think it would be overkill. I really don't need AI running on an NPC patrolling a cave on the other side of the virtual world. And for the more important NPCs, simple algorithms that accommodate real-world time changes can be used to make up for the lapses between play sessions.

That's along the lines of what I was wondering too. I know ppl are stoked for large-scale metadata, but I'm having trouble imagining uses for it. I can see world sim being really cool in the sense of the world changing physically in the game, or ecosystems, or AI in the sense of simulating NPCs having lives or whatnot. I'm not so sure how obvious that kinda thing would be to the player, though.

If I'm the player, what is the impact I experience while playing? For stuff related to physics or anims or lighting, the answer is obvious. For some aspects of world sim and AI, maybe it's more subtle.
 
I am contributing, by pointing out your lack of knowledge and fanboy hypocrisy.

This was what I was thinking too. Even in a game like Battlefield... say you place C4 on the pillars of a building. The time it takes to explode can potentially allow for the results to be calculated in the cloud and streamed back to the console. Shoot a rocket at some static geometry: that projectile takes time to get there. If it is far enough away, we could be doing the impact and accompanying destruction in the cloud (it would need to be pretty far away, but still something to think about). Call in an air strike? Again, doable in the cloud.

There are LOADS of interactive scenarios where complex, deterministic events are triggered and yet there is a delay involved.

I think DICE's stuff is especially interesting to consider. Maybe we should look at their numerous ppt's on their tech to see if we can find animation info and especially physics/destruction info.

You're arguing that the destruction from explosives in BF can somehow be decoupled from the host and clients, sent to MS's servers, calculated, and the results shot back, with no effect or delay from any of the players' point of view? Are you serious? That's not a serious statement. That's just not based in reality.

Let's say you fire this hypothetical rocket and it won't hit anything for 10 seconds... until somebody drives a jeep around a corner that you didn't see. Now a completely different set of results is required, delaying your mystical offloaded explosion physics. This does not work for any real system. Multiple people have told you this. DF told you this. sebbbi told you this. It's fantasy.
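To make the objection concrete: a cloud result computed against the world state at fire time is only usable if that state is still valid at impact time. A rough sketch (the state-hash mechanism is hypothetical):

```python
# If anything relevant changed between fire time and impact time (e.g. the jeep
# driving into the blast radius), the precomputed cloud result has to be thrown
# away and the physics redone locally anyway.
def cloud_result_still_valid(state_hash_at_fire, state_hash_at_impact):
    return state_hash_at_fire == state_hash_at_impact

precomputed = {"state_hash": "abc123", "debris": ["pillar_a", "pillar_b"]}
current_hash = "def456"   # jeep entered the area, relevant state changed
if not cloud_result_still_valid(precomputed["state_hash"], current_hash):
    print("discard cloud result, fall back to local physics")
```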

Do you have anything actual and factual to bring to a TECH discussion besides fantastical arguments not based in reality? Again, just link to one legit article that can back up 1/10th of your wild conjectures.
 
This is an interesting idea, and one of the first things that comes to mind.

My question is: who pays for this? For this, you allocate cloud resources for a game 24/7. Running 24/7 must be expensive if the task is really so complex that it needs substantial resources. If it does not need substantial resources, then there seems to be nothing special about it, and using the cloud for it seems moot, except that MS offers a particularly easy-to-use system... otherwise it might be cheaper and simpler for the publisher to host their own server.

Perhaps the info is being released in reverse order (like what Sony used to do in the PS3 era).

The cloud computing platform may be the main dish, serving Windows phones, tablets and consoles. You buy the game once and use it on all 3 platforms. But you can't resell the games. It's exactly like the iOS model (except that you can buy disc games and convert them into digital games by installing to your console).

The cloud will compensate for the lack of computation power in phones and tablets. The home console should not need the cloud's help in general. But there should be new ideas (e.g., Forza car sim, user-generated content, etc.) that can be done in the cloud to improve the experiences.

To fund this thing, MS will want to make money from all avenues like iOS/Android (i.e., advertising, freemium/transaction cut, subscription, hardware, software licenses).

That's the vision. They may have to start with select games to showcase the concept initially. E.g., they can't use the cloud for real-time rendering, but it may be okay for turn-based game rendering. They can also do user-generated content, assorted simulation, MMOs, and general community services in the cloud.
 
This is an interesting idea, and one of the first things that comes to mind.

My question is: who pays for this? For this, you allocate cloud resources for a game 24/7. Running 24/7 must be expensive if the task is really so complex that it needs substantial resources. If it does not need substantial resources, then there seems to be nothing special about it, and using the cloud for it seems moot, except that MS offers a particularly easy-to-use system... otherwise it might be cheaper and simpler for the publisher to host their own server.


Dunno, I think that's one of the open questions here.

Presumably I guess the ideal is MS goes to third parties and says "here's some computing resources in the cloud if you want, all free, have fun!"

I guess the vague theory would be something like MS has all these Azure servers anyway, that may not be getting full utilization.

Whether there will be any charges to third parties is, again, an open question. I'd hope not, at least initially, and that eventual charges would be low.

Cloud computing should be very efficient, though, simply because only a small percentage of console owners are actually playing at any given time.

If you only had virtual consoles in the cloud, you'd need far fewer compute resources to serve all the 360 players actually playing at any given time than there are X360s in the world, for example.

But you'd have to have the ability to scale to some peak usage (e.g. COD 6 just came out, and it's prime time Saturday evening), too. That peak is still much lower overall, though.
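A back-of-the-envelope concurrency sketch of that argument (every number below is an assumption, not a real figure):

```python
# Typical vs. peak concurrent players as a fraction of the install base.
install_base     = 40_000_000   # hypothetical consoles in the wild
avg_concurrency  = 0.05         # 5% playing at a typical moment (assumed)
peak_concurrency = 0.15         # 15% on a big launch weekend evening (assumed)

print(f"typical: {install_base * avg_concurrency:,.0f} concurrent players")
print(f"peak:    {install_base * peak_concurrency:,.0f} concurrent players")
```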

I guess the key would be finding something useful for those servers to do when they're not serving Xbox games. It would have to be non-critical stuff, since they'd have to be "available" at an instant's notice.
 
Dunno, I think that's one of the open questions here.

Presumably I guess the ideal is MS goes to third parties and says "here's some computing resources in the cloud if you want, all free, have fun!"

I guess the vague theory would be something like MS has all these Azure servers anyway, that may not be getting full utilization.

Cloud computing should be very efficient, though, simply because only a small percentage of console owners are actually playing at any given time.

If you only had virtual consoles in the cloud, you'd need far fewer compute resources to serve all the 360 players actually playing at any given time than there are X360s in the world, for example.

But you'd have to have the ability to scale to some peak usage (e.g. COD 6 just came out, and it's prime time Saturday evening), too. That peak is still much lower overall, though.

I guess the key would be finding something useful for those servers to do when they're not serving Xbox games. It would have to be non-critical stuff, since they'd have to be "available" at an instant's notice.

I am wondering exactly about that argument that only a small percentage of X1 users actually play at the same time. If devs go for features that run 24/7, like persistent worlds, the above argument becomes a bit difficult, so I guess that in such a case you have to pay more.

I am not sure how this translates to cloud computing, but in HPC, computing time is very, very expensive... and I am happy that we as academics get it basically for free, but I know of situations where it was too expensive even for big companies to rent cluster time, so they went ahead and built their own cluster instead.

Someone has to pay for the computing resources. But maybe that is something that still has to be worked out and may even change over time depending on the success of the cloud.

How much can they increase the Live Gold fee before it hurts and people stop buying it?
 
Perhaps the info is being released in reverse order (like what Sony used to do in the PS3 era).

The cloud computing platform may be the main dish, serving Windows phones, tablets and consoles. You buy the game once and use it on all 3 platforms. But you can't resell the games. It's exactly like the iOS model (except that you can buy disc games and convert them into digital games by installing to your console).

The cloud will compensate for the lack of computation power in phones and tablets. The home console should not need the cloud's help in general. But there should be new ideas (e.g., Forza car sim, user-generated content, etc.) that can be done in the cloud to improve the experiences.

To fund this thing, MS will want to make money from all avenues like iOS/Android (i.e., advertising, freemium/transaction cut, subscription, hardware, software licenses).

That's the vision. They may have to start with select games to showcase the concept initially. E.g., they can't use the cloud for real-time rendering, but it may be okay for turn-based game rendering. They can also do user-generated content, assorted simulation, MMOs, and general community services in the cloud.

That could very well be, and it makes more sense to me for the beginning of the service. We even have the game Galactic Reign, which does exactly this.

I think that cloud services will start slowly with the 'obvious' services first, but as time and tech development go on, it will eventually take off during the X1's lifetime... similar to the SPUs in the PS3, which are kind of the PS3's local mini cloud :)
 
In a game like Forza 5, you could use the cloud e.g. for managing and hosting global racing events... or even global racing leagues, where people all over the world can participate in tournaments.
 