Server-based game augmentations. The transition to the cloud. Really possible?

For whatever reason people seem to assume that console games are GPU bound, but IME many games are CPU limited far more often than GPU limited.

I see that in the case of the 360... was that really the case for the Cell though? I thought the Cell had the advantage over the RSX in that regard.

For these new systems I can clearly see that being the bigger issue. Maybe that's why bandwidth and latency were key design points (and cost sinks) for these consoles.
 
Why couldn't they run the calculations and store the results somewhere and fetch that data?
I'm talking about a massive long-running city simulation where the contents evolve over time.
If I check the city's status after 5 hours, I want to see the city's evolution at 5 hours. At 10 hours, I'd expect 10 hours of evolution, not the results of one hour of simulation stored on an HDD.
One of the common ways to play Sim City is to let it run for a long time to build up money or let the city develop as it may, typically running the simulation at max speed.
A simulation that isn't running the whole time as the player requested isn't a long-running simulation.

Interestingly, the latest game has allegedly negated that for some players by adding an auto-pause to reduce the load on the cloud, for a game that does very little on the cloud, much less the simulation.
That, and the game's simulation is so prone to self-destruction, and player saves are so lacking, that running the game for any amount of time without handholding is extremely risky.
 
Long term it will probably be covered by a monthly subscription to Xbox Live. Developers will be provided a maximum performance figure that they can expect to always be available for their game. This performance target will likely increase over the years.

I doubt that it will cost the developer to use the cloud. If anything Microsoft wants there to be incentive for developers to take advantage of the cloud. Cloud usage really is the best way to justify monthly subscription fees and ultimately provide a better service.

It's both smart and the future of gaming anyway so I am glad they are heavily investing in this model.
 
I'm talking about a massive long-running city simulation where the contents evolve over time.
If I check the city's status after 5 hours, I want to see the city's evolution at 5 hours. At 10 hours, I'd expect 10 hours of evolution, not the results of one hour of simulation stored on an HDD.
One of the common ways to play Sim City is to let it run for a long time to build up money or let the city develop as it may, typically running the simulation at max speed.
A simulation that isn't running the whole time as the player requested isn't a long-running simulation.

Interestingly, the latest game has allegedly negated that for some players by adding an auto-pause to reduce the load on the cloud, for a game that does very little on the cloud, much less the simulation.
That, and the game's simulation is so prone to self-destruction, and player saves are so lacking, that running the game for any amount of time without handholding is extremely risky.

Maybe a good way to think about what I am saying is computational recycling. In other words, let's say you were letting your city run unattended for several hours; it's highly likely that other gamers have created very similar setups. If that data has been stored, the info for things like wear and tear, natural disasters and so on could be looked up and then inserted into your simulation at the appropriate times, allowing those 5, 10 or 20 hours of simulation to be "simulated" with partly recycled computations. This of course assumes that the data is segmented enough to begin with, but if it wasn't, the simulation/AI wouldn't be very good anyway, server boost or no.

The key would be multiple streams of data, each responsible for a very specific set of information, which are then brought together for branching at expected intervals; with good coding this could work.
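To make that concrete, here's a rough sketch of what such recycling could look like, assuming city state can be quantized into a coarse lookup key so that "very similar setups" from other players collide on purpose. All the names here (segment_cache, city_key, run_full_simulation) are made up for illustration, not anything from a real game or server codebase:

```python
import hashlib
import json

# Hypothetical shared store of precomputed simulation segments.
# In practice this would be a server-side database, not a local dict.
segment_cache = {}

def city_key(city_state, hours):
    """Quantize a city snapshot into a coarse lookup key so that
    similar setups from different players map to the same entry."""
    summary = {
        "population_bucket": city_state["population"] // 10_000,
        "budget_bucket": city_state["budget"] // 100_000,
        "zoning": sorted(city_state["zoning"].items()),
        "hours": hours,
    }
    blob = json.dumps(summary, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

def simulate_unattended(city_state, hours, run_full_simulation):
    """Reuse a stored result for a similar city if one exists;
    otherwise run the real simulation once and share the result."""
    key = city_key(city_state, hours)
    if key in segment_cache:
        return segment_cache[key]          # recycled computation
    result = run_full_simulation(city_state, hours)
    segment_cache[key] = result            # stored for the next player
    return result
```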
 
What's weird is, maybe I misread it, but there's a vibe coming out of MS of "our box isn't powerful enough, quick, talk about the cloud!"

If you already know, this quickly, months before release even, that lack of power is an issue, why didn't you do something about it?

Wat?

I think that's just a vibe stemming from how cynics on the internet mesh their assumptions/expectations going into the reveal with MS's vision. Those cynics are everywhere (not skeptics, cynics) and they're polluting any discussion of the topic on almost any forum, it seems.

Look at their Yukon roadmap again. They had always planned on leveraging the cloud for all the "little" stuff at the periphery, like transferring accounts, cloud saves, etc. The difference is that back then they were expecting a full-scale move to the cloud for gaming, akin to OnLive's streaming approach, in 2015. They also wanted to be sure (back then) that no launch/early games relied much on the cloud for anything, because they wanted to save that for 2015.

My guess is they realized there could be a viable interim between locally rendered games and remotely rendered games and they wanted to tap into that from the get go as a new pillar of the brand; something the competition will have to play catch up with. I think that's why they are throwing so much money around internally to make sure their 1st party games can all utilize this tech and show it off in year 1 prominently. It's a huge risk, but the reward in mindshare cannot be overstated.

This isn't a reaction to Sony or their hardware. This really is the last new platform they ever intend on making. That explains why they want less coding to the metal (devs shouldn't need it with the cloud, and other vendors can sell hardware in the future in different form factors). We heard a year or two ago that Lionhead's new IP was "MMO-like". They have been experimenting with cloud processing in SP games since at least Project Milo back in 2009.

They designed a balanced machine that is competitive without the cloud for a few years, a potential show-stopper with the cloud, and one that can eventually act as a thin client as they move to full-on cloud rendering a la OnLive once the infrastructure is there and ready for it (not nearly as far off as people presume, imho).
 
Really not impressed by that Ars Technica piece; it just sounds like so much handwaving. The idea of bringing 'pop-in' to lighting while you await the cloud data is horrible. I'll need to see something much more substantial at E3 before I'll believe this is anything more than yet another example of the 'everything is better in the cloud' BS that I deal with every day at work.

Oddly, the comments on that piece threw up much better ideas than the MS rep did, such as pathfinding or some sort of AI offload. Lagginess is much less of a concern there, as you can write it off as slow reaction times or some such; in fact, I dare say it would make AI seem more 'real' than the disconcerting headshot in 0.001 seconds you get with some games.
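For what it's worth, an AI offload along those lines is easy to sketch. Here's a hedged example, assuming a hypothetical cloud_find_path RPC into the server farm; the point is that the NPC simply keeps "thinking" until the answer arrives, and falls back to a cheap local path if it never does:

```python
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass

executor = ThreadPoolExecutor(max_workers=4)

@dataclass
class NPC:
    pos: tuple
    goal: tuple = None
    path: list = None
    pending_path: object = None   # Future for an in-flight cloud request
    frames_waiting: int = 0

def request_path(npc, goal, cloud_find_path):
    """Fire off the expensive search without blocking the frame.
    cloud_find_path is a hypothetical RPC, not a real API."""
    npc.goal = goal
    npc.frames_waiting = 0
    npc.pending_path = executor.submit(cloud_find_path, npc.pos, goal)

def update_npc(npc, local_fallback_path, timeout_frames=90):
    """Called once per frame. Latency just reads as reaction time."""
    if npc.pending_path is None:
        return
    if npc.pending_path.done():
        npc.path = npc.pending_path.result()     # cloud answer arrived
        npc.pending_path = None
    elif npc.frames_waiting > timeout_frames:
        # ~1.5s at 60fps with no reply: degrade gracefully to a cheap
        # local route instead of freezing the NPC until the net recovers.
        npc.path = local_fallback_path(npc.pos, npc.goal)
        npc.pending_path = None
    else:
        npc.frames_waiting += 1                  # NPC keeps idling
```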
 
This isn't a reaction to Sony or their hardware. This really is the last new platform they ever intend on making. That explains why they want less coding to the metal (devs shouldn't need it with the cloud, and other vendors can sell hardware in the future in different form factors).

Bingo. I came to the same conclusion a little while ago too. MS wants to break the back of the gaming industry's hardware cycle and turn it into an entertainment industry that includes gaming. They want to leverage the cloud so that gaming can be universal, still fairly high quality, and device agnostic. The caveat is that they want all of that entertainment powered by MS OSes and tools.

x86 or ARM, tablet, laptop, gaming system, PC or handheld all become ONE... The box is a device that is simply an on-ramp to their service provision. SkyDrive with compute for all of your entertainment.

Xbox ONE is the reset point.
 
I don't understand why physics isn't sensitive to latency. You want to see trees start falling down a full second after you've chopped them down? Or stuff running through the ground until the server tells your console, "oh no, it should have interacted with the ground"?

Unless we're talking about physics in another multiverse I don't think that will work well.

So long as the physics/animations/lighting/AI/etc being computed in the cloud isn't being interacted with by the player you can do just about anything conceptually speaking, so long as your local machine can still render the results. Play Skyrim and in the distance an avalanche happens. Wildfires could spread across windy terrain. Complex wildlife simulations can flesh out the forests. Volcanoes can erupt, spewing magma and ash and whatnot all over the landscape. Complex wind/weather scenarios can play out at extremely high fidelity. Floods, tornadoes, oceans, tsunamis, etc. Globally deformable terrain from these kinds of simulated natural disasters.

From a visuals standpoint (just because it's easiest to illustrate), I think that richness in sheer animation/movement variety could be a huge deal. I've long argued that the most visually apparent technical gains of this coming cycle will be in animation and physics simulations. So long as all those things are beyond the range of immediate player influence and interaction, they could conceptually be done in the cloud, which simply sends data back to the console to be rendered in the background of the frame.
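A sketch of how that "render whatever the cloud last sent" loop could work, under the assumption that snapshots arrive over the network at an unpredictable rate. The draw_player_view and draw_background callbacks are placeholders, not a real engine API:

```python
import queue

# Hypothetical: the cloud pushes background-simulation snapshots
# (weather fronts, distant avalanches, wildlife herds) at whatever
# rate the network allows; the render loop never blocks on them.
snapshots = queue.Queue()

latest_background = None  # newest snapshot received so far

def render_frame(draw_player_view, draw_background):
    global latest_background
    # Drain whatever has arrived; keep only the newest snapshot.
    try:
        while True:
            latest_background = snapshots.get_nowait()
    except queue.Empty:
        pass
    # The player-facing simulation runs locally at full rate; the
    # distant cloud-driven stuff just uses the newest data we have,
    # even if it is several hundred milliseconds stale.
    draw_player_view()
    if latest_background is not None:
        draw_background(latest_background)
```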

How about using Kinect + the Cloud to do what Milo did for npc conversations, except instead of only 500 words for the AI to work with, it can have a massive database of phrases and triggers and whatnot? Personally I'm optimistic.

I don't know how often those types of physics effects animate locally, or if they're typically run in lock-step with the framerate.

Not sure about flags per se, but it's plainly obvious that other similar visual FX, like fire, are almost universally low framerate.
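That matches the common pattern of stepping ambient effects at a fraction of the render rate. A tiny sketch of that decoupling, with made-up names:

```python
RENDER_HZ = 60
FX_HZ = 15                        # effect simulated 4x less often
STEP_EVERY = RENDER_HZ // FX_HZ   # advance the effect every 4th frame

def run_loop(num_frames, step_fire, draw):
    """step_fire advances the (cheap, infrequent) fire simulation;
    draw renders every frame using whatever state was last computed."""
    fire_state = None
    for frame in range(num_frames):
        if frame % STEP_EVERY == 0:
            fire_state = step_fire(fire_state)
        draw(frame, fire_state)
```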
 
Maybe a good way to think about what I am saying is computational recycling. In other words, let's say you were letting your city run unattended for several hours; it's highly likely that other gamers have created very similar setups. If that data has been stored, the info for things like wear and tear, natural disasters and so on could be looked up and then inserted into your simulation at the appropriate times, allowing those 5, 10 or 20 hours of simulation to be "simulated" with partly recycled computations.
Maybe I'm not interpreting this correctly, but it sounds like you think you can store and apply event and simulation data to every unique player instance, across billions of pseudorandom events, millions of different cities with different geography and history, hours to days to weeks of playtime, and billions of sequences of actions, and that this would be a performance and simulation improvement.
 
Really not impressed by that Ars Technica piece; it just sounds like so much handwaving. The idea of bringing 'pop-in' to lighting while you await the cloud data is horrible. I'll need to see something much more substantial at E3
And this is the real crux of the issue: after Unlimited Detail and Caustic et al., we are going to need to see things in action. There's no prize for being a true believer.

I'll believe this is anything more than yet another example of the 'everything is better in the cloud' BS that I deal with every day at work.

Which afterwards is always followed by "Why is everything so slow?" and "Man, this shit ain't cheap, is it?". And yes, it's every fricken day.

Oddly, the comments on that piece threw up much better ideas than the MS rep did, such as pathfinding or some sort of AI offload. Lagginess is much less of a concern there, as you can write it off as slow reaction times or some such; in fact, I dare say it would make AI seem more 'real' than the disconcerting headshot in 0.001 seconds you get with some games.

I'd just like a view that doesn't look like it's shrouded in a Shanghai smog.
 
It's both smart and the future of gaming anyway so I am glad they are heavily investing in this model.

Just what future are we talking about? How much bandwidth do I need to get my cloud renders onto my machine? What exactly can the cloud do more effectively that millions of consoles with 5 GB of RAM can't?

Imho the cloud is good for offloading anything that doesn't require quick rendering. Mail, for example. Every time I actually use a cloud solution for something that requires CPU resources, I am usually bound by whatever CPU I ended up on at the server farm. Clouds are strong at massive amounts of data that require many CPUs working in parallel. I know that my imagination is limited, but unless there is a whole new genre invented, I can't see how or where the cloud solution should come into play.

And as I said before, if this were the answer to low CPU and GPU power, someone would have thought of it before.

Microsoft has some of the smartest people; maybe they simply out-thought everyone else.
 
They designed a balanced machine that is competitive without the cloud for a few years, a potential show-stopper with the cloud, and one that can eventually act as a thin client as they move to full-on cloud rendering a la OnLive once the infrastructure is there and ready for it (not nearly as far off as people presume, imho).


Absolutely agree.

That's why there is so much outrage; they are changing the industry, and rather than everyone feeling they are changing it for greed, they are changing it out of need.

The industry will lose to mobile devices otherwise, and old-school gaming will end up a retro niche, or a smaller industry than we see now. MS plans to be off in the wilderness setting new cloud standards rather than caught flat-footed in a shrinking market.

The rest of the gamers-only crowd (and I still play every day, so technically I am one) will come along kicking and screaming until they get their bottle and see how everything is all right, and maybe even better.

This is the teething time. ;)

Listening to that engineering talk with Greenwalt showed me they are all-in on cloud computing; looking forward to the results.
 
Absolutely agree.

That's why there is so much outrage; they are changing the industry, and rather than everyone feeling they are changing it for greed, they are changing it out of need.

The industry will lose to mobile devices otherwise, and old-school gaming will end up a retro niche, or a smaller industry than we see now. MS plans to be off in the wilderness setting new cloud standards rather than caught flat-footed in a shrinking market.

The rest of the gamers-only crowd (and I still play every day, so technically I am one) will come along kicking and screaming until they get their bottle and see how everything is all right, and maybe even better.

This is the teething time. ;)

Listening to that engineering talk with Greenwalt showed me they are all-in on cloud computing; looking forward to the results.

For me as a consumer this makes me uneasy though. Our purchases (ownership, accessibility, features, experience) become more and more dependent on networks, and hence more dependent on the institutions that maintain/own/control these networks.

The time when we owned a product and could do whatever we wanted with it once we purchased it will be long gone in a few years. In the future we will be purchasing just access (often temporary), subject to conditions and contracts.

We won't have real ownership of our purchases, just a virtual one. We already get a tiny glimpse of that with our digital purchases and online subscriptions.
 
It looks like MS is telling devs they'll have three times the XB1's resources in cloud computing for every Xbox One sold, via OXM UK:

"We're provisioning for developers for every physical Xbox One we build, we're provisioning the CPU and storage equivalent of three Xbox Ones on the cloud," he said (Jeff Henshaw, group program manager of Xbox Incubation & Prototyping). "We're doing that flat out so that any game developer can assume that there's roughly three times the resources immediately available to their game, so they can build bigger, persistent levels that are more inclusive for players. They can do that out of the gate."

Dedicated resources for every dev, but how many devs will actually tap into that? Will the software automatically know how to split tasks into latency-sensitive and latency-insensitive work for console vs. cloud computing, or will devs have to explicitly state what gets sent to the cloud? I hope we get some answers at E3 or sooner.
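Nothing official has been described, but one plausible answer is explicit annotation: devs tag work with a latency budget and a runtime scheduler decides what is safe to ship off-console. A purely hypothetical sketch, to be clear, not a real Microsoft API:

```python
# Registry of functions a (hypothetical) scheduler may run remotely.
cloud_eligible = []

def offloadable(max_latency_ms):
    """Mark a function as safe to run in the cloud if the round trip
    fits inside the stated latency budget."""
    def tag(fn):
        fn.max_latency_ms = max_latency_ms
        cloud_eligible.append(fn)
        return fn
    return tag

@offloadable(max_latency_ms=2000)
def simulate_distant_weather(region):
    ...  # latency-insensitive: fine in the cloud

def resolve_player_collision(player, world):
    ...  # untagged: latency-sensitive, always runs locally
```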
 
It looks like MS is telling devs they'll have three times the XB1's resources in cloud computing for every Xbox One sold, via OXM UK:

Dedicated resources for every dev, but how many devs will actually tap into that? Will the software automatically know how to split tasks into latency-sensitive and latency-insensitive work for console vs. cloud computing, or will devs have to explicitly state what gets sent to the cloud? I hope we get some answers at E3 or sooner.

It's clearly not meant for graphical tasks; they only mention CPU resources in there, not GPU resources. It will come in handy for some things, though. :)

Assuming the same clock speed as the PS4, that's ~306 GFLOPS of cloud processing power.
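For what it's worth, that figure roughly checks out, assuming 8 Jaguar cores at the PS4's 1.6 GHz and 8 single-precision FLOPs per core per cycle:

```python
cores = 8                 # Jaguar cores per console
clock_hz = 1.6e9          # assumed PS4-like clock
flops_per_cycle = 8       # single-precision FLOPs per core per cycle

per_console = cores * clock_hz * flops_per_cycle   # 102.4 GFLOPS
cloud_share = 3 * per_console                      # 307.2 GFLOPS
print(per_console / 1e9, cloud_share / 1e9)        # 102.4, 307.2
```

Three consoles' worth of CPU lands at ~307 GFLOPS, in line with the ~306 figure above.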
 