Server-based game augmentations. The transition to the cloud. Really possible?

What I hate is that example he gave of SSAO being lower quality for a couple of seconds when you enter a room, then loading in the good stuff from the cloud...

Whatever the solution is, let's not do that. I think I'd rather wait out a 2-second load than that.

Depends how well devs can hide that transition. They have gotten good at that sort of thing, after all, with mipmaps and now tessellation etc. Conceptually, that transition shouldn't be too tricky for them, I wouldn't think.

Btw, Matt Booty from MS did explicitly mention MS's ambition to make this the last console you ever need to buy. You may recall that bullet point from the Yukon leak, which is what you seem to be referencing in your other post. What's interesting is that for Yukon they had aimed to avoid any cloud stuff at all in 2013, only talking up local streaming of content to devices around your house and saving the real cloud stuff for the full-scale push in 2015. It seems they found this middle-ground approach and are running with it in the interim until a fully viable OnLive-style alternative.
 
the idea of having hundreds of thousands of instances that require more resources than Durango is going to be a far-off idea.

Maybe in Bagel's example, but in general the idea is pretty simple, and it's pretty easy to have multiples of the local processing power available in the cloud for every console.

Out of the ~30 million X360s sold in the USA per NPD, how many are turned on and playing right this second? 1 million?

If you were to dedicate 4 X360s' worth of processing power to each of those active units, you'd still only need 4 million X360s of processing power in the cloud, nowhere near the 120 million it would take to cover every console sold. Plus, the 360 is old now, and 4x a 360 is only ~1 TFLOP of compute. One day the Xbone will also see this type of benefit (since it's no monster even today).

The cloud is really a more efficient way to allocate computing resources because it tracks demand much more closely.

I'm sure there will be growing pains, to say the least. It will be quite interesting. But any amount of offloaded processing can help. Even if initially you "only" offer half a teraflop per Xbone, that's still significant. 1 TFLOP? You've just about doubled the processing power. You don't need the 4x figure for this to matter (although I doubt the 4x figure is out of reach, especially since they called it out as a realistic working number).
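
A rough back-of-the-envelope version of that concurrency argument, with every figure an assumption purely for illustration, not a real MS number:

```python
# Back-of-the-envelope sketch of the concurrency argument above.
# All figures are assumptions for illustration, not real MS numbers.

installed_base   = 30_000_000  # ~X360s sold in the USA (NPD figure cited above)
active_fraction  = 1 / 30      # assume ~1 million consoles playing at any one moment
cloud_multiplier = 4           # the "4x the local console" rule of thumb

active_consoles = installed_base * active_fraction    # ~1,000,000
cloud_needed    = active_consoles * cloud_multiplier  # ~4,000,000 "360s" of compute
naive_estimate  = installed_base * cloud_multiplier   # 120,000,000 if you provisioned
                                                      # for every console ever sold

print(f"Cloud capacity actually needed: ~{cloud_needed:,.0f} console-equivalents")
print(f"Naive 'one per sold console' figure: {naive_estimate:,.0f}")
```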
 
Maybe in Bagel's example, but in general the idea is pretty simple, and it's pretty easy to have multiples of the local processing power available in the cloud for every console.

Out of the ~30 million X360s sold in the USA per NPD, how many are turned on and playing right this second? 1 million?

If you were to dedicate 4 X360s' worth of processing power to each of those active units, you'd still only need 4 million X360s of processing power in the cloud, nowhere near the 120 million it would take to cover every console sold. Plus, the 360 is old now, and 4x a 360 is only ~1 TFLOP of compute. One day the Xbone will also see this type of benefit (since it's no monster even today).

The cloud is really a more efficient way to allocate computing resources because it tracks demand much more closely.

I'm sure there will be growing pains, to say the least. It will be quite interesting. But any amount of offloaded processing can help. Even if initially you "only" offer half a teraflop per Xbone, that's still significant. 1 TFLOP? You've just about doubled the processing power. You don't need the 4x figure for this to matter (although I doubt the 4x figure is out of reach, especially since they called it out as a realistic working number).

One benefit is that Microsoft naturally already has a ton of compute ready to go for its Windows Azure customers. It's not as if this stuff is only for Xbox One, so financially it is not as risky an investment as it would be for Sony trying to pull off the same thing.
 
Depends how well devs can hide that transition. They have gotten good at that sort of thing, after all, with mipmaps and now tessellation etc. Conceptually, that transition shouldn't be too tricky for them, I wouldn't think.

Btw, Matt Booty from MS did explicitly mention MS's ambition to make this the last console you ever need to buy. You may recall that bullet point from the Yukon leak, which is what you seem to be referencing in your other post. What's interesting is that for Yukon they had aimed to avoid any cloud stuff at all in 2013, only talking up local streaming of content to devices around your house and saving the real cloud stuff for the full-scale push in 2015. It seems they found this middle-ground approach and are running with it in the interim until a fully viable OnLive-style alternative.

What's weird is, maybe I misread it, but there's a vibe coming out of MS of "our box isn't powerful enough, quick, talk about the cloud!"

If you already know, this quickly, months before release even, that lack of power is an issue, why didn't you do something about it?
 
The really critical part here is: what is the cost to the developers?

Can you really see a developer offloading graphics or AI work to the cloud if it means they're tethered to some expensive hosting plan every month? For a little extra eye candy? Yeah, right.
 
But any amount of offloaded processing can help. Even if initially you "only" offer half a teraflop per Xbone, that's still significant. 1 TFLOP? You've just about doubled the processing power. You don't need the 4x figure for this to matter (although I doubt the 4x figure is out of reach, especially since they called it out as a realistic working number).

Yeah, the fact they said 'rule of thumb' there makes me think they know enough about their aims to have done the relevant calculations. That's now two different execs who have given this figure from halfway across the globe. Clearly this is something they are serious about. It's fun seeing all the Sony fans get bent out of shape over it, asserting it's all marketing bullshit and none of this will ever happen, though. Clearly they feel like they awoke into a bad dream or something. :p
 
The really critical part here is: what is the cost to the developers?

Can you really see a developer offloading graphics or AI work to the cloud if it means they're tethered to some expensive hosting plan every month? For a little extra eye candy? Yeah, right.

The simplest and cheapest technical solution usually wins.

For a server-based solution, the user may have to pay per-use, subscribe to the service, or yield their privacy.

Right, but if it were a dynamic, player-created world, you'd need to generate new lightmaps after the changes.

A photorealistic SimCity with a complete simulation of millions of citizens (not a fudged, approximated number) and realistic traffic AI is something I'd like to see come out of cloud-based gaming. But I know it won't. I'd take that over any other single-player-turned-MMO bs.

User-generated levels? They'll probably have restrictions, because they can't control what users will do.

The thing about server power is that the data should be dynamic. If it's fairly predictable, "we" can precompute/approximate it and be done with it. The new consoles should have room for the data.
 
Someone at TXB found this. It's an interesting read on the topic from MS's own research teams. :)

http://research.microsoft.com/pubs/72894/NOSSDAV2007.pdf

Maybe I'm not reading carefully, but I can't seem to find the scores in their experiments. I see the average score (kills) difference between bots over about 90 minutes, but I'm not seeing the average scores themselves. To understand how big an improvement their technique brings, I'd like to see the overall scores: 20 vs 40 is very different from 200 vs 220, despite having the same score difference.
 
The cloud servers may be individually specced to be much more powerful than a console, but they are going to be vastly outnumbered.
A running simulation is a constant-demand task that really cuts into their ability to take advantage of idle periods to sneak time slices to other instances.

Why couldn't they run the calculations once, store the results somewhere, and fetch that data? I would imagine that some of the data could be stored rather than actually computed in real time... My initial thought is the claims seem dubious on the surface, but if they can harvest the results of computations that are essentially repeated thousands of times by different users, the servers wouldn't actually have to do so much of this in real time once they have the data stored in a table somewhere.
 
Why couldn't they run the calculations once, store the results somewhere, and fetch that data? I would imagine that some of the data could be stored rather than actually computed in real time... My initial thought is the claims seem dubious on the surface, but if they can harvest the results of computations that are essentially repeated thousands of times by different users, the servers wouldn't actually have to do so much of this in real time once they have the data stored in a table somewhere.

Might as well just download the data and store it locally on the HDD, or precompute it and burn it to the BR disc.
 
Might as well just download the data and store it locally on the HDD, or precompute it and burn it to the BR disc.

Depends on the amount of data, and it would still make sense to be able to share those tables of info with multiple users regardless of where they are stored. Hyping the cloud gives them the ability to justify the monthly fees, because it's hard to quantify, and it provides an answer to the specs deficit. This way MS calculates the answers one time and gets credit for them multiple times.

Again, I think the cloud claims are dubious for lots of reasons; there is no way they are going to have all these servers set aside to work on your game at the moment you are trying to play. But if that data is stored in tables and fetched, it would allow them to virtually compute for many users. This is the only thing I can come up with that would explain some of the outrageous claims they are making right now.
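
For what it's worth, that "compute once, serve many times" idea is basically server-side memoization. A minimal sketch, assuming a hypothetical lighting bake as the repeated computation (the function and cache names are made up for illustration):

```python
# Minimal sketch of "compute once, serve many times" keyed on the inputs.
# The lighting bake and cache are purely illustrative placeholders.

import hashlib
import json

_result_cache = {}  # in practice this would be a shared server-side store

def scene_key(scene_params: dict) -> str:
    """Derive a stable cache key from the inputs that define the computation."""
    blob = json.dumps(scene_params, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

def expensive_lighting_bake(scene_params: dict) -> bytes:
    """Stand-in for a heavy, latency-insensitive precomputation (e.g. a lightmap)."""
    return b"baked-lightmap-data"  # imagine minutes of server time here

def get_lighting(scene_params: dict) -> bytes:
    """Thousands of players requesting the same scene hit the cached result."""
    key = scene_key(scene_params)
    if key not in _result_cache:
        _result_cache[key] = expensive_lighting_bake(scene_params)  # computed once
    return _result_cache[key]  # served many times
```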
 
The way I understand it is that it can't really help with any latency-sensitive stuff, player actions and so forth; in other words, it's for pre-baking things like lighting, or handling AI in the distance that doesn't really matter because it doesn't affect the player.

All these things normally get very few resources because they're not immediately important to the player, and I thought next gen was going to be about dynamic lighting and physics calculations.

I don't know, it sounds like distraction marketing where they're trying to butter people up for always-online and for the fact that their machine is weaker than the competition. It kind of reminds me of Sony in 2006.
 
How the Xbox One draws more processing power from cloud computing - http://arstechnica.com/gaming/2013/...s-more-processing-power-from-cloud-computing/

"One example of that might be lighting," he continued. "Let’s say you’re looking at a forest scene and you need to calculate the light coming through the trees, or you’re going through a battlefield and have very dense volumetric fog that’s hugging the terrain. Those things often involve some complicated up-front calculations when you enter that world, but they don’t necessarily have to be updated every frame. Those are perfect candidates for the console to offload that to the cloud—the cloud can do the heavy lifting, because you’ve got the ability to throw multiple devices at the problem in the cloud."

Sounds intriguing, though I am having a hard time seeing this become a reality. But is it feasible? Is it multiplayer only? Is this something for single-player games? How does my game run without an internet connection? How does it run on a slow connection? How much data would need to be transferred? AFAIK some of the stuff mentioned is pretty heavy on memory and storage. How does this handle 60 million consoles if very successful games use this feature? Variable quality? Third-party developers who develop for PC/PS4/ONE would have to invest extra resources to get this up and running. Do publishers pay a cloud fee to Microsoft?

I could be wrong, but it really feels like Azure was the only thing they could pull out of the hat to counter the power deficit against the PS4. And nothing is stopping Sony, Nintendo, or PC games from doing the same... except a lack of cloud computing experience. Strictly speaking, if this is a valid way to do games, Google should be able to deliver ground-breaking next-next-gen graphics in browsers.
 
I could be wrong, but it really feels like Azure was the only thing they could pull out of the hat to counter the power deficit against the PS4. And nothing is stopping Sony, Nintendo, or PC games from doing the same... except a lack of cloud computing experience. Strictly speaking, if this is a valid way to do games, Google should be able to deliver ground-breaking next-next-gen graphics in browsers.

Well, Gaikai DID run demos of titles like Crysis 2 in a browser, pretty much, IIRC. OnLive wasn't browser-based, but it wasn't far from it really. It had problems with lag and, most notably, crap picture quality (which they improved in an update I didn't try, though I'm not sure by how much), but it was also a different paradigm than the one MS speaks of.

Yeah, if it works it changes everything, and I'm not so sure that's in a good way, simply because I like ownership of a piece of hardware, and this starts to take that away. I don't mean in a DRM sense; I mean that I like owning a plastic box of fixed capability. I believe Crytek touched on this desire among the hardcore as an issue with the cloud.

At the end of the day, the core want to own and take pride in that monster PC or even a console, and that's a barrier to the cloud.

Nothing's stopping Sony and Nintendo from doing the same thing eventually, but in Nintendo's case it seems laughably unlikely they could pull it off even if they wanted to, considering a simple thing like the Wii U OS runs like molasses and they are not a forward-thinking, internet-centric company at all.

There could also be a large investment in servers required that those companies haven't made, as well as the experience with them, etc., that MS has with Azure. Sony might be able to copy it eventually, but you might be looking at years.

But for now we've seen no real fruits of this from MS. I'm super, super curious about it. I wonder if anything will happen at E3, but I doubt it.
 
I don't see how this stuff is all that difficult to grasp for you people. World simulation, physics and animations, global AI, and certain lighting aren't sensitive to latency.

I don't understand why physics isn't sensitive to latency. Do you want to see trees start falling down a full second after you've chopped them down? Or stuff passing through the ground until the server tells your console, "oh no, it should have interacted with the ground"?

Unless we're talking about physics in another multiverse, I don't think that will work well.
 
As I said.
Firstly, your suggestion was computing on idle XB1s, not on dedicated servers. Secondly, the PR face of the concept doesn't tell us what is actually going to happen, which is worth investigating (this thread!). E.g., Ken Kutaragi talked about the power of networked Cells processing video content to make it better. Never happened. Wild claims of 4x XB1 power for every console are great for motivating the faith of the fanboys, but they shouldn't be taken at face value. As a technical forum we are well positioned to question the viability. Similar discussions looked at Unlimited Detail and said it was a crock, and were right; looked at 8 GB of GDDR5 in the PS4 and said it was improbable, to be proved wrong; doubted the realism of various PR claims by OnLive, which were substantiated, though OnLive proved us wrong in some ways too; and discredited the notion of more powerful hardware in Durango and the Wii U, on which the consensus on this board was completely right.

It's not about being right or wrong per se. The interest here is technical discussion and investigation and seeing if sound reason can predict the future.
 
I don't understand why physics isn't sensitive to latency. Do you want to see trees start falling down a full second after you've chopped them down? Or stuff passing through the ground until the server tells your console, "oh no, it should have interacted with the ground"?

Unless we're talking about physics in another multiverse, I don't think that will work well.

Until we get to a point where we can agree on what the expected range of latency and bandwidth requirements is, it's going to be difficult for anyone with the development knowledge to respond to things like this. I've read that in the worst of OnLive's "acceptable" scenarios, it's 10 frames of end-to-end latency, which would be a third of a second.
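
For reference, converting "frames of latency" into wall-clock time depends on the frame rate, which is why that figure only works out to roughly a third of a second at 30 fps. A trivial sketch:

```python
# Convert "frames of latency" to milliseconds at a given frame rate.

def frames_to_ms(frames: int, fps: int) -> float:
    return frames / fps * 1000.0

for fps in (30, 60):
    print(f"10 frames at {fps} fps = {frames_to_ms(10, fps):.0f} ms")
# 10 frames at 30 fps = 333 ms
# 10 frames at 60 fps = 167 ms
```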
 
I don't understand why physics isn't sensitive to latency. Do you want to see trees start falling down a full second after you've chopped them down? Or stuff passing through the ground until the server tells your console, "oh no, it should have interacted with the ground"?

Unless we're talking about physics in another multiverse, I don't think that will work well.

What about trees, flags, or other items moving in the wind? Or waves? I have no idea what kind of bandwidth you'd need to pass data back and forth for those. If it were a game like Skyrim, maybe you have your character's cape/cloak animated locally, but you push the physics for all the flags flying outside the nearest castle to the cloud. If that animation is running in the cloud, they can just keep pushing the latest update to you, and your client can fit it into the next frame. I suppose it couldn't be totally asynchronous, because you need to tell the cloud when to stop processing a flag that's now out of view, or tell it there is a new flag in view, but otherwise it would just be a data stream from the cloud to your client.

Just tossing that idea out there. Someone else might know better how much data that would require, making some assumptions about the quality of the effect based on how many points on the surface can be deformed, etc.

Maybe someone with more technical knowledge could fill in the details on how feasible this hypothetical scenario would be. That could be a starting point for a more interesting discussion. One flag animated by the wind, for example. Maybe assume 10 or 15 Hz. I don't know how often those types of physics effects update locally, or if they're typically run in lock-step with the framerate.
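
Taking a rough stab at that: a back-of-the-envelope bandwidth estimate for one cloud-simulated flag, under assumed numbers (256 deformable points, three 32-bit floats per point, 15 Hz updates, no compression):

```python
# Rough bandwidth estimate for streaming one cloud-simulated flag.
# Point count, precision, and update rate are assumptions; adjust to taste.

def flag_stream_kbps(points: int = 256,          # deformable vertices on the flag
                     bytes_per_point: int = 12,  # 3 x 32-bit floats (x, y, z)
                     updates_per_sec: int = 15) -> float:
    bytes_per_sec = points * bytes_per_point * updates_per_sec
    return bytes_per_sec * 8 / 1000.0            # kilobits per second

print(f"One flag:  ~{flag_stream_kbps():.0f} kbps")             # ~369 kbps
print(f"Ten flags: ~{flag_stream_kbps(points=2560):.0f} kbps")  # ~3,686 kbps
```

So even uncompressed and with these made-up numbers, a handful of flags sits in the hundreds of kbps to low Mbps range; delta encoding or sending only changed points would presumably cut that further.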
 