Server-based game augmentations. The transition to cloud: really possible?

I didn't think I needed to reiterate what others have said. Clearly, you do not understand anything that was in the DF article. I imagine you are having a hard time understanding these concepts.

Here, it clearly explains why your theories are nonsense.

Trolls belong on the ignore list. If you can't offer actual points to discuss you'll be joining him.

If you are going to reply to my posts and make assertions about their content you get to volunteer to supply some rationale and evidence to support said assertions. You can expect to be called out when you fail to do so.

I'll pretend to be unaware of your post history and your ban from TXB, and give you the benefit of the doubt by explaining why your assertions are nonsense. You can reply here or to the post you quoted.

First of all, the scenario I described was interactive and dynamic, just not immediate. I'm not interested in DF's assertion that latency-insensitive tasks can only be non-interactive things. That assertion is flatly wrong, as I've explained in that post you quoted without reading. What you CANNOT do is have latency-sensitive tasks, like anything that has to update every frame or even every few frames. A slow-moving ship with many seconds between player interaction and the expected outcome leaves a rather long window for rich new animation data to be computed/streamed in well before the actual collision takes place.

The player can still affect the initial conditions and as such it is still interactive. The player can still do things to either trigger or prevent the collision from happening (in which case the streamed anim data stored in RAM is just ignored/cleared). In that sense it is dynamic. And yet, it's a task that's extremely insensitive to latency. There are loads of large-scale physics-based events like this that can be computed when the player is within a certain range and triggered upon request (or ignored when not triggered). It's not hard to think up such scenarios.
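To make the pattern concrete, here's a rough sketch of what I mean (Python-flavoured; every name in it is made up for illustration, not any real API):

```python
# Rough sketch of the compute-early, trigger-or-discard pattern described
# above. Every name here is made up for illustration ('cloud' stands in for
# whatever async request API a real title would use).

class DeferredCloudEvent:
    """Outcome of a large, slow event, computed remotely well in advance."""

    def __init__(self, cloud, initial_conditions):
        self.anim_data = None
        # Fire the request the moment player actions fix the initial
        # conditions; the collision itself is still many seconds away.
        cloud.request_simulation(initial_conditions, on_done=self._store)

    def _store(self, anim_data):
        self.anim_data = anim_data   # parked in RAM until needed

    def trigger(self):
        # Called only if the event actually happens in-game.
        return self.anim_data        # None -> fall back to a cheap local sim

    def cancel(self):
        # Player prevented the collision: streamed data is just ignored/cleared.
        self.anim_data = None
```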

If you disagree you can explain why. Trolling or asserting I am wrong isn't contributing to the discussion, especially a technical discussion.
 
Maybe it's better to start trying to accumulate info from modern games that have accessible documentation.

KZ:SF Demo
75MB for animations
6MB for AI data
5MB for physics meshes (compared to 315MB for total meshes)

http://www.guerrilla-games.com/presentations/Valient_Killzone_Shadow_Fall_Demo_Postmortem.pdf


BF3 multiplayer levels
200MB-250MB for meshes (total, not just physics meshes)
Heightfields for terrain are procedurally generated (helpful?)

Here's a whole presentation on their destruction system btw:
http://www.slideshare.net/DICEStudio/siggraph10-arrdestruction-maskinginfrostbite2

Nearly all physics (especially in multiplayer games) is sensitive to latency, and the same goes for AI in multiplayer when it's near you. Anything that is interacting with you cannot wait 5 or so seconds for the data to stream back and forth.

The KZSF data might not seem like a lot, but when you take into account how slow upload and download speeds are on the internet, plus processing time, you'll soon realise that even processing and uploading/downloading a tenth of any of that data will take over a second on average. That is not acceptable for most things.

I can only really see things that you could bake into the game without using the cloud, and stuff like AI when it's far away, being doable.

The BF3 destruction is a great example of something that CANNOT be done on the cloud.
 
I imagine you mean non-interactive physics, destruction, and animation? I see extremely limited application for those.

The results of interactions are not always immediate in games. You could do real simulations in the cloud, with complex physics for all the different elements and materials, just sending the essential data back to the console.

Your buildings or whatever will collapse in a more realistic manner.
 
The results of interactions are not always immediate in games. You could do real simulations in the cloud, with complex physics for all the different elements and materials, just sending the essential data back to the console.

Your buildings or whatever will collapse in a more realistic manner.

You could also argue about what latency is acceptable for things that are currently processed within one frame. Physically you would expect the reaction to happen immediately, but would anyone actually notice if it were 100ms or even 200ms behind? Still, my expectations are fairly low for that kind of thing until I actually see a demo.
 
Nearly all physics (especially in multiplayer games) is sensitive to latency, and the same goes for AI in multiplayer when it's near you. Anything that is interacting with you cannot wait 5 or so seconds for the data to stream back and forth.

That's not true. What you can't have here is immediate display of results. I'm not talking about immediate display of results, I'm talking about the delayed and/or triggered display of results that had their initial conditions governed by player interaction.

The KZSF data might not seem like a lot, but when you take into account how slow upload and download speeds are on the internet, plus processing time, you'll soon realise that even processing and uploading/downloading a tenth of any of that data will take over a second on average. That is not acceptable for most things.

You aren't uploading anything but player/world state info to establish the computation's initial conditions. And download speeds have been discussed already; I'm assuming 1MB/sec, which is roughly in line with average US speeds from 2012. And this isn't latency-sensitive data. It's OK for it to take a second or five to compute and download when you don't need to have the results displayed until 10 seconds later.
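To put that budget in concrete terms (every number below is an assumption, per the above):

```python
# Toy feasibility check for the timeline above; every number is an assumption.
payload_mb        = 5.0    # e.g. deformed physics meshes for one event
download_mb_per_s = 1.0    # ~average US connection speed, 2012
compute_s         = 3.0    # server-side simulation time (pure guess)
deadline_s        = 10.0   # earliest moment the player could see the result

total_s = compute_s + payload_mb / download_mb_per_s
verdict = "OK" if total_s <= deadline_s else "too slow, fall back to local"
print(f"ready after {total_s:.0f}s against a {deadline_s:.0f}s deadline: {verdict}")
# -> ready after 8s against a 10s deadline: OK
```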

I can only really see things that you could bake into the game without using the cloud, and stuff like AI when it's far away, being doable.

You see little by ignoring the ideas put forth by others.

The BF3 destruction is a great example of something that CANNOT be done on the cloud.

According to...? Yes, when you assume a priori that it's impossible, of course you will conclude it's impossible by definition. Again, read the posts I make before replying. You guys seem eager to assert that no solution exists without actually addressing the ones being presented.
 
I've been trying to puzzle through the optimization space for using cloud instances for non-multiplayer uses, particularly at this early date and with computational resources less than an order of magnitude greater than the local client.
To start, let's say something physically present or visible is being simulated.

What are the traits of a phenomenon that can be best simulated over a remote connection?
We've already gotten several traits that come up often: non-interactive, outside of the player's influence, computationally intense relative to bandwidth consumption, latency tolerant, and potentially subtle.
Other likely conditions are things like being able to maintain consistency in the face of irregular timing or data delivery, or having very relaxed time demands.
The time of day example is something like this, where the cloud spits out the necessary data to disk, and the console tries to get to it when it can. Irregular data delivery would need to have a significant amount of variability in order to make things perceptible, since things like the sun have to traverse quite a bit of the sky to make things noticeable.

There seems to be a tension between a number of desired properties.
The more an element is outside the player's influence, the greater the likelihood that its significance to the player shrinks in inverse proportion, as in the gratuitous case of incredibly accurate rendering of grass blades half a mile away.

The simulation should be intensive enough to justify the offload effort, yet not require too much bandwidth to bring the result to the player.
This brings the question of how meaningful we can make that payload, which has costs in terms of the effort it takes to extract the content versus the work saved in offloading, versus limiting the expressiveness of the supposedly superior simulation to fit all its glory through the hundred-mile bendy straw that is the Internet.

Consistency and economizing bandwidth create pressure to make results more approximate, and more readily blended or fudged. Precision can be sacrificed because strict adherence to the irregularly delivered results can lead to noticeable hitches or artifacts. Too long a time between simulation steps, and we can suffer a loss in accuracy--or accuracy that can't be arrived at by lesser means.

The mixture of wanting something to be complex and readily updated, but not beholden to any strict schedule is a curious one.
One idea would be a simulated water fountain in the background. It has water physics, mist, refraction, sound, and yet it only has a fixed impact on its surroundings and an output that generally doesn't care about other parts of the game world. It is complex, always evolving, yet essentially timeless relative to everything but itself. Hiccups in data delivery can be smoothed over for a bit, and it's not like the player can say a given gurgle is wrong anyway.
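As a sketch of how that fudging might look in practice (all names and interfaces here are invented for illustration):

```python
# Invented-for-illustration sketch: loop the last simulation chunk when data
# is late, then cross-fade once the new chunk arrives. 'chunk.sample(t)' is an
# assumed interface returning a list of floats (particle/audio state).

def blend(a, b, t):
    """Linear cross-fade between two sampled frames."""
    return [x * (1.0 - t) + y * t for x, y in zip(a, b)]

class SmoothedCloudEffect:
    def __init__(self, first_chunk, fade_s=0.5):
        self.current = first_chunk   # chunk being played back (loops if stale)
        self.incoming = None         # late-arriving replacement chunk
        self.fade_t = 0.0
        self.fade_s = fade_s

    def on_chunk_received(self, chunk):
        self.incoming, self.fade_t = chunk, 0.0

    def sample(self, t, dt):
        if self.incoming is None:
            return self.current.sample(t)      # no new data: keep looping
        self.fade_t = min(1.0, self.fade_t + dt / self.fade_s)
        frame = blend(self.current.sample(t),
                      self.incoming.sample(t), self.fade_t)
        if self.fade_t >= 1.0:                 # fade finished: adopt new chunk
            self.current, self.incoming = self.incoming, None
        return frame
```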

What balance can be struck between the complexity of the element, the frequency of simulation updates, how fudge-able the hiccups can be, and the remoteness of fountain to the player's area of concern?
Between player perceptibility, fault tolerance, bandwidth, and computational load, there must be a number of inflection points where something is very complex, yet safely irrelevant, yet not too hard to smooth, but not so fudge-able or interactive that it's better done locally.
 
Very good points to consider, 3dilettante. I'd suggest reconsidering any assumptions you may have about the spatial distribution of non-interactive elements, though. I originally assumed that maybe the best way to envision how this stuff could play out was a toy model scenario: imagine something like Skyrim with a great sphere centered on the player. Within that sphere arrows could travel, spells could be cast, etc. That's what I called the 'sphere of influence' earlier in this thread. Within that bubble the player's actions can affect the world in an immediate fashion. Beyond it was the cloud.

As I thought about that assumption more I considered something more like GTA, where the region beyond this bubble wasn't some wide-open world but rather a dense, bustling cityscape. As you walk down the streets of this city you are surrounded by buildings that are typically static and non-interactive. If you were to attempt to go inside one of them, to say the 37th floor of an apartment building, it'd physically take minutes to get there in the game world. During the time it takes you to enter the building and traverse the stairs/elevator to the desired floor, the game could be simulating AI behavior/schedules/etc. in the cloud and streaming the data back to the console in time for that entire upper half of the building (the lower half can be computed/streamed in any time the player is within a few minutes' walk of the building).
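A quick back-of-envelope check shows how comfortable that window is (all numbers here are my guesses):

```python
# Back-of-envelope check for the apartment-building example; every number is
# a guess (the per-floor payload is loosely scaled from KZ:SF's 6MB of AI data).
walk_and_elevator_s = 120.0   # in-game time to reach the 37th floor
floor_payload_mb    = 0.25    # AI schedules/state per floor
floors_to_stream    = 20      # the upper half of the building
download_mb_per_s   = 1.0     # assumed average connection

stream_s = floors_to_stream * floor_payload_mb / download_mb_per_s
print(f"streaming takes ~{stream_s:.0f}s against a {walk_and_elevator_s:.0f}s window")
# -> ~5s of downloading against a two-minute window: a comfortable margin
```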

I bring this example up because it illustrates the possibility of having a spatially close element of a game world that can be rich and compelling and powerful in delivering believability to the city and yet still offers a significant window for sufficient cloud intervention to work.

It's also true that we should maybe be focusing on the more obvious payoffs. It's not so much about data flow as it is about the tangible concept of delivering a richer, more believable world. I've personally focused my thoughts mostly on high-fidelity animations and physics as a result of considering this myself. I feel game rendering is more than adequate today to convey something real damn close to the Uncanny Valley in terms of graphics tech, but to get across that valley you need more than photorealism; you need totally realistic movement more than anything else.

Some ideas I've had for very obvious things players would notice that can enrich the atmosphere of the game world while retaining high fault tolerance include:

Physics simulations involving time-delayed results:
Violent weather
Volcanic eruptions
Meteor strikes
Avalanches
Bridge collapses

Large scale, violent events like these are highly complex, visually dramatic, can be triggered after a delay without players noticing (depending on context), and aren't the slightest bit sensitive to the player state.

One more idea. If the game does a destruction computation in the cloud, say some remote detonation of a building, and needs to stream in deformed meshes to convey that destruction...tessellation could potentially play a meaningful role in helping the streaming process. You stream in a new displacement map for the local hardware to tessellate and displace into a highly detailed mesh. In fact, just about any deformable mesh could use something like that I'd wager. Just a thought.
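To put rough, purely illustrative numbers on that idea:

```python
# Illustrative payload comparison; sizes are picked purely for illustration.
# A 512x512, 16-bit displacement map the local GPU tessellates and displaces:
disp_map_bytes = 512 * 512 * 2        # = 0.50 MiB
# vs. a naive deformed mesh: 100k vertices, position + normal = 6 floats each:
mesh_bytes = 100_000 * 6 * 4          # = ~2.29 MiB

print(f"displacement map:  {disp_map_bytes / 2**20:.2f} MiB")
print(f"raw deformed mesh: {mesh_bytes / 2**20:.2f} MiB")
# ~4-5x smaller before compression, and heightmaps compress far better too.
```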
 
According to...? Yes, when you assume a priori that it's impossible, of course you will conclude it's impossible by definition. Again, read the posts I make before replying. You guys seem eager to assert that no solution exists without actually addressing the ones being presented.

The fact that the destruction physics has to interact with every player on the server as fast as possible. You cannot have it lagging for a couple of seconds and then have someone suddenly appear inside it and die because the destruction lagged.


You see little by ignoring the ideas put forth by others.

You see little when you ignore everything said by anyone who disagrees with your opinion.
 
The fact that the destruction physics has to interact with every player on the server as fast as possible. You cannot have it lagging for a couple of seconds and then have someone suddenly appear inside it and die because the destruction lagged.

Nobody is claiming it wouldn't need to be coded differently. I'm saying large-scale physics events (like a building collapsing from a C4 explosion) could be programmed in such a way that the dev leverages that time delay by moving the computation to the cloud to make it a higher-fidelity simulation. I'm not claiming you can fire a bullet and calculate enemy animations resulting from its impact upon their chest.

'Destruction physics' in a videogame encompasses far more than you seem to realize. It's not merely immediate, real-time interactive events.

Btw, I'm not talking about MP play at all. I listed MP map info for BF3 only to note the size of certain typical assets in that particular engine. MP maps themselves should be small enough as-is to not need a cloud computing component to augment the actual game world or map (I'd presume). Or if such cloud stuff did come into play, it'd be something shared by everyone playing, like the major physics events that supposedly change the flow of combat on MP maps in the new CoD.

You see little when you ignore everything said by anyone who disagrees with your opinion.

You haven't countered *anything* I've said. You just asserted it can't be done, without basis. When you attempted to supply a basis you simply took something I wasn't speaking to and tried to dismantle that instead. :rolleyes:
 
You haven't countered *anything* I've said. You just asserted it can't be done, without basis. When you attempted to supply a basis you simply took something I wasn't speaking to and tried to dismantle that instead. :rolleyes:

That's your opinion. Whenever someone counters what you say, all you do is throw back more and more information about how wrong they are. You cannot seem to fathom being incorrect and to be honest it's annoying; even on subjects you clearly know nothing about, you seem to think you have the right to tell off not only members but also actual game developers about how wrong they are.

This is mostly evident in your instant dismissal of the Digital Foundry article.

But to get back on topic. There is only so much power in the cloud, don't forget: 300 GFLOPS, 24 cores of a 1.6GHz Jaguar CPU. A lot of the stuff that would be done on it is also probably possible on GPGPU, so I don't think we will see it making a great deal of difference to what you can see on screen. After all, if it's so latency insensitive, what's to stop you dedicating some GPU time to each tick and working on it in parts?

The place it will be useful, imo, is for example with AI, among other things, or large trading simulations; but physics honestly seems like a place where you could do all of this work on the local system over a longer time period. After all, it's latency insensitive, right?
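Something like this is what I mean by working on it in parts (a bare-bones sketch; the job interface is made up):

```python
# Bare-bones sketch of amortising a latency-insensitive job across frames on
# the local machine instead of shipping it off. 'job.done' and 'job.step()'
# are assumed; step() is taken to return the milliseconds it just consumed.

FRAME_BUDGET_MS = 2.0   # slice of each ~16.6ms frame donated to the slow job

def tick(job):
    """Advance the long-running simulation a little; call once per frame."""
    spent = 0.0
    while not job.done and spent < FRAME_BUDGET_MS:
        spent += job.step()
    return job.done     # True once the result is ready to be used
```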
 
The simulation should be intensive enough to justify the offload effort, yet not require too much bandwidth to bring the result to the player.
This brings the question of how meaningful we can make that payload, which has costs in terms of the effort it takes to extract the content versus the work saved in offloading, versus limiting the expressiveness of the supposedly superior simulation to fit all its glory through the hundred-mile bendy straw that is the Internet.

Global illumination is a candidate. Voxels in an octree are well suited to a wide spectrum of bandwidths: top nodes go first and with the most redundancy. Split based on temporal response tolerance (i.e. muzzle flashes would be calculated/approximated on the client only).

High-quality rendering of complex impostors is another.
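As a rough sketch of that 'top nodes first' ordering (the octree node layout here is assumed for illustration):

```python
# Sketch of the 'top nodes first' idea: stream octree GI voxels breadth-first,
# so a bandwidth cut-off still leaves coarse but usable lighting. The node
# layout ('children', 'payload') is assumed for illustration.
from collections import deque

def stream_order(root):
    """Yield octree nodes coarse-to-fine (breadth-first)."""
    queue = deque([root])
    while queue:
        node = queue.popleft()
        yield node                    # send this node's voxel payload
        queue.extend(node.children)   # finer detail follows later

# e.g. send until this frame's budget runs out; the client shades with
# whatever depth it received:
# for node in stream_order(octree_root):
#     if not budget.try_send(node.payload):
#         break
```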

Cheers
 
Global illumination is a candidate. Voxels in an octree are well suited to a wide spectrum of bandwidths: top nodes go first and with the most redundancy. Split based on temporal response tolerance (i.e. muzzle flashes would be calculated/approximated on the client only).

High-quality rendering of complex impostors is another.

Cheers

What do you think about a crowd of AI all exhibiting complex looking behaviors, but that the player can't directly interact with or doesn't have a chance to interact with? Would that make sense for the cloud?
 
But to get back on topic. There is only so much power in the cloud, don't forget: 300 GFLOPS, 24 cores of a 1.6GHz Jaguar CPU.

Oh boy, you just love that, don't you :rolleyes:

Nowhere did MS give a flops number, or say that there wouldn't be GPU resources in the cloud, or anything. They simply said in one interview that three Xbones' worth of CPU+storage would be provisioned as a minimum, but they never ruled out anything else. Your sticking to it as some super hard, set-in-stone rule is funny, but not unlike you.

In fact, in the other interview the guy clearly talked about GPU effects like lighting and SSAO being done in the cloud.

They also said Xbone without cloud = 10x 360 and Xbone with cloud = 40x 360. So if I'm super literal like you, I deem they have 3.9 teraflops in the cloud for every Xbone :rolleyes: They could not have been talking about CPU flops, because the 360 CPU = 100 GFLOPS and the Xbone CPU = 100 GFLOPS; therefore, if you're only talking CPU flops as you are, the Xbone cannot be 10x the 360... what now, Mr. Super Literal?
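And for the record, here's the arithmetic behind that 3.9 number, taking their quotes completely literally:

```python
# Taking the 10x/40x quotes completely literally, with the Xbox One GPU's
# public 1.31 TFLOPS peak as the yardstick:
xbone_tflops = 1.31                      # Xbox One GPU peak
x360_tflops  = xbone_tflops / 10         # "Xbone without cloud = 10x 360"
cloud_tflops = (40 - 10) * x360_tflops   # "with cloud = 40x" -> the extra 30x
print(f"{cloud_tflops:.2f} TFLOPS of cloud per console")   # -> 3.93
```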
 
That's your opinion. Whenever someone counters what you say, all you do is throw back more and more information about how wrong they are.

Right...because god forbid I defend my own points with logic and example scenarios! :oops:

You cannot seem to fathom being incorrect...

Stop. I've stated at LEAST two or three different times in the last page or two alone that I've had thoughts on this issue and assumptions that turned out to be wrong upon further inspection after others made compelling points. Spare me your troll posts. It's unwarranted and only serves to derail the thread.

...and to be honest it's annoying

Here's what's annoying: ppl like you levying ignorant personal attacks just because I disagree with your claims. I didn't insult you. I didn't attack you personally. I pointed out why I feel your arguments are wrong and mine are correct. It speaks volumes that your reaction to that was purely emotional.

...you seem to think you have the right to tell off not only members but also actual game developers about how wrong they are.

What game developers? Jonathan Blow?! The guy who just blatantly makes shit up about X1's 300k servers out of nowhere? Ha! ERP has posted some of the same points I've made. Obsidian and Avalanche clearly agree with me as well. So again, what devs am I asserting are wrong that aren't compulsive liars in the first place like Blow? Name them.

What I 'seem to think' is that this is a technical thread on a new approach to game design, located in a discussion forum. As such 'I have the right' to tell you and anybody else I please how I feel. That is *the point* of a discussion forum. If you don't like my posts, ignore them. If you want to debate something or discuss something, feel free to do so, but don't be so incredibly arrogant as to feign shock when your claims fail to impress me.

This is mostly evident in your instant dismissal of the Digital Foundry article.

"Instant", eh? Are you sure I didn't raise a helluva lot of points thoroughly detailing my issues with their presentation and misleading structure? Are you sure I didn't go through it and give them their shaving of due credit? Are you sure I didn't outline valid reasons for calling the author out on an incredibly misleading article? No, you're just making up more bogus assertions to attack me. :???:

But to get back on topic. There is only so much power in the cloud, don't forget: 300 GFLOPS, 24 cores of a 1.6GHz Jaguar CPU. A lot of the stuff that would be done on it is also probably possible on GPGPU...

Stop. Again...so? Why should anyone care if it can be done locally? The simple fact that you can move it to the cloud to free up local resources and remove latent operations in the process is somehow a bad thing now? I'm confused as to how so many 'technical minded' ppl can't grasp this. If the GPGPU stuff is in the cloud, that's more rendering horsepower for the more typical GPU processing tasks. That's...good.

IMHO the most important aspect to great 'visuals' these days isn't rendering, it's realistic animation and physics. Believable movement is way more important to a convincing visual impression than ppl tend to realize.

The place it will be useful, imo, is for example with AI, among other things, or large trading simulations; but physics honestly seems like a place where you could do all of this work on the local system over a longer time period. After all, it's latency insensitive, right?

AI could be a big deal too, and could even manifest itself in powerful visual aspects such as scripting animation behavior and NPC routines in realistic ways, or controlling how ppl drive cars in a GTA game. As for physics, why would you want to do such a thing in the first place? Why use GPU resources to compute things that are extremely computationally intensive? How is that rational *if* it can be reliably done in the cloud instead and the local console just receives the results? :?:
 
Oh boy, you just love that, don't you :rolleyes:

Nowhere did MS give a flops number, or say that there wouldn't be GPU resources in the cloud, or anything. They simply said in one interview that three Xbones' worth of CPU+storage would be provisioned as a minimum, but they never ruled out anything else. Your sticking to it as some super hard, set-in-stone rule is funny, but not unlike you.

In fact, in the other interview the guy clearly talked about GPU effects like lighting and SSAO being done in the cloud.

They also said Xbone without cloud = 10x 360 and Xbone with cloud = 40x 360. So if I'm super literal like you, I deem they have 3.9 teraflops in the cloud for every Xbone :rolleyes: They could not have been talking about CPU flops, because the 360 CPU = 100 GFLOPS and the Xbone CPU = 100 GFLOPS; therefore, if you're only talking CPU flops as you are, the Xbone cannot be 10x the 360... what now, Mr. Super Literal?

While they didn't specify anything that can help us get a flops number, they *did* in fact say the cloud resources are there for CPU and RAM allocations. So no GPU. ;)

The figure from that Aussie PR guy was just that, PR. Multiple other MS execs have since clarified.
 
While they didn't specify anything that can help us get a flops number, they *did* in fact say the cloud resources are there for CPU and RAM allocations. So no GPU. ;)

Link? (And it seems every time I step up and demand a link, it is found wanting :p )

Anyway, even if that is true it's still nice. Think betanumerical wouldn't be touting it as a pretty big deal if the PS4 had a virtual 32-core CPU vs the Xbone's 8? lol

that's 4x, or 300% more, by my math :p
 
I personally wouldn't use Jonathan Blow as an example of much of anything. The guy just seems so full of himself, it's incredible. He had some incredible success, and now he sees an opportunity to try and capitalize on that by being a loudmouth. Shame, too, because I really loved Braid, but that's the last time I ever support anything he does.
 