Server-based game augmentations. The transition to the cloud. Really possible?

It's certainly possible, but then you run into the usual situation: optional things are almost never supported outside of a small minority of games on console. Even the games that do target the optional "thing" eventually have support for it abandoned, precisely because it's optional. And where support isn't abandoned, it remains largely a fringe benefit, with the majority of games not supporting it (Move, for instance), and those that do often not supporting it well.

In other words, the cloud could have been successful because it was standard. Having a second console do something similar would never take off, because it's optional, and optional in a way where it's highly unlikely that more than a very small minority of owners would have two of the same console. Hence it would receive basically no support outside of the initial proof-of-concept games. Move could have become an amazing controller supported by the majority of games IF Sony had included it in every single PS4 as the standard controller.

So, yes, it could be done. But it won't ever be done.

Regards,
SB

OK, so technically it doesn't seem to be a problem, then?
Because what I'm wondering is: perhaps this is something that doesn't need to be mandatory but could still work even though it's optional?

Is it possible to code games with "if" situations? So, for example: "IF a second console is connected, offload X, Y, Z to it", etc.?
Games would be coded to what the standalone machine is capable of, but COULD get improvements if a second unit is connected, improvements like we see in the PC space today with a monster GPU and so on.
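
Something like the sketch below is what I imagine (a rough illustration only; every name in it is invented, since no real console OS exposes anything like this today):

```cpp
// Hypothetical sketch of the "IF a second console is connected" idea.
// All names (CompanionLink, DetectCompanionConsole, SubmitJob) are made up
// for illustration; no real console SDK exposes anything like this.
#include <cstddef>
#include <optional>

struct CompanionLink {
    // Imaginary handle to a second console reachable over the local network.
    bool SubmitJob(const char* jobName, const void* data, std::size_t size) {
        (void)jobName; (void)data; (void)size;
        return true;  // stub: pretend the job was shipped to the other box
    }
};

// Stubbed detection; in reality the OS would have to provide the equivalent.
std::optional<CompanionLink> DetectCompanionConsole() {
    return std::nullopt;  // stub: no second console found
}

void SimulateFrame() {
    static std::optional<CompanionLink> companion = DetectCompanionConsole();

    if (companion) {
        // Optional path: ship latency-tolerant extras (better physics, GI,
        // crowds) to the second unit and pick the results up frames later.
        companion->SubmitJob("extra_physics", nullptr, 0);
    }

    // Mandatory path: everything latency-critical runs here, and the game
    // has to look and play acceptably with only this branch.
}

int main() { SimulateFrame(); }
```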

If something like this could work, then obviously it's not something everybody would do, but I could imagine a scenario where many people get a second unit just to improve the graphics.
And when the price of the machine comes down, even more people would double dip (IF the improvements are there).

Sure, this would never be standard, but if games can be coded with this in mind, and the OS is prepared for it so that it's as easy as plug and play, then at least it would make more people curious.

At least, this is what I speculate :)
 

It's certainly possible, but not very probable. It all comes down to economics.

If the investment required is higher than the potential return, then no one will do it.

For example, on the PS2 it was possible to link four PS2s together to have one game render to four screens for a very large display. How many games supported that? :)

Let's say 8 out of 10 console owners have the optional "thing," whatever it is. That might get implemented; there's a reasonable chance you could sell a copy of the game to someone who would not have bought it otherwise. 5 out of 10? Now it's starting to get iffy: will supporting the optional "thing" bring in enough buyers who wouldn't have bought the game otherwise to justify the money and time invested in implementing it? 2 out of 10? Highly unlikely. Less than 1 out of 10? Not likely beyond a proof of concept from the console manufacturer, and most definitely not from any third party.

Now if it was mandatory, 10 out of 10 would have it, and you'd definitely see developers experiment with it. Anything less than that and you'll start losing developer support as they decide it might not be worth the additional cost, development time, etc. versus the potential additional sales.
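
To put toy numbers on it (everything below is invented, purely to show how a break-even attach rate falls out of cost versus incremental sales):

```cpp
// Toy break-even estimate for supporting an optional accessory.
// All numbers are invented for illustration; the point is only that the
// required attach rate falls straight out of cost vs. incremental sales.
#include <cstdio>

int main() {
    const double implementationCost = 500000.0;  // extra dev/QA cost ($)
    const double revenuePerCopy     = 30.0;      // publisher net per copy ($)
    const double installBase        = 5e6;       // consoles that could buy the game
    const double conversionRate     = 0.02;      // owners of the "thing" who buy
                                                 // the game *because* of the feature

    // Attach rate at which the extra copies just pay for the feature.
    const double breakEvenAttach =
        implementationCost / (installBase * conversionRate * revenuePerCopy);

    std::printf("Break-even attach rate: %.1f%% of the install base\n",
                breakEvenAttach * 100.0);
}
```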

Regards,
SB
 
Question..

How feasible or possible is it to have local augmentation of games?
Meaning, we move the cloud down into our living room :)

Basically, what I mean is:
having a second Xbox One at home, both connected over Ethernet,
where one of them is used only for improving GFX/physics/whatever.

Having it locally would mean you don't need to worry about ping/network latency and other things.

Would this be possible?

I wouldn't want to do graphics on a separate console communicating over a 100 Mbit or a 1 Gbit link; both are woefully slow in comparison to the internal buses.

Realistically you're going to get around 8 MB/s out of 100 Mbit and around 80 MB/s out of 1 Gbit.

Per frame in a 30 FPS game that is a tiny amount of bandwidth, and for a 60 FPS game it's even worse.
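
To put rough numbers on that (my own quick arithmetic, using the ~80 MB/s estimate above and the XB1's roughly 68 GB/s of DDR3 bandwidth purely for scale):

```cpp
// Quick per-frame budget check: how much data can cross the link per frame,
// compared against the XB1's DDR3 main-memory bandwidth (used only for scale).
#include <cstdio>
#include <initializer_list>

int main() {
    const double linkBytesPerSec = 80.0 * 1024 * 1024;         // ~1 Gbit Ethernet, realistic
    const double ddr3BytesPerSec = 68.0 * 1024 * 1024 * 1024;  // XB1 DDR3 peak (~68 GB/s)

    for (int fps : {30, 60}) {
        const double linkMBPerFrame = linkBytesPerSec / fps / (1024 * 1024);
        const double ddr3MBPerFrame = ddr3BytesPerSec / fps / (1024 * 1024);
        std::printf("%d fps: ~%.1f MB over the link per frame vs ~%.0f MB of internal bandwidth\n",
                    fps, linkMBPerFrame, ddr3MBPerFrame);
    }
}
```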

That doesn't mean you can't do other things on the other console that don't require as much bandwidth, but there are problems with that too, as mentioned above.
 

Wi-Fi Direct improves this scenario: http://www.ign.com/boards/threads/x...nderstated-feature-of-the-xbox-one.453139381/

Btw, I expect a future IllumiRoom-like device that sits on a coffee table, beams a render-target "light" onto the TV/wall, and communicates with the XB1 via Wi-Fi Direct.
 
That wasn't the point of those things.

The point was that something doesn't have to be interactive for it to be impressive and affect the feel or polish of a game. There are plenty of non-interactive things in games that require some amount of processing and would make the game feel barren and incomplete if they weren't there.

Regards,
SB

I agree, but I think parsing things in terms of "interactive" and "non-interactive" misses a lot of stuff that is interactive but delayed. You can even (presumably) pair those delayed interactive moments (where you trigger something) with fully interactive actions, like shooting a rocket at a building where the initial explosion/destruction effect is done locally but the building takes a few seconds to actually start to topple. That's a window where the cloud can potentially augment the destruction parameters, do the computations, and send the results back to the console for display.
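
A rough sketch of that "trigger now, resolve later" idea (std::async stands in for the network round trip; the types and helper names are invented):

```cpp
// Sketch of latency-tolerant offload: the impact effect plays immediately,
// while the detailed collapse simulation is requested asynchronously and only
// applied once it arrives. std::async stands in for a cloud / second-console
// round trip; CollapsePlan and SimulateCollapseRemotely are invented names.
#include <chrono>
#include <cstdio>
#include <future>
#include <thread>
#include <vector>

struct CollapsePlan { std::vector<int> debrisPieces; };  // placeholder result

CollapsePlan SimulateCollapseRemotely(int buildingId) {
    // Pretend this runs remotely and takes a while to come back.
    std::this_thread::sleep_for(std::chrono::milliseconds(1500));
    return CollapsePlan{ { buildingId, buildingId + 1, buildingId + 2 } };
}

int main() {
    const int buildingId = 42;

    // 1) Local, latency-critical part: explosion flash, sound, decals
    //    (would happen right here, this frame, on the local box).

    // 2) Kick off the expensive, latency-tolerant part remotely.
    auto pending = std::async(std::launch::async, SimulateCollapseRemotely, buildingId);

    // 3) The game loop keeps running; each frame, check whether the plan arrived.
    for (int frame = 0; frame < 180; ++frame) {  // ~3 s at 60 fps
        if (pending.valid() &&
            pending.wait_for(std::chrono::seconds(0)) == std::future_status::ready) {
            const CollapsePlan plan = pending.get();
            std::printf("collapse plan (%zu pieces) arrived at frame %d\n",
                        plan.debrisPieces.size(), frame);
            break;  // start playing back the detailed collapse using 'plan'
        }
        // If nothing arrives in time, a canned local animation is the fallback.
        std::this_thread::sleep_for(std::chrono::milliseconds(16));
    }
}
```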

I think if you just take a single random screenshot of a game, odds are you could find a TON of latency-insensitive stuff going on in that scene. For instance, a game like Watch Dogs:

[Watch Dogs screenshot: a water channel with physics-driven waves]


Those gorgeous waves are physics driven, but the distortion as player-controlled boats drive through them is a local effect. All the static geometry bounding that channel is non-interactive, and since the waves use some fluid-simulation effect based on the shape of the bounding geometry, the physics calculation for most of those waves doesn't need to be updated within a single frame. If you want to add ripples that spread outward after the player passes through the water, those ripples are time-delayed, since it takes time for them to propagate.

There seems to be a lot of stuff that adds to the dynamism of a scene but doesn't need to be handled locally until the player takes the time to walk up to it and do something.

I'd also add that the asteroid tech demo doesn't need to be pulling in 500k updates per second to account for all the asteroids at once. It takes a finite amount of time to move around in that kind of demo, and you most likely won't be viewing all of the asteroids in any given shot. You can have delays in the updates. With the local box handling the near-field objects, you can gradually stream in the data for the far-field objects in the time it takes you to actually move close enough to notice their dynamics.
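
Something like this is what I have in mind for the near/far split (a sketch only; the radius, quota, and names are all invented):

```cpp
// Sketch of a near/far split: nearby asteroids are fully simulated locally
// every frame, while distant ones get occasional updates streamed in,
// closest-first, within a fixed per-frame budget. Names and numbers invented.
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

struct Asteroid { float x, y, z; };

static float Distance(const Asteroid& a) {  // distance from the player at the origin
    return std::sqrt(a.x * a.x + a.y * a.y + a.z * a.z);
}

int main() {
    std::vector<Asteroid> asteroids;
    for (int i = 0; i < 1000; ++i)
        asteroids.push_back({ float(i % 100) * 10.f, float(i / 100) * 10.f, 0.f });

    const float nearFieldRadius   = 150.f;  // simulated locally every frame
    const int   remoteUpdateQuota = 32;     // far-field updates we can afford per frame

    std::vector<const Asteroid*> farField;
    int nearCount = 0;
    for (const auto& a : asteroids) {
        if (Distance(a) <= nearFieldRadius) ++nearCount;  // full local simulation
        else farField.push_back(&a);                      // candidate for streaming
    }

    // Far-field asteroids are refreshed closest-first within the limited budget.
    std::sort(farField.begin(), farField.end(),
              [](const Asteroid* l, const Asteroid* r) { return Distance(*l) < Distance(*r); });
    const int requested = std::min(remoteUpdateQuota, static_cast<int>(farField.size()));

    std::printf("simulated locally: %d, remote updates requested this frame: %d\n",
                nearCount, requested);
}
```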
 
I wouldn't want to do graphics on a separate console communicating over a 100 Mbit or a 1 Gbit link; both are woefully slow in comparison to the internal buses.

Realistically you're going to get around 8 MB/s out of 100 Mbit and around 80 MB/s out of 1 Gbit.

Per frame in a 30 FPS game that is a tiny amount of bandwidth, and for a 60 FPS game it's even worse.

That doesn't mean you can't do other things on the other console that don't require as much bandwidth, but there are problems with that too, as mentioned above.

Why are you directly comparing the internal bus to the network bandwidth? That doesn't make much sense to me, when in games the output is usually a lot less demanding, memory- and bandwidth-wise, than what is required to create said output.

The other console/server would still have lots of internal bandwidth to work on the task; the slower link would only need to carry a much more limited buffer.
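
As a toy illustration of that point (all of these sizes are invented): the helper box can churn through its full internal bandwidth while producing a result that is small next to the per-frame link budget:

```cpp
// Toy numbers: a helper console burns lots of internal bandwidth computing a
// result (say, an updated set of lightmap tiles), but only the result has to
// cross the Ethernet link each frame. All sizes are invented.
#include <cstdio>

int main() {
    const double resultBytesPerFrame = 1.5 * 1024 * 1024;   // 1.5 MB of lightmap tiles
    const double linkBytesPerSec     = 80.0 * 1024 * 1024;  // ~1 Gbit, realistic
    const int    fps                 = 30;

    const double linkBudgetPerFrame = linkBytesPerSec / fps;
    std::printf("Result uses %.0f%% of the per-frame link budget\n",
                100.0 * resultBytesPerFrame / linkBudgetPerFrame);
}
```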
 
EA's Frank Gibeau talks about the cloud as part of a much larger interview. It sounds at least like they aren't dismissing it offhand, NeoGAF-style.

http://venturebeat.com/2013/07/24/f...-and-respawns-titanfall-interview-part-two/2/

GamesBeat: I was curious what you thought about that very geeky Microsoft cloud processing technology, where 300,000 servers in the cloud can do the AI processing for something like Forza. It sounds like, if possible, a whole new avenue for computing itself. It also brings back memories of Larry Ellison’s network computer or whatever. [Laughter].

Gibeau: Yeah. I’m not gonna comment on that. But the cloud architecture that they’ve talked about is something that we’re researching and looking at how we might implement it in our games. The idea that we could offload different aspects of the game to be processed elsewhere at scale is a powerful idea and it could unlock new experiences. You would completely re-architect much of the game as a result. We’re excited about it. We’re doing research as we speak. We’re looking at ways to productize it in our games.
 
Question..

How feasible or possible is it to have local augmentation of games?
Meaning, we move the cloud down into our living room :)

Basically, what I mean is:
having a second Xbox One at home, both connected over Ethernet,
where one of them is used only for improving GFX/physics/whatever.

Having it locally would mean you don't need to worry about ping/network latency and other things.

Would this be possible?

This raises another question: what happens in a home, roommate, or dorm situation with multiple Xboxes trying to play Titanfall or something else utilizing compute for more than AI pathing and matchmaking?
 
It depends. Very often a peer-to-peer network can be trickier to handle than a regular client-server setup. In a complex setting like a university, the network admins may set up policies on their routers to contain/partition the traffic.
 
This raises another question: what happens in a home, roommate, or dorm situation with multiple Xboxes trying to play Titanfall or something else utilizing compute for more than AI pathing and matchmaking?

If a company were to sell an 'xbox +', it would come with two Ethernet ports: plug one directly into the original console and the other into the network.

But my own feeling is that an 'xbox +' should be a full OnLive/Gaikai-style server rather than a 'compute resource'.
 
Seems like NVIDIA is doing some research into cloud usage for indirect lighting which is being presented at SIGGRAPH:

http://graphics.cs.williams.edu/papers/CloudLight13/

Fantastic find. That paper seems like a direct response to many of the questions posed by this thread. I hadn't thought about the amortization angle where multiple clients can be taking advantage of (and contributing their "cloud allowance" toward) the same set of lighting calculations.
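
Roughly the amortization idea as I understand it (a sketch only; the types and the per-client cost accounting are invented, not taken from the paper):

```cpp
// Sketch of amortized cloud lighting: one server-side solve per scene region,
// shared by every client currently in that region, so each client's share of
// the cost (its "cloud allowance") shrinks as more clients subscribe.
#include <cstdio>
#include <map>
#include <set>
#include <string>

struct LightingService {
    std::map<std::string, std::set<int>> subscribersByRegion;  // region -> client ids

    void Subscribe(const std::string& region, int clientId) {
        subscribersByRegion[region].insert(clientId);
    }

    void SolveAndBroadcast(const std::string& region, double solveCostGflops) {
        const auto& clients = subscribersByRegion[region];
        if (clients.empty()) return;
        const double costPerClient = solveCostGflops / clients.size();
        std::printf("region %s: 1 solve shared by %zu clients -> %.2f GFLOPS each\n",
                    region.c_str(), clients.size(), costPerClient);
        // ...the resulting irradiance data would be sent to every subscriber...
    }
};

int main() {
    LightingService svc;
    for (int id = 0; id < 8; ++id) svc.Subscribe("plaza", id);
    svc.SolveAndBroadcast("plaza", 40.0);  // invented cost of one GI solve
}
```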

The big disconnect here is that nVidia envisions a cloud of GPUs, while Microsoft's cloud is composed of CPUs, memory, and storage. Would a software (i.e. CPU) version of these sorts of lighting calculations be worth the effort? I know that GPUs are vastly more parallel, but could several-to-many interconnected multi-CPU servers start to compete in that space, on a per-watt, per-transistor, or per-hardware-dollar basis?
 
The only relevant one was the irradiance map solution (the voxel approach is basically remote rendering and the photon approach needs high bandwidth). This is what they said about irradiance mapping in closing:

"Irradiance mapping requires the lowest bandwidth of our algo-
rithms, with latency lower than voxels due to utilization of client
computational resources. It also integrates easily into existing game
engines. Unfortunately, irradiance maps require a global geometric
parameterization. While decades of research have provided a mul-
titude of parameterization techniques, these do not address prob-
lems specific to global illumination: handling light leaking where
texels lie below walls or keeping world-space samples close in tex-
ture space for efficient clustering into basis functions. We see the
authoring burden of parameterization as one reason developers are
moving towards other techniques, e.g., light probes."
 
The big disconnect here is that nVidia envisions a cloud of GPUs, while Microsoft's cloud is composed of CPUs, memory, and storage. Would a software (i.e. CPU) version of these sorts of lighting calculations be worth the effort? I know that GPUs are vastly more parallel, but could several-to-many interconnected multi-CPU servers start to compete in that space, on a per-watt, per-transistor, or per-hardware-dollar basis?

Just curious, have Microsoft said whether their cloud approach for video game use is purely CPUs rather than GPUs? I'd have thought that for their purposes going with CPUs would be better anyway, to give them more general virtual computing resources, but I wonder if they have confirmed that.
 
It's not that they said it is purely CPU, but their Azure backbone is basically all CPU-based. That may change in the future, but for now it's what they have.
 
Can this be done on Azure? I looked up the specs for Azure machines and I couldn't find anything, but they do use AMD processors http://harutama.hatenablog.com/entries/2010/10/30 (it's on slide 8). That's old, though, so it's possible they have moved to, say, Intel machines with really powerful GPUs.

MS has some Big Compute servers (I think that's their name) that are either 8-core machines with 60 GB or 16-core machines with 120 GB. They have very high network bandwidth and are apparently super efficient.

I don't know if MS has opened these servers to third parties yet, but a preliminary test with 500 servers is even ranked in the Top500 supercomputers, and top 50 in terms of efficiency.

I would say it can most definitely be done. Dunno if they have enough of those servers to make it feasible for millions of concurrent players, though.
 
Just curious, have Microsoft said whether their cloud approach for video game use is purely CPUs rather than GPUs? I'd have thought that for their purposes going with CPUs would be better anyway, to give them more general virtual computing resources, but I wonder if they have confirmed that.

Well, MS made those hand-wavy comments about how every XB1 would have access to the equivalent of three more XB1s' worth of CPU and memory up in the cloud. I took that to mean there might be a few hundred GFLOPS of CPU on tap up there, but definitely not several TFLOPS of GPU resources. They certainly would have made a big deal about it if that had been the case. And of course, Azure wasn't built primarily for gaming/graphics; it's essentially a bunch of web/DB/application servers.

It would seem that Sony/Gaikai's idea of the "cloud" is much closer to nVidia's: lots of GPU in the cloud, able to do actual (complete) game rendering, or failing that, at least render-assist work like the "bonus" irradiance calculations that the nVidia paper talks about.

Sadly, I take the grand plans from Sony-Gaikai (great cyberpunk corp name!) that we've heard about with a grain of salt. Building that up from scratch, and then monetizing it, sounds like a tall order given Sony's resources.
 