Server-based game augmentations. The transition to cloud. Really possible?

Shifty Geezer

From what I gather reading here and there (haven't found a good XB1 summary yet), though MS aren't going with distributed computing as some theorised, they are going with server-based game upgrades.

I see I asked about this back in 2006...

But here we are, for real. What will the end experience be like? How much processing power can a server economically bring to each user? If MS have 300k servers, what sort of processing power will that bring to each user? How do you manage fluctuating workloads? How do devs target jobs for their games on a server-based compute network?
 
My first idea, creating LOD-like areas based on interaction latency with the player, still seems the best approach to me.
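To make that concrete: a minimal sketch, assuming nothing about the actual XB1 toolchain, of splitting simulation work between console and cloud by how soon the player could plausibly interact with each object. All names, thresholds and numbers here are illustrative.

```python
# Minimal sketch (hypothetical, not an actual XB1 API): split simulation work
# between console and cloud based on how soon the player could interact with it.

from dataclasses import dataclass

@dataclass
class SimObject:
    name: str
    distance_m: float         # distance from the player
    max_closing_speed: float  # fastest the player could approach it, m/s

def seconds_until_interaction(obj: SimObject) -> float:
    """Lower bound on how soon the player could reach/affect this object."""
    return obj.distance_m / max(obj.max_closing_speed, 0.001)

def partition(objects, round_trip_s: float, safety_factor: float = 10.0):
    """Keep anything the player could touch within a few round trips local."""
    local, remote = [], []
    for obj in objects:
        if seconds_until_interaction(obj) < round_trip_s * safety_factor:
            local.append(obj)      # latency-critical: console simulates it
        else:
            remote.append(obj)     # far enough away to tolerate cloud latency
    return local, remote

scene = [SimObject("door", 2.0, 5.0), SimObject("distant_city_crowd", 800.0, 10.0)]
local, remote = partition(scene, round_trip_s=0.070)   # e.g. 70 ms ping
print([o.name for o in local], [o.name for o in remote])
```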

Microsoft is making a HUGE investment, at considerable cost, going from 5,000 to 300,000 data servers in order to do serious cloud computing.

One question is still unanswered: are the Move Engines, or other hardware, specifically there to decompress a data stream coming from the cloud?

Do we really know everything about the Move Engines and the data paths?




Note that one ME does LZ decode (input) and one does LZ encode (output).

[Move Engine diagram: move_engine112.jpg]
From VGLeaks:

The canonical use for the LZ decoder is decompression (or transcoding) of data loaded from off-chip from, for instance, the hard drive or the network. The canonical use for the LZ encoder is compression of data destined for off-chip.
This makes a LOT of sense to me, as the Xbox One is built with cloud computing in mind.
Even the virtual texturing and tiling capabilities of the MEs could be intended for use with cloud computing.
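As a rough illustration of that data flow only: here's what decompressing a compact cloud payload looks like in software, using zlib as a stand-in for "an LZ-family codec" (the Move Engines' actual format isn't public, and none of this is an XB1 API).

```python
# Rough software analogy only: the actual Move Engine LZ format is not public.
# zlib (LZ77 + Huffman) stands in for "an LZ-family codec" to show the data flow
# a dedicated decode engine could take off the CPU/GPU.
import json
import zlib

def pack_cloud_update(update: dict) -> bytes:
    """Server side: serialize and LZ-compress a simulation update."""
    return zlib.compress(json.dumps(update).encode("utf-8"), level=6)

def unpack_cloud_update(payload: bytes) -> dict:
    """Console side: this is the step a hardware LZ decoder could handle."""
    return json.loads(zlib.decompress(payload).decode("utf-8"))

update = {"npc_paths": [[0, 0], [4, 2], [9, 7]], "time_of_day": 14.25}
payload = pack_cloud_update(update)
print(len(payload), "bytes on the wire ->", unpack_cloud_update(payload))
```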
 
There's some game logic you can move around. In some strategy games, the AI isn't run at frame frequency but a lot less often; rarely enough that you could move it to a server and wait for the answer to arrive without much problem.

A side effect could be more interaction between players, à la Dark Souls.

As the Internet improves, more operations will be able to be moved server-side.
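For the low-frequency AI case above, a minimal sketch (all names and timings hypothetical) of requesting a decision from a server asynchronously, with a cheap local fallback if the answer is late:

```python
# Hypothetical sketch: strategy-game AI that only needs a decision every few
# seconds can be requested from a server asynchronously, with a cheap local
# fallback if the reply hasn't arrived in time. Names and timings are made up.
import concurrent.futures
import time

def cloud_ai_plan(world_state: dict) -> str:
    """Stand-in for an expensive server-side planner (network + compute)."""
    time.sleep(0.3)                      # pretend round trip + planning time
    return "flank_left"

def cheap_local_plan(world_state: dict) -> str:
    """Fallback the console can always compute in a few milliseconds."""
    return "hold_position"

def next_ai_order(world_state: dict, budget_s: float = 2.0) -> str:
    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(cloud_ai_plan, world_state)
        try:
            return future.result(timeout=budget_s)   # seconds, not frames
        except concurrent.futures.TimeoutError:
            return cheap_local_plan(world_state)     # network too slow: degrade

print(next_ai_order({"units": 42}))
```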


Sounds nice in theory, but what about retro-gaming?
How long before the servers are shut down and a game you liked, but which wasn't popular, just stops working?
There are side effects to that idea I'm really not comfortable with...
 
Well, I'd guess they would all need to use the same libraries for the elements that are to be enhanced. That wouldn't give devs much leeway to differentiate themselves from each other, beyond having additional resources on top of the console to work with.

But you'd never be able to guarantee that a constant level of support would be available, and that would surely mean setting aside some resources on the console to pick up the slack if the network fails or is congested.

I think it's possible in large towns/cities where network connections are stable and bandwidth is plentiful. Move outside that comfort zone and I can see the reliability and usefulness of a service like this dropping off pretty quickly.
 
How many processors does one server have?

I really think we're talking about massive floating-point calculations, so GPU-based servers (AMD says 'hi'? Or Nvidia powerhouses, but Nvidia's cloud is for its own console, so I don't think they'd help the MS-AMD partnership).
Maybe based on an evolution of http://www.amd.com/us/products/workstation/graphics/firepro-remote-graphics/S10000/Pages/S10000.aspx

5.91 TF each; if you put 10 cards per server = 59.1 TF per server.

59.1 TF per server x 300,000 servers = 17,730,000 TF = 17,730 PetaFlops = 17.73 ExaFlops in the cloud.

Assuming S10000s in the servers and not a better successor.
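For reference, redoing that arithmetic with the assumptions stated above (10 S10000 cards per server, 300,000 servers):

```python
# Back-of-envelope check of the figures above (all assumptions, not specs).
tf_per_card = 5.91                 # FirePro S10000 peak, single precision
cards_per_server = 10              # assumption from the post above
servers = 300_000                  # Microsoft's stated server count

tf_per_server = tf_per_card * cards_per_server           # 59.1 TF
total_tf = tf_per_server * servers                        # 17,730,000 TF
print(total_tf / 1_000, "PF =", total_tf / 1_000_000, "EF")   # 17730.0 PF = 17.73 EF
```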
 
Doesn't matter how many servers they add, they could add millions; that doesn't change the fundamental practicalities of how the internet works, and the fact remains that just the round-trip transport time for a packet across the net, best-case, is about two video frames worth of latency on top of whatever latency you're already dealing with when running your game. Then remote processing on the cloud server comes on top of that, and the actual data transfer on top of that. Lost packets en route? Shit, more delays. Packets arriving out of order? You need to receive all of them before you can sort them out and start processing. And what if latency isn't top notch? If you're on 3G or 4G cellular (or your regular connection is simply having a bad day, your kid/spouse runs BitTorrent while you game, etc.), ping might not be ~35 ms but rather 350, and that's not roughly two frames but rather over twenty...

If you're making an action game this will never work for anything useful. It's too laggy: either you run the risk of synchronization issues, where the player tries to do something that conflicts with the results of the (not yet received) cloud computation, or the game constantly holds the player back, which makes playing feel laggy and stuttery. Or cloud computing can only affect objects sufficiently far away from the player that you can't interact with or affect them, meaning it's all pointless anyway.
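To put those frame numbers in context, a quick ping-to-frames conversion using the example figures from the post above:

```python
# Convert round-trip ping into frames of delay at a given frame rate.
def frames_of_latency(ping_ms: float, fps: float = 60.0) -> float:
    frame_ms = 1000.0 / fps
    return ping_ms / frame_ms

for ping in (35, 100, 350):            # good wired, mediocre, bad 3G/4G day
    print(f"{ping} ms ping ~= {frames_of_latency(ping):.1f} frames at 60 fps")
```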
 
is about two video frames worth of latency on top of whatever latency you're already dealing with when running your game.

Xbox Live will not send frames the way Gaikai or OnLive do; it sends compressed data over IP about AI, physics and rendering (mostly lighting, I guess), so "frame latency" doesn't apply.
 
I really think we're talking about massive floating-point calculations, so GPU-based servers (AMD says 'hi'? Or Nvidia powerhouses, but Nvidia's cloud is for its own console, so I don't think they'd help the MS-AMD partnership).
Maybe based on an evolution of http://www.amd.com/us/products/workstation/graphics/firepro-remote-graphics/S10000/Pages/S10000.aspx

5.91 TF each; if you put 10 cards per server = 59.1 TF per server.

59.1 TF per server x 300,000 servers = 17,730,000 TF = 17,730 PetaFlops = 17.73 ExaFlops in the cloud.

Assuming S10000s in the servers and not a better successor.

The S10000 is ~$3,600 per card; add in at least $400 worth of other stuff, so call it $4,000 per card.

10 cards per server = $40,000 a server.
300,000 servers x $40,000 = $12,000,000,000.

That's $12 billion for the hardware alone, and that's probably lowballing it.
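Spelling out that estimate (retail card price plus a rough guess for everything else):

```python
# Back-of-envelope cost check using the retail figures quoted above.
card_price = 3_600        # ~retail S10000 price, USD
other_per_card = 400      # chassis/RAM/PSU share, rough guess
cards_per_server = 10
servers = 300_000

per_server = (card_price + other_per_card) * cards_per_server   # $40,000
total = per_server * servers
print(f"${total:,.0f}")   # $12,000,000,000 for the hardware alone
```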


Xbox Live will not send frames the way Gaikai or OnLive do; it sends compressed data over IP about AI, physics and rendering (mostly lighting, I guess), so "frame latency" doesn't apply.

There will always be frame latency; you have to compute lighting each frame.

Physics interactions using this will be delayed by however long the results take to come back (and, as he said, ~4 frames).

You cannot magic away the latency.
 
The S10000 is ~$3,600 per card; add in at least $400 worth of other stuff, so call it $4,000 per card.

10 cards per server = $40,000 a server.
300,000 servers x $40,000 = $12,000,000,000.

That's $12 billion for the hardware alone, and that's probably lowballing it.

Nope, that was the single-unit retail price for a public customer in 2012.

Just as I don't think AMD sells the APU for $400-500 each, but for more like $50, I believe the business between AMD and Microsoft goes deeper than that.

I'd guess $200-400 per card (about 6x the APU).

So $2,000-4,000 per server.
$600 million to $1.2 billion for 300k servers.

Easily affordable.


There will always be frame latency; you have to compute lighting each frame.

Physics interactions using this will be delayed by however long the results take to come back (and, as he said, ~4 frames).

You cannot magic away the latency.

You'll have the very same latency as always, PS4 included.
Even if you ignore what me, Silent_Buddha and others have said: the low-latency interaction area near the player can easily be computed by the console, with everything in the other areas handled differently, via cloud computing.

So no, there's no added latency; actually there's LESS latency, because the mission-critical area has all the console's power at its disposal.
 
Nope, that was the single-unit retail price for a public customer in 2012.

Just as I don't think AMD sells the APU for $400-500 each, but for more like $50, I believe the business between AMD and Microsoft goes deeper than that.

I'd guess $200-400 per card (about 6x the APU).

So $2,000-4,000 per server.
$600 million to $1.2 billion for 300k servers.

Easily affordable.

It costs AMD more than $50 to make them; they would be losing hundreds on each. This is not happening.
 
The S10000 is ~$3,600 per card; add in at least $400 worth of other stuff, so call it $4,000 per card.

10 cards per server = $40,000 a server.
300,000 servers x $40,000 = $12,000,000,000.

That's $12 billion for the hardware alone, and that's probably lowballing it.

There will always be frame latency; you have to compute lighting each frame.

Physics interactions using this will be delayed by however long the results take to come back (and, as he said, ~4 frames).

You cannot magic away the latency.

Server tech is super expensive: ECC RAM, server HDDs in RAID configurations, cooling... we just got an offer at our institute for a server with overall piss-poor performance that costs 10,000 euros.

300,000 servers is a brutal number, and MS is really investing a lot of money in this... if this had been a rumour before the reveal, I would have shouted BS without a doubt.

I know a lot of high-performance computing folks and have taken part in plenty of mini-symposia on this subject... what is clear: the initial investment, i.e. the price of the machine itself, is the smaller part of the overall cost.

This seems to be a major MS business decision with substantial investment and risk.

Edit: thinking about it a bit more... in contrast to HPC, you do not need expensive networking (which can be as costly as the computing hardware itself), but you still have to deal with hardware failure and all that. With 300,000 servers, I guess you need a high degree of redundancy to cope with hardware failure.
 
So, what gaming tasks can be realistically computed in the cloud?

I guess we all agree that the rendering task is not realistic.

We heard bkilian's idea about NPC AI for online games.

What else?
 
Xbox Live will not send frames the way Gaikai or OnLive do; it sends compressed data over IP about AI, physics and rendering (mostly lighting, I guess), so "frame latency" doesn't apply.
That's not technically accurate. Anything you transmit over the net will have latency (because of limitations like the speed of light if for no other reason - which there are many of I might add), and also be subject to packet loss and so on. What kind of delivery mechanism do you think MS would use that is immune to latency, magical fairies that arrive instantly at your XB1's network port to dump off their data packets? :p

And if you think XBL would send "compressed data", which is not frames, but then speculate that it's lighting, well, unless you actually render the lighting you really can't send any of it, since it exists only as shader programs and texture maps in your video card's on-board memory until it's rendered. You can't pre-calculate lighting without rendering it and sending it as a whole frame.
 
This is the most stupid request ever.
Would you like Bill Gates' telephone number too?

Any other nonsense requests?

Well, that's a bit harsh, but how about you give examples of things you know or expect to work when moved server-side?

The only one I know for sure could work is AI in some RTS games, since it might run at low frequency. What else do you have in mind?
(In detail please, so we can construct our own mental images of the process and its consequences.)
 
That's not technically accurate. Anything you transmit over the net will have latency (because of limitations like the speed of light if for no other reason - which there are many of I might add), and also be subject to packet loss and so on. What kind of delivery mechanism do you think MS would use that is immune to latency, magical fairies that arrive instantly at your XB1's network port to dump off their data packets? :p
I think you're reasoning too much inside the box, and getting hung up on Xenio's poor choice of words (he's a non-native English speaker). Precomputed irradiance volumes and the like are a modern lighting technique. There is certainly scope for rendering these online and downloading the results, instead of rendering them offline and shipping the precomputed data on the game disc. This way you could support dynamic day cycles with full environmental GI, with lighting calculated on servers well ahead of the immediate frame requirements and cached on the console ready for use several minutes later. The game would then render dynamic lighting and shadowing locally.

And note, I only use the term 'irradiance volumes' as a loose reference to the idea of a lighting solution that is effective, computationally expensive, and doesn't require immediate rendering.

So something like Elder Scrolls or GTA could be precalculating lighting data ('rendering lightmaps' in Xenio's terminology) for the user based on their game world data for their immediate location.

There are numerous technical considerations to the effectiveness of this strategy which is what we should be talking about here.
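As a purely illustrative sketch of the "lighting computed minutes ahead and cached" idea, with made-up names and nothing resembling a real XBL or engine API:

```python
# Illustrative only: request environmental lighting (e.g. irradiance probes) for
# future in-game times from a server, cache it, and fall back to the nearest
# already-cached entry if the network hasn't delivered newer data in time.
import bisect

class LightingCache:
    def __init__(self):
        self.times = []      # sorted in-game times (hours) we have data for
        self.probes = {}     # time -> opaque probe blob from the server

    def store(self, game_time_h: float, probe_blob: bytes) -> None:
        bisect.insort(self.times, game_time_h)
        self.probes[game_time_h] = probe_blob

    def best_for(self, game_time_h: float) -> bytes:
        """Nearest cached lighting; being a few minutes off is acceptable."""
        if not self.times:
            raise LookupError("no cached lighting yet, use local fallback")
        i = bisect.bisect_left(self.times, game_time_h)
        candidates = self.times[max(0, i - 1):i + 1]
        nearest = min(candidates, key=lambda t: abs(t - game_time_h))
        return self.probes[nearest]

cache = LightingCache()
cache.store(14.0, b"<probes for 14:00>")   # pretend these arrived from the cloud
cache.store(14.5, b"<probes for 14:30>")
print(cache.best_for(14.2))                # close enough for environmental GI
```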
 
I personally think that Microsoft was betting on server-based game augmentations, resulting in an always-online console. But they had to scrap always-online due to the backlash. I think that's why they went with a relatively modest 1.2 TF setup, planning to back it up with cloud computing.
 
You still need to send all the scene data to the data centre, and if you do this per frame you're going to be behind by 1-2 frames, which means your lighting will lag behind everything else, and that won't look good.
For realtime dynamic lighting, you're right. But for environmental lighting, stuff that this gen has been prebaked into the game assets, you don't need such accuracy. You can be out by several minutes of a daylight cycle and the player won't notice.

Another approach would be to prerender 50 different times of day for the whole map and interpolate between two, which would need crazy amounts of storage. That's something that servers could provide, but that'd just be data serving and wouldn't need massive calculations per player. That might even be more what MS have in mind, as it's certainly more manageable! If their description of their service was subjective (best-looking games, 300,000 servers), then they may be thinking best-ever on-screen results due to the most (precomputed) data, rather than the most realtime shading.
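A trivial sketch of the interpolation idea, with plain numbers standing in for the baked lighting sets:

```python
# Blend between the two nearest precomputed time-of-day lighting sets.
# The "lighting sets" here are just numbers standing in for lightmap/probe data.

def blend_time_of_day(baked: dict, game_time_h: float) -> float:
    """baked maps a time of day (hours) to its precomputed lighting value."""
    times = sorted(baked)
    # handle times outside the baked range by clamping to the ends for simplicity
    if game_time_h <= times[0]:
        return baked[times[0]]
    if game_time_h >= times[-1]:
        return baked[times[-1]]
    for t0, t1 in zip(times, times[1:]):
        if t0 <= game_time_h <= t1:
            w = (game_time_h - t0) / (t1 - t0)
            return baked[t0] * (1 - w) + baked[t1] * w

# e.g. 50 baked sets would give one set every ~29 minutes of the day
baked = {6.0: 0.2, 12.0: 1.0, 18.0: 0.4, 24.0: 0.05}
print(blend_time_of_day(baked, 15.0))   # afternoon: between noon and dusk
```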
 
For realtime dynamic lighting, you're right. But for environmental lighting, stuff that this gen has been prebaked into the game assets, you don't need such accuracy. You can be out by several minutes of a daylight cycle and the player won't notice.

Another approach would be to prerender 50 different times of day for the whole map and interpolate between two, which would need crazy amounts of storage. That's something that servers could provide, but that'd just be data serving and wouldn't need massive calculations per player. That might even be more what MS have in mind, as it's certainly more manageable! If their description of their service was subjective (best-looking games, 300,000 servers), then they may be thinking best-ever on-screen results due to the most (precomputed) data, rather than the most realtime shading.

This seems more realistic to me, but how much bandwidth would such a solution use, and how often would it need to be updated? Assuming 1080p fidelity, would such a data transfer be rather large?
 