Xbox Scarlet Hybrid Game-Streaming Version, Kahawai

Interesting notion, but then you're having to render/run the game in multiple instances, increasing workload. Instead of rendering the current frame, you render four possible frames, requiring four times the processing power. That's insanely inefficient!
 
Just a reminder that Xbox Scarlet is said to have two versions, one a normal gaming console and another that is for hybrid streaming. This thread is for the streaming technology.
 
Closer datacentres mean fewer hops for your data, but it still has to pass through local ISPs and local routing where you can't be sure of packet delivery times. There's no way to guarantee low-latency packet delivery short of dedicated connections to the servers that bypass the Internet infrastructure, which isn't plausible.

Yeah, but local data centers are a must before you even get to the other problems.
 
Yeah, but local data centers are a must before you even get to the other problems.
Not quite. Even in countries with relatively long-distance hops to a datacentre, you can get better cloud performance for Office 365 by paying for ExpressRoute, which is a dedicated, non-internet-based line to MS datacentres. Most large deployments go down this route to avoid the issues Shifty highlighted: crappy ISPs, random router configs, and other assorted nonsense over the public internet.

Edit
iroboto said:
I suspect some form of tunneling or VPN will be used for this service to bring down latency.

Reconstruction will also play a role here. I agree on the use of an ID buffer, or even a more advanced form of it.

VPNs can't solve your issues with the public internet, as they still traverse it, just encrypted. Especially for cloud-based PBX solutions (the most immediate example of a latency-sensitive app I can think of on Azure right now), the recommended path is ExpressRoute over the standard VPN connection.
 
I suspect some form of tunneling or VPN will be used for this service to bring down latency.

Reconstruction will also play a role here. I agree on the use of an ID buffer, or even a more advanced form of it.
 
Not quite. Even in countries with relatively long-distance hops to a datacentre, you can get better cloud performance for Office 365 by paying for ExpressRoute, which is a dedicated, non-internet-based line to MS datacentres.
When you live in South Africa and the best possible latency to Western Europe is 150 ms, it makes a big difference.
 
Edit - Epic math fail! Not worth reading. ;)

The speed of an electrical signal along copper wire is approximately half the speed of light, so 1.5x10^8 m/s. The distance from South Africa to the UK, going by a Google search providing a car route, is about 14,000 km. Thus the time taken for a signal to travel from the UK to SA is about (1.4x10^4) / (1.5x10^8) = 0.0000933 seconds, or 0.093 ms.

Your 150 ms ping is because of all the internet crap between you and the server. A dedicated copper or fibre cable to a datacentre anywhere in the world will provide <1 ms latencies.
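For what it's worth, redoing that arithmetic with the distance in metres rather than kilometres (the factor-of-1000 slip acknowledged further down the thread) lands close to the observed ping. A quick sketch, using the same ~half-c signal speed quoted above:

```python
# Back-of-envelope propagation delay for a dedicated link.
SIGNAL_SPEED_M_S = 1.5e8        # ~half the speed of light
ROUTE_DISTANCE_M = 1.4e7        # ~14,000 km, South Africa to the UK

one_way_ms = ROUTE_DISTANCE_M / SIGNAL_SPEED_M_S * 1000
round_trip_ms = 2 * one_way_ms
print(f"one way: {one_way_ms:.0f} ms, round trip: {round_trip_ms:.0f} ms")
# → one way: 93 ms, round trip: 187 ms
```

So even a perfect dedicated line from SA to Western Europe can't get below roughly 90 ms one way; the rest of the 150 ms is the routing overhead.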
 
Not quite. Even in countries with relatively long-distance hops to a datacentre, you can get better cloud performance for Office 365 by paying for ExpressRoute, which is a dedicated, non-internet-based line to MS datacentres. Most large deployments go down this route to avoid the issues Shifty highlighted: crappy ISPs, random router configs, and other assorted nonsense over the public internet.

Edit


VPNs can't solve your issues with the public internet, as they still traverse it, just encrypted. Especially for cloud-based PBX solutions (the most immediate example of a latency-sensitive app I can think of on Azure right now), the recommended path is ExpressRoute over the standard VPN connection.
Is ExpressRoute VPN over Azure?
 
No, ExpressRoute is a dedicated line from Azure servers to selected partner ISPs. I'm sure it's using a backbone from someone else, but it's a separate pipe from the regular internet at every stage.

VPN to Azure is the solution most customers use, and if the Cloud PBX assessment says your latencies there are OK, we suggest you use that. As you can imagine, ExpressRoute is not a low-cost solution, but if you need truly predictable network perf it's hard to beat.
 
No, ExpressRoute is a dedicated line from Azure servers to selected partner ISPs. I'm sure it's using a backbone from someone else, but it's a separate pipe from the regular internet at every stage.

VPN to Azure is the solution most customers use, and if the Cloud PBX assessment says your latencies there are OK, we suggest you use that. As you can imagine, ExpressRoute is not a low-cost solution, but if you need truly predictable network perf it's hard to beat.
Yeah, it doesn't sound like something that's usable for the mass market.
 
Interesting notion, but then you're having to render/run the game in multiple instances, increasing workload. Instead of rendering the current frame, you render four possible frames, requiring four times the processing power. That's insanely inefficient!

Totally agree it's inefficient, but just throwing the idea out. They can try to predict to help counter the inevitable lag from the round trip, but as games are more complex than head rotation, it must be far harder to predict, and I'm not sure you could warp the results.

You could increase render planes and then render background and user objects separately. This way your background could be predicted more accurately and warped, and the user-controlled portion would be minimal and could be rendered in different states at low(er) overhead?

The model of predict-and-rollback is not new; fighting games use it in their netcode to keep controls tight.
https://en.m.wikipedia.org/wiki/GGPO
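The GGPO-style predict-and-rollback idea boils down to: simulate ahead using a predicted remote input, and when the real input arrives late, rewind to the last confirmed state and re-simulate. A toy sketch (the `step` function and repeat-last-input prediction are simplifying assumptions, not GGPO's actual implementation):

```python
# Toy rollback netcode for one local and one remote player.

def step(state, local_input, remote_input):
    # Placeholder simulation: real games would advance the whole world here.
    return state + local_input + remote_input

class Rollback:
    def __init__(self, initial_state):
        self.confirmed_state = initial_state  # last state built from real inputs
        self.history = []                     # local inputs since confirmation
        self.last_remote = 0                  # prediction: repeat the last input

    def predict_frame(self, local_input):
        """Advance one frame, re-simulating from the confirmed state
        with the predicted remote input (naive, but shows the idea)."""
        self.history.append(local_input)
        state = self.confirmed_state
        for li in self.history:
            state = step(state, li, self.last_remote)
        return state

    def confirm(self, real_remote_inputs):
        """Authoritative remote inputs arrived: roll forward the
        confirmed state and drop the now-confirmed local history."""
        for li, ri in zip(self.history, real_remote_inputs):
            self.confirmed_state = step(self.confirmed_state, li, ri)
        self.history = self.history[len(real_remote_inputs):]
        if real_remote_inputs:
            self.last_remote = real_remote_inputs[-1]
```

When the prediction was right, the rollback is invisible; when it was wrong, the player sees a small correction, which is why it works so well for the mostly-predictable inputs of fighting games.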


Whatever they are planning is going to have to be pretty out there if they think they have the latency issue beat.
 
I would think that there would still be a latency issue of some sort with either approach, like a dropped frame here or there. I'd like to see a modern 4K-rendered game instead of Doom 3 in a YouTube video.
Same here. Nothing beats the original hardware on a decent screen. It can be easy to set up for casuals, though. Aside from the typical connection issues, it's not a realistic option for those with metered or slow connections.

So if I get a new console, it's going to be a full-fledged Scarlett, not something else. Still, they could offer the option to play via streaming on Windows PCs without needing the extra box.
 
They are trying to make it work without having to decentralize their existing cloud infrastructure. The more hops to clients, the less reliable it will be, and the worse the latency. So the games will have to be written for it: much more code client-side, but it should tolerate glitches much better. The big problem is that it can't offer any old games, and even ports would be difficult to rewrite for that sort of cloud gaming.

OTOH, full streaming like PSNow requires rack space near every major backbone to get a large pool of population with low latency and high reliability, which is expensive to micro-manage all over the world.

There's no easy solution. But as long as the normal console is still offered, it's not a risky move.
 
Games often have several frames of buffering / processing to maximise frame rate or frame rate stability. 100+ ms just from the game is common.
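To put a number on that, each pipelined stage running at frame granularity adds one frame time of latency. The stage list below is illustrative, not any particular engine's pipeline:

```python
# Rough input-to-photon latency of a pipelined game/render loop at 60 fps.
FRAME_TIME_MS = 1000 / 60   # ~16.7 ms per frame

stages = ["input sampling", "game simulation", "render submission",
          "GPU rendering", "compositing", "display scan-out"]
latency_ms = len(stages) * FRAME_TIME_MS
print(f"~{latency_ms:.0f} ms from button press to photons")
# → ~100 ms from button press to photons
```

Which is why a tightly run cloud pipeline is not automatically slower than a loosely run local one.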

If you have a fast cloud-based implementation and code to minimise rendering time (time-slicing a very fast GPU to feed multiple streaming clients), you could conceivably deliver frames closer to the time of player input than many traditional console games do.

If gameplay, or at least the latency-critical parts of it, is calculated locally but rendering is moved to a *fast* cloud implementation with low-latency delivery, then you have solved much of the problem.

Having the local streaming client able to use its client-side calculations to intelligently re-use chunks of rendered frames, or better yet frame layers (e.g. static, player, object), would allow a workably accurate and very low-latency means of updating what the player sees on screen. If you have your implementation set when designing your streaming client, you could even bake this side of the work into custom hardware using a tiny fraction of the power and die area a GPU would need to do it.
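The layered re-use idea can be sketched as a tiny client-side compositor: reuse cached server-rendered layers that the local simulation says haven't changed, and only pull fresh data for the layers that did. The layer names and dirty-tracking here are assumptions for illustration:

```python
# Toy client-side compositor for server-rendered frame layers.

def compose(stream, cache, dirty):
    """Build a frame from per-layer images, pulling from the stream only
    for layers marked dirty (or never seen), reusing the cache otherwise."""
    frame = []
    for layer in ("static", "objects", "player"):
        if layer in dirty or layer not in cache:
            cache[layer] = stream[layer]   # fresh layer from the cloud stream
        frame.append(cache[layer])         # cached layer: zero network cost
    return frame

cache = {}
frame1 = compose({"static": "bg_v1", "objects": "obj_v1", "player": "p_v1"},
                 cache, dirty={"static", "objects", "player"})
# Next frame: only the player moved, so two of three layers come from cache.
frame2 = compose({"static": "bg_v2", "objects": "obj_v2", "player": "p_v2"},
                 cache, dirty={"player"})
```

The win is that the latency-sensitive layer (the player) is small and cheap to update, while the expensive layers tolerate staleness.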

This is exactly how I would try and solve the problem, and what I think MS are planning.

It would require a degree of additional work to make games for both this and a traditional system, but once the processes are in place it should be realistic to make a game in both forms. Some techniques, such as the re-use of previous data, may well benefit traditional systems with a few parameters tweaked.

Edit: with assets stored in the cloud, a full 4K AAA jawdropper that requires 16 GB of RAM and 200 GB of storage could run on a couple of GB of DDR5 and need only a few GB of local storage, which could easily be streamed to the local device in a matter of minutes.
 
Oops. Math fail. Yeah, I'm out by x1000. ~90ms. I guess data locality makes a big difference then. Light ain't as fast as I thought.
 
Oops. Math fail. Yeah, I'm out by x1000. ~90ms. I guess data locality makes a big difference then. Light ain't as fast as I thought.
Yup, tell me about it. Light also travels slower through glass, and it bounces around, so it travels further than you think.

So when I get latency of 166ms in Rainbow Six Siege it's actually quite impressive.
 
Based on the infrastructure they have and continue to develop because of Azure. They're even building two data centers here in South Africa.

They are trying to make it work without having to decentralize their existing cloud infrastructure. The more hops to clients, the less reliable it will be, and the worse the latency. So the games will have to be written for it: much more code client-side, but it should tolerate glitches much better. The big problem is that it can't offer any old games, and even ports would be difficult to rewrite for that sort of cloud gaming.

OTOH, full streaming like PSNow requires rack space near every major backbone to get a large pool of population with low latency and high reliability, which is expensive to micro-manage all over the world.

Exactly. The real solution to streaming games is having the servers basically co-located within your last 10 miles. The idea of speculative execution, etc., is a complicated, inefficient misadventure that attempts to avoid the real solution. But MS sells the CPU time, so throwing away 9/10ths of what you compute for each frame suits them just fine. Hell, that's why their whole power-of-the-cloud gambit was such a joke. It was a business product in search of a problem, not a solution to anything.
 
Exactly. The real solution to streaming games is having the servers basically co-located within your last 10 miles. The idea of speculative execution, etc., is a complicated, inefficient misadventure that attempts to avoid the real solution. But MS sells the CPU time, so throwing away 9/10ths of what you compute for each frame suits them just fine. Hell, that's why their whole power-of-the-cloud gambit was such a joke. It was a business product in search of a problem, not a solution to anything.

Not at all true. Throwing away 9/10 of your processing is a solution for no one, and you absolutely don't need servers within 10 miles. Most of what you see on screen in a video game appears more than 150 ms after you press a button.
 
Not at all true. Throwing away 9/10 of your processing is a solution for no one, and you absolutely don't need servers within 10 miles. Most of what you see on screen in a video game appears more than 150 ms after you press a button.
Speculative execution, where many outcomes are processed and sent as frames but only one is used, is by definition extremely wasteful of compute resources. And while there is a 150 ms delay in many games, that time is largely used to generate the frame. Unless we expect a sub-10 ms turnaround from cloud servers, proximity will always be the limiting factor.
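The waste also compounds: if you speculate over every candidate outcome for each frame of latency you want to hide, the number of rendered streams grows exponentially. Both numbers below are illustrative:

```python
# Cost of speculatively rendering every input outcome.
outcomes_per_frame = 4   # e.g. the "four possible frames" discussed above
frames_to_hide = 3       # ~50 ms of input latency at 60 fps

streams = outcomes_per_frame ** frames_to_hide
print(f"{streams} speculative streams; {streams - 1} are thrown away")
# → 64 speculative streams; 63 are thrown away
```

Even hiding a single frame already quadruples the render load, which is the inefficiency flagged at the top of the thread.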
 