Xbox Scarlett Hybrid Game-Streaming Version, Kahawai

The bar isn't insanely high here, at least I don't think it is. If the PS4/XBO can stream to a PC and that's acceptable, then they just need to replicate that, except instead of the host being a local device, it's on the web. That works, and I think it's serviceable for those looking for exactly that type of access.
 
Speed of an electrical signal along copper wire is approximately half the speed of light, so 1.5x10^8 m/s. The distance from South Africa to the UK, going by a Google search providing a car route, is about 14,000 km. Thus the time taken for a signal to travel from the UK to SA is about (1.4x10^7 m) / (1.5x10^8 m/s) ≈ 0.093 seconds, or 93 ms.

Your 150 ms ping is because of all the internet crap between you and the server. A dedicated copper or fibre cable to a datacentre anywhere in the world will provide <1 ms latencies.

Over copper? A signal over fiber will travel around 200,000 km per second. That trip will take about 7 ms (ignoring all the factors that come with delivering data over fiber).
 
Over copper? A signal over fiber will travel around 200,000 km per second. That trip will take about 7 ms (ignoring all the factors that come with delivering data over fiber).

There's no way that you get 7ms over 13000 km.

I still think the most viable and most beneficial streaming in the near future is local streaming. So streaming from the console you own to any screen on your local network or over the internet. People even complain about the latency when streaming locally.
 
There's no way that you get 7ms over 13000 km.

I still think the most viable and most beneficial streaming in the near future is local streaming. So streaming from the console you own to any screen on your local network or over the internet. People even complain about the latency when streaming locally.

Of course not. It was a simple calculation that used fiber instead of copper, which is what shifty used in his theoretical calculation.
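
For reference, here's the raw propagation arithmetic both calculations rest on, as a quick Python sketch. It only assumes the figures quoted above (a ~14,000 km path, copper at roughly half light speed, fibre at roughly 200,000 km/s) and ignores every routing, switching and encoding overhead on top:

# Back-of-the-envelope propagation delay only - routing, switching and
# encoding overheads are not included. Assumed figures: ~14,000 km path,
# copper at ~1.5e8 m/s, fibre at ~2.0e8 m/s (200,000 km/s).
DISTANCE_M = 14_000e3

SPEEDS_M_PER_S = {
    "copper (~0.5c)": 1.5e8,
    "fibre (200,000 km/s)": 2.0e8,
}

for medium, speed in SPEEDS_M_PER_S.items():
    one_way_ms = DISTANCE_M / speed * 1000
    print(f"{medium}: one-way ~{one_way_ms:.0f} ms, round trip ~{2 * one_way_ms:.0f} ms")

# copper: one-way ~93 ms; fibre: one-way ~70 ms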

Here is some data from 2016 measuring AWS web service from its Virginia servers.

https://datapath.io/resources/blog/aws-network-latency-map/

Here are the current and future locations of MS's Azure data centers. Both Johannesburg and Cape Town are getting Azure data centers.

https://azure.microsoft.com/en-us/global-infrastructure/regions/

Here is a site that actually measures real-time latency from your IP to azure data centers.

http://www.azurespeed.com/

The latency of the closest datacenter to my IP is roughly mid 30s to high 70s in ms.

In terms of latency, this provides some real-world numbers (as long as you trust MS... LOL) from which every individual here can readily judge what level of latency they would experience if a streaming Scarlett device were to release right now.
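
If you'd rather measure it yourself than trust a third-party page, a rough sketch of the same idea is to time a TCP handshake against endpoints in the regions you care about. The hostnames below are placeholders, not real Azure endpoints, and connect time is only a rough proxy for round-trip latency:

import socket
import time

# Times a TCP handshake as a rough proxy for round-trip latency.
# The hostnames below are placeholders - substitute endpoints for the
# regions you actually want to test.
ENDPOINTS = {
    "region-a": ("example-region-a.blob.core.windows.net", 443),
    "region-b": ("example-region-b.blob.core.windows.net", 443),
}

for name, (host, port) in ENDPOINTS.items():
    try:
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=5):
            elapsed_ms = (time.perf_counter() - start) * 1000
        print(f"{name}: ~{elapsed_ms:.0f} ms")
    except OSError as err:
        print(f"{name}: unreachable ({err})")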
 
It isn't like we're entering the uncharted territory of game streaming for the first time. MS might have some tricks, but it'll be in the ballpark of PS Now and GeForce Now. I haven't tried those services, but is lag the main thing preventing the general public from subscribing? Or is it more price, game selection, uncertainty, unawareness?
 
I haven't tried those services, but is lag the main thing preventing the general public from subscribing? Or is it more price, game selection, uncertainty, unawareness?

I think 'all of the above' applies to prior streaming services, to one degree or another.
 
Here is a site that actually measures real-time latency from your IP to azure data centers.

http://www.azurespeed.com/

The latency of the closest datacenter to my IP is roughly mid 30s to high 70s in ms.

In terms of latency, this provides some real-world numbers (as long as you trust MS... LOL) from which every individual here can readily judge what level of latency they would experience if a streaming Scarlett device were to release right now.
From where I live, the Netherlands server bottoms out at 36ms and the Irish one at 60ms. They swing upwards though, the Netherlands one up to 100ms and the Irish one more frequently and to over 100ms.
Incidentally, that is almost identical to the pings I had when playing on Barrysworld and the servers in the Netherlands/Germany back in my Quake 3 days. Those upward swings in ping were lethal, not that having a 60 ping wasn't a HUGE handicap at those levels of play when up against opponents at around 20ms.
As far as I can see, Internet latency hasn't changed one whit in almost 20 years.
 
I think 'all of the above' applies to prior streaming services, to one degree or another.

Along with inconsistent PQ and artifacting - a bit like playing a game on standard YouTube.

Does make me wonder if Xbox Game Pass was their way of testing the water for a streaming service like PS Now.

I'm not sure on numbers for Game Pass, but I do know it's very often on sale... at this moment new and existing users can get 3 months for just £8!

I can see the streaming box working, but I can't help but feel it will be like comparing the Xbox One with the X... though I guess if it gives people cheap entry, that's great for parents or those strapped for upfront cash.
 
There's no doubt game streaming can work, as it already exists. The question is what MS can do to make it better than current offerings. The delta-compression doesn't seem a great quality enhancer judging from the YT demo, and it doesn't tackle the network issues, which leaves us with the same question: what can MS do to make it better?
 
Here's something I'm wondering if it's possible. We've seen games that leverage temporal reconstruction to use the past X amount of frames (Quantum Break used 4 frames for a total final image latency of ~133 ms I believe). Heck you can combine this with techniques like checkerboarding to avoid some of the softness that Quantum Break had during action sequences.

Suppose we redo that slightly so that the "streaming console" renders one out of 4 frames (or whatever MS decides is the sweet spot) to be used for the final image. That would potentially be highly tolerant of most people's internet connections. Even my apartment wifi has lower than 50 ms average latency.

The biggest problem I see in something like this is that my apartment wifi can also be highly variable. My cable internet OTOH rarely goes above 20-30 ms latency to any game hosted in NA (US or Canada).

Something like that will have a "softer" image in action scenes but retain the responsive controls of a locally rendered game. At the same time when the action slows down you'll have a final image that is close to or identical to a locally rendered game.

That wouldn't, in theory, require many differences, if any, between a locally rendered game and a composite-rendered streamed game. I'm not sure how much that would address the CPU. However, as seen in current-generation consoles, the CPU portion of the SOC is tiny compared to the GPU portion. I.e. going with the full CPU but a reduced GPU should result in a significantly smaller and cheaper SOC that consumes significantly less power. In theory, that console would also require less memory, further reducing cost and power consumption.
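
Purely to illustrate the scheduling, here's a toy sketch of the idea. Everything in it (the "render", the "stream", the reconstruction step) is a stand-in rather than a real renderer, and N=4 is just the example ratio from above:

N = 4  # one locally rendered frame per N displayed frames (example ratio)

def local_render(i):
    return f"local:{i}"                    # stand-in for a cheap local render

def streamed_frame(i):
    return f"streamed:{i}"                 # stand-in for a frame decoded from the stream

def reconstruct(local, streamed):
    return f"blend({local}, {streamed})"   # stand-in for temporal reconstruction

last_local = None
for i in range(12):
    if i % N == 0:
        last_local = local_render(i)       # responsive path, no network round trip
    shown = reconstruct(last_local, streamed_frame(i))
    print(f"frame {i:2d}: {shown}")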

Regards,
SB
 
Here is a longer video from a talk and the actual paper on the tech. Of course the paper gives more nitty-gritty detail, like SF4 not needing access to the source code but Doom's behavior requiring it.

https://www.microsoft.com/en-us/res...quality-mobile-gaming-using-gpu-offload-talk/

https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/mobi093f-cuervoA.pdf

And this tech has been around a while, as it was demo'd at MobiSys in 2014, roughly 7 months after the One was released, and the paper itself was published in 2015.
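
For anyone not reading the paper, the delta-encoding idea it describes boils down to something like this toy sketch. Numpy arrays stand in for frames and the crude quantisation stands in for a reduced-detail render; the real system streams the delta as compressed video and relies on client and server producing matching reduced-detail frames:

import numpy as np

# Toy illustration of delta encoding: the client renders a reduced-detail
# frame, the server renders both the reduced- and full-detail versions and
# streams only the per-pixel difference, and the client adds that
# difference to its own frame to recover the full-detail image.
H, W = 720, 1280
rng = np.random.default_rng(0)

full_detail = rng.integers(0, 256, (H, W, 3), dtype=np.int16)  # server's full render
low_detail = (full_detail // 16) * 16     # crude stand-in for a reduced-detail render

delta = full_detail - low_detail          # server side: small values, compress well
reconstructed = (low_detail + delta).astype(np.uint8)   # client side

assert np.array_equal(reconstructed, full_detail.astype(np.uint8))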
 
Here's something I'm wondering if it's possible. We've seen games that leverage temporal reconstruction to use the past X amount of frames (Quantum Break used 4 frames for a total final image latency of ~133 ms I believe). Heck you can combine this with techniques like checkerboarding to avoid some of the softness that Quantum Break had during action sequences.

Suppose we redo that slightly so that the "streaming console" renders one out of 4 frames (or whatever MS decides is the sweet spot) to be used for the final image.

Is that not the I-frame solution shown?
The local client renders full-quality frames at a lower framerate and the server fills in the blanks.

The temporal reconstruction (QB) is from prior final frames; the first frames on a scene change are lower res because of this, so I don't think it waits 133ms to display the first frame. DF showed scene changes to demonstrate how it worked.
 
There's no doubt game streaming can work, as it already exists. The question is what MS can do to make it better than current offerings. The delta-compression doesn't seem a great quality enhancer judging from the YT demo, and it doesn't tackle the network issues, which leaves us with the same question: what can MS do to make it better?

Oh yeah it 'works' just fine...I just never used it properly because it was never close enough to hardware.

Be interesting to see how this plays out, but like I said - MS have an awful track record for overpromising new tech.
 
MS sees multiplayer gaming as the most profitable, and having the infrastructure and know-how to manage it better than anyone, it's trying to push that as far as it can go... I don't see much sense in playing other kinds of games on servers. Technically it's viable at maybe 720p@30 fps max for basic entry-level users, 1080p@60 fps for advanced users with fiber and such... In the long run maybe 4K, who knows.

In Europe DVB-T2 is coming and 100-dollar Android boxes will sell like hot cakes to equip not-so-old TVs... MS adds the possibility of playing good games on this kind of box by producing a special one of its own?! Good idea...

Sony lacks the infrastructure to follow MS in this... And at 7nm MS can have One S-derived hardware really cheap and almost ready. The difficulties and investments are on the software and network side of this new approach to the gaming business... Good move. Sony will have trouble following and would need alliances with various national network and server providers... In Italy, for example, GameStop already sells Telecom Italia connections, and Telecom Italia also has server facilities and such (through Aruba)...
 
At 7nm MS can have One S-derived hardware really cheap and almost ready.

The article and available tech suggest it'll be a scaled-down Scarlett, not a One derivative. If MS are serious about using the CPU, then the streaming version needs more than the One's Jaguars.
 
Oh yeah it 'works' just fine...I just never used it properly because it was never close enough to hardware.

Be interesting to see how this plays out, but like I said - MS have an awful track record for overpromising new tech.
In the console space the hardware has been great. Whether it amounts to anything substantial is a different story.
Kinect: works as advertised, too bad it got no traction.
Xbox One X: works as advertised, the console hits 4K fairly well.
Elite Controller: works as advertised.
Xbox One: works as advertised.
The accessibility controller is getting rave reviews as well.
Backwards compatibility.

The only thing that hasn't panned out was the bogus cloud-powered claim.

So it all works pretty well; I've heard very few issues with hardware units. Not sure their track record is something to worry about on the product side. Content is another story, however.

If MS is having a serious go at this, I can't see them launching a dead end service. It either works well for the target markets where they are selling the device, or it's not launching.
 
I guess things like Hololens are included with that bunkum cloud promise, with the unrealistic representations of what things would look like using it (opaque, massive FOV). MS are no worse than other companies when it comes to over-promising though. OnLive promised super low latency that it never delivered, and Sony's track record speaks for itself.

Point being, it's not about pointing fingers or claiming any company is better or worse than any other, but to be realistic regards any business claims about future products or services. The proof of the pudding is only ever in the eating.
 
I guess things like Hololens are included with that bunkum cloud promise, with the unrealistic representations of what things would look like using it (opaque, massive FOV). MS are no worse than other companies when it comes to over-promising though. OnLive promised super low latency that it never delivered, and Sony's track record speaks for itself.

Point being, it's not about pointing fingers or claiming any company is better or worse than any other, but to be realistic regards any business claims about future products or services. The proof of the pudding is only ever in the eating.

Yes, I could say there was the whole 'balance' of the Xbox One, and I recall other things being said around launch. I could certainly say Kinect underperformed; it was definitely sold as more accurate and reliable than it is.

But specifically I was referring to 'exciting new tech': HoloLens and cloud are both very recent examples, and now this - we've not seen anything of it demo'd by MS, but people are happy to believe it's a genuine alternative to buying the next Xbox.
 
Speculative execution, where many outcomes are processed and sent as frames of which only one will be used, is extremely wasteful of compute resources by definition. And while there is 150ms of delay in many games, that time is largely used to generate the frame. Unless we expect a turnaround time of sub-10ms from cloud servers, proximity will always be the limiting factor.

You don't need speculative execution and many outcomes to be processed. That would be terrible. You render remotely, providing cues such as motion vectors, object type IDs etc., then use what's provided to the client to generate the displayed image based on the things you've calculated locally, e.g. camera and object positions.

If you know you're using the data for this purpose you generate the data in the cloud accordingly.

A crude implementation might be providing a "level" buffer generated from a viewport to accommodate camera movement (highly predictable and small increments), and game objects with motion vectors. With the data you calculate locally you can adjust what the viewport shows, and adjust the position of objects. Simple 2D moving and warping stuff. Positions, collision, game logic all feel instantaneous just like they normally do.

It's the grunt work of most of the rendering pipeline that's remote. Any latency sensitive logic is local, and you do the best job you can in manipulating the graphic data you have from the cloud to make it fit what's happening in the instant.

A more involved solution would also be to prioritise streaming latency critical stuff (e.g. gunshot effects once the event is triggered locally) over stuff that could comfortably handle a couple of frames of "make do" re-manipulation (e.g. distant backgrounds during camera panning).

You prioritise what you stream based on the urgency of updating due to latency (triggered event) or image degradation due to growing inaccuracy.

This is an exciting area. For some game types, e.g. Halo online, WoW, etc., it could allow gameplay results identical to current implementations, with similar graphical quality, from smaller, cheaper, vastly more power-efficient client devices.
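
To make the "crude implementation" above concrete, here's a toy sketch: the cloud supplies an oversized colour buffer, the client offsets the visible viewport by the camera movement it has computed locally since that frame was generated, and latency-sensitive objects are drawn at their locally simulated positions. Array slicing stands in for the warp; none of this is meant as the actual pipeline:

import numpy as np

# Toy client-side compositing: shift the viewport within an oversized
# server-provided buffer by the locally computed camera delta, then draw
# locally simulated objects on top at their up-to-date positions.
VIEW_H, VIEW_W = 720, 1280
MARGIN = 64   # extra pixels the server renders on every edge of the view

rng = np.random.default_rng(1)
server_buffer = rng.integers(0, 256, (VIEW_H + 2 * MARGIN, VIEW_W + 2 * MARGIN, 3), dtype=np.uint8)

def compose_frame(camera_dx, camera_dy, local_objects):
    """camera_dx/dy: pixels the camera moved since the server frame was made.
    local_objects: list of (x, y, sprite) simulated locally this frame."""
    top, left = MARGIN + camera_dy, MARGIN + camera_dx
    frame = server_buffer[top:top + VIEW_H, left:left + VIEW_W].copy()
    for x, y, sprite in local_objects:
        h, w, _ = sprite.shape
        frame[y:y + h, x:x + w] = sprite   # latency-sensitive object, drawn locally
    return frame

sprite = np.full((16, 16, 3), 255, dtype=np.uint8)
frame = compose_frame(camera_dx=5, camera_dy=2, local_objects=[(200, 100, sprite)])
print(frame.shape)   # (720, 1280, 3)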
 
Running part of the game engine locally, and the graphics remotely, is going to require some effort from game devs.

If engines need to be written specifically for the cloud, MS would have to mandate that all games implement it. I don't see how they would support the whole back catalog. It will need a traditional streaming method for most.
 