Stadia, Google Game Streaming platform [2019-2021]

I believe Google miscalculated. Instead of using 10 TFLOPS Vega GPUs, it should have used Polaris and spent the cost savings on Windows licensing. Buying a subscription should have meant being able to stream any game in your library that Stadia supported. They could have offered additional subscription tiers that provided free access to PC games and services like YouTube Red or Google Play Music.
 
I believe Vega was required due to its virtual GPU technology, and Google likely also wanted to utilize the servers for compute purposes when they weren't active for gaming.
 
Well given the state of Stadia, compute is virtually all that Vega hardware is good for now.
 
Perhaps Google only let Harrison try cloud gaming because they knew that even if it failed, the hardware investment wouldn't be a complete waste?
 
Wait ... I don't understand. People actually have to buy the games as well? Just like they would with a console or PC.
I thought it was like Spotify, e.g. you pay 20 dollars a month and then can play whatever you want. It doesn't work like that?
How in fuck's name do they then have such high player numbers for e.g. Destiny 2? 8,020 is about 8,000 more than I think would actually pay for such a service. What gives?
 
Wait ... I don't understand. People actually have to buy the games as well? Just like they would with a console or PC.
Yes. And they're Linux games, not Windows, so it's a limited library.
I thought it was like Spotify, e.g. you pay 20 dollars a month and then can play whatever you want. It doesn't work like that?
No.
How in fuck's name do they then have such high player numbers for e.g. Destiny 2? 8,020 is about 8,000 more than I think would actually pay for such a service. What gives?
They're far from high. You get people without a console or PC able to play these games, so there's going to be some market, and generally speaking you'll always find some folk who'll try something new or bizarre. Mostly, though, Destiny 2 was given away as part of the initial Stadia purchase, so basically everyone with Stadia has Destiny 2. Those numbers should be somewhat representative of best-case adoption for the platform, where everyone with a new Stadia pack has a look at the game that came with it.
 
Mate, that seems super high. I just googled it: one needs to pay $130 to take part in Stadia.
https://www.tomsguide.com/reviews/google-stadia
It's also $10 a month, plus you have to buy games as well!

I just checked; I'm no expert, so correct me if I'm wrong. With PS Now (where you don't have to buy the games), there are hundreds of games available to play included with your $10 a month.
OK, the initial outlay is more, $250 vs $130, though no doubt you can get a PS4 second-hand for $130 or less. But after that it's cheaper and you have far more selection of games, no doubt with less latency as well.
Considering this, 8,000 (guinea pigs) is more than I would have thought would take the bait. :oops: Very weird. Did these 8,000 think it was a streaming service, like I did?
Wow. Unless it turns into a Spotify-like streaming service, it will be dead in a year.
 
Just don't read the Stadia reddit unless you want to see the most amount of hope ever.
 
May not be out yet, but there's a free version coming too.
You don't get 4K and 5.1 DD audio, though.
But having to buy your games and lacking those features seems a bit excessive to me, if games are full price.
I suspect I'm not understanding the pricing model very well.
 
Is this what they are saying is causing the huge loss in performance compared to the paper specs?

It's what one person said in a blog post; it's not what devs have said and it's not what Google has said, but it seems to be what Stadia users are holding onto as hope for better performance.

I suggest reading the Linus reply directly and not the reddit summary from someone else, as the two read very differently to me. It sounds very much like Linus said the tests the blogger used to claim it's spinlocks were done wrong, so the blogger is just flat-out wrong. I'd go so far as to say the reddit summary is wrong as well, as it glosses over the original blog being completely wrong.

Again, read the entire Linus post, not just the opening statement, though it's a great intro. Here are some of the better snippets from Linus.

https://www.realworldtech.com/forum/?threadid=189711&curpostid=189723

The whole post seems to be just wrong, and is measuring something completely different than what the author thinks and claims it is measuring.

...

And then you write a blog-post blaming others, not understanding that it's your incorrect code that is garbage, and is giving random garbage values.

...

No, they aren't cool and interesting at all, you've just created a particularly bad random number generator.

...

This has absolutely nothing to do with cache coherence latencies or anything like that. It has everything to do with badly implemented locking.
 
Is this what they are saying is causing the huge loss in performance compared to the paper specs?
I read some responses by devs on Twitter. The response from Linus felt aggressive. Developers are certainly upset; they did some basic trace performance testing and these are the behaviours they found.

 
That seems like just one dev, not multiple devs, and still using the same horribly wrong test program.

As Linus writes from one of his other RWT followups:

See what I'm trying to explain here?

The fact is, doing your own locking is hard. You need to really understand the issues, and you need to not over-simplify your model of the world to the point where it isn't actually describing reality any more.

And no, any locking model that uses "sched_yield()" is simply garbage. Really. If you use "sched_yield()" you are basically doing something random. Imagine what happens if you use your "sched_yield()" for locking in a game, and somebody has a background task that does virus scanning, updates some system DB, or does pretty much anything else at the time?

Yeah, you just potentially didn't just yield cross-CPU, you were yielding to something else entirely that has nothing to do with your locking.

sched_yield() is not acceptable for locking. EVER. Not unless you're an embedded system running a single load on a single core.

If I haven't convinced you of that by now, I don't know what I can say.

...

Dealing with reality is hard. It sometimes means that you need to make your mental model for how locking needs to work a lot more complicated. And sometimes it means that you need to keep your OS kernel doing stupid things because people inadvertently depended on the timing of said stupid things.

Which, as mentioned, is a problem for sched_yield() too. Lots of users, all of which are basically buggy-by-definition, and all you can do is a bad half-arsed job of trying to make it "kind of work".

Reality is messy.
 
A couple of devs did retweet him. It does come across as: devs couldn't get the performance they wanted, did some research, found an issue, and wrote about it. Linus got defensive and wrote about it.

I don't know if this will solve things, we will see.
 
But was the original blog actually from a dev and using what was used in games? What I see is someone writing horribly bad and broken code and then devs are commenting and discussing characteristics of that.

As for it having any sort of positive impacts for Stadia, I think not a chance.
 
I think devs were looking for validation that sched_yield() was broken by its behaviour. Windows, and I assume Sony's OS, doesn't deal with it this way, per the commentary on Twitter. Yes, there is a more appropriate way to do it (described by Linus), but it did validate to devs that they shouldn't use sched_yield() on Linux.
 
About those Linux kernel scheduler issues on Stadia: it seems Linus responded to the one blog post about it.


  • The Linux scheduler has to optimize for a lot more cases than what some games apparently do, as Linux is not primarily used on desktops
  • Optimizing this particular use case would almost definitely negatively affect scheduling effectiveness overall (but might improve latency)
Which still puts you at the mercy of the Linux scheduler which isn't optimized for desktop use cases and I'm not sure it'd actually help with games.

It's also why we have a consumer version of Windows (Home and Professional) and an enterprise version of Windows. Each one handles scheduling differently, albeit you can configure enterprise Windows to schedule similarly to consumer Windows. Windows configured for enterprise workloads generally doesn't perform as well in games if something comes up that is a higher priority in the enterprise space than it is in the consumer space. This is very noticeable on Windows Home Server 2011 (I really need to retire this OS) when the OS has to do its server-related things even while something else is happening in the foreground (like a game or video playback).

If this truly is the root of the problem, how inept were the Stadia developers not to notice or attempt to rectify the situation prior to launch? And if it wasn't something easily solved in time for launch, then why launch, and/or why not inform developers of this problem?

As this is open-source software, couldn't they in theory have gotten a lot of people to work on the issue?

Or is this just a red herring?

Regards,
SB
 