Remote game services (OnLive, Gaikai, etc.)

Bits and pieces; I'm not going to watch 48 minutes of that. It's not as if they are the first to tackle low-latency encoding, error resilience, bandwidth adaptiveness, etc. It's not easy, but it's not untrodden ground. I don't think they are frauds; I'm sure they have an actual encoding algorithm. I have my doubts about whether they really use 5 Mbit/s in their demonstrations, and I do not believe they perform the encoding in 1 ms on $20 worth of FPGAs or DSPs (extraordinary claims and all that).

PS. Trying to spread out the intra updates is all fine and well, but sometimes the scene changes. The frame after a scene change generally has to be a 100% I-frame; if you don't have a decent bit budget to code it, tough cookies, it will just have to look like crap. (You can code it like a P-frame, but it will end up looking even worse.)
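For what it's worth, the trade-off can be sketched with a toy bit-budget model (all the numbers and slice counts here are invented for illustration, not taken from any real codec):

```python
# Hypothetical intra-refresh scheduler: instead of ever sending a full
# I-frame, refresh one horizontal slice per frame with intra coding.
# Bit costs below are illustrative only.

NUM_SLICES = 8
INTRA_SLICE_BITS = 12_000   # intra-coded slice (no temporal prediction)
INTER_SLICE_BITS = 3_000    # predicted slice, small residual

def frame_bits(frame_index: int, scene_change: bool) -> int:
    """Bits for one frame under rolling intra refresh."""
    if scene_change:
        # Temporal prediction fails everywhere: every slice is
        # effectively intra, so the frame costs as much as an I-frame.
        return NUM_SLICES * INTRA_SLICE_BITS
    # Steady state: one intra slice, the rest predicted from the past.
    return INTRA_SLICE_BITS + (NUM_SLICES - 1) * INTER_SLICE_BITS

steady = frame_bits(5, scene_change=False)   # 33,000 bits
cut = frame_bits(6, scene_change=True)       # 96,000 bits
print(steady, cut, round(cut / steady, 1))   # a cut costs ~3x a normal frame
```

The point being: rolling refresh smooths the bitrate until a cut lands, at which moment you either blow the budget or accept a frame that looks like mush.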
 
There's no need to mock others' posts. They have a point (even if overly cynical).

Some of the answers in those videos are common/street proposals to their respective problems, but at the end of the day they don't guarantee that the exact $$$ will match up (e.g. subscriptions cheap enough without losing money), or that the general quality problem has been solved forever. They optimized the platform for the beta and initial rollout, which may or may not be sufficient to deal with a full nationwide run. The measurements and stats they provide may work only for a small-scale rollout (relatively speaking). Need to wait and see. E.g. "enough" consumers may not mind the quality despite our complaints here.
 
It's quite a bit more complicated than what you're thinking. They haven't simply adapted an existing codec; they've written an entirely new codec and redefined the logical constraints to deal strictly with the issue of lag/response time. Specifically, they've loosened constraints on errors and failures, added what sounds like very robust error correction, and thrown out the standard GOP method of encoding.

Ugh. No, I am approaching the tech from an analytical point of view and not just swallowing Perlman's marketing speak, which is what this entire presentation is: a more informal version of his GDC 09 presentation. What amazes me is that a room of engineering students completely swallowed his explanation of the codec as being "perceptual stuff and a bunch of mathematics". What he is actually saying does not bear up to any kind of scrutiny whatsoever, and I can't believe the students didn't pick him up on any of it.

BTW, the GOP method of encoding only introduces lag if you are storing frames from the "future". There's nothing to stop you retaining information from the past. You'll notice that Perlman stayed well away from using the phrase "intra-frame", even though that is what he expects us to believe it is.
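To make the "future frames" point concrete, here's a toy model (the frame rate and GOP patterns are just examples): only B-frames, which reference later frames, force buffering; an IPPP stream references only the past and adds no reorder delay at all.

```python
# Toy model: encoder/decoder latency contributed by frame reordering alone.
# A B-frame cannot be coded until the later frame it references has arrived,
# so each consecutive B in the pattern adds one frame-time of buffering.
# An IPPP (low-delay) stream references only past frames: zero added delay.

FRAME_MS = 1000 / 60  # 60 fps -> ~16.7 ms per frame

def reorder_delay_ms(gop_pattern: str) -> float:
    """Longest run of consecutive B-frames sets the buffering delay."""
    max_run = run = 0
    for frame_type in gop_pattern:
        run = run + 1 if frame_type == 'B' else 0
        max_run = max(max_run, run)
    return max_run * FRAME_MS

print(reorder_delay_ms("IPPPPPPP"))  # 0.0 -> low-delay, past references only
print(reorder_delay_ms("IBBPBBP"))   # two consecutive Bs -> ~33 ms extra
```

So "we threw out the GOP" buys you nothing that a plain low-delay IPPP structure, which every major codec already supports, doesn't.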

Anyway, what he says is that they basically threw out all the restrictions on the codec with regard to looking good in a still frame; this codec is only designed to look good in motion, i.e. you will only ever see this feed once: as you're playing. It's specifically designed to tolerate and expect errors, has a whole bunch of code built in to hide or correct errors on the fly, and an active feedback loop to the server itself.

Um... that would be his "perceptual stuff", and all codecs do the same. h264 in particular, funnily enough. As for the "feedback loop" stuff, I will refer you to Microsoft Smooth Streaming. Also, I should say that Gaikai does something very similar. It's all about switching the quality of the stream according to network conditions. That's what they were saying on the show floor at GDC, away from the glitz of the presentation.
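The tier-switching idea behind that kind of feedback loop fits in a few lines (the bitrate ladder and headroom factor below are made up for illustration, not from any of these services):

```python
# Sketch of adaptive stream switching: pick a quality tier from measured
# throughput, as Smooth Streaming-style systems do. Ladder is hypothetical.

TIERS_KBPS = [800, 1500, 3000, 5000]  # illustrative quality ladder

def pick_tier(measured_kbps: float, headroom: float = 0.8) -> int:
    """Highest tier whose bitrate fits within measured bandwidth * headroom.

    The headroom factor leaves slack so jitter doesn't immediately
    starve the buffer after an upswitch.
    """
    budget = measured_kbps * headroom
    best = 0
    for i, rate in enumerate(TIERS_KBPS):
        if rate <= budget:
            best = i
    return best

print(pick_tier(4200))  # budget 3360 -> tier 2 (3000 kbps)
print(pick_tier(900))   # budget 720 -> tier 0 (falls back to the floor)
```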

There's quite a lot of OnLive info floating about in certain circles which unfortunately I can't repeat but does clear up a lot of the smoke and mirrors.

They had custom chips fabbed whose sole purpose is to run their codec. This lets them get the per-user hardware cost down to something like $30/person (initially the cost was around $10,000/person using existing hardware) and literally encode a frame in 1 ms. Sounds pretty cool... it also shows a lot of faith in this codec, that they would go as far as to fab chips for it.

You see, all you are really doing here, and in most of your post, is reciting what Perlman is saying and not questioning any part of it. He said it, so it must be true. A $30 encoder that encodes 720p60 in 1 ms, outperforming the best of the broadcast encoders at $50,000 a pop? Doesn't any kind of alarm bell go off, at all? There really is no such thing as a free lunch when it comes to designing silicon.

OnLive is obviously real and it has some amazing features that Live/PSN will probably end up copying (specifically the video capture stuff and integration with mobile devices), but many of the claims simply don't stack up. But they are claims that need to be made in order to position it as a "replacement" to the traditional console/PC.

I would take a look at the Gaikai demo, with all its onscreen stats, use of h264, and general feel of plausibility, then look again at OnLive. Assuming a certain number of "givens" (i.e. there really is no need to be 60fps), the nuts and bolts of what you are seeing behind the flashy interface are not all that different.
 
You see, all you are really doing here, and in most of your post, is reciting what Perlman is saying and not questioning any part of it. He said it, so it must be true. A $30 encoder that encodes 720p60 in 1 ms, outperforming the best of the broadcast encoders at $50,000 a pop? Doesn't any kind of alarm bell go off, at all? There really is no such thing as a free lunch when it comes to designing silicon.

I've already said I think it's too good to be true; obviously alarm bells are going off. At the same time, I have very limited knowledge of encoding chips and their costs. The only reason I'm posting a lot of details from his talk is that it's a 50-minute video and obviously a lot of people haven't watched it. The post of yours I responded to was very vague, and it was unclear whether you knew any details or were just pooh-poohing it without looking into the tech at all.
 
Actually, there is not a single similarity that I can see between The Phantom and OnLive.
http://en.wikipedia.org/wiki/The_Phantom_(game_system)

The two business models do not resemble each other in the slightest!

Obviously not the same technically. But quite similar in the amount of over-promising, vague statements, and the small smattering of actual hardware... Similar in controlled demonstrations where just about anything "could" be going on.

In other words, similar to many other venture capital grabs that, after years of funding rounds, came to nothing...

Is this to say that OnLive is absolutely, definitely something along these lines? No, but the presentations and whatnot so far are ringing a lot of bells.

As I said, I'm hoping it's real, just like I was hoping The Phantom and numerous other past "too good to be true" technologies would end up being real. But I'm most certainly not going to get excited about it or put too much into it.

I'm hoping it's more 3DFX Voodoo Graphics than The Phantom. But it's ringing way more bells for me than even the Voodoo Graphics did prior to hardware being shown.

Regards,
SB
 
So what that guy on that forum is saying is that essentially the server divides the screen into 16 blocks and only updates one of the 16 blocks each frame (about 6% of the screen)? Games make you get tunnel vision on the least static parts of the screen, so it makes sense to do it like this, but I'm guessing it would still feel like there's a lot of screen tearing. I hope they release some direct-feed videos of this soon, because it's impossible to tell how something like that works over different genres/graphical styles.
 
No, he is saying that the frame is split into 16 rectangles and each is sent to an individual encoder. Because these encoders don't have access to the data sent to the other encoders, this introduces edge artifacting around each rectangle and makes for less efficient encoding: each 1/16th of the screen cannot use the other 15/16ths for reference/compression savings.
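A minimal sketch of that constraint, assuming a 1280x720 frame and a 4x4 grid (the grid layout is my assumption; OnLive hasn't published the actual tiling):

```python
# A 1280x720 frame split into a hypothetical 4x4 grid of tiles, each handled
# by its own independent encoder. Motion compensation may only reference
# pixels inside the same tile, so a match just across a tile edge is lost.

W, H, GRID = 1280, 720, 4
TILE_W, TILE_H = W // GRID, H // GRID  # 320 x 180 pixels per tile

def tile_of(x: int, y: int) -> int:
    """Index (0-15) of the independent encoder handling pixel (x, y)."""
    return (y // TILE_H) * GRID + (x // TILE_W)

def motion_ref_allowed(x: int, y: int, ref_x: int, ref_y: int) -> bool:
    # A block may only reference pixels its own encoder has seen.
    return tile_of(x, y) == tile_of(ref_x, ref_y)

print(tile_of(319, 0), tile_of(320, 0))       # 0 1 -> neighbouring pixels split
print(motion_ref_allowed(319, 10, 321, 10))   # False: reference crosses the edge
```

That last case is exactly where the edge artifacting comes from: an object sliding two pixels across a tile boundary has to be re-coded from scratch instead of being found by motion search.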
 
Also, their hypothetical encoder chip might be $0.40 or whatever (which I still doubt), but 16 of these chips plus the associated board layout and components will be a significantly less sexy $7-10.
 
^ No, the CEO mentioned the cost of the chips in that recent video. It's ~$20 per user [game stream], plus one more chip for the media stream.
 
Also, their hypothetical encoder chip might be $0.40 or whatever (which I still doubt), but 16 of these chips plus the associated board layout and components will be a significantly less sexy $7-10.

Ya, that was my bad; the price is not nearly that low. I was just going off memory, and I watched the video at like 3am.
 
I still find the concept interesting; the real issues will be the quality (enough to compel a purchase?) and the business model. One HUGE drawback of Steam is that I cannot gift a game I no longer play. I am sure third-party publishers love the idea for many reasons, but I am not sure that PC games on high-end hardware plus streaming artifacts are going to be persuasively better than current-gen games, just from a diminishing-returns angle.
 
Isn't the big problem for the service this: either they make no headway and the lions (insert console industry or cable industry/ISPs here) don't bother with them, or they do make a splash and have instant competition from players with install bases of 20M+ devices, while they may only have a few million?

They have a lot of problems whether the system works OR doesn't. Not an easy position to be in really. I think they need a partner with an install base.
 
Even though I'm doubtful about the exact parameters of the video coding (and I'm 100% certain the quality just won't do for me personally), that doesn't mean it's exactly easy to compete with. From the big telecom companies, for instance, I expect only delays and budget overruns, so no immediate threat. Microsoft might be able to handle it if they find a good manager to put on the team, but politics from the Xbox side would almost certainly cripple their efforts.

I could see Google swallowing up Gaikai and becoming very dangerous to them in short order, and, as an outsider, Valve.

PS. I just remembered all the discussions with Vince about cloud gaming in the good old Cell discussion days, before we knew the specs (he saw Cell driving cloud gaming server farms). A true visionary about cloud gaming; dead wrong about Cell, though.
 
I was talking with a friend, and he mentioned that all of their "perception science" mumbo jumbo may mean they were planning on using the Z-buffer to adjust coding per block, so that stuff further away would get lossier coding. For example, if they used wavelet encoding as in JPEG2000, they could simply order the stream so that blocks with closer Z values get higher-resolution wavelets while blocks in the distance get low-res ones, and if there is leftover bandwidth, the distant blocks can get "caught up" as higher-res wavelets come in later (or get dropped if a bandwidth threshold is reached).

That sounded like a good, semi-workable idea (you don't always want distant stuff to be lossy). I'm surprised this "revolutionary" compression they've supposedly created is just a more mundane parallelized codec.
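For the record, the Z-driven allocation idea is easy to sketch. Here's a toy version where a block's quantizer step scales with its mean depth (the quantizer range and linear mapping are invented for illustration; nothing suggests OnLive actually does this):

```python
# Hypothetical Z-buffer-driven rate allocation: blocks farther from the
# camera get a coarser quantizer, spending bits where the player looks.

def quantizer_for_block(avg_depth: float, q_near: int = 8, q_far: int = 40) -> int:
    """Linearly blend the quantizer step from near (fine) to far (coarse).

    avg_depth is the block's mean normalized Z in [0, 1]; values outside
    the range are clamped.
    """
    d = min(max(avg_depth, 0.0), 1.0)
    return round(q_near + d * (q_far - q_near))

print(quantizer_for_block(0.0))   # 8  -> foreground, fine quantization
print(quantizer_for_block(0.5))   # 24 -> midground
print(quantizer_for_block(1.0))   # 40 -> far background, coarse
```

The obvious refinement (per the caveat above) is to exempt salient distant objects, e.g. a sniper target, from the coarse end of the ramp.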
 
Isn't the big problem for the service this: either they make no headway and the lions (insert console industry or cable industry/ISPs here) don't bother with them, or they do make a splash and have instant competition from players with install bases of 20M+ devices, while they may only have a few million?

They have a lot of problems whether the system works OR doesn't. Not an easy position to be in really. I think they need a partner with an install base.
I don't see a console manufacturer doing that after they've already sold the consumer $300 worth of hardware. OnLive is going for a pretty unique niche: people who have broadband and decent disposable income to spend on a game subscription service, but who don't want to deal with the hardware and are probably willing to take a performance hit for that. It'll probably be at least a few years after OnLive is successful before competitors spring up, and it'll probably have a good base by then.
 
I don't see a console manufacturer doing that after they've already sold the consumer $300 worth of hardware. OnLive is going for a pretty unique niche: people who have broadband and decent disposable income to spend on a game subscription service, but who don't want to deal with the hardware and are probably willing to take a performance hit for that. It'll probably be at least a few years after OnLive is successful before competitors spring up, and it'll probably have a good base by then.

Why not? You need a certain amount of hardware anyway to run this service (decode the compressed stream, internet connection, etc.), and the current consoles may well be able to do it. It doesn't hurt that they can do a lot of other things as well, or that they are already in consumers' homes!
 
I was talking with a friend, and he mentioned that all of their "perception science" mumbo jumbo may mean they were planning on using the Z-buffer to adjust coding per block, so that stuff further away would get lossier coding. For example, if they used wavelet encoding as in JPEG2000, they could simply order the stream so that blocks with closer Z values get higher-resolution wavelets while blocks in the distance get low-res ones, and if there is leftover bandwidth, the distant blocks can get "caught up" as higher-res wavelets come in later (or get dropped if a bandwidth threshold is reached).

That sounded like a good, semi-workable idea (you don't always want distant stuff to be lossy). I'm surprised this "revolutionary" compression they've supposedly created is just a more mundane parallelized codec.

A really neat idea Democoder :).
 