Kinect technology thread

They just use the "depth buffer" from the camera to cut your silhouette out of the room. I think he's working on that yoga game. He says proudly that they are the only developer in the world doing that.
With that depth, they can filter out what's not you in the RGB video feed too. That's how they got the scarf. Pretty easy and fast signal processing.
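To make that concrete, here's a minimal sketch of what depth-based background removal amounts to (Python/NumPy, purely illustrative; the function name, the depth band, and the assumption that the RGB and depth images are already aligned are all mine, not anything from the actual game): threshold the depth image around the player's distance and use the resulting mask to blank out everything else in the colour frame.

```python
import numpy as np

def cut_out_player(depth_mm, rgb, near_mm=800, far_mm=2500):
    """Illustrative depth-based background removal (not the game's code).

    depth_mm: (H, W) array of per-pixel depth in millimetres.
    rgb:      (H, W, 3) colour frame, assumed already aligned to the depth image.
    near_mm / far_mm: guessed depth band where the player is standing.
    """
    mask = (depth_mm > near_mm) & (depth_mm < far_mm)  # pixels in the player's depth band
    silhouette = np.zeros_like(rgb)
    silhouette[mask] = rgb[mask]                       # keep only "you", drop the room
    return mask, silhouette
```

The masked colour pixels are presumably what effects like the scarf get composited against; no skeleton is needed for that part.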
 
Apparently Kinect uses less than 1% of the 360's CPU

http://www.computerandvideogames.com/article.php?id=258340

Yeah right, if that's true it probably uses 20% of the GPU!

Seriously though, if body tracking only used 1% of CPU time we would see a lot of core games slotting in some kind of gimmicky Kinect support, like the Move-enabled shooters on PS3.

The fact that it's not suggests that Kinect titles have to be built from the ground up to support it and that it incurs a fairly large performance penalty.
 

Implementing Kinect is a lot harder: you need to design and map movements to actions. Move is almost a controller; the wand with the bulb is basically the right half of a controller and the add-on the left half, so buttons are much more easily mapped to actions. And it does help that Move support is easy to implement and works fast. Hope I got this right?
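A toy sketch of the difference being described here (Python; the gesture, the bindings, and the coordinate convention are all made up for illustration): a Move button maps straight to an action, whereas a Kinect "button" has to be designed as a movement recognised over time.

```python
# Move: a physical button maps directly to an action.
MOVE_BINDINGS = {"trigger": "fire", "move_button": "jump"}

def handle_move(pressed_button):
    return MOVE_BINDINGS.get(pressed_button)

# Kinect: the "button" has to be designed as a movement over time.
def handle_kinect(skeleton_frames, hold_frames=15):
    """Returns 'jump' only if the right hand stays above the head for
    `hold_frames` consecutive frames. The gesture, joint names, and the
    y-up coordinate convention are arbitrary, made-up choices."""
    recent = skeleton_frames[-hold_frames:]
    if len(recent) == hold_frames and all(
        f["right_hand_y"] > f["head_y"] for f in recent
    ):
        return "jump"
    return None
```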
 
They just use the "depth buffer" from the camera to cut your silhouette out of the room. I think he's working on that yoga game. He says proudly that they are the only developer in the world doing that.
With that depth, they can filter out what's not you in the RGB video feed too. That's how they got the scarf. Pretty easy and fast signal processing.
They also use skeletal tracking.

[Image: Your Shape: Fitness Evolved screenshot]
 
Yes, but with a faster and much simpler way than with MS's skeletal libs (they don't use MS's skeletal libs).
It's all in the video link I posted.
If that's so, they could use background removal on PS Eye a la Kung-Fu Live and have a PS3 version, which would make a lot more financial sense.
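For comparison, a camera-only version (roughly what a PS Eye title has to fall back on) usually means some form of background subtraction against a reference shot. A rough sketch, assuming a static camera and a captured empty-room frame; it's far more fragile than a depth mask, since lighting changes, shadows, and clothes that match the wall all break it:

```python
import numpy as np

def rgb_background_mask(frame, empty_room, threshold=30):
    """Camera-only background subtraction (illustrative only):
    mark pixels that differ enough from a reference shot of the empty room.
    frame, empty_room: (H, W, 3) uint8 images from a fixed camera."""
    diff = np.abs(frame.astype(np.int16) - empty_room.astype(np.int16))
    return diff.sum(axis=2) > threshold   # True where something (hopefully you) is
```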
 
A nice video about Natal on the Engadget Show.

http://www.engadget.com/2010/06/24/the-engadget-show-010-jimmy-fallon-kudo-tsunoda-microsoft-k/

Jump to about the 50-minute mark, where they show what Kinect actually sees and the skeleton points they are interpolating from the depth sensor. I thought it was super impressive because it looked super responsive, the depth image seemed very detailed, and the camera had no trouble separating two or even three people in view, even when they were overlapping.

Before that they show a few games too; all of them seemed really responsive, especially Kinect Adventures, which looked way better than what I've seen from E3.
 
Interesting interview from Gamasutra regarding how Kinect lag can be minimized.

Blitz Talks Indies Program, Kinect Development, Claims 'No Lag' In Kinect

I've seen two different Kinect fitness games now, and obviously the person playing is trying to sync their movements with what they see the avatar doing on the screen; but the representation of themselves on the screen is off-step with him because the camera is a bit delayed. How do you reconcile that?

AO: There are various technologies involved. Some people are using a skeletal system, and it takes a little bit of time to calculate. It’s only a split second. We're actually using a different masking system, which can tighten things up. But this is all software-based, so where some people might see some little cracks, they're easily fixable by software. That is, the camera fundamentally works and gives you the input; game designers are running forward in a completely new area and learning this stuff. It's like any console. The first few games will look like nothing compared to second and third generation.

Would you say there's not actually any delay between the camera and what's happening onscreen?

AO: It depends on what technology you're using. I have seen a few games with a bit of lag, but that is the software choice of the creators; they've programmed it a certain way, and they'll come up with new techniques. We will tighten and tighten it. There doesn't need to be a lag. We can get it down to maybe two frames behind, which is pretty insignificant; you won't notice. We're just learning new tricks. Ours is pretty tight.

http://www.gamasutra.com/view/news/...inect_Development_Claims_No_Lag_In_Kinect.php
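For a rough sense of what "two frames behind" means in wall-clock time, assuming the commonly quoted 30 Hz update rate for Kinect's camera (and ignoring any game and display latency on top):

```python
fps = 30                                  # assumed Kinect colour/depth frame rate
frames_behind = 2                         # the figure claimed in the interview
latency_ms = frames_behind * 1000 / fps
print(f"{latency_ms:.1f} ms")             # ~66.7 ms of camera-side delay
```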
 
Well, that's kinda cheating. ;) The skeleton tracking has lag. Ubisoft have abandoned MS's Kinect skeleton tracking in favour of their own low-latency solution, which defeats in part the wonderfulness of Kinect. How much of their low-latency solution is dependent on the depth information, and how much could be achieved with a basic camera? If the latter, Kinect ends up being overkill for this title.

This also means the converse: skeleton-tracking titles will have the higher latency, and titles with low latency probably aren't using it.

The Engadget link posted by LightHeaven shows the skeleton tracking lag, which doesn't look too bad until the movements get quite fast. It also shows a freaking-out skeleton mapping with the guy dressed in black, and the actual depth info is full of holes. It looks like clothing has quite an impact. That said, the 3D image is remarkably good. I'd love someone to create some object scanning system with this!
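On the object-scanning idea: the raw ingredient is just back-projecting each depth pixel into a 3D point using the camera intrinsics, then registering clouds from several views into one model. A minimal sketch of that first step (Python/NumPy; the focal length and principal point below are placeholder pinhole-camera values, not Kinect's real calibration):

```python
import numpy as np

def depth_to_point_cloud(depth_mm, fx=580.0, fy=580.0, cx=320.0, cy=240.0):
    """Back-project a (H, W) depth image into an (N, 3) point cloud.
    fx, fy, cx, cy are placeholder intrinsics; a real scanner would use
    the sensor's calibrated values and stitch multiple views together."""
    h, w = depth_mm.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_mm / 1000.0                  # millimetres -> metres
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.dstack((x, y, z)).reshape(-1, 3)
    return points[points[:, 2] > 0]        # drop the holes (zero depth)
```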
 

Well, at least it's good to know that developers have options in how to use Kinect, creating new software solutions depending on what they need, and I hope that with time the lag can be reduced even when using skeleton tracking.
 
What I was taught is that good programming is all about cheating.
Of course, but that's missing the point entirely just to pick on a phrase. If everyone uses an optical solution to get low-lag Kinect gaming, what's the point in the depth perception, or certainly skeleton tracking? That's the major selling point of Kinect, and it's unfortunate to hear of a developer ignoring it. As a user experience that's valid, and I have no complaints about their choice. In terms of Kinect technology though, wondering about the future of input technologies, don't you consider it something of an eye-opener that the touted skeleton tracking is being avoided to get low-latency input? I certainly do.
 