Kinect 2: the second iteration motion control

Xenio

This is a good demo that shows the improvements over the old Kinect:

http://video.wired.com/watch/new-xbox-kinect-exclusive-wired-video-398878

It detects every single element of the face, individual eyes, mouth, etc., and analyzes the voice to do emotion modelling.

It does muscular tension recognition, and it can see whether any skeletal node rotates.

And the latency is very low, hard to spot.

This time around, a 1080p camera enlarges the sensor’s field by 60 percent—a fact that the entertainment division’s lanky hardware guru, Todd Holmdahl, demonstrates for me by walking his 6' 4" frame toward the sensor. Even 3 feet away, the Kinect's onscreen display clearly registers his entire body, and he still has room to lift his hands above his head.

The camera can also capture video at 60 frames per second for two-way services like Skype—but more impressive still are the Kinect’s tracking capabilities. It’s now so sensitive that it can measure your pulse by monitoring pigmentation change in your face. (It’s partially done via infrared light, which means it works regardless of skin tone.)

The original sensor mapped people in a room using “structured light”: It would send out infrared light, then measure deformities in the room’s surfaces to generate a 3-D depth map. However, that depth map was lo-res to the degree that clothing and couch cushions were often indistinguishable. The new model sends out a modulated beam of infrared light, then measures the time it takes for each photon to return. It’s called time-of-flight technology, and it’s essentially like turning each pixel of the custom-designed CMOS sensor into a radar gun, which allows for unprecedented responsiveness—even in a completely dark room. (See the video for the evidence.)

In fact, the Kinect will be used for that most fundamental of tasks: turning the whole thing on. Xbox One utilizes multiple power states; it can thus ramp up as needed and consume different amounts of juice depending on use, whether games or movies. And it also possesses a low-power standby mode, allowing Xbox Live and game updates to be pushed to the Xbox One overnight — or whenever the box knows your usage is lowest — without keeping the console all the way on. (Don’t worry; you can still play a single-player game without being connected to the Internet.) It also means that when you walk into your room and say “Xbox on,” the Kinect sensor hears you and turns on your entire setup via infrared blast: TV, Xbox One, even your cable box.
 
Looks like a massive improvement. The wrist/thumb tracking is clearly still a bit of a stretch, and I don't buy the heart rate thing being reliable (at least not from this video; I doubt his heart rate was 60 ;) ), but assuming they are also dropping the restriction that games can't use the controller alongside it, that should be much less of a problem. The fact that it works in the dark as well as it does is really important, and I think one that Sony didn't bother enough about. Also, Kinect being default now will be a massive improvement in terms of OS-level and game support.

I think the danger for Sony is that while they have some of this stuff covered with the PS Eye / controller combo, they won't be able to rely on it as well because their system won't work in the dark, they will struggle to find a balance between what to use where (PS Eye, controller, or Move), and they won't have a unified vision and SDK right out of the box, meaning Microsoft will have a big leg up on them in that area.

We'll see and know more come E3 though ...
 
Wow, TOF! Interested in seeing investigations into this. The depth tech is mm-level, which is something TOF was supposed to struggle with. I don't suppose the camera is cheap.
 
I don't suppose the camera is cheap.
The optics look pretty dang major, 10-20x larger (maybe more) than any regular webcam's. Also check out the IR LEDs used: three very substantial pieces as well. They probably cost a bundle, and it's not surprising they need to be fan cooled.
 
It might be interesting to look at how the TOF technology actually works:

http://www.imagesensors.org/Past Workshops/2009 Workshop/2009 Papers/058_paper_Oggier_invited.pdf

seems kinda complex to me

[Diagram: time-of-flight camera principle]


The simplest version of a time-of-flight camera uses light pulses. The illumination is switched on for a very short time, the resulting light pulse illuminates the scene and is reflected by the objects. The camera lens gathers the reflected light and images it onto the sensor plane. Depending on the distance, the incoming light experiences a delay. As light has a speed of approximately c = 300,000,000 meters per second, this delay is very short: an object 2.5 m away will delay the light by:

t_D = 2 * D / c = 2 * 2.5 m / (300,000,000 m/s) ≈ 16.67 ns


The pulse width of the illumination determines the maximum range the camera can handle. With a pulse width of e.g. 50 ns, the range is limited to

D_max = c * t_0 / 2 = 300,000,000 m/s * 50 ns / 2 = 7.5 m
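Both numbers are easy to sanity-check. A quick sketch in plain Python, using only the constants given in the text:

```python
# Pulsed time-of-flight sanity check (values from the text above).
C = 3.0e8  # speed of light, approximately, in m/s


def round_trip_delay(distance_m):
    """Delay for light to travel to an object and back."""
    return 2.0 * distance_m / C


def max_range(pulse_width_s):
    """Maximum unambiguous range for a given illumination pulse width."""
    return C * pulse_width_s / 2.0


print(round_trip_delay(2.5) * 1e9)  # ≈ 16.67 ns for an object 2.5 m away
print(max_range(50e-9))             # ≈ 7.5 m for a 50 ns pulse
```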


These short times show that the illumination unit is a critical part of the system. Only with some special LEDs or lasers is it possible to generate such short pulses.

The single pixel consists of a photo sensitive element (e.g. a photo diode). It converts the incoming light into a current. In analog timing imagers, connected to the photo diode are fast switches, which direct the current to one of two (or several) memory elements (e.g. a capacitor) that act as summation elements. In digital timing imagers, a time counter, running at several gigahertz, is connected to each photodetector pixel and stops counting when light is sensed.

In the diagram of an analog timer, the pixel uses two switches (G1 and G2) and two memory elements (S1 and S2). The switches are controlled by a pulse with the same length as the light pulse, where the control signal of switch G2 is delayed by exactly the pulse width. Depending on the delay, only part of the light pulse is sampled through G1 into S1; the other part is stored in S2. Depending on the distance, the ratio between S1 and S2 changes as depicted in the drawing. Because only a small amount of light hits the sensor within 50 ns, not just one but several thousand pulses are sent out (repetition rate tR) and gathered, thus increasing the signal-to-noise ratio.

After the exposure, the pixel is read out and the following stages measure the signals S1 and S2. As the length of the light pulse is defined, the distance can be calculated with the formula:

D = (c * t_0 / 2) * (S2 / (S1 + S2))


In the example, the signals have the following values: S1 = 0.66 and S2 = 0.33. The distance is therefore:

D = 7.5 m * (0.33 / (0.66 + 0.33)) = 2.5 m
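The gate-ratio calculation is simple enough to sketch in a few lines of Python, using the example values from the text:

```python
C = 3.0e8  # speed of light, approximately, in m/s


def tof_distance(s1, s2, pulse_width_s):
    """Distance from the two gated charge sums of a pulsed ToF pixel.

    S1 holds the early part of the returning pulse, S2 the late part;
    the later the pulse arrives, the larger S2 is relative to S1.
    """
    return (C * pulse_width_s / 2.0) * s2 / (s1 + s2)


print(tof_distance(0.66, 0.33, 50e-9))  # ≈ 2.5 m, matching the worked example
```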


In the presence of background light, the memory elements receive an additional part of the signal. This would disturb the distance measurement. To eliminate the background part of the signal, the whole measurement can be performed a second time with the illumination switched off. If the objects are further away than the distance range, the result is also wrong. Here, a second measurement with the control signals delayed by an additional pulse width helps to suppress such objects. Other systems work with a sinusoidally modulated light source instead of the pulse source.
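The background-light correction described above can be sketched the same way. This is just an illustration of the idea (subtract an ambient-only exposure from each gate before taking the ratio), not Kinect's actual pipeline; the 0.10 ambient values are made up for the example:

```python
def tof_distance_corrected(s1, s2, b1, b2, pulse_width_s):
    """Distance with background-light correction.

    b1, b2 are the charge sums from a second exposure taken with the
    illumination switched off (ambient light only); subtracting them
    leaves just the pulse's contribution in each gate.
    """
    s1c = s1 - b1
    s2c = s2 - b2
    return (3.0e8 * pulse_width_s / 2.0) * s2c / (s1c + s2c)


# Same scene as the worked example, but with 0.10 of ambient light
# leaking into each gate:
print(tof_distance_corrected(0.76, 0.43, 0.10, 0.10, 50e-9))  # ≈ 2.5 m again
```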

This is a simple implementation; I have no idea how much more complex the Kinect 2 is.
 
Yeah, I am being pedantic, but still, this is a tech forum; let's be technical ;)

It does say approximately. Doesn't seem particularly important at distances of 3 - 10 feet. ;)

I'm guessing the Xbox One will come with a Kinect Sports type game. Maybe not. Hopefully something like that will be ready for E3. I'm curious to see how well the new skeletal processing picks up overlapping limbs. It looked a lot better in the demo vids I saw. Every once in a while I did see a little glitch, but nothing like Kinect 1.
 
Around 0:32 in the video you can see that the controllers are emitting as well. It seems the area that now houses the guide button simply contains IR emitters. A bit similar to the DS4 setup, but without the visible color bars.
 
Just thought you guys might be interested in this video. It's from a video amplification paper that was presented at SIGGRAPH 20xx. It shows heartbeat detection through video, and it's impressively accurate.

 
Around 0:32 in the video you can see that the controllers are emitting as well.
I think it's simply reflective. Since the Kinect itself is putting out a fair amount of IR already, there's probably little reason to burn power on IR LEDs in the controller...
 
This is a good demo that shows the improvements over the old Kinect:

http://video.wired.com/watch/new-xbox-kinect-exclusive-wired-video-398878

It detects every single element of the face, individual eyes, mouth, etc., and analyzes the voice to do emotion modelling.

It does muscular tension recognition, and it can see whether any skeletal node rotates.

And the latency is very low, hard to spot.

Wow! Now THAT is what I've been waiting for with the new Xbox. I'm far more impressed with that than the whole official reveal. Now if we see some games make good use of this (which they no doubt will), then I'm fully on board!
 
Kinect 1 allowed props... maybe Kinect 2 still allows props.
Some games on Kinect 1 also suggested using props.
 
Yeah, I am being pedantic, but still, this is a tech forum; let's be technical ;)

Technically light never travels at less than c. What is taught in introductory physics classes in high school/college is referring to a sort of 'drift velocity' of light, accounting for the time-averaged promotions/decays of the electronic states in the atoms/molecules of the medium. ;)


Sooo...yeah, Kinect eh? This looks extremely impressive. They are doing way more than I had expected, and I was basing my already optimistic expectations on their MSR and patent works. Here is another video. This one shows off the voice filtering too, which is likewise amazing: http://arstechnica.com/gaming/2013/05/video-watch-us-flail-in-front-of-the-xbox-ones-new-kinect/

So errr... at what point does this actually start offering compelling, unique gaming interactions for core gaming? Because these vids seem to indicate the tech is absolutely there. This could be a big deal. A VERY big deal. :D
 
I thought Kinect 2 was the most impressive thing to come out of the reveal.

Just the sheer nerd factor of determining your mood, muscle dynamics and heartbeat just by watching you impresses me.

Having said that, the depth resolution isn't much better than v1: 512x424 is approx 2.8x the ridiculously low-res 320x240 depth info Kinect 1 worked with. I'd have liked to see at least VGA resolution for depth, if not 720p. They could perhaps track fingers then.
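For the record, the pixel-count math checks out:

```python
v2 = 512 * 424  # Kinect 2 depth resolution: 217,088 pixels
v1 = 320 * 240  # Kinect 1 depth resolution: 76,800 pixels
print(v2 / v1)  # ≈ 2.83x the depth pixels
```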

I wonder if Kinect has more onboard processing this time around.
 
Speed is a much more important improvement than resolution, imho. For most purposes, it seemed realtime / lag-free enough. The fact that it is now meant to be able to work well in combination with a controller is hopeful as well.

I did see another demo by the way where the heartbeat tracking did seem to work properly (varying between 60-70).

There are some other neat tools being developed in that area as well, by the way, among others coming from a competition to build something like the Star Trek medical device (I forget its name; tricorder?)
 