Kinect technology thread

So Kinect is indeed limited by MS's choice to support non-slim models, not just by manufacturing costs.

But the part where he says Kinect's cameras are actually high-res cameras left me wondering if MS went with a much higher resolution than needed for now, in order to keep the current Kinect usable when their new console launches.

It would be really cool to instantly benefit from a higher-res camera without the need to buy a new one for their next console...



But MS's full-body tracking does not seem to take several frames. Every time they show Kinect's vision alongside their skeletal tracking, the two are always in sync, and the video feed is just barely delayed, like all USB cameras. Even watching Move footage you can see that the video feed is a little bit behind, and that's a 60fps camera XD



I don't take that as trolling :p

But using color + known shapes/sizes is exactly what Sony is doing in those tech demos and Move. There's even a video where a dev shows how Move tracking works, and that you can hide part of the ball so the PS3 thinks the stick moved back.

There's nothing wrong with that, as it can be very freaking precise, but I just kinda want to see how far a depth-sensing camera alone can go in that department :p

I think what the dev was talking about is the fact that Kinect has a 640x480 depth camera, but on the 360 they can only use it at 320x240 @ 30fps.

Unless they are talking about the 1600x1200 colour image size used for taking pictures, but I'm not even sure that's used in the Kinect design.
 
Essential DF article about USB speeds limiting Kinect. Makes yer think!

Hmm, the article says the USB 2.0 connection is the limiting factor with this line:

"But they can't give the full resolution picture, at the full frame-rate, because of the USB 2.0 connection. It's just the technicalities of the Xbox."

...which would imply that USB 2.0 would limit the PC as well. So does that mean Kinect is a USB 3.0 device and that's how hackers are using it at full bandwidth? Or that the 360's USB implementation is not full 2.0 spec?
 
Hmm, the article says the USB 2.0 connection is the limiting factor with this line:

"But they can't give the full resolution picture, at the full frame-rate, because of the USB 2.0 connection. It's just the technicalities of the Xbox."

...which would imply that USB 2.0 would limit the PC as well. So does that mean Kinect is a USB 3.0 device and that's how hackers are using it at full bandwidth? Or that the 360's USB implementation is not full 2.0 spec?

The suggestion in the article is very clearly that the USB 2.0 connection is on a bus it has to share with other devices. To make sure all devices get an expected throughput, the designers have allotted each device (or device group) a portion of the available bandwidth, which leaves the USB 2.0 connection with less than its expected max throughput.

I think what is also being suggested is that Kinect won't be able to do 640x480 video in combination with the depth sensor. That's something I previously didn't expect to be possible because of lag (wrongly, in the sense that you can easily delay the feed to match the input if necessary), but now perhaps it's held back by this. Otherwise I would have expected to see it very early on, and not just the screenshots we're seeing all the time now.
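As a back-of-envelope sanity check on the bandwidth angle (all numbers below are my own illustrative assumptions, not figures from the DF article): even two full VGA feeds would fit within USB 2.0's nominal 480 Mbit/s, which supports the idea that it's the slice of the bus the 360 reserves for the device, rather than USB 2.0 itself, that's the bottleneck.

```python
# Rough raw data rates for hypothetical Kinect feeds vs. USB 2.0
# throughput. All figures are illustrative assumptions, not measured
# values from the article or the hardware.

MBIT = 1_000_000

def feed_mbit_per_s(width, height, bits_per_pixel, fps):
    """Raw (uncompressed) data rate of a video feed in Mbit/s."""
    return width * height * bits_per_pixel * fps / MBIT

color_vga  = feed_mbit_per_s(640, 480, 8, 30)   # assume 8-bit Bayer colour
depth_vga  = feed_mbit_per_s(640, 480, 16, 30)  # assume 11-bit depth padded to 16
depth_qvga = feed_mbit_per_s(320, 240, 16, 30)

usb2_theoretical = 480  # Mbit/s, USB 2.0 high-speed signalling rate
usb2_practical   = 280  # Mbit/s, rough real-world bulk throughput (assumption)

print(f"VGA colour feed:  {color_vga:6.1f} Mbit/s")
print(f"VGA depth feed:   {depth_vga:6.1f} Mbit/s")
print(f"QVGA depth feed:  {depth_qvga:6.1f} Mbit/s")
print(f"Both VGA feeds:   {color_vga + depth_vga:6.1f} Mbit/s "
      f"(vs ~{usb2_practical} Mbit/s practical USB 2.0)")
```

So roughly 220 Mbit/s for both VGA feeds combined, which a dedicated USB 2.0 link could carry, but a small allotted share of a shared bus could not.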
 
Hmm, the article says the USB 2.0 connection is the limiting factor with this line:

"But they can't give the full resolution picture, at the full frame-rate, because of the USB 2.0 connection. It's just the technicalities of the Xbox."

...which would imply that USB 2.0 would limit the PC as well. So does that mean Kinect is a USB 3.0 device and that's how hackers are using it at full bandwidth? Or that the 360's USB implementation is not full 2.0 spec?

I think the 360 USB 2.0 ports are not full spec.
 
The suggestion in the article is very clearly that the USB 2.0 connection is on a bus it has to share with other devices. To make sure all devices get an expected throughput, the designers have allotted each device (or device group) a portion of the available bandwidth, which leaves the USB 2.0 connection with less than its expected max throughput.

Ah yeah, my bad, I was eating a sandwich so I had just skimmed the article quickly. Well, that explains that. At least the good news is that the same camera will get insta-upgraded on the next-gen machines, which is cool.
 
Ah yeah, my bad, I was eating a sandwich so I had just skimmed the article quickly. Well, that explains that. At least the good news is that the same camera will get insta-upgraded on the next-gen machines, which is cool.

Joker, have you had a go at Kinect? What are your impressions of the device?
 
Essential DF article about USB speeds limiting Kinect. Makes yer think!

Oh, so they did go with the original-res depth cam after all - I'm satisfied that I was wrong about the downgrade to QVGA.

And wouldn't it be possible to use the full depth cam res, despite the USB 2.0 limitation?
Say Kinect transmits a 320x240 colour feed and 640x480 depth feed, instead of the opposite which it currently uses?

Can the guys working on Kinect homebrew use the full bandwidth, since their PCs support USB 2.0, or can Kinect only output at USB 2.0 spec?
 
Joker, have you had a go at Kinect? What are your impressions of the device?

Nopers, never tried it. I actually don't have much interest personally in motion controls, although my wife has mentioned wanting one, so I'll probably get it eventually. Amusingly though, I can probably use Kinect for my new business, so I could tax-deduct it :)
 
The depth camera might be 640x480 but the processing of the IR image by the onboard processor might use 4 pixels to calculate the depth.
 
Too many vids to list at once, but there are some really interesting homebrew applications here: http://kinecthacks.net/

Also, I find it curious that Your Shape has almost zero lag, but appears to be using the skeletal tracking and depth sensor data. I wonder what they are doing differently that enables such a difference, and whether we can expect less noticeable lag in future titles.

So far, for me, only Kinect Adventures' lag really stands out, with a little noticeable in Table Tennis in Kinect Sports.
 
Nopers, never tried it. I actually don't have much interest personally in motion controls, although my wife has mentioned wanting one, so I'll probably get it eventually. Amusingly though, I can probably use Kinect for my new business, so I could tax-deduct it :)

I would be really curious about what you'd have to say on the technology if you ever got hold of one.
 
The depth camera might be 640x480 but the processing of the IR image by the onboard processor might use 4 pixels to calculate the depth.
The tech is all starting to get a little confusing now! According to PrimeSense, the depth data is a quarter the res of the CCD data, but the PC shows this isn't the case - or the camera is ~1280x960 in res. Then there is the fact that the PC is getting full-res feeds when the 360 doesn't; there must be a communication protocol to tell the hardware "I just want a low-res feed", with the Kinect device downscaling for the lower-bandwidth port.
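If the onboard processor really does combine blocks of IR pixels into single depth values, a 640x480 sensor naturally yields a 320x240 depth map. A toy sketch of that kind of 2x2 reduction (entirely my own guess at the idea, not PrimeSense's actual algorithm):

```python
# Hypothetical sketch: deriving one depth value per 2x2 block of
# IR/depth pixels, so a 640x480 sensor produces a 320x240 map.
# The averaging scheme is an invented simplification.

def downsample_depth(buf, width, height):
    """Average each 2x2 block of a row-major pixel buffer."""
    out = []
    for y in range(0, height, 2):
        row = []
        for x in range(0, width, 2):
            block = (buf[y * width + x] + buf[y * width + x + 1] +
                     buf[(y + 1) * width + x] + buf[(y + 1) * width + x + 1])
            row.append(block // 4)
        out.append(row)
    return out

# Tiny 4x4 example buffer -> 2x2 result
buf = [10, 10, 20, 20,
       10, 10, 20, 20,
       30, 30, 40, 40,
       30, 30, 40, 40]
print(downsample_depth(buf, 4, 4))  # [[10, 20], [30, 40]]
```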

I like this one:
The biggest question has to be why MS didn't release an XNA kit. There'd be plenty more people exploring ideas, with the full skeleton tracking to boot.
 
Too many vids to list at once, but there are some really interesting homebrew applications here: http://kinecthacks.net/

Also, I find it curious that Your Shape has almost zero lag, but appears to be using the skeletal tracking and depth sensor data. I wonder what they are doing differently that enables such a difference, and whether we can expect less noticeable lag in future titles.

So far, for me, only Kinect Adventures' lag really stands out, with a little noticeable in Table Tennis in Kinect Sports.

Because Your Shape is more like the EyeToy games & uses your image on screen instead of using the Kinect tracking points to control an on-screen avatar.

The avatar's controls are made up of a lot of preset gestures, & Kinect uses the tracking points to match your movements up with these preset gestures that they have in their database; it has to guess at your movements every frame & recreate them on screen.

That's what I remember from the patent & a few articles I read.

With Your Shape they don't have to recreate your movements, because it's you on the screen.
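The "match tracking points against a preset gesture database" idea described above could be sketched roughly like this - the pose names, joints, and coordinates are all invented for the example, and real skeletal tracking is far more involved:

```python
# Toy illustration of matching tracked joint positions against a small
# database of preset poses. Everything here (pose names, joints,
# coordinates) is made up for the example.

import math

def pose_distance(tracked, preset):
    """Sum of Euclidean distances between corresponding joints."""
    return sum(math.dist(tracked[j], preset[j]) for j in preset)

def best_match(tracked, database):
    """Return the name of the preset pose closest to the tracked joints."""
    return min(database, key=lambda name: pose_distance(tracked, database[name]))

database = {
    "arms_down": {"l_hand": (-0.3, -0.5), "r_hand": (0.3, -0.5)},
    "arms_up":   {"l_hand": (-0.3, 0.9),  "r_hand": (0.3, 0.9)},
    "t_pose":    {"l_hand": (-0.8, 0.0),  "r_hand": (0.8, 0.0)},
}

# A noisy tracked frame: arms stretched out roughly sideways
tracked = {"l_hand": (-0.75, 0.05), "r_hand": (0.82, -0.02)}
print(best_match(tracked, database))  # t_pose
```

Every frame the closest preset wins, which is also why such a system has to "guess" when your actual movement falls between the gestures it knows about.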
 
That doesn't sound quite right. Your Shape simply does its own interpretation of the 3D feed directly. As there are very fixed scenarios to which it needs to match your movements, it is much more efficient to do it like that. And particularly with stuff like Tai Chi the movements are very slow, so you can go for a great deal of accuracy too, and you're not limited to the 22 joints that the avatar stuff gives you.

Basically, for everything that has you, the player, match an example on screen (which includes Dance Central), it is very likely easier, and more importantly faster, to do so by directly taking the 3D feed.
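A crude sketch of what "directly comparing the 3D feed against an expected example" could look like - scoring the player's depth silhouette against a reference silhouette for the current move, with no skeleton in between. The frames and scoring are my own invented simplification:

```python
# Crude sketch: score a player's depth-derived silhouette directly
# against a reference silhouette for the expected pose, instead of
# going through tracked skeleton joints. Frames are invented 0/1
# occupancy grids from a thresholded depth map.

def silhouette_overlap(player, reference):
    """Fraction of 'on' reference cells the player frame also covers."""
    on = sum(reference)
    hit = sum(1 for p, r in zip(player, reference) if p and r)
    return hit / on if on else 0.0

reference = [0, 1, 1, 0,
             1, 1, 1, 1,
             0, 1, 1, 0]
player    = [0, 1, 1, 0,
             1, 1, 0, 1,
             0, 1, 1, 0]

score = silhouette_overlap(player, reference)
print(score)  # 0.875
```

Because each frame is scored independently against a known target, there is no per-frame guessing at joint positions, which fits the near-zero lag observed in Your Shape.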
 
That doesn't sound quite right. Your Shape simply does its own interpretation of the 3D feed directly. As there are very fixed scenarios to which it needs to match your movements, it is much more efficient to do it like that. And particularly with stuff like Tai Chi the movements are very slow, so you can go for a great deal of accuracy too, and you're not limited to the 22 joints that the avatar stuff gives you.

Basically, for everything that has you, the player, match an example on screen (which includes Dance Central), it is very likely easier, and more importantly faster, to do so by directly taking the 3D feed.

Didn't Turn 10 also make their own way to process Kinect data instead of using the standard way? Because they already had sitting down and squatting in-game with the Forza Kinect demo.
 
Because Your Shape is more like the EyeToy games & uses your image on screen instead of using the Kinect tracking points to control an on-screen avatar.

The avatar's controls are made up of a lot of preset gestures, & Kinect uses the tracking points to match your movements up with these preset gestures that they have in their database; it has to guess at your movements every frame & recreate them on screen.

That's what I remember from the patent & a few articles I read.

With Your Shape they don't have to recreate your movements, because it's you on the screen.
It's not just the image; they definitely track your joints and position in 3D space.

For example: there are workouts where you have to punch and kick green boxes. The game can tell if you used the correct arm/leg and if you reached far enough to hit them.
 
That doesn't sound quite right. Your Shape simply does its own interpretation of the 3D feed directly. As there are very fixed scenarios to which it needs to match your movements, it is much more efficient to do it like that. And particularly with stuff like Tai Chi the movements are very slow, so you can go for a great deal of accuracy too, and you're not limited to the 22 joints that the avatar stuff gives you.

Basically, for everything that has you, the player, match an example on screen (which includes Dance Central), it is very likely easier, and more importantly faster, to do so by directly taking the 3D feed.

What doesn't sound quite right? You said pretty much the same thing I said; we just went different ways about saying it.

Your Shape uses the video from the 3D depth camera much like the EyeToy games used the video from the EyeToy, but with the depth cam you get on-the-fly background removal, so it's just your image on the screen & you have depth information.
 