Kinect technology thread

Thread spawned from the epic motion control thread. Here we discuss just the implementation of Kinect. I've copied over a few posts as a starter. It'd be worth having a link to a description of the tech if anyone can provide me with something conclusive.

They were at CES; this is likely what people were seeing at the show.

Interestingly, their official site links to an article saying that they are in competition with Natal:

http://www.gamerlive.tv/article/ces-2010-video-prime-sense-offers-project-natal-competition

It's an Israeli company. What was the name of the Israeli company that MS bought?

This device apparently does the processing locally, but only costs $20-$30 for companies to incorporate into their devices? If true, that makes MS's decision to remove the processor to get to a $50 price point look a little strange. The link gives a bit of info on how it works too:

http://www.pcworld.com/article/186552/minority_report_interface_shown_at_ces.html

At $20 it sounds like a good alternative to the PSEye for the Arc system :LOL:


EDIT: Actual specs: http://www.primesense.com/category/reference_design

Quite impressive: accurate to 3mm on the x/y axes and 1cm on z, at a distance of 2 meters. I'd expect finger tracking to be possible at that level of accuracy.
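As a rough sanity check on those numbers (using my own assumed figures of a 640-pixel-wide depth map and a ~58 degree horizontal field of view, which aren't taken from that spec page), the per-pixel spacing at 2 meters comes out around 3.5mm, so the 3mm claim, and maybe even coarse finger tracking, looks plausible:

import math

# Rough sanity check of x/y resolution at 2 m.
# Assumed values (not from the PrimeSense page): 640-pixel-wide depth map,
# ~58 degree horizontal field of view.
fov_h_deg = 58.0
width_px = 640
distance_m = 2.0

# Width of the visible area at 2 m, then millimetres covered per depth pixel.
visible_width_m = 2.0 * distance_m * math.tan(math.radians(fov_h_deg / 2.0))
mm_per_pixel = visible_width_m / width_px * 1000.0

print(f"visible width at {distance_m} m: {visible_width_m:.2f} m")
print(f"~{mm_per_pixel:.1f} mm per pixel on the x/y axes")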
Looking further into it, MS bought the company 3DV and was also working with PrimeSense in some capacity. I'd guess that they are using the time-of-flight camera tech from 3DV (or their tech is based on the same principle, so they bought them for legal reasons), and possibly the work they did with PrimeSense was based on their NITE middleware, which is the part that works out the skeletal structure from the 3D image produced by the sensor. Total guess though.

Perhaps MS didn't ditch the internal processor after all? The PrimeSense site says that they need one of those fancy TVs with onboard processors to be able to use their camera...

What I mean is that perhaps the performance figure MS gave was just the overhead for using the system, even if all the processing remains inside the camera. (TBH, I thought 10-15% of the 360's resources for all the computing Natal is supposed to be doing seemed rather low.)

Edit: On another part of the site they say that virtually no processing is required on the host computer when using their camera...

But I just read an interview, and it seems the reason their camera is less expensive is not the on-board processor, but that they don't use time of flight to evaluate depth, which, according to them, needs some expensive sensors...

On another site I saw it claimed that they actually use the infrared LED to project "bar codes" into the room; the sensor, which is really simple because it doesn't need the super-fast shutter speed time of flight requires, only receives that image, and depth is evaluated based on the displacement of those bar codes... Though I don't know whether that's true or not.
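If that's right, it's essentially structured-light triangulation: the projector and the sensor sit a known distance apart, and how far the projected pattern appears shifted in the sensor image tells you the depth. A toy sketch of the principle (the baseline and focal length below are made-up illustration numbers, not anything from PrimeSense):

# Toy structured-light depth-from-disparity, to illustrate the principle only.
# Assumed/made-up numbers: 7.5 cm projector-to-sensor baseline and a
# 580-pixel focal length (not taken from any PrimeSense spec).
BASELINE_M = 0.075
FOCAL_PX = 580.0

def depth_from_disparity(disparity_px: float) -> float:
    """Depth in metres for a pattern feature shifted by `disparity_px` pixels
    between where the projector 'expects' it and where the sensor sees it."""
    if disparity_px <= 0:
        return float("inf")  # no measurable shift -> effectively at infinity
    return FOCAL_PX * BASELINE_M / disparity_px

# Closer objects shift the pattern more, so disparity is larger:
for d in (10.0, 21.75, 43.5):
    print(f"disparity {d:5.2f} px -> depth {depth_from_disparity(d):.2f} m")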
 
I really don't understand why they would bother with the UI navigation if you had to stand to use it. Sure, the voice commands would still be good. It's nice to be able to pause quickly as you're running to answer the phone. But if you need a controller to navigate the menus while sitting anyway, then what's the point? The remote is already in your hands or sitting very close.

As for games, it would definitely limit augmentation of traditional games. Essentially, it becomes a standing full-body platform only. I'm not sure that's a deal breaker, because it's the main reason you'd be buying the thing anyway, but it's definitely a strike against the peripheral if true. A good boxing game will still sell me on this thing.

One thing I'm not sure about in the Kotaku speculation is why it might have problems. Body occlusion shouldn't be the issue here, and if it is, Kinect has much bigger problems than detecting people sitting down.

AFAIK it's the blindingly obvious problem.
http://www.primesense.com/?p=487
(That product is either the Natal camera, or something very close to it.)

--

There's a depth image at the bottom of the page showing a lone man, then the same image converted to a skeleton. I'm guessing that extracting the skeleton from that image uses the resources that Rare reported to journalists: "150ms lag, 10-15% CPU usage", or something like that.

--

In the middle of the page there's another picture of two people on a couch. I'd expect that extracting the skeletons from that picture uses resources "several orders of magnitude" greater than for the first image. Even then, the skeletons won't be entirely accurate.

(Note that this isn't even a terrible case: the couch isn't soft/deformed, the people are sitting apart, and there aren't any objects or pets on the couch, etc.)

Personally, I'm not sure there's a snowball's chance in hell of it working reliably on a couch.
 
AFAIK it's the blindingly obvious problem.
http://www.primesense.com/?p=487
(That product is either the Natal camera, or something very close to it.)
...
In the middle of the page there's another picture of two people on a couch. I'd expect that extracting the skeletons from that picture uses resources "several orders of magnitude" greater than for the first image. Even then, the skeletons won't be entirely accurate.
This pic shows the low depth variation, equivalent to low contrast in optical-analysis terms, that would result in problems similar to the PSEye trying the same job.

However, once the skeleton is mapped, physical constraints tell Kinect that the knee isn't going to be just to the left of the torso. I'm wondering if the problem is actually one of persistent tracking? The skeleton tracking will work, but when points can't be seen, Kinect has to guess, at which point errors can creep in. We've seen crazy, freaky arms and legs where it's got confused with standing players, so the chance of chaotic data from seated, fidgety users is high.
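To illustrate what I mean by physical constraints and guessing (this is just a toy sketch of the general idea, not how Kinect actually does it): when a joint drops out of the depth image you can dead-reckon it from its last velocity, then clamp it back to a plausible bone length from its parent joint, and the longer you have to keep guessing, the more the errors creep in.

import numpy as np

# Toy per-joint tracker: not Kinect's algorithm, just an illustration of
# using a bone-length constraint when a joint is occluded.
def update_joint(parent_pos, joint_pos, joint_vel, bone_len, measurement):
    """Return the new joint position.

    measurement -- 3D position from the depth image, or None if occluded.
    """
    if measurement is not None:
        return np.asarray(measurement, dtype=float)

    # Occluded: dead-reckon from the last known velocity...
    guess = joint_pos + joint_vel

    # ...then clamp the guess so the bone keeps its length from the parent.
    offset = guess - parent_pos
    dist = np.linalg.norm(offset)
    if dist > 1e-6:
        guess = parent_pos + offset * (bone_len / dist)
    return guess

# Example: knee hidden behind a coffee table for one frame.
hip = np.array([0.0, 0.9, 2.0])
knee = np.array([0.0, 0.5, 2.0])
knee_vel = np.array([0.02, 0.0, 0.0])
print(update_joint(hip, knee, knee_vel, bone_len=0.4, measurement=None))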

That Kotaku article where MS spokespersons say, "we're calibrating it," points to as much in my mind. In principle the depth perception and skeleton mapping work, but they're not robust enough to be a consumer experience, and they're tweaking it in the hope of getting there, though it may not make it. That doesn't make their statements a lie, as Kinect is more than just arm waving, and voice interfacing will still work. The experience "will vary".
 
Don't know if this has already been posted, but I saw this:

http://www.neogaf.com/forum/showthread.php?t=398610

They responded to the Kotaku article and basically admitted that currently, body tracking / gesture control does not work while sitting for media control functions, but that they were hoping to fix it for those specific functions before launch.

They did not respond as to whether it will ever work for games. But if they were actually telling developers not to use sitting in games, as reported by Totilo, it doesn't really matter - again, if it's not ready by now, a few months before launch, there won't be enough time for developers to create any new, meaningful games that use the body tracking functions while you're sitting. I don't think it's the sort of thing developers can just flip a switch for in the middle of multi-year game development. EDIT: That is, developers have to assume for now that Kinect games will require players to stand, and they're going to base their design decisions on that - can't be an hours-long experience, has to happen in short bursts that people can tolerate.

http://kotaku.com/5565777/xbox-kinect-does-not-play-well-with-couch-potatoes

UPDATE: A Microsoft spokesperson told me after the publication of this article that the company is certain that Kinect gesture control will work for movies, ESPN and other "entertainment" features before the sensor is launched. As I originally reported, that is not an implemented feature yet. The spokesperson was not able to provide any update on the Kinect's tolerance of a person who sits while playing games.
 
My main suspicion is that they went from color-primary tracking in Natal to depth-primary, but when color is ignored (as of now?) depth might not carry enough information when sitting down.

They are probably adding back color tracking (contrast detect + stuff; pa/ma, that's the way we rotoscope :LOL:) and defining the fine lines between what gets through and what doesn't. It's not as critical an issue as latency/consistency, so I do trust that it gets fixed.

*If contrast-detect points get a lock-on, then tracking is very hard to lose.
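For the curious, contrast-point tracking is the kind of thing standard vision toolkits already do. A minimal OpenCV sketch of the general idea (purely my illustration, nothing to do with MS's actual pipeline): find high-contrast corners in one RGB frame and follow them into the next.

import cv2

# Minimal contrast-point tracking between two RGB frames, as an illustration
# of the general idea only (not Microsoft's pipeline).
def track_contrast_points(frame_a, frame_b):
    gray_a = cv2.cvtColor(frame_a, cv2.COLOR_BGR2GRAY)
    gray_b = cv2.cvtColor(frame_b, cv2.COLOR_BGR2GRAY)

    # Pick high-contrast corners in the first frame...
    points = cv2.goodFeaturesToTrack(gray_a, maxCorners=200,
                                     qualityLevel=0.01, minDistance=7)
    if points is None:
        return [], []

    # ...and follow them into the second frame with sparse optical flow.
    new_points, status, _err = cv2.calcOpticalFlowPyrLK(gray_a, gray_b,
                                                        points, None)
    ok = status.ravel() == 1
    return points[ok].reshape(-1, 2), new_points[ok].reshape(-1, 2)

# Usage: old_pts, new_pts = track_contrast_points(prev_frame, curr_frame)
# Once a point has a lock, frame-to-frame tracking like this is hard to lose
# unless the point is occluded or leaves the frame.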
 
In theory they should always be able to do it - you could do most of this with the PlayStation Eye or even EyeToy, and they can still enhance it with their IR image to get rid of most light-interference issues. But they've probably worked so hard on getting their SDK ready to do full-body tracking, and it was probably so much more work than they expected, that they're not ready for it. They should, at least in theory, be able to model just half a body (the upper half), but they'll probably have to do at least 25% of the work they've done for full-body tracking, just to optimise it for that part of the body. I have no doubt they can pull it off, but they may run out of time for launch.
 
Sony Kinect? Wii Kinect? PC Kinect?

What is stopping a PC or even an iPhone developer from making a Kinect title? Is the data coming out of Kinect encrypted?
 
What is stopping a PC or even an iPhone developer from making a Kinect title?
Licensing. MS won't allow their runtimes to be used in non-approved apps. However, they may well (and indeed should) make Kinect available to XNA for PC development, expanding the device's reach and enabling Kinect in media PCs. Then again, they may have to wait until sitting support is robust! But they should definitely throw it out there. There's a good chance that any issues they may have with lighting or interference could find a solution from the indie scene. Lots of smart minds out there just looking for problems to solve. ;)
 
Licensing. MS won't allow their runtimes to be used in non-approved apps. However, they may well (and indeed should) make Kinect available to XNA for PC development, expanding the device's reach and enabling Kinect in media PCs. Then again, they may have to wait until sitting support is robust! But they should definitely throw it out there. There's a good chance that any issues they may have with lighting or interference could find a solution from the indie scene. Lots of smart minds out there just looking for problems to solve. ;)

But someone else could come up with their own runtime and, who knows, maybe even a better one.
Then you could make a cheap Linux-based machine with its own set of apps and software.
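Assuming the raw USB traffic isn't encrypted, the starting point for that kind of homebrew runtime would just be enumerating the sensor on the bus. A minimal pyusb sketch (0x045e is Microsoft's USB vendor ID; the product ID is a placeholder guess on my part, not a confirmed value):

import usb.core  # pyusb

# Minimal sketch of how a homebrew runtime might start: find the sensor on
# the USB bus. 0x045e is Microsoft's vendor ID; the product ID here is a
# placeholder guess, not a confirmed value for Kinect.
VENDOR_ID = 0x045E
PRODUCT_ID_GUESS = 0x02AE

dev = usb.core.find(idVendor=VENDOR_ID, idProduct=PRODUCT_ID_GUESS)
if dev is None:
    print("Sensor not found (or the product ID guess is wrong).")
else:
    dev.set_configuration()
    # From here a real driver would claim the interface and read the
    # depth/RGB endpoints -- again assuming the streams aren't encrypted.
    print(f"Found device: {hex(dev.idVendor)}:{hex(dev.idProduct)}")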
 
What is stopping a PC or even an iPhone developer from making a Kinect title? Is the data coming out of Kinect encrypted?
Related Q:
Is it possible for a company to make a one-handed Xbox 360 controller to be used in conjunction with Kinect?
Something simple with a stick and a couple of buttons (obviously not all the buttons, as it would be too crowded; just a couple of the important ones). Or does MS have absolute say over what peripherals are allowed with an Xbox 360? If not, then that's a great business opportunity for some company right there.
 
Related Q:
Is it possible for a company to make a one-handed Xbox 360 controller to be used in conjunction with Kinect?
Something simple with a stick and a couple of buttons (obviously not all the buttons, as it would be too crowded; just a couple of the important ones). Or does MS have absolute say over what peripherals are allowed with an Xbox 360? If not, then that's a great business opportunity for some company right there.

Why not just use the PlayStation Move controller? I'm betting that its data is not encrypted. But 360s don't have Bluetooth. Maybe someone could design a dongle?

Edit: yes, from what I remember, MS has a lock on what peripherals work on the 360. Third-party controllers require some type of key to access 360s.
 
I'm sure if a game company can come up with some kind of novel way of using Kinect with a one-handed controller, then they can go to Microsoft and try to sell the idea to them. If Microsoft likes it, then I'm sure they would license the controller tech to them. However, I think it would have to be novel on the same level as Guitar Hero for Microsoft to approve the second controller. Microsoft are probably going to be very anal about what kind of experiences will be allowed on Kinect. Adding a secondary controller to a controller-less experience might be seen as heresy.

Tommy McClain
 
AFAIK it's the blindingly obvious problem.
http://www.primesense.com/?p=487
(That product is either the Natal camera, or something very close to it.)

--

There's a depth image at the bottom of the page showing a lone man, then the same image converted to a skeleton. I'm guessing that extracting the skeleton from that image uses the resources that Rare reported to journalists: "150ms lag, 10-15% CPU usage", or something like that.

--

In the middle of the page there's another picture of two people on a couch. I'd expect that extracting the skeletons from that picture uses resources "several orders of magnitude" greater than for the first image. Even then, the skeletons won't be entirely accurate.

(Note that this isn't even a terrible case: the couch isn't soft/deformed, the people are sitting apart, and there aren't any objects or pets on the couch, etc.)

Personally, I'm not sure there's a snowball's chance in hell of it working reliably on a couch.

Wasn't the 150 ms Rare reported the entire game lag, including rendering and whatnot, and not just the Kinect delay?
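For what it's worth, 150ms is only about four to five frames at 30fps, which is easy to reach once the whole pipeline is added up. A purely made-up, illustrative budget:

# Purely illustrative end-to-end latency budget (made-up numbers), showing
# how 150 ms can be the whole pipeline rather than just the sensor.
budget_ms = {
    "camera exposure + transfer": 33,   # one 30 Hz sensor frame
    "depth -> skeleton processing": 33,
    "game logic": 17,
    "render + present": 33,
    "TV processing": 33,
}
total = sum(budget_ms.values())
print(f"total: {total} ms (~{total / (1000 / 30):.1f} frames at 30 fps)")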
 
Something simple with a stick and a couple of buttons (obviously not all the buttons, as it would be too crowded; just a couple of the important ones). Or does MS have absolute say over what peripherals are allowed with an Xbox 360? If not, then that's a great business opportunity for some company right there.

There seems to be an authentication chip in the 360 controllers, even the wired ones. Homebrew arcade-stick makers often cannibalize an (increasingly rare) wired 360 controller, though there's a company that has been making unauthorized chips.

I don't know how much MS interferes with peripheral licensing, though.
 
Wow, too much FUD. I'm switching to the wait-and-see attitude. It looks like MS, whatever the product's pros and cons, will face quite some opposition, from the web at least. To some extent it's not that relevant to their intended market, but still, for months now every time I read a hands-on (think Ricochet), every single person, instead of trying to take a proper stance to play, does their best to make the tech break...
Not that I mean in any way that it's perfect, nor that all the talk about the "sitting incident" leaves me unconcerned until there's a proper demonstration, but come on, people are asking for something perfect while every lack/weakness of the competing products, whether it's Move or the Wii, is pretty much ignored.
Anyway, I feel like MS will have a tough time demonstrating the tech at upcoming shows; I feel confident that a lot of people will come to test it with the clear intent of making the tech break... like guys running around the player to cause tracking issues, players doing crazy or crazy-fast moves unrelated to the game's needs, etc. It will prove difficult for MS to get people playing the games instead of testing the tech. I would not be completely surprised if they don't let the public freely test Kinect and instead enforce a controlled environment at this year's shows, which will only feed more FUD.
 
The bowling game does not detect wrist movement. Among others, Jeff Gerstmann points out that the dev mentioned this wasn't possible with Kinect, and the actual game uses the amount of follow-through to determine spin instead (which Jeff also mentions works fine).

(On the Move side, I saw an interview somewhere indicating 1 degree precision on every axis, so we have 360 steps of precision on each axis.)
 