The hardware in Kinect 2.0 looks to be amazing; where is the software to show it off?

It's got a lot of lag

Fairly little actually.

Yet they have glitches where joints rapidly fly out to impossible angles (impossible without breaking your leg, anyway) and then back again within a split second.
Some simple mathematical smoothing function would mitigate these (and given the lag, it's easy).

lol easy.

'Smoothing' based on future frames adds latency; extrapolating and testing against reasonable boundaries will add to the processing load and risk false negatives.

Not every game or demo will require the same level of glitch elimination. You don't know what features or goals these demos have, or what the Kinect SDK allows, and yet are attempting to draw an absolute conclusion about the technology and the skills of the developers.

This reeks of naivety.

Also, like I say: impossible body positions, parts of the body entering other parts of the body, etc.
You've got to wonder, does MS only have a single (not very good) programmer(*) working on the software?

I doubt anyone with an opinion of any value thinks that.
 
'Smoothing' based on future frames adds latency; extrapolating and testing against reasonable boundaries will add to the processing load and risk false negatives.
You will note I said 'a lot of lag', i.e. I was well aware of the induced latency involved. But because of this 'lot of lag' you have a bit of leeway, since its lagginess limits the apps you can do with it (dance/fitness/party - see Kinect's current library for examples).
What I'm saying is what is often done with input and controls, and especially nowadays with the accelerometer stuff in phones etc.
I'm not sure if you're aware, but if you use the raw input data it's often jumping all over the place (visually, like the glitches in the above vid). What you do to combat this is filter the data; it's not difficult, and the user typically doesn't notice the one-frame lag.
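For the record, by 'filter the data' I mean something as simple as a per-joint exponential (low-pass) filter. A minimal sketch, with made-up types because this isn't written against the actual Kinect SDK:

```cpp
// Minimal 3D vector standing in for whatever the SDK actually hands back.
struct Vec3 {
    float x, y, z;
};

// Simple exponential (low-pass) filter: blend each new raw sample with the
// previous filtered value. alpha = 1.0 means no smoothing; smaller values
// smooth more, but trail further behind the raw input.
class JointFilter {
public:
    explicit JointFilter(float alpha) : alpha_(alpha), hasPrev_(false), prev_{0.0f, 0.0f, 0.0f} {}

    Vec3 apply(const Vec3& raw) {
        if (!hasPrev_) {          // first sample: nothing to blend with yet
            prev_ = raw;
            hasPrev_ = true;
            return raw;
        }
        prev_.x = alpha_ * raw.x + (1.0f - alpha_) * prev_.x;
        prev_.y = alpha_ * raw.y + (1.0f - alpha_) * prev_.y;
        prev_.z = alpha_ * raw.z + (1.0f - alpha_) * prev_.z;
        return prev_;
    }

private:
    float alpha_;
    bool  hasPrev_;
    Vec3  prev_;
};
```

Each tracked joint gets its own filter; the exact alpha is a tuning choice, but values around the middle of the range kill most of the jitter for roughly a frame's worth of extra lag, which is exactly the trade-off being argued about here.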

extrapolating and testing against reasonable boundaries will add to the processing load
What?
0.00001 ms, if that. I sometimes think people who aren't programmers have no idea how fast CPUs are at these sorts of things; sanity-checking a couple of dozen joints is a few hundred operations per frame, which is well under a microsecond on a modern CPU.
 
'Smoothing' based on future frames adds latency; extrapolating and testing against reasonable boundaries will add to the processing load
Of course it will, but you spend the processing load to give a better experience. I doubt the increase in processing load to solve explosive misplacements would be that high. And you need a more reliable skeleton for gameplay reasons. What if your leg suddenly spazzes out and kicks an object you were wanting to avoid? We wouldn't tolerate false inputs from a standard controller, so tolerating them in a motion controller when they seem rudimentary to fix strikes me as an odd double standard. Why hold camera input to a lower level of reliability when it's capable of better?

Not every game or demo will require the same level of glitch elimination. You don't know what features or goals these demos have, or what the Kinect SDK allows, and yet are attempting to draw an absolute conclusion about the technology and the skills of the developers.
That's true, but at this point the assumption is that the Kinect libraries are doing all the work and are standard across the board. The video is from Project Spark, so it's intended for release by Microsoft themselves as a showcase title. And the scene is only motion capture, so all the local resources can be thrown at solving the problem. I don't see why they'd dial down the accuracy for this project when the motion capture is the only thing happening.

I think it's a fair assumption that this demo would be attempting to achieve the best results possible.

I doubt anyone with an opinion of any value thinks that.
That's out of order. You may not like zed's opinion, but he's entitled to it, and he's entered the discussion with a view that he'll argue (perhaps a little aggressively in basically calling the Kinect developers noobs).

Please, everyone, keep the debate on the subject and the technical arguments. Why is Kinect still getting tracking issues? Are these problems solvable, and if so, how?
 

Please, everyone, keep the debate on the subject and the technical arguments. Why is Kinect still getting tracking issues? Are these problems solvable, and if so, how?

Because it's one camera. You'd need 2-3+ cameras from different angles to have a good tracking system (blind spots); it's a fundamental flaw. Kinect uses virtual points on the player's body to create a skeleton model. What happens if someone strikes a strange pose and makes some of those points hidden from Kinect, or puts them where they are not supposed to be?

Another problem is that character animation in Project Spark, or any other game, is limited by the character's model. Since the characters aren't a 1:1 match with the player's body (skeleton model), they can't be used for 1:1 capture of animations. You may put your hands on your knees, but the character's hands may go wherever they want. Kinect is using a skeleton tracking model, not something like this:

[image: ilm-mocap.jpg]

or this (Hulk):
[image: ea2753226b2f28e6ded5e05caa4c44c2.jpg]

This could be solved by not using a 1:1 transfer method between the tracked player skeleton and the character's skeleton animations. There would then be no true 1:1 motion capture functionality, but the results might be better than what we are seeing in Project Spark.
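Very roughly, the difference between copying positions 1:1 and retargeting looks something like this (invented types and function, nothing to do with the actual Kinect SDK or Project Spark internals):

```cpp
#include <cmath>

// Stand-in types, not real SDK structures.
struct TrackedJoint  { float x, y, z; };            // joint position from the sensor
struct CharacterBone { float yaw, pitch, length; }; // a bone on the game character's rig

// Retargeted transfer: instead of copying tracked joint positions straight onto
// the character, derive a rotation from the direction between two tracked joints
// (e.g. elbow -> wrist) and drive the character's own bone with it. The bone keeps
// its own length, so the character can never end up with impossible proportions,
// at the cost of no longer matching the player's pose exactly.
CharacterBone retarget(const TrackedJoint& from, const TrackedJoint& to,
                       float characterBoneLength) {
    const float dx = to.x - from.x;
    const float dy = to.y - from.y;
    const float dz = to.z - from.z;

    CharacterBone bone;
    bone.yaw    = std::atan2(dx, dz);                           // heading around the vertical axis
    bone.pitch  = std::atan2(dy, std::sqrt(dx * dx + dz * dz)); // elevation above the horizontal plane
    bone.length = characterBoneLength;                          // character keeps its own proportions
    return bone;
}
```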
 
Because it's one camera. You'd need 2-3+ cameras from different angles to have a good tracking system (blind spots); it's a fundamental flaw.
I disagree. As far as I can tell, the system works by tracking a few key joints and putting them where they appear in 3D space according to the camera, and then using inverse kinematics to place the rest of the skeleton. The spaz-outs come from a misread that places a joint in a completely wrong position. Yet these abrupt transitions are clearly impossible within the scope of the skeleton, so the software should reject those samples as erroneous and just use a tween from the previous motion vector. That is, every frame, get the positions of the joints and compare them to the motion of the previous frame or two. If the joint is sensibly on that motion vector, it's valid. If it's grossly different from the expected position, it's clearly an erroneous placement, so use the interpolated sample.
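In rough code terms, the sort of rejection I have in mind would be something like this (hypothetical names and threshold, purely to illustrate the idea):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Predict where the joint should be this frame from its last two samples
// (constant-velocity assumption) and reject the new raw sample if it is
// implausibly far from that prediction. maxJump is how far a joint could
// believably move in one frame; it's a tuning number, not anything from
// the real SDK.
Vec3 validateJoint(const Vec3& prevPrev, const Vec3& prev,
                   const Vec3& raw, float maxJump) {
    // Constant-velocity prediction: prev + (prev - prevPrev).
    const Vec3 predicted = { 2.0f * prev.x - prevPrev.x,
                             2.0f * prev.y - prevPrev.y,
                             2.0f * prev.z - prevPrev.z };

    const float dx = raw.x - predicted.x;
    const float dy = raw.y - predicted.y;
    const float dz = raw.z - predicted.z;
    const float distance = std::sqrt(dx * dx + dy * dy + dz * dz);

    // Plausible sample: trust the sensor. Implausible sample: fall back to the prediction.
    return (distance <= maxJump) ? raw : predicted;
}
```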

There's also a question of what causes the false reading.

Either that or I'm completely misunderstanding the way Kinect works and it's doing something very different!
 
I disagree. As far as I can tell, the system works by tracking a few key joints and putting them where they appear in 3D space according to the camera, and then using inverse kinematics to place the rest of the skeleton. The spaz-outs come from a misread that places a joint in a completely wrong position. Yet these abrupt transitions are clearly impossible within the scope of the skeleton, so the software should reject those samples as erroneous and just use a tween from the previous motion vector. That is, every frame, get the positions of the joints and compare them to the motion of the previous frame or two. If the joint is sensibly on that motion vector, it's valid. If it's grossly different from the expected position, it's clearly an erroneous placement, so use the interpolated sample.

There's also a question of what causes the false reading.

Either that or I'm completely misunderstanding the way Kinect works and it's doing something very different!

Microsoft research main article:
http://research.microsoft.com/pubs/145347/bodypartrecognition.pdf

See page 14 for failures:
http://pages.cs.wisc.edu/~dyer/cs540/notes/17_kinect.pdf

Other example:
http://www.cs.berkeley.edu/~akar/cs397/Skeletal Tracking Using Microsoft Kinect.pdf (Page 10)
 
It's quite apparent that the capturing isn't 1:1 yet. Kinect appears to get more of a best-fit pose, and the movement of the limbs isn't anywhere near as fluid as the source material. We also see physics glitches like the feet passing through the floor. Realtime body capture still has a way to go. I wonder what the limiting factor is?

Of course not. But one does have to wonder what's going on with the glitches. It seems like it should be easy enough in software to test for impossible movements and choose poses accordingly. If the skeleton jumps from standing to a leg flying out to the side in one frame, something has gone wrong and the chosen position should be something more realistic. That's the view of the armchair developer, anyhow.
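Just to put a number on how cheap that "armchair" test could be: bone lengths don't change from frame to frame, so something like the check below (an invented helper, not real SDK code) would already flag most of these explosive misreads.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// A pose-sanity check: if the reported distance between two connected joints
// is way off the bone length measured when the player was first tracked,
// the pose is suspect and the previous (or an interpolated) pose can be kept
// instead. tolerance is a made-up tuning fraction, e.g. 0.25 for +/-25%.
bool boneLengthPlausible(const Vec3& jointA, const Vec3& jointB,
                         float calibratedLength, float tolerance) {
    const float dx = jointA.x - jointB.x;
    const float dy = jointA.y - jointB.y;
    const float dz = jointA.z - jointB.z;
    const float length = std::sqrt(dx * dx + dy * dy + dz * dz);
    return std::fabs(length - calibratedLength) <= tolerance * calibratedLength;
}
```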
The feet clipping with the floor should be down to Project Spark being a beta rather than to Kinect. I mean... the floor shouldn't really have anything to do with Kinect, so that's a weird glitch.

The video Tommy has posted is pretty cool, but it doesn't push the capabilities of the hardware as much as the Gangnam Style video, imho.

I am not a programmer, but Kinect could be a saviour for amateur developers who can't make animations for their games. They could create the animations with Kinect.

Say you create a golf game but you don't have an artist to make the animations for you, and yours suck. You use Kinect to act out hitting the ball and voilà, there you have it.

Quick and the most natural thing ever, without needing a mo-cap studio.
 
The feet clipping with the floor should be down to Project Spark being a beta rather than to Kinect. I mean... the floor shouldn't really have anything to do with Kinect, so that's a weird glitch.

The video Tommy has posted is pretty cool, but it doesn't push the capabilities of the hardware as much as the Gangnam Style video, imho.

I am not a programmer, but Kinect could be a saviour for amateur developers who can't make animations for their games. They could create the animations with Kinect.

Say you create a golf game but you don't have an artist to make the animations for you, and yours suck. You use Kinect to act out hitting the ball and voilà, there you have it.

Quick and the most natural thing ever, without needing a mo-cap studio.

This would be a very cost-effective way for indie devs to do animations. It would be even better if the recorded animations could be imported into an editor of sorts that allows the dev or artist to modify them to meet their needs. The base animation is there and can be tweaked instead of being built from the ground up.
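Even something as crude as dumping each captured frame to a plain text file would give the artist a base to clean up. A rough sketch, with an invented CSV-style format that has nothing to do with any real exporter:

```cpp
#include <fstream>
#include <vector>

struct Vec3 { float x, y, z; };

// One frame of captured animation: a timestamp plus one position per tracked joint.
struct CaptureFrame {
    double timeSeconds;
    std::vector<Vec3> joints;
};

// Write a recorded take as CSV so it can be pulled into a tool and tweaked by hand.
// Row layout (made up for illustration): time, j0x, j0y, j0z, j1x, j1y, j1z, ...
bool saveCapture(const std::vector<CaptureFrame>& frames, const char* path) {
    std::ofstream out(path);
    if (!out) {
        return false;
    }
    for (const CaptureFrame& frame : frames) {
        out << frame.timeSeconds;
        for (const Vec3& joint : frame.joints) {
            out << ',' << joint.x << ',' << joint.y << ',' << joint.z;
        }
        out << '\n';
    }
    return static_cast<bool>(out);
}
```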
 
The Satellite Reign people did some mocap stuff with X360 Kinect

https://www.kickstarter.com/projects/5livesstudios/satellite-reign/posts/631898

Back when the five of us were throwing around our first ideas for the Kickstarter, I started looking into what was being done with the Xbox 360 Kinect sensor. Some very clever people had managed to get their PCs to take the depth-data from the Kinect, and use it with standard 3D animation packages to produce homebrew mocap software, without the need for big stages or silly lycra suits.
 
Actually, he tried using Kinect but it wasn't good enough, and he ended up using 4 PS Eyes to do the mocap.

Yep - if you read through the link provided, it makes clear why the original Move controller worked so well. It would be interesting to see if Kinect 2 is able to work better.
 
This new facial recognition technology of Kinect 2 is awesome. Just imagine if you could use it in Mass Effect, Dragon Age or Skyrim!


No need to fight the impossible battle of creating a character that looks like you anymore, which you never quite could with the tools developers used to give you.

Now with this...
 
I don't know whether I should or shouldn't be surprised, but according to the GDC session schedule, there is only 1 Kinect session (by MS) and it doesn't seem like there's anything by any other dev outside of MS.
 
Any news on when Kinect for PC is out?

Summer 2014 is the only info I can find ...
Same here.
I don't know whether I should or shouldn't be surprised, but according to the GDC session schedule, there is only 1 Kinect session (by MS) and it doesn't seem like there's anything by any other dev outside of MS.
Alas, not surprised. Developers are after the money and the core gamers are the early adopters. I wouldn't expect anything related to Kinect from a company that isn't Microsoft.

You know... it's not their job to promote Kinect and make it worth it, but with a little education and a game - or games - that can grab all the attention, and if the money is there, you will see developers swooning over it.
 
Any news on when Kinect for PC is out?

Summer 2014 is the only info I can find ...
It seems to be finalised. Some interesting comments on the news.


http://www.eurogamer.net/articles/2014-03-28-this-is-microsofts-kinect-for-windows-v2-sensor


I had seen at least one of these videos before, but here they are:

Surgeons using Kinect:
[video]

Motion capture with Kinect:
[video]

Kinect uses for visually impaired people:
[video]

Still... nothing out of the ordinary when it comes to games, for now.
 
This review, imho, really sums up my greatest worry about Kinect 2, one I've had since the announcement: it's way better than Kinect 1, but simply not good enough to convince me that it's a game changer (pun intended).

'Kinect Sports Rivals' review: get your head in the game http://www.theverge.com/2014/4/7/5589534/kinect-sports-rivals-review

That still doesn't mean I won't get the game or the machine, but imho it's a perfect example of the evolution of an imperfect technology that still isn't perfected.
 