Kinect technology thread

The same goes for Kinect/PrimeSense technology. As it is currently applied to actual interfaces, it only mimics mouse or keyboard input schemes.
So for the moment, it is experimental and useless.
Your mention of Dance Central completely overturns this POV! You can't play Dance Central with KB+M. Even if you mapped the potential moves to keyboard shortcuts, you'd lose the fun of the title. The virtual puppeteering is a unique aspect of Kinect and where its strength lies, not in a motion-based traditional interface. I'd love Kinect with LBP2 to record better virtual actors. I don't think camera-based tech can manage it, after trying the Kung Fu Live demo last night. It favoured a white door over me, and a shadowy corner being removed from the background meant a null zone where I couldn't go. It's difficult to see where stereoscopic cameras could be used for 3D when they will have the same contrast issues and be unable to track low-contrast overlaps. Two cameras seeing virtually the same image will have the same sort of troubles.

Then again, given stereoscopic vision you could pick a distance and align the two images to that distance. You'd then get a sharp image at that distance and the rest would be ghosted. Selecting from the resultant image by sharpness should provide distance-based isolation.
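As a rough sketch of that idea - a hypothetical helper, assuming rectified greyscale frames as NumPy arrays - one step of such a plane sweep might look like this:

```python
import numpy as np

def isolate_at_disparity(left, right, disparity, threshold=10):
    # Shift the right view by the disparity that corresponds to the
    # chosen distance; pixels where the two views then agree lie near
    # that depth plane, everything else is the "ghosted" remainder.
    # (np.roll wraps pixels around the edge; fine for a sketch.)
    shifted = np.roll(right, disparity, axis=1)
    diff = np.abs(left.astype(int) - shifted.astype(int))
    mask = diff < threshold
    return np.where(mask, left, 0)
```

Note that flat, low-contrast regions "agree" at every disparity, so this inherits exactly the contrast problem described above.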
 
Are Dance Central and Kung Fu Live really comparable?
OK, they look like the same input scheme: moving in front of a camera!
But that's it! One game lets the player puppet-master an avatar in an action game; the other swaps the puppet-master role between the player and the avatar.

Dance Central shows a virtual dancer, and the real player (the puppet in this case) has to match his moves, the Kinect device just judging 'matching or not matching move' within the defined time frame. Am I the only one who understands that in front of Dance Central, if I do my own dance moves, the virtual dancer in-game does not copy my moves? Or am I totally wrong on this one?

Kung Fu Live, on the other hand, lets the real player puppet-master his video-captured avatar. This implies a totally different way of managing the video feed and the input commands the player wants echoed in-game than Dance Central!

For the moment, this system has not been seen in any Kinect game. Of course, the dual-capture system that powers Kinect/PrimeSense should easily solve the problem you encountered in the Kung Fu Live demo, as the system detects the gamer with the depth camera, and should be able to copy/paste a 3D capture of the player in-game with almost no false-positive player detection.
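A minimal sketch of that depth-keying idea (the function name and the millimetre band are assumptions, not any actual Kinect API):

```python
import numpy as np

def extract_player(rgb, depth, near_mm=800, far_mm=2500):
    # Keep only pixels whose depth reading falls inside the band where
    # the player stands; a white door or shadowy corner behind them
    # falls outside the band regardless of its colour or contrast.
    # rgb: HxWx3 uint8, depth: HxW uint16 in millimetres (0 = no data).
    mask = (depth > near_mm) & (depth < far_mm)
    return rgb * mask[:, :, np.newaxis]
```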

I do not want to defend Kung Fu Live, as it is a physically tiring game with a really limited gameplay system. Kick your enemies, slap them, punch them... Boring; a gamepad with 3 buttons gave us more moves with less pain...
I do not criticize Microsoft's idea of hands-free gaming; I blame the poor input schemes we get for the moment with all the camera/optical capture devices available.
 
You seem confused, as I wasn't drawing any parallel between game interfaces. The technology of Kinect is unique in enabling player tracking. Whether that tracking is used to see if the player matches a predetermined pose, or whether that tracking is used to control an avatar directly, is down to the software. The point is, Kinect lets you do that where other systems can't. Kung Fu Live on Kinect could use an avatar and do away with the noisy, messy video feed, but still track the player's combat just the same, which is what I was getting at.

From the sounds of it, you just don't like motion gaming, at which point no camera-based solution will please you. The joy of motion gaming comes from the act of the play, and not the outcomes. If you are looking at a game and thinking, "I could press a button to return that ball with far better response and less effort than swinging a virtual racquet," that view will always be true, but it will also be a different experience to the motion game experience, and neither comparable nor interchangeable.
 
My idea is: I think hands-free input devices will be useful in a virtual reality setup:
[Image: AC89-0437-20_a.jpeg]

We would get rid of all the body detectors and only need 3D-enabled video glasses and a headset in order to dive into the other world.

Another example of useless technology brought by camera input devices in today's TV/monitor setups: head tracking. In front of a rather small TV set (under 52", or even bigger), it is useless, as turning my head left or right will only stop me looking through my corrective glasses, or at least not ease my view of the screen.
Even if the head movement can be slight, what is really the point of this type of head tracking (GT5?)? Looking at the side of the car? Looking over a shoulder or around a corner? It has to be thought out well, and be able to detect whether the player really wants this move at this precise moment, etc...
 
OK, so forget all of my post. I was confused on this point.
At this moment, I agree with you that Kinect/PrimeSense devices are the only systems reliable enough to track a whole body and put it all on-screen in real time, or almost.

Now, I am waiting for something that wows me, and will keep wowing me in the long run.

About my like or dislike of motion gaming, I think it is rather the limited feedback I get from all the projects I see.
Battling with virtual swords still bothers me because you get no physical response when touching something in the virtual world. This is only an example, but even with Sony's Move or Wiimote feedback (device rumbling, device weight), I am frustrated by the lack of resistance and feedback.

I have played Tumble with Move for a few hours now, but it is still frustrating to stack plastic cubes on glass cubes with only a little rumbling, and no feel of friction or solidity except visually. I play with wooden cubes with my little 2-year-old daughter and the feeling is totally different, and really more challenging than Tumble.

Using a gamepad/keyboard lets me abstract all this away, while I get access to loads of in-game actions with 100% accurate response times. But I am getting old and stupid, so do not care too much about my POV.
 
Have you ever used an HMD (head-mounted display) like that for extended periods of time? It's extremely uncomfortable, cumbersome, and annoying. Once the novelty wears off, you actually dread being forced to use it.

There are also various lighter HMDs that I've used; they're less annoying, but they come with drawbacks of their own compared with full-isolation, full-tracking HMDs. Again, after the novelty wears off, it's just easier, more pleasant, and more convenient to use a traditional 2D display.

Devices with the potential to "paint" the image directly onto the retina could solve most of the drawbacks with HMDs, but they have significant problems of their own to overcome that have so far prevented them from being commercially deployed. I had the chance to try one back in 1997 at the University of Washington where substantial work was being done in Virtual Reality.

And gloves/bodysuits without a means of recalibration or full-time absolute positioning gradually lose accuracy and correct virtual positioning. Thus they require far more equipment than what you see in that picture, which also increases the associated costs quite significantly.

Kinect by itself allows for much of that functionality at a far, FAR lower cost, which is what allows it to be a compelling consumer device. As we're seeing with the massive experimentation going on with it at universities, companies, and homes, that makes it particularly attractive in the many diverse fields of study which traditionally had to rely on far more expensive equipment to achieve similar functionality.

[edit] Bah, just realized you were thinking of Kinect in conjunction with a HMD. So ignore the part not dealing with HMDs in response to your post. :p

Regards,
SB
 
Head tracking isn't useless. Uses are limited, but not nonexistent! You're right, rotating the head isn't great. However, people naturally move their heads in response to camera views. Tracking and updating the camera accordingly, via planar displacement, would be a big benefit, e.g. craning up in your seat to look over a hill, or leaning to the right to get a better view around the corner.
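A tiny sketch of that planar-displacement mapping (all names and tuning numbers are invented):

```python
def camera_offset_from_head(head_x, head_y, gain=0.5, max_offset=0.3):
    # Map the tracked head position (normalised to -1..1 around the
    # rest pose) to a planar displacement of the in-game camera, so
    # leaning right or craning up shifts the view instead of rotating it.
    # gain and max_offset are made-up tuning values.
    clamp = lambda v, lo, hi: max(lo, min(hi, v))
    return (clamp(head_x * gain, -max_offset, max_offset),
            clamp(head_y * gain, -max_offset, max_offset))
```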

Lack of tactile feedback is always going to be an issue, but virtual reality with full haptic feedback is (probably) a long way off, and at the moment is most likely to require lots of gear. At the moment, if you want those sorts of experiences, you should really do the thing for real! These new interfaces aren't trying to put you in a virtual environment per se, but to give a different, more natural interface to a game. Some of the PS3 Move tech demos showed the best possibilities IMO. The original archery demo shooting skeletons shows a unique game interface that a controller can't provide. Some of the Kinect demos are pretty extraordinary too. Looking at things like virtual lightsabre duels, yes, the lack of haptic feedback is an issue - you couldn't have a proper lightsabre duel in-game with 1:1 tracking because you couldn't stop the player passing their sabre through their opponent's. Change the gameplay so that you never encounter any object that can resist a sabre, though, and it'll work flawlessly and be much more involving and entertaining for Jedi wannabes than dual-stick lightsabering.

This gen is a significant step towards the end result that you're after, and I don't think what you're hoping for is a realistic consumer experience for a long while yet.
 
Some kind of mech Kinect game looks like a nice preview of how Steel Battalion could play.


That actually looks playable.

I can't wait to see what kind of control schemes developers are going to create, given more time to prototype what works and what doesn't.
 
thats "armored core for answer" developed by from software (demon's souls, tenchu, armored core)

Pretty neat that they prototype these control schemes on existing games (racing wheel on Burnout Paradise).
 
I'm not excited about Avatar Kinect at all, but if they do that kind of facial recognition in online games, even games that use other player models besides the avatars, it'll be really cool.
 
That could take a good while. For now, MS won't allow anything other than avatars to mimic player movements while playing, and it will most likely be the same for facial tracking.
 
The Tech Behind Avatar Kinect
http://www.youtube.com/watch?v=t0ot_3q-pSA&feature=player_embedded

This guy seems to be doing the same thing with a simpler mesh on top of the face. Is the Kinect version less prone to errors?
http://www.youtube.com/watch?v=Phcdp7AjhBg&feature=player_embedded
The Avatar video describes facial recognition, which I imagine is principally optically based. The depth details don't give you much when reading a face. The nose would be a nice reference bump, but the eyes are a perfect matching point anyway. You'll get a better perspective match when the face isn't facing the screen, like in the vid when he tips his head up, but triangulating from the eyes and mouth copes with such cases well enough. I don't know how much improvement the 3D can add, or how much easier it makes facial tracking. Optical tracking isn't particularly new, though. The old Toshiba Magic Mirror demo was a half-way house, with overlay rather than complete head modelling - another optical demo people have forgotten! The addition of Kinect's voice recognition may help with accurate mouth tracking, as the audio would prompt one of a few mouth shapes. Like a lot of tech, though, it's not so much that Kinect's allowing a first, but that it's managing it with a level of interest and mass-consumer support that may drive it forwards. As an addition to Kinect's unique virtual puppetry as well, it's a better fit than other applications where it'd just be added to a conventional experience.
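For a feel of what triangulating from the eyes and mouth might amount to, here's a back-of-envelope sketch (hypothetical function, 2D pixel coordinates assumed):

```python
import math

def head_pose_from_features(left_eye, right_eye, mouth):
    # Rough pose from the three stable optical anchors mentioned above,
    # all given as (x, y) pixel coordinates. Values are only proxies:
    #   roll  - tilt of the line joining the eyes
    #   scale - inter-eye distance, a stand-in for distance to camera
    #   yaw   - horizontal offset of the mouth from the eye midpoint
    ex = right_eye[0] - left_eye[0]
    ey = right_eye[1] - left_eye[1]
    roll = math.atan2(ey, ex)
    scale = math.hypot(ex, ey)
    mid_x = (left_eye[0] + right_eye[0]) / 2.0
    yaw = (mouth[0] - mid_x) / scale
    return roll, scale, yaw
```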

Although in this case I can't clearly see the application. My immediate thoughts go once again to LBP2 - I would dearly love virtual puppetry for full control of the sackbots along with mouth tracking. If movie levels do well in LBP2, I'd expect to see it in LBP3. For Kinect's current and future game line-up, I'm not sure what head puppetry provides, but it's definitely a cool head tracking demo. Likewise it'd be a great addition for Home where one of my first and strongest complaints is how everyone looks like a robot. However, sending that much puppetry data for all users could get expensive in terms of network use. Anyone think that could be a limiting factor for network adoption in an MMO type situation?
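For a sense of scale, a back-of-envelope estimate - every number here is a guess:

```python
# Assumed: 20 skeleton joints at 3 floats each plus ~20 facial
# blendshape weights, 4 bytes per float, 30 updates per second,
# 32 players visible in a lobby.
floats_per_update = 20 * 3 + 20                  # 80 floats per player
bytes_per_player = floats_per_update * 4 * 30    # 9,600 bytes/s
lobby_total = bytes_per_player * 32              # ~300 KB/s uncompressed
print(bytes_per_player, lobby_total)
```

Delta-encoding and quantisation would cut that down a lot, but per-user puppetry streams would still dwarf typical position/orientation traffic.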
 
It's nice that they are using both cameras. That way they can get head movements from the depth sensor, which probably means less burden in tracking face features, as they already know where the head is.

If the depth resolution isn't enough, I guess they might do something like that to track hand gestures: use the depth image to know where the hand is, and use the higher-res RGB feed to try to guess the gesture.
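A rough sketch of that two-camera split (assuming depth and RGB frames registered at the same resolution; names and the depth band are made up):

```python
import numpy as np

def hand_roi(depth, rgb, band_mm=(400, 900), pad=16):
    # Use the coarse depth image only to find pixels in the near band
    # (assumed to be the hand), then return the matching crop of the
    # higher-res RGB frame for a separate gesture classifier.
    ys, xs = np.nonzero((depth > band_mm[0]) & (depth < band_mm[1]))
    if xs.size == 0:
        return None                      # nothing in the hand's band
    y0, y1 = max(ys.min() - pad, 0), ys.max() + pad
    x0, x1 = max(xs.min() - pad, 0), xs.max() + pad
    return rgb[y0:y1, x0:x1]
```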
 
That type of facial rigging would obviously be most useful in an MMO game, where people's avatars stand around a lot doing nothing particularly interesting, just chatting. Not the kind of game I'm interested in, but I think it would even be cool in a game like Battlefield. In general, it would be cool to see the expressions on people's faces in an up-close fight, or to see them grinning on a kill cam. Obviously it would add nothing to the gameplay, but it would be a nice little immersion feature. It would also work great for sports titles where there are a lot of replays, especially Kinect Sports.

I guess we'll figure out how reliant it is on the depth camera by testing it out in dim to dark environments.
 
At a guess, knowing how we do facial recognition, I'd say that it's going to be colour camera dependent, so won't work in dim conditions.
 
What advantages can you see the depth camera providing when used alongside the optical recognition, if any?
 
I would imagine:
Easier (faster) recognition of regions to be analyzed (bounding boxes in 3D).
Changes in depth can be used to calculate hints about scaling (i.e. moving away -> face gets smaller; moving closer -> face gets bigger) - sketched below.
Suppression of optical noise (i.e. removing the background by depth keying).
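A one-liner sketch of that scaling hint (illustrative values only):

```python
def scale_hint(prev_depth_mm, cur_depth_mm, prev_scale=1.0):
    # Apparent size varies inversely with distance, so a change in the
    # depth reading predicts the face's new on-screen scale without
    # re-searching across scales.
    return prev_scale * (prev_depth_mm / cur_depth_mm)
```

e.g. scale_hint(2000, 1000) doubles the expected face size when the player halves their distance to the camera.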

Cheers
 
I was hoping the facial mesh was using the depth camera, which would help locate the eyes and mouth and make it easier to recognize features in dim conditions. From the mesh you know roughly where those features are located, so the "optical" portion of the facial recognition would be easier. Otherwise I'm screwed on this, because my living room is pretty dark.
 