Kinect technology thread

However the patent says implicitly that Kinect...
It says nothing implicitly about Kinect, because the patent covers an area of technology, not a specific device. The term 'embodiment' is used to describe an implementation, and any patent worth its salt will describe several variations. None of them need to exist, but if one is ever built, the patent will be used to ensure either that it isn't, or that it is built by the patent holder, or that someone else licenses the technology.

Interestingly, the patent's claims, which are the meat of a patent, aren't related to the tracking technology at all. Claim 1, the main claim from which all the others are derived, speaks only of text character input using gestures...

Systems, methods and computer readable media are disclosed for gesture keyboarding...

1. A method (note not apparatus, and 3D isn't required) for providing keyboard-like input to a computer system that accepts gesture input, comprising: receiving data comprising at least one image of a user creating an input gesture; and parsing the data to determine at least one gestured character.
The patent is thus for anyone wanting to use image-tracked gestures to type content into a computer. Nothing about that covers the specs and capabilities of Kinect.
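
To make the breadth of that claim concrete, here's a minimal sketch of the claimed method's shape: receive image data of a user gesturing, parse it into a character. Everything here is hypothetical illustration; the claim covers the two-step pipeline, not any particular recognizer.

    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class Frame:
        pixels: bytes                      # raw image data of the user mid-gesture

    def parse_gesture(frames: List[Frame]) -> Optional[str]:
        # Stand-in for any recognizer that maps a gesture to a character;
        # a real system would run pose estimation or template matching here.
        return "a" if frames else None     # dummy 'gestured character'

    def gesture_keyboard_input(frames: List[Frame]) -> str:
        # The claim's two steps: receive the data, parse it to a character.
        ch = parse_gesture(frames)
        return ch if ch is not None else ""

    print(gesture_keyboard_input([Frame(pixels=b"...")]))   # -> a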
 
The patent is thus for anyone wanting to use image-tracked gestures to type content into a computer. Nothing about that covers the specs and capabilities of Kinect.

Since Kinect will be able to interface with Windows 7, that brings finger tracking, and thus sign language, into play for desktop computer use. When used in conjunction with a desktop computer, your fingers and face will be much closer to the camera, making facial features and each individual finger far easier to distinguish.
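
Some rough, assumption-laden numbers on why distance matters so much. Assuming Kinect-class optics (roughly 640 pixels across an approximately 57-degree horizontal field of view, and a ~2 cm finger width):

    import math

    # Back-of-envelope pixel coverage of a finger; FOV and resolution are
    # approximate published Kinect figures, finger width is a guess.
    def finger_width_px(distance_m, finger_m=0.02, h_res=640, h_fov_deg=57.0):
        view_width_m = 2 * distance_m * math.tan(math.radians(h_fov_deg) / 2)
        return finger_m / view_width_m * h_res

    for d in (0.8, 1.5, 3.0):              # desk, near, living-room distances
        print(f"{d} m: ~{finger_width_px(d):.1f} px per finger")

That works out to roughly 15 pixels per finger at desk range (0.8 m) versus about 4 pixels at couch range (3 m), which is the difference between trackable detail and noise.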

Likewise, a desktop computer, even a budget one, will have more resources available to devote to Kinect.

Imagine for instance video chat that can automatically translate sign language. You could then have video/voice chat between a deaf person and a blind person as the capability already exists for speech to text.
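
As a sketch of the data flow such a chat might use (every function here is a hypothetical stub; the point is only how the pieces would chain together):

    # Deaf user signs -> text -> speech for the blind user; the blind user
    # replies by voice -> text shown to the deaf user. All stubs.
    def sign_to_text(video_frames):        # hypothetical sign-language recognizer
        return "hello, how are you?"

    def text_to_speech(text):              # stand-in for any TTS engine
        print(f"[spoken aloud] {text}")

    def speech_to_text(audio):             # stand-in for any STT engine
        return "fine, thanks!"

    text_to_speech(sign_to_text(video_frames=[]))
    print("[shown on screen]", speech_to_text(audio=b""))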

Ah, but a deaf person could just type out their message, you say. Well, like everyone else, most deaf people aren't very fast typists, and they can often sign much faster than they can type, just as hearing people can usually speak faster than they can type.

When taking that patent into consideration, we shouldn't limit the discussion of capabilities to only those that may be seen on consoles, although that's certainly the primary focus at launch.

Regards,
SB
 
@graham I assume you mean the black stuff; well, for me that says a null reading.

No. I imagine the black is simply clipped outside the desired depth range (noise probably becomes too much of an issue?)
My comment was in response to Shifty's statement:
Shifty Geezer said:
I notice the back of that couch is glowing, placing its depth completely wrong.
 
Since Kinect will be able to interface with Windows 7...
When taking that patent into consideration, we shouldn't limit the discussion of capabilities to only those that may be seen on consoles, although that's certainly the primary focus at launch.
Yes, I should have said the patent doesn't cover the abilities of Kinect as connected to the XB360. The patent itself covers any such input. If anyone creates software to read sign language through a laptop webcam, this patent will be raised.

No. I imagine the black is simply clipped outside the desired depth range (noise probably becomes too much of an issue?)
My comment was in response to Shifty's statement:
Okay, I just noticed the gradient is repeated! This visualisation caught me out, so the depth looks good; I thought it was an issue with reflectivity or something. I wonder what the brightness threshold is? The Engadget vid had issues with the guy in black, but that was also under studio lighting. Would black work okay in a living room, lights out, and how dark can things go and still reflect enough IR to be tracked?
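
For anyone else caught out by the repeated gradient: a plausible (assumed, not confirmed) way such a visualiser works is mapping depth to brightness modulo a fixed band, so two very different depths render identically and a far couch 'glows' like a near object:

    # Hypothetical depth-to-brightness mapping with a repeating gradient;
    # the 1000 mm band is made up for illustration.
    def depth_to_brightness(depth_mm, band_mm=1000):
        if depth_mm == 0:                  # 0 = no reading on Kinect-style sensors
            return 0                       # rendered as black
        return int(255 * (depth_mm % band_mm) / band_mm)

    for d in (800, 1800, 2800):            # same brightness every 1000 mm
        print(d, "->", depth_to_brightness(d))   # all three print 204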
 
Very interesting writeup. But something I think is missed by many overviews of Kinect that I've seen is the impact on children.

If you talk to parents and grandparents, it's not uncommon at some point to discuss how children nowadays are so into video games that they don't get as much physical activity (playing outside) as perhaps they should, especially as children's bodies are still developing and require a lot of activity.

Something like Kinect, which forces some level of physical activity in order to interface with its games, is I think a good thing overall, and it might be quite attractive to those parents and grandparents who think their kid should be doing more physical activities. The Wii was a good step in that direction with the whole arm-flailing, but it wasn't something a kid had to do. I've even heard some kids say that other kids "cheat" by not "really using" the Wiimote motion controls.

But other than that: yes, spot on, there are things Move and the Wiimote can do that would be difficult or impossible on Kinect. Likewise, there are things you can do on Kinect that are difficult or impossible with Move (plus camera), and even more so with the Wiimote.

Regards,
SB
 
No. I imagine the black is simply clipped outside the desired depth range (noise probably becomes too much of an issue?)
It's a null reading (i.e. Kinect couldn't accurately read the depth).
The only way those parts could be outside its range is if the couch/floor had holes in it :)
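
In other words, on Kinect-style depth maps a pixel the sensor couldn't resolve (shadowed, IR-absorbing, or too reflective) comes back as zero, and viewers typically paint those pixels black. A tiny sketch with fabricated values:

    depth_row = [1200, 1180, 0, 0, 1210, 0, 1500]   # made-up depths in mm; 0 = no reading

    rendered = ["NULL" if d == 0 else f"{d}mm" for d in depth_row]
    print(rendered)   # the black patches in the video = the NULL entries here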
 
That's definitely possible, tracking the end-joints of the arm skeleton as hand position. Of course, you'd have to do all the Kung-Fu yourself for real...
 
The on-screen avatar motion in Kinect Adventures is one-to-one - when I move my arm, my avatar moves her arm in exactly the same manner across the entire body. We call it 'avateering'. It’s a new experience for people, and some people interpret that sensation as lag.

Er, the sensation of moving your arm and the Avatar moving her arm 1/5 of a second later definitely is lag!
 
Hm, yeah... everyone who tested it in person should immediately feel that there is lag; it was quite obvious during my experience.

An interesting point in this interview, which I did not pay much attention to during my short tests:

Is the lack of physical feedback in Kinect titles a problem?

This seems to be a very important question if you think about it!
Just play a PS3 game with the SixAxis only and then use the DualShock and play again!

So how can you give the player feedback in Kinect games?
Basically only via audio or visuals, right?!
 
Hm, yeah... everyone who tested it in person should immediately feel that there is lag; it was quite obvious during my experience.
The feeling of lag goes away over time as your movements tune themselves to the positive feedback loop. At least it did with me. It's the same with a normal controller, there can be upwards of 150ms of lag with a standard controller (Resistance: FoM I believe is up there), and yet we still manage to compensate after a while. It's especially noticeable on timing based games like Limbo, until your brain uses the feedback loop to recalibrate your fingers :)
 
I have to compensate for control lag every time I pick up a new game on console, especially if it features jumping or shooting without aim-assist.

Hell, part of aim-assist on consoles is not only to compensate for using an analog stick but also to help hide the control latency.

In fighting games I find I often have to buffer moves in my head as the controls never execute moves immediately.

Regards,
SB
 
So how can you give the player feedback in Kinect games?
Basically only via audio or visuals, right?!

Yes, and many interviews (such as the Edge one previously posted) mention this. Basically, it comes down to exaggerating the feedback visuals/audio significantly.
What's interesting to me is how a lot of companies are becoming very scientific about human behavior and responses in these games.
Pretty much all the interviews in Edge mentioned this to varying degrees.
 
Technically speaking, wouldn't it be possible if you sit close enough and write your own tracker? ...assuming Kinect can supply raw camera footage and doesn't have any "homebrew" limitations.


The problem would be how to control the rest of the game, and the speed of recognition before rival ninjas beat you to a pulp. :p

EDIT: Hmm... I think the developer may also need low-level access to the GPU if he wants to do the tracking on the GPU, à la the PrimeSense implementation on Xbox 360. I'm not sure how the private implementation interacts with the Dashboard, since the latter can barge in upon user request.
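
If raw depth frames really were exposed, a dead-simple CPU-side starting point (no skeleton, no GPU) would be nearest-point tracking: at desk range your hand is usually the closest thing to the sensor. Purely a hedged sketch with made-up numbers and frame format:

    # Scan a flat depth buffer for the nearest valid pixel inside a
    # plausible hand range; returns (x, y, depth_mm) or None.
    def track_hand(depth, width, height, min_mm=400, max_mm=1200):
        best = None
        for y in range(height):
            for x in range(width):
                d = depth[y * width + x]
                if min_mm <= d <= max_mm and (best is None or d < best[2]):
                    best = (x, y, d)
        return best

    # Tiny fabricated 4x3 frame: the 'hand' is the 600 mm pixel; 0 = no reading.
    frame = [0, 900, 950, 0,
             880, 600, 910, 0,
             0, 940, 0, 1020]
    print(track_hand(frame, width=4, height=3))   # -> (1, 1, 600)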
 
The feeling of lag goes away over time as your movements tune themselves to the positive feedback loop. At least it did with me. It's the same with a normal controller, there can be upwards of 150ms of lag with a standard controller (Resistance: FoM I believe is up there), and yet we still manage to compensate after a while. It's especially noticeable on timing based games like Limbo, until your brain uses the feedback loop to recalibrate your fingers :)

How much lag does Kinect have, exactly? More than 150 ms? Does the lag scale, i.e., get worse for larger movements? Has anyone measured Kinect's gesture lag (e.g., pressing a virtual button mid-air to activate Nitro in a racing game)?

In a "buttons" game, it's usually just a small, constant "trigger" lag. Do you have a link to the RFOM lag ?

In both cases, the game will be designed differently to accommodate the acceptable range of user response. It's really up to the users to see if they like the resulting games.
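
One cheap way to answer the "has anyone measured it" question, without special gear: film your hand and the TV together with a 60 fps camera, then count the frames between the real movement starting and the avatar responding. This measures the whole chain (sensor, game, display), which is what players actually feel. Assuming a 60 fps recording:

    # Frames between real motion and on-screen response -> latency in ms.
    def lag_ms(frames_between, camera_fps=60):
        return frames_between * 1000.0 / camera_fps

    # e.g. if the avatar moves 9 camera frames after your hand does:
    print(f"{lag_ms(9):.0f} ms")   # -> 150 ms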

EDIT:
Yes, and many interviews (such as the Edge one previously posted) mention this. Basically, it comes down to exaggerating the feedback visuals/audio significantly.
What's interesting to me is how a lot of companies are becoming very scientific about human behavior and responses in these games.
Pretty much all the interviews in Edge mentioned this to varying degrees.

The rumble-less SIXAXIS has the same problem. PS3 launch games relied on AV feedback. I liked it because there was no abrupt vibration messing up my aiming. After I switched to the DS3... if the rumble is too wild, I would activate R3 accidentally (trying to hold my reticule steady) and die mid-battle. :)

Most rumble implementations I've tried are not so helpful, with a few exceptions... like Demon's Souls' excellent Skeleton Warrior implementation. It felt like fighting a rock-hard enemy when I hit one, and their counter-blows were equally devastating. Unfortunately, such attention to detail is usually lacking.

The real missing element may be control plus continual feedback (e.g. a pressure-sensitive button and the resulting feedback -- like holding on to a teammate/AI buddy to keep him from falling off a cliff), not merely random rumble from taking hits. In the latter case, rumble is helpful when you're distracted by external or other in-game elements; in the former, it can be used to implement gameplay mechanics.
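
A toy illustration of that control-plus-feedback idea, with invented numbers: an analog button value drives both whether your grip holds and how hard the pad buzzes, so you feel the strain continuously rather than getting a one-off jolt:

    # Hypothetical grip mechanic: pressure is the analog button value (0..1),
    # 'required' is how firmly you must squeeze to keep hold of your buddy.
    def grip_feedback(pressure, required=0.7):
        slipping = pressure < required
        rumble = 1.0 - pressure if slipping else 0.2   # buzz harder as grip fails
        return slipping, rumble

    for p in (0.2, 0.6, 0.9):
        print(p, "->", grip_feedback(p))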

The other thing is "floaty arms". This doesn't come into play with large-movement games such as dancing, jumping, etc., but for fine-grained control I sometimes find it's better to add artificial weight to my hands while doing controller-free gaming. It feels more connected. That can easily be done by holding on to something, or just by using the other hand to support the "floaty" controlling hand (to prevent it from shaking too much :LOL:). In all of these cases, it can still be fun to play, and that's all that matters.
 