Kinect technology thread

Yes, and many interviews (such as the Edge one previously posted) mention this. Basically, it comes down to exaggerating the feedback visuals/audio significantly.
What's interesting to me is how a lot of companies are becoming very scientific about human behavior and responses in these games.
Pretty much all the interviews in Edge mentioned this to varying degrees.

I'm still not sure why that's such a big deal. Then again, playing on PC for the majority of my gaming, there is only visual and audio feedback for virtually ALL games.

When I hop on console, I notice there's rumble, but it's something that to me is equally positive and negative. And in the end, it's overall a rather neutral experience.

I don't find the lack of "physical feedback" to be a shortcoming on PC, for example. In fact, it could be argued that Kinect will feature more physical feedback in a properly designed game than a controller with rumble.

Which is a more realistic feedback loop? Pressing a button to jump and feeling your controller rumble when you land, or jumping and actually feeling the physical force as you land? Pushing an analog stick to run and feeling nothing, or doing running motions and feeling the increased heart rate, raised temperature, etc. of cardio?

On the other hand, you obviously wouldn't be able to present feedback for something like being shot or punched. But that only points out that the physical feedback will be different, rather than non-existent as detractors like to claim.

And again, I have no physical feedback to being shot or punched on PC either, and yet the games are just as enjoyable, if not more so, due to... the control method. Halo 1 and 2 with keyboard and mouse pretty much represented a far superior experience versus Halo 1 and 2 with rumble on a control pad for me.

Regards,
SB
 
Not RFOM, but some other games:
http://www.gamasutra.com/view/feature/3725/measuring_responsiveness_in_video_.php?page=3
3 frames = 50 ms (at 60 fps)
10 frames = 167 ms
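For reference, those figures assume a 60 fps game; a minimal sketch of the conversion:

```python
# Frame-to-milliseconds conversion behind those figures, assuming 60 fps.
def frames_to_ms(frames, fps=60):
    return frames / fps * 1000.0

print(frames_to_ms(3))   # 50.0 ms
print(frames_to_ms(10))  # ~166.7 ms (the 167 above)
```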

Cool, looks interesting. Will read it.


I'm still not sure why that's such a big deal. Then again, playing on PC for the majority of my gaming, there is only visual and audio feedback for virtually ALL games.

When I hop on console, I notice there's rumble, but it's something that to me is equally positive and negative. And in the end, it's overall a rather neutral experience.

I don't find the lack of "physical feedback" to be a shortcoming on PC, for example. In fact, it could be argued that Kinect will feature more physical feedback in a properly designed game than a controller with rumble.

Compared to controller-ful games, rumble is not so much a value-add. You already have something to knock on (keyboard) or hang on to (mouse and pad). It's like DS3 vs SIXAXIS. In the Demon's Souls Skeleton Warrior example, the tactile feedback gave an after-taste to the combat experience. After my fight, I can vaguely remember the experience (in both my hands and my mind). It also made the combat a little more intense/immersive, but overall I agree it's an ok value-add.

Compared to controller-free games, the lack of fine-grain control and zero tactile feedback can make some games feel "empty/floaty". Whether it's a problem depends on how the game is designed.

Which is a more realistic feedback loop? Pressing a button to jump and feeling your controller rumble when you land, or jumping and actually feeling the physical force as you land? Pushing an analog stick to run and feeling nothing, or doing running motions and feeling the increased heart rate, raised temperature, etc. of cardio?

Depends on the game you play! People may not expect to jump around in many games. Excessive movement and extra feedback may be irrelevant (or a distraction) to the game objectives (e.g., pressing a physical button may be more effective and satisfying than a finger gesture to pull an imaginary trigger).

In a sports game, some may prefer a sweat-free experience whereas some may want more exercise.

Btw, you don't really need to monkey around to sweat in motion gaming. In my experience, playing something like Creature Feature can be very tiring because the "fine-grained" control stresses my arms more than flailing around. This is especially true on a big-screen TV. When I play the same game on my office PS3, I can lean back and control the lemmings rather quickly. In the living room, we need to walk around and stay bent (so that our hands fit in the screen at strategic places). All the controlled movement can cause backache after a while. :LOL: I still go back to the game from time to time regardless, because my son and I think it's fun to fool around with it, alone or together.

I think it's always down to the kind of games you want to play/design.

On the other hand, you obviously wouldn't be able to present feedback for something like being shot or punched. But that only points out that the physical feedback will be different, rather than non-existent as detractors like to claim.

And again, I have no physical feedback to being shot or punched on PC either, and yet the games are just as enjoyable, if not more so, due to... the control method. Halo 1 and 2 with keyboard and mouse pretty much represented a far superior experience versus Halo 1 and 2 with rumble on a control pad for me.

Not sure how useful it is to use PC games as a reference. For one, motion gaming failed there; it's a desktop experience. That doesn't mean motion gaming is useless. Having tried both controller-free and controller-ful gaming, I think they have different appeals. Having something in your hands (and with feedback) can make the player feel more connected to the game world in many existing games. PSEye and Kinect gaming will have their own types of applications/games.
 
Not sure how useful it is to use PC games as a reference. For one, motion gaming failed there; it's a desktop experience. That doesn't mean motion gaming is useless. Having tried both controller-free and controller-ful gaming, I think they have different appeals. Having something in your hands (and with feedback) can make the player feel more connected to the game world in many existing games. PSEye and Kinect gaming will have their own types of applications/games.

The point wasn't about motion gaming in particular, but that control systems conducive to certain types of games are going to be preferred over other control systems, regardless of whether they have rumble, analog controls, digital controls, keyboards, mice, physical feedback, whatever...

There are only complaints when game types not conducive to a control method are forced onto non-optimal control schemes. And even that can be overcome through reinforcement, training, and not having any other choice. Console controls for FPS, for example. Huge objections and arguments were waged (or perhaps Raged ;)) over their suitability originally, but with training and forced use (no option for keyboard and mouse allowed), it's become quite normal and is now the preferred method for some people.

Regards,
SB
 
The point wasn't about motion gaming in particular, but that control systems conducive to certain types of games are going to be preferred over other control systems, regardless of whether they have rumble, analog controls, digital controls, keyboards, mice, physical feedback, whatever...

There are only complaints when game types not conducive to a control method are forced onto non-optimal control schemes. And even that can be overcome through reinforcement, training, and not having any other choice. Console controls for FPS, for example. Huge objections and arguments were waged (or perhaps Raged ;)) over their suitability originally, but with training and forced use (no option for keyboard and mouse allowed), it's become quite normal and is now the preferred method for some people.

It is relevant because in a desktop gaming environment, the player is rather "removed" from the game world. The game world exists in his/her mind and on the screen. Plus they already have 2 precision controllers (mouse + keyboard) to connect to the game world.

In motion gaming, the player is "in" the game. Additional feedback may be more helpful, depending on the game he plays. Besides, with Wiimote+ and PS Move, where necessary, they do have the physical feedback (increased heart rate, physical impact, backache, etc.) we talked about above. On top of that, the controller caters for more fine-grained feedback and weight. The game designer will have to sift through this ball of wax to see what experience he/she wants to give the users.

The tactile feedback is part of the integrated experience (Whole > sum of parts and all that). It may not be fair to generalize by isolating just tactile feedback from another context. The experience and expectation could be different.
 
Technically speaking, wouldn't it be possible if you sit close enough, and write your own tracker? ...assuming Kinect can supply raw camera footage, and doesn't have any "homebrew" limitations.


The problem would be how to control the rest of the game, and the speed of recognition before rival ninjas beat you to a pulp. :p

I kinda enjoyed making that video because I was told that the PSEye couldn't track anything at 120fps & that it was only for video (which didn't make any sense, because the video is what's used to do the tracking).
 
I'm still not sure why that's such a big deal. Then again, playing on PC for the majority of my gaming, there is only visual and audio feedback for virtually ALL games.
You misunderstand. Feedback here means letting the user know they have successfully input an action into the game. You have this same feedback on your PC when you press a keyboard key or click a mouse button. You know you've pressed it because you feel it move. Imagine no keyboard, where you have to press virtual buttons in the air. How do you know with confidence if your finger is in the right place, or if you've moved it enough to depress the virtual key? With no tactile response, you have to match everything up with visual/audio feedback from the game. For example, let's say there's a god game where you can pick up people, and if you hold too hard you crush them (taken from the Move tech demo talk by Anton). On Move, you have the progressive resistance of the trigger to tell you how hard you are squeezing, but a camera-only system could only tell by the size of the hand, and you wouldn't know if you were holding too tight or not tight enough, depending on what you are trying to do.
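To illustrate the contrast (a hypothetical sketch with assumed numbers, not anything from the Move or Kinect SDKs): Move reads an analog trigger value directly, while a camera-only system has to infer grip from something like the apparent size of the hand.

```python
# Hypothetical sketch (not Move/Kinect SDK code): contrast how each
# system could read "grip strength" in that god-game example.

def grip_from_move_trigger(trigger_value):
    # Move's analog trigger reports pressure directly (0.0..1.0);
    # the player also feels the spring resistance as tactile feedback.
    return trigger_value

def grip_from_camera(hand_area_px, open_area_px=100.0, fist_area_px=60.0):
    # A camera can only infer grip from how much the visible hand area
    # shrinks between an open palm and a closed fist; it's noisy, and the
    # player feels nothing telling them "how hard" they're squeezing.
    squeeze = (open_area_px - hand_area_px) / (open_area_px - fist_area_px)
    return max(0.0, min(1.0, squeeze))

print(grip_from_move_trigger(0.75))  # 0.75, exact
print(grip_from_camera(80.0))        # 0.5, an estimate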
 
Ah, such a concise explanation.

Much better than my "rumble is less valuable in controller-ful games" mumbo jumbo! It's because these controllers already have their inherent feedback mechanism.
 
You misunderstand. Feedback here means letting the user know they have successfully input an action into the game. You have this same feedback on your PC when you press a keyboard key or click a mouse button. You know you've pressed it because you feel it move. Imagine no keyboard, where you have to press virtual buttons in the air. How do you know with confidence if your finger is in the right place, or if you've moved it enough to depress the virtual key? With no tactile response, you have to match everything up with visual/audio feedback from the game. For example, let's say there's a god game where you can pick up people, and if you hold too hard you crush them (taken from the Move tech demo talk by Anton). On Move, you have the progressive resistance of the trigger to tell you how hard you are squeezing, but a camera-only system could only tell by the size of the hand, and you wouldn't know if you were holding too tight or not tight enough, depending on what you are trying to do.

Ah, I get what you're saying. It's not actually physical feedback that everyone is talking about, but rather controller feedback. Similar, for instance, to typing on a virtual keyboard on an iPhone. And thus you need something to let you know your input was received. Letters appearing onscreen, for example, with the iPhone's virtual keyboard.

I can only see this as being an issue if a developer ignores the need for some form of confirmation of a required control input.

Regards,
SB
 
I can only see this as being an issue if a developer ignores the need for some form of confirmation of a required control input.
Yes. As Graham said, developers are looking to audio/visual feedback. In my example, you'd perhaps have the person screaming progressively as you squeeze too tight, and perhaps have a buffer period where if you squeeze too hard they don't pop immediately, but you have a window of opportunity to relax your grip and pretend nothing was amiss. On Move, you could do it entirely by tactile feedback on the trigger resistance. So in contrast, in the Kinect version you'd want to pop a person but there'd be a lag as it gave you a chance to change your mind, whereas on Move it'd be immediate, because a full pull of the trigger is an unambiguous intention.

So though it's not going to limit any games particularly, this limited feedback does introduce design considerations and slow down feedback and game responsiveness in controllerless interfaces, and it underlines the thinking behind Sony's choices, where they had the immediacy of conventional gaming in mind. EyeToy's interfaces had two seconds of delay while you 'charged' a button to activate it, and Kinect's UIs apparently have the same interface mechanic, whereas Move is direct: 'highlight the UI control and press the controller button.'
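A rough sketch of that difference (hypothetical code, assuming a two-second dwell like EyeToy's): the hover-charge button only fires after the hand has lingered in the hotzone, whereas a controller-style button fires on the press itself.

```python
# Hypothetical hover-charge button, in the style of EyeToy/Kinect UIs
# (illustrative only, not actual SDK code).
DWELL_TIME = 2.0  # seconds the hand must hover before the button fires

class HoverButton:
    def __init__(self):
        self.hover_time = 0.0

    def update(self, hand_in_zone, dt):
        """Call once per frame; returns True on the frame the button fires."""
        if hand_in_zone:
            self.hover_time += dt
            if self.hover_time >= DWELL_TIME:
                self.hover_time = 0.0
                return True
        else:
            self.hover_time = 0.0  # leaving the zone resets the charge
        return False

# A Move-style button, by contrast, fires on the press itself:
def move_button(button_down):
    return button_down  # no added latency beyond the press
```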
 
Silent_Buddha said:
I can only see this as being an issue if a developer ignores the need for some form of confirmation of a required control input.

It's more than that.

Without a snappy "trigger" mechanism on hand, the system has to do a round trip of recognition and interpretation. It may not have fine enough granularity to generate timely and consistent feedback even if the dev wants to. This is a limitation of controller-free gaming.

The "lag" here may not be just mechanical/electrical system delay (if you want to measure the end-to-end trigger).

As such, we have only seen relatively large-movement games on EyeToy and Kinect so far. Then again, large swings and trigger actions can be fun too. Storytelling, acting, exercising, sports, and assorted self-expression acts seem appropriate.

The other way is via offline input (e.g., recognizing drawing and other media), which can be interesting also. Milo's voice recognition is another way to sidestep the issue, or expand the horizon -- depending on your perspective. But that too has its own limitations.

...or augment a controller-free system with a trigger-based controller. If finger tracking is supported, it would add more dimensions to the input too. I remember MS has done research into muscle-based computer interfaces; perhaps that can be used in the future. Not sure if it has a fast trigger though.

Educational or public-access applications also seem compatible, due to the accessibility of a controller-free interface.
 
How much is Kinect lagging, exactly? More than 150ms? Does the lag cascade, i.e., get worse for larger movement? Has anyone measured a Kinect gesture lag? (e.g., pressing a virtual button mid-air to activate Nitro in a racing game)

In a "buttons" game, it's usually just a small, constant "trigger" lag. Do you have a link to the RFOM lag?

In both cases, the game will be designed differently to accommodate the acceptable range of user response. It's really up to the users to see if they like the resulting games.
Looks like I confused my PS3 shooters. It's Killzone.
http://www.eurogamer.net/articles/digitalfoundry-vs-console-lag-round-two-article?page=3

Killzone goes as high as 183ms, with a minimum of 150ms, which when combined with some plasmas can be more than 300ms of total visual lag. Since all the measurements of Kinect so far have been visual lag, and we don't know the inherent lag of the displays involved, we can't say anything about the actual input lag of the device. Input lag appears to be in line with a lot of games, though, if you assume a display with 50ms lag (which is quite common - mine is about 83ms).

You can see more lag measurements in their other article too:
http://www.eurogamer.net/articles/digitalfoundry-lag-factor-article?page=3

GTA4 can hit 200ms of input lag. Your assertion that the controller lag would be constant also appears to be in error. Amazingly enough, we all still manage to play games :)
 
According to this: http://news.softpedia.com/news/Rare-Kinect-lag-is-not-a-problem-145032.shtml

Kinect in Rare's sports game has a lag of 150ms without considering the display. If they use gesture recognition, like virtual button presses, it'd be far worse than a physical trigger.

If it's a flailing game, what happens if I wave frantically (like my kid would)? In a traditional game like an FPS, the lag would be fairly constant because the sticks are quick and have limited movement.

The Kinect developers would recognize the delay shortfalls and work around them.

In KZ2, perhaps this is partly why many people complained about the aiming and control scheme. There were also aim acceleration and weapon weight in the scheme. Nevertheless, it is not the yardstick for DS3 games. For better or worse, Guerrilla mentioned that they have changed the controls to be more like CoD.

Is the Macy's Kinect demo still on? I should make a trip to the Valley Fair mall to check it out.

EDIT:
GTA4 can hit 200ms of input lag. Your assertion that the controller lag would be constant also appears to be in error. Amazingly enough, we all still manage to play games :)

Well, those that fall outside the norms for various reasons will have their fair share of criticisms. Besides KZ2, I remember people bitched about the hard-to-control driving in GTA4 at launch.

As I mentioned, the developers will tune their games to work around the latency. The existing control schemes have also been optimized for trigger-based controllers (e.g., once you push a stick forward, the character continues to walk or shoot; it may also depend on how fast the game reads the input, rather than how fast the controller can send it).
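(A tiny, purely illustrative sketch of that last point, with assumed rates: the game's own input-sampling tick can dominate whatever delay the controller itself adds.)

```python
# Illustrative numbers only (not measurements): input the game hasn't
# sampled yet can't act any sooner than the game's own tick.
GAME_TICK_HZ = 30           # assumed game input-sampling rate
CONTROLLER_REPORT_HZ = 250  # assumed controller report rate

sampling_delay_ms = 1000 / GAME_TICK_HZ        # ~33 ms worst case
report_delay_ms = 1000 / CONTROLLER_REPORT_HZ  # 4 ms worst case
print(f"{sampling_delay_ms:.0f} ms sampling vs {report_delay_ms:.0f} ms reporting")
```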

EyeToy and Kinect-like games will need their own gestation period. But there are inherent challenges in the camera-based approach, as Dr. Marks has repeatedly mentioned.
 
Yes. As Graham said, developers are looking to audio/visual feedback. In my example, you'd perhaps have the person screaming progressively as you squeeze too tight, and perhaps have a buffer period where if you squeeze too hard they don't pop immediately, but you have a window of opportunity to relax your grip and pretend nothing was amiss. On Move, you could do it entirely by tactile feedback on the trigger resistance. So in contrast, in the Kinect version you'd want to pop a person but there'd be a lag as it gave you a chance to change your mind, whereas on Move it'd be immediate, because a full pull of the trigger is an unambiguous intention.

Well, perhaps not the greatest example of a need for game-induced lag (versus controller lag), as if you're trying to pop a person, your on-screen hands (or device or whatever) would be visibly squeezing the person, and you could easily have visible cues for imminent poppage. Eyes bulging out "Total Recall" style, for example.

But I get what you're trying to convey, and that just goes back to games that are suitable to the control method. There's always going to be some overlap, but there will always be games that are more suited to Kinect and games that are more suited to Move.

So though it's not going to limit any games particularly, this limited feedback does introduce design considerations and slow down feedback and game responsiveness in controllerless interfaces, and it underlines the thinking behind Sony's choices, where they had the immediacy of conventional gaming in mind. EyeToy's interfaces had two seconds of delay while you 'charged' a button to activate it, and Kinect's UIs apparently have the same interface mechanic, whereas Move is direct: 'highlight the UI control and press the controller button.'

Yes, this is one of the things that puzzles me. Considering Kinect has an array of mics with which to do voice recognition, I'm somewhat surprised that for things such as UI selection they didn't include some form of voice-activated button pushing. Hell, saying BING when your onscreen hand is over the item you want, for example, would be elegant, quick, and suitable. It's obviously not the correct solution for all situations, so you'd always have the "hover" selection method also, but it would have offered something as quick as a button push.

Regards,
SB
 
Yes, this is one of the things that puzzles me.
What puzzles me more is why they don't have you reach towards the screen to press the virtual button! Seems an ideal implementation of their depth detection, taking EyeToy's onscreen buttons and having you actually reach forwards to press them. It wouldn't be at all hard in theory - for each hotzone, test if the hand is in that location, and then if so, test if it's closer than a threshold. If so, the button is pressed. Hovering to select completely ignores their potential!
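That test is simple enough to sketch (purely hypothetical code, assuming the system exposes a per-frame hand position in screen space plus a depth value):

```python
# Hypothetical sketch of the "reach forward to press" idea; assumes a
# per-hand screen-space position plus depth in metres is available.
PRESS_DEPTH_M = 1.8  # assumed threshold: hand nearer than this = "pressed"

def button_pressed(hand_x, hand_y, hand_depth_m, hotzone):
    # hotzone = (x_min, y_min, x_max, y_max) in screen coordinates
    x_min, y_min, x_max, y_max = hotzone
    in_zone = x_min <= hand_x <= x_max and y_min <= hand_y <= y_max
    return in_zone and hand_depth_m < PRESS_DEPTH_M

print(button_pressed(0.5, 0.5, 1.6, (0.4, 0.4, 0.6, 0.6)))  # True: reaching forward
print(button_pressed(0.5, 0.5, 2.2, (0.4, 0.4, 0.6, 0.6)))  # False: hovering only
```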
 
How much is Kinect lagging, exactly? More than 150ms? Does the lag cascade, i.e., get worse for larger movement? Has anyone measured a Kinect gesture lag? (e.g., pressing a virtual button mid-air to activate Nitro in a racing game)

From all they've shown so far, their pose estimation is based on image recognition (using the depth camera, but still), and they also showed that for each frame they can generate an appropriate skeleton (i.e., the skeleton is always snapped to the picture the camera is showing; there's no drifting)... So it seems that large movements do not interfere with lag, and that the faster the movement, the bigger the gap between your movement and the avatar's movement will be (in terms of distance; the lag in ms is probably constant all the way).

For gestures, I'd guess the lag would be less perceptible... It's easier to spot lag when you are tracking 1:1 than when making a gesture, especially since you have to complete the whole gesture first.
 
What puzzles me more is why they don't have you reach towards the screen to press the virtual button! Seems an ideal implementation of their depth detection, taking EyeToy's onscreen buttons and having you actually reach forwards to press them. It wouldn't be at all hard in theory - for each hotzone, test if the hand is in that location, and then if so, test if it's closer than a threshold. If so, the button is pressed. Hovering to select completely ignores their potential!

Yes, that also would have made sense; after all, they show something similar in Joyride, where you push your hands forward to activate the Nitro speed boost. Heck, it may have even been fun if you could "slap" the button.

Regards,
SB
 
Yes, that also would have made sense; after all, they show something similar in Joyride, where you push your hands forward to activate the Nitro speed boost. Heck, it may have even been fun if you could "slap" the button.

Regards,
SB

I would have thought the issue here is when people might accidentally press the button by pointing at the screen or stretching arms - or maybe a sudden move forward being misread?

I think combining sound/word with gesture/button pressing would be ideal.
 
What puzzles me more is why they don't have you reach towards the screen to press the virtual button! Seems an ideal implementation of their depth detection, taking EyeToy's onscreen buttons and having you actually reach forwards to press them. It wouldn't be at all hard in theory - for each hotzone, test if the hand is in that location, and then if so, test if it's closer than a threshold. If so, the button is pressed. Hovering to select completely ignores their potential!

Wouldn't you have to recalibrate that a lot? It may be hard to keep track of. I think we may see something like this, but it doesn't seem as easy to me as you suggest.
 
I would have thought the issue here is when people might accidentally press the button by pointing at the screen or stretching arms - or maybe a sudden move forward being misread?
Well, I don't know about you, but I don't personally have spontaneous arm thrusts towards the screen. ;) There's a tiddly chance of a false positive if the player inadvertently puts their hand into the button zone, but there's likewise a chance of accidentally activating the trigger buttons on a dual-stick controller if you knock it against your lap/leg. I don't see a low-risk problem as a sound basis for this interface choice.

Wouldn't you have to recalibrate that a lot?
Nope. 2D position is absolute based on the camera, and depth is 'absolute' and doesn't drift over time.

Then again, the camera isn't stationary; it keeps roaming around to keep the skeleton in view, which makes the positioning relative. Still, if they know where your hand is to power up a button, as they do, then depth isn't a problem. Keep exactly their current code, except instead of charging a timer by 2D presence, activate it on a minimum depth. If it's not as easy as that, there's something wrong with Kinect's libraries!
 
From all they've shown so far, their pose estimation is based on image recognition (using the depth camera, but still), and they also showed that for each frame they can generate an appropriate skeleton (i.e., the skeleton is always snapped to the picture the camera is showing; there's no drifting)... So it seems that large movements do not interfere with lag, and that the faster the movement, the bigger the gap between your movement and the avatar's movement will be (in terms of distance; the lag in ms is probably constant all the way).

For gestures, I'd guess the lag would be less perceptible... It's easier to spot lag when you are tracking 1:1 than when making a gesture, especially since you have to complete the whole gesture first.

From the user's perspective, if he/she moves quickly...
You actually have lag at several levels (from camera framerate, to skeleton processing, to display latency and gesture interpretation). The appropriate feedback will vary, from simple collision to final activation.

Gestures usually have a small delay after activation, or, if you're unlucky, retries. Again, the developers may be able to hide the gaps somewhat with predictive schemes, or just distract the users.
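As a back-of-the-envelope sketch (every number here is an assumption, none are measurements), the lag the user feels is the sum of all those stages:

```python
# Back-of-the-envelope only; all figures are assumptions, not measurements.
# The point is that each pipeline stage adds up before feedback appears.
pipeline_ms = {
    "camera exposure/readout (30 fps)": 33,
    "skeleton processing": 50,
    "gesture interpretation": 50,
    "game logic + display": 50,
}
total = sum(pipeline_ms.values())
print(f"end-to-end: {total} ms")  # ~183 ms before the user sees a response
```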

Went to see the Kinect demo, but the Macy's people told me the demos are for Thursday to Sunday only.
 