Xbox One (Durango) Technical hardware investigation

I wanna point my finger at the screen and yell 'bang' for firing. Finger tracking required.

The hand tracking will be suitable for that, assuming you do the traditional "gun hand" type thing with the thumb sticking straight up, index finger pointing forward, and other fingers clenched.

For something like that, Kinect 2.0 should theoretically be able to track it just fine, even without the thumb sticking up.

Regards,
SB
 
Finger tracking can be done on the RGB (YUV :p) feed. The skeletal tracking will help direct the hand tracking, I imagine. And unless someone really wants a virtual air-piano game, is perfect finger tracking really needed? Honestly? Open hand, closed hand, fist, spread fingers, cupped hands, pointing finger, and some gestures for interfacing would cover it. A completely virtual hand with 1:1 tracking of the player carefully picking up small beads and rolling them down their hand into a jar is just overkill. The most immersive sorts of games, like Datura and Heavy Rain, can be experienced very naturally with coarse gestures rather than finger tracking.
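
As a rough sketch of what such a coarse gesture vocabulary might look like in code (the pose labels, input features, and thresholds are all made up for illustration, not from any leak):

```python
# Minimal sketch of the "coarse gestures" idea: classify the hand into a
# small discrete set of poses instead of tracking every finger 1:1.
# Pose labels and the two normalized features are hypothetical.
from enum import Enum, auto

class HandPose(Enum):
    CLOSED_FIST = auto()
    OPEN_HAND = auto()
    SPREAD_FINGERS = auto()
    POINTING = auto()

def classify_hand(openness: float, spread: float, index_extended: bool) -> HandPose:
    """Map a few cheap features (each 0..1) to one of a handful of coarse poses."""
    if openness < 0.3:
        # An index finger out of an otherwise closed hand reads as pointing.
        return HandPose.POINTING if index_extended else HandPose.CLOSED_FIST
    return HandPose.SPREAD_FINGERS if spread > 0.6 else HandPose.OPEN_HAND

# Example: mostly closed hand with the index finger extended.
print(classify_hand(openness=0.2, spread=0.1, index_extended=True))  # HandPose.POINTING
```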

MS has already experimented with finger tracking using the current Kinect on 360:

http://www.youtube.com/watch?v=99fWx1DzPY4
http://www.youtube.com/watch?v=TQb4Z8DBVCs
http://www.youtube.com/watch?v=Q-1HHaGexu4

The increased resolution and precision mean that better tracking can be done at longer distances from the camera, so I'm sure the hardware should be able to deal with this if a developer wants a virtual hand to be tracked.
 
vgleaks said:
End-to-end system latency for Kinect is measured as the time from light hitting the sensor through to the display outputting an update based on that input
I guess this is based on 60fps, correct?

Does the added latency from TV displays have any sort of psychological effect on how the motion mapping is perceived (laggy/snappy/etc.)? Does it even need to be accounted for? If so, does anyone know what value they might assume? Is there some industry-standard figure used for generic TV latency? If we assume something like 60ms for the TV display on top of the 60ms for Kinect+Xbox, it seems low enough that people won't notice for motion mapping (I imagine gestures will always be trouble). I vaguely recall a study that put the sweet spot for natural-feeling latency in human perception at something like 30ms to 280ms (my memory is failing me, so don't take it as gospel).
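
Back-of-envelope version of that budget, using the 60ms Kinect+Xbox figure from the leak and the 60ms TV guess above:

```python
# Quick latency-budget arithmetic. The TV figure is the guess from the
# post, not a measured value; the comfort band is the loosely remembered
# 30-280 ms range mentioned above.
KINECT_PLUS_CONSOLE_MS = 60   # sensor light-in to display-out, per the leak
TV_DISPLAY_MS = 60            # assumed generic TV processing delay

total_ms = KINECT_PLUS_CONSOLE_MS + TV_DISPLAY_MS
print(f"End-to-end motion-to-photon estimate: {total_ms} ms")  # 120 ms

low, high = 30, 280
print("Within comfort band:", low <= total_ms <= high)  # True at 120 ms
```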
 
I wanna point my finger at the screen and yell 'bang' for firing. Finger tracking required.
I doubt finger tracking of that accuracy is possible with current consumer tech. At distance, a tiny difference in the index finger's direction translates to a large difference in screen position, and seeing only the front of the finger makes determining that direction extremely hard. You'd have to resolve sub-degree accuracy from the depth difference between the front and back of the finger over the few pixels it occupies in the camera. You'd need amazing accuracy. If the finger point is instead tied to arm direction, taking the targeting vector from forearm direction, then Kinect can do that.
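
Some rough numbers to show how harsh that requirement is (the viewing distance and finger length are assumed values, not specs):

```python
import math

# Making the pointing-accuracy argument concrete. All figures here are
# illustrative assumptions, not sensor specs from the leak.
player_to_screen_m = 2.5   # assumed couch-to-screen distance
finger_length_m = 0.08     # assumed visible index finger length

# A 1 degree error in finger direction moves the pointed-at spot by:
aim_error_m = player_to_screen_m * math.tan(math.radians(1.0))
print(f"1 deg aim error -> {aim_error_m * 100:.1f} cm on screen")   # ~4.4 cm

# To resolve 1 degree from the finger alone, the front-to-back depth
# difference you'd need to measure over its length is only:
depth_delta_m = finger_length_m * math.sin(math.radians(1.0))
print(f"Required depth resolution: {depth_delta_m * 1000:.1f} mm")  # ~1.4 mm

# Millimetre-level depth over the handful of pixels a finger covers is far
# beyond the sensor, hence taking the targeting vector from the forearm.
```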
 
I doubt finger tracking of that accuracy is possible with current consumer tech. At distance, a tiny difference in the index finger's direction translates to a large difference in screen position, and seeing only the front of the finger makes determining that direction extremely hard. You'd have to resolve sub-degree accuracy from the depth difference between the front and back of the finger over the few pixels it occupies in the camera. You'd need amazing accuracy. If the finger point is instead tied to arm direction, taking the targeting vector from forearm direction, then Kinect can do that.

Ya, I was joking (mostly). Hopefully in my lifetime (and I'm old :p).
 
Really? You think they can process faster? An excellent controller-based game can hope for button-press-to-action-on-screen of 3 frames, 50ms. This supposed Kinect goes from motion to reaction on screen in ~60ms (4 frames). Considering it's probably a 30fps camera, you're talking 33ms to capture the frame plus 2 frames of processing. In other words, the processing time from receipt of input to stuff on screen is the same as for a hard controller: 2 frames. What that says to me is that processing capacity is not the bottleneck.
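
Spelling out that frame math, assuming the 60fps display pipeline and 30fps camera speculated above:

```python
# The frame arithmetic from the post. Frame times assume a 60 fps display
# pipeline and a 30 fps camera, as speculated; nothing here is measured.
DISPLAY_FRAME_MS = 1000 / 60   # ~16.7 ms per display frame
CAMERA_FRAME_MS = 1000 / 30    # ~33.3 ms per camera frame

controller_path_ms = 3 * DISPLAY_FRAME_MS      # ~50 ms button-to-screen
kinect_capture_ms = CAMERA_FRAME_MS            # wait for the camera frame
kinect_processing_ms = 2 * DISPLAY_FRAME_MS    # same 2 frames as a pad
kinect_path_ms = kinect_capture_ms + kinect_processing_ms

print(f"Controller: {controller_path_ms:.0f} ms")  # ~50 ms
print(f"Kinect:     {kinect_path_ms:.0f} ms")      # ~67 ms, i.e. the ~60 ms / 4 frames quoted
```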

This is with the processing for the Kinect 2.0 that will be packed in the box...

If they had more processing headroom, they could handle 1080p 60fps or 720p 120fps.

Also, it was never clarified whether this is with full skeletal tracking, and if so, how many limbs/joints on how many users simultaneously.

Processing capacity of the existing Kinect 2.0 setup may not be "the bottleneck", but a lot of that has to do with Kinect 2.0 not pushing the envelope (as it should for a "killer app").
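
For a rough sense of what those modes would demand, here's the pixel-rate arithmetic, treating processing cost as simply proportional to pixels per second (a simplifying assumption) and taking the rumored 1080p30 color feed as the baseline:

```python
# Quick pixel-rate comparison behind the "1080p 60fps or 720p 120fps"
# headroom claim. Assumes processing cost scales with pixels/second.
def pixel_rate(w: int, h: int, fps: int) -> int:
    return w * h * fps

baseline = pixel_rate(1920, 1080, 30)   # 1080p30, the rumored color feed
options = {
    "1080p60": pixel_rate(1920, 1080, 60),
    "720p120": pixel_rate(1280, 720, 120),
}
for name, rate in options.items():
    print(f"{name}: {rate / baseline:.2f}x the 1080p30 pixel rate")
# Both land near 2x, so either mode would need roughly double the throughput.
```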
 
This is with the processing for the Kinect 2.0 that will be packed in the box...

If they had more processing headroom, they could handle 1080p 60fps or 720p 120fps.

Also, it was never clarified whether this is with full skeletal tracking, and if so, how many limbs/joints on how many users simultaneously.

Processing capacity of the existing Kinect 2.0 setup may not be "the bottleneck", but a lot of that has to do with Kinect 2.0 not pushing the envelope (as it should for a "killer app").

From the Kotaku leak (which has some of the same info as the vgleaks one), it's now 25 joints per player (up from 20 with the current Kinect on 360), including a joint for the thumb and one grouping the rest of the fingers as one (probably to detect hand gestures with full skeletal tracking).
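
One way a thumb joint plus a single finger-group joint could feed hand gestures; the joint names and threshold here are hypothetical, since the leak only mentions the joint count:

```python
# Sketch of detecting open vs. closed hand from skeletal joints alone.
# Joint names and the distance threshold are made up for illustration;
# the leak only says there are 25 joints including thumb and finger group.
from dataclasses import dataclass
import math

@dataclass
class Joint:
    x: float
    y: float
    z: float  # meters, sensor space

def distance(a: Joint, b: Joint) -> float:
    return math.dist((a.x, a.y, a.z), (b.x, b.y, b.z))

def hand_is_open(wrist: Joint, finger_group: Joint, threshold_m: float = 0.08) -> bool:
    # Extended fingers put the finger-group joint farther from the wrist.
    return distance(wrist, finger_group) > threshold_m

wrist = Joint(0.0, 0.0, 2.0)
fingers_extended = Joint(0.0, 0.10, 2.0)
print(hand_is_open(wrist, fingers_extended))  # True
```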
 
From the Kotaku leak (which has some of the same info as the vgleaks one), it's now 25 joints per player (up from 20 with the current Kinect on 360), including a joint for the thumb and one grouping the rest of the fingers as one (probably to detect hand gestures with full skeletal tracking).

Yes, but the latency quote doesn't cover whether that is with full skeletal tracking on the maximum number of users, or just feeding the IR depth map.
 
Processing capacity of the existing Kinect 2.0 setup may not be "the bottleneck", but a lot of that has to do with Kinect 2.0 not pushing the envelope (as it should for a "killer app").

I think the biggest problems for Kinect 1.0 were that it couldn't get as much data to the console as quickly as one would have hoped, and that once the data arrived you had to divert processing resources to deal with it. Plus you can't invest in major titles without having the hardware in everyone's home, so economically that's a huge difference right there. I think all these areas are fixed for Durango by the looks of it.
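
A quick, very approximate bandwidth check on the data-rate point (the stream formats and the USB 2.0 ceiling are ballpark figures, not documented specs):

```python
# Rough bandwidth check on the "couldn't get data to the console fast
# enough" point. The stream formats below are approximations of Kinect 1's
# commonly cited modes, and ~35 MB/s is a ballpark for usable USB 2.0
# throughput; treat all of it as illustrative.
def stream_mb_s(w: int, h: int, bytes_per_px: float, fps: int) -> float:
    return w * h * bytes_per_px * fps / 1e6

color = stream_mb_s(640, 480, 1, 30)   # ~9.2 MB/s of raw 8-bit color
depth = stream_mb_s(640, 480, 2, 30)   # ~18.4 MB/s of 16-bit depth
print(f"Kinect 1 streams: ~{color + depth:.1f} MB/s vs ~35 MB/s usable USB 2.0")
# Not much headroom left, so richer feeds would need a faster link on Durango.
```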

As a result, I think everything is in place for a lot of really ambitious applications for Kinect 2.0 imho.
 
This is with the processing for the Kinect 2.0 that will be packed in the box...

If they had more processing headroom, they could handle 1080p 60fps or 720p 120fps.

Also, it was never clarified whether this is with full skeletal tracking, and if so, how many limbs/joints on how many users simultaneously.

Processing capacity of the existing Kinect 2.0 setup may not be "the bottleneck", but a lot of that has to do with Kinect 2.0 not pushing the envelope (as it should for a "killer app").
Aah, they're not "pushing the envelope". Is anyone else doing anything even remotely comparable in the space of 3D cameras? They deliver an almost fourfold increase in resolution (and, according to them, even more in effective terms because the sensor is more accurate), and you're not happy because it's not a 48x improvement? In 3 years? You're not asking for much at all, are you? :) I'm guessing it's not processing power limiting it, but the cost of the sensor.
 
Finger tracking can be done on the RGB (YUV :p) feed. The skeletal tracking will help direct the hand tracking, I imagine. And unless someone really wants a virtual air-piano game, is perfect finger tracking really needed? Honestly? Open hand, closed hand, fist, spread fingers, cupped hands, pointing finger, and some gestures for interfacing would cover it. A completely virtual hand with 1:1 tracking of the player carefully picking up small beads and rolling them down their hand into a jar is just overkill. The most immersive sorts of games, like Datura and Heavy Rain, can be experienced very naturally with coarse gestures rather than finger tracking.

It would be awesome, though only truly so with tactile feedback. But I thought Kinect not being able to track wrist rotation was a very big downside, so if they can do that now, that would be a huge plus already.
 
After the competition's presentation, it's best for MS if it begins to push the GPU and RAM of the next Xbox instead of Kinect 2 ;)
Probably not much they can do about that at this point. Right now their best bet is probably to double down on Kinect. The PS Eye will not easily be able to replicate its functionality.
 
Probably not much they can do about that at this point. Right now their best bet is probably to double down on Kinect. The PS Eye will not easily be able to replicate its functionality.

If they double down on Kinect, I expect them to unravel, lose their following and ultimately the sales figures they built up in the 360 era.

Should have gone with the "secure the core, ease in the casuals" approach...
 
If they double down on Kinect, I expect them to unravel, lose their following and ultimately the sales figures they built up in the 360 era.

Should have gone with the "secure the core, ease in the casuals" approach...

Gotta wait and see what they come out with before we know if they're in trouble.

The rumors seem legit but who actually knows with this stuff.
 