Was that the projected-pattern solution, with the TOF camera simply locked up to prevent its use by other parties?
Sorry, guys, I find the latest discussion on this interesting, but I would also like to read your opinions on the questions in my previous post. If those questions were discussed before, please provide a link.
Thank you.
Intuitively, deducing poses from 2D data should be very doable, given some amount of assumptions and context. We can easily recognize the way people stand with one eye closed, or indeed by looking at a 2D photo. Human vision is complex, but it shows that source data with depth removed is still sufficient.
Quote:
That motion fighter bit confirms my suspicions. Full-body tracking is not viable, since most gamers do not have the muscle coordination of athletes or superhuman game protagonists. I mean, if I record myself jumping and climbing, I'm not going to look as elegant as Drake, by a long shot. They'll all have to resort to some sort of gesture system.

That's just one game; it'll be on a case-by-case basis depending on the vision of the developer. Because that game has you doing complex pro moves, it's going to have to be gesture-based.
However, with Sony's Move, use requires at minimum between three and five accessories for a single player.
How can Sony reconcile the use of Move while maintaining their current controller layout for the regular games that require it, without breaking the bank in terms of overall expense?
Quote:
I'm no expert, but I don't see why not... other than the lighting conditions and potential overhead. In fact, I'm sure I saw similar demos at ECTS before the EyeToy first came out.

Thank you for your reply.
Quote:
Intuitively, deducing poses from 2D data should be very doable, given some amount of assumptions and context. [...]

I see your point.
For posterity: if you can recognize the face, you can infer the front of the body, because people generally can't turn their heads more than 70~80°. If you assume symmetry of the torso, you can infer its angle (and some relative depth information). You can assume symmetry of the arms, joint constraints, and constant limb lengths to "train" your system to an individual. As the arms appear thinner or thicker, longer or shorter, you can infer changes in depth and angle, and so on.
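To put a number on that foreshortening idea, here's a toy Python sketch; the pixel values and the near-orthographic assumption are mine for illustration, not anything shown in the demos:

```python
import math

def limb_out_of_plane_angle(observed_px, calibrated_px):
    # Under (near-)orthographic projection, a limb whose calibrated
    # on-screen length is L appears as L * cos(theta) when tilted by
    # theta toward or away from the camera, so theta = acos(ratio).
    # The sign (toward vs. away) is ambiguous from this cue alone.
    ratio = min(observed_px / calibrated_px, 1.0)  # noise can push ratio above 1
    return math.degrees(math.acos(ratio))

# A forearm calibrated at 120 px that now spans only 85 px:
print(limb_out_of_plane_angle(85, 120))  # ~44.9 degrees, toward or away from the camera
```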
One could argue that depth info makes the processing easier, as it resolves ambiguities. I don't know how much of a factor that really is, though. Some ambiguities can already be resolved with assumptions about the joint structure (a knee will never tilt forward, and such). Background separation is maybe the big thing here, but it was never demonstrated whether the tech actually does that well -- the demo stages were always wide and empty, and the demo setups were all sideways, far away from the nearest back wall.
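For what the joint-constraint argument could look like in practice, a toy sketch; the knee limits and the candidate format are made up for illustration:

```python
# Pruning pose hypotheses with joint limits: two skeletons that project
# to the same 2D silhouette, only one of which is anatomically possible.

KNEE_FLEXION_LIMITS = (0.0, 150.0)  # degrees; negative would mean bending forward

def plausible(candidate):
    lo, hi = KNEE_FLEXION_LIMITS
    return all(lo <= angle <= hi for angle in candidate["knee"])

hypotheses = [
    {"knee": [35.0, 40.0]},   # both knees bent the normal way
    {"knee": [-35.0, 40.0]},  # one knee bent forward: impossible
]
print([plausible(h) for h in hypotheses])  # [True, False]
```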
Quote:
I have seen some work on 2D skeleton tracking. I think you may be able to find it on YouTube.
Here's a paper on a possible approach:
http://dircweb.king.ac.uk/papers/Martínez del Rincón09_32358224/bmvc_abstract.pdf
(From a Google search result; not sure if it's applicable to gaming.)
EDIT:
On a related note, a new studio has been formed to do motion games for the consoles:
http://www.joystiq.com/2010/04/15/side-kick-founded-to-work-on-motion-based-games-for-next-gen-ga/

Thank you.
Quote:
NATAL's hardware only provides a depth map, and I dare say those images are still 2D material. If it featured two cameras, then we could speak about real 3D input.

No. It's not a 3D spatial/volumetric capture (which is 3D scanning a la medical imaging and model creation), but it is capturing three positional dimension values. Stereoscopic depth perception isn't the only way to determine the distance to an object, but it is the natural way, so it gains an unfair reputation.
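To illustrate the "three positional values" point: standard pinhole back-projection turns each depth-map pixel into a full 3D coordinate. A sketch with made-up intrinsics:

```python
def backproject(u, v, depth_mm, fx, fy, cx, cy):
    # Pinhole model: a depth image stores one value per pixel, but combined
    # with the pixel coordinates and the camera intrinsics it yields a full
    # (X, Y, Z) position.
    z = depth_mm
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return x, y, z

# Illustrative intrinsics; NOT Natal's actual calibration values.
fx = fy = 570.0
cx, cy = 320.0, 240.0
print(backproject(400, 200, 2000.0, fx, fy, cx, cy))
# -> (~280.7, ~-140.4, 2000.0) in mm: real 3D coordinates from one sensor
```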
You need a PDF viewer to read the paper.
Quote:
Skeletal tracking on a 2D camera is possible, but it will still only be skeletal tracking in 2D. A 2D camera can't tell if a leg has moved forward; to the camera it hasn't moved at all.

Maybe the preliminary calibration I talked about could be useful here. Once the system has recognized the length of a thigh, for instance, any further change in that length should be interpreted as rotation. Other parameters, such as the position of a foot, can help to determine whether the rotation is forward or backward.
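A rough sketch of how that calibration plus a foot cue could combine; all the numbers and the foot heuristic are illustrative assumptions, not a claim about any shipping system:

```python
import math

def leg_rotation(observed_thigh_px, calibrated_thigh_px, foot_y, standing_foot_y):
    # Foreshortening of the calibrated thigh length gives the rotation
    # magnitude; a second cue (has the foot risen in the image compared
    # with its standing position?) picks the direction. The 10 px noise
    # margin is arbitrary.
    ratio = min(observed_thigh_px / calibrated_thigh_px, 1.0)
    angle = math.degrees(math.acos(ratio))
    # Image y grows downward, so a clearly higher foot suggests a forward swing.
    direction = "forward" if foot_y < standing_foot_y - 10 else "backward"
    return angle, direction

print(leg_rotation(70, 100, foot_y=380, standing_foot_y=430))  # -> (~45.6, 'forward')
```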
ShadowRunner said:
You're simply not going to want to do it on a 2D camera alone. Much better off using face tracking (which can provide depth info from the distance between the two eyes) and the two wands (also depth info), as in the puppetry demo, and then using 2D skeletal tracking to fill in the gaps.

Yes, I remember this demo and your explanation is correct. Thank you.
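For reference, the depth-from-eye-spacing trick ShadowRunner mentions is just similar triangles; a sketch with placeholder values:

```python
def face_depth_mm(eye_gap_px, focal_px=570.0, ipd_mm=63.0):
    # Pinhole model: an object of true width W at distance Z spans
    # W * f / Z pixels, so Z = W * f / pixels. The average adult
    # interpupillary distance (~63 mm) is the known width; the focal
    # length here is a placeholder, not a real PS Eye value.
    return ipd_mm * focal_px / eye_gap_px

print(face_depth_mm(18.0))  # eyes 18 px apart -> ~1995 mm from the camera
```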
Quote:
No. It's not a 3D spatial/volumetric capture (which is 3D scanning a la medical imaging and model creation), but it is capturing three positional dimension values. [...]

Yes, I see the difference.
Quote:
As for depth camera capabilities, put your hand flat upon your chest. Now move it a few cm forward and twist it ever so slightly to one side. From the size and shape of the hand in a 2D image, it will be very hard to track and determine that the hand has moved forwards, whereas a depth camera will have exactly that info. [...]

Another example of a difficult position/rotation to track (for a standard camera) is when a leg is rotated forwards or backwards, isn't it? It's the same example I gave above in response to ShadowRunner. Do you think the possible solutions I pointed out are likely to be considered?
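Putting rough numbers on that hand-on-chest example (the distances are assumed for illustration):

```python
# A hand at 2.00 m moved 5 cm toward the camera:
z0, z1 = 2.00, 1.95
apparent_growth = z0 / z1 - 1          # pinhole: image size scales as 1/Z
print(f"{apparent_growth:.1%}")        # ~2.6% bigger: buried in image noise
print((z0 - z1) * 1000, "mm")          # 50.0 mm: obvious in a depth map
```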
Quote:
Though the theory of image-based human skeleton tracking is sound, I consider it highly implausible that a solution could be found in the consumer space. Such technology ought to first appear in movies, where tracking real actors without blue-screening and point markers for mo-cap would be a huge advance. If it's not being used there yet, I doubt it'll appear first in a games console.

Then you don't "believe in NATAL"? Sorry if I didn't understand.