PlayStation Move technology thread

As I said, I'm almost 100% sure they'd use a dual-camera system so they can at least get stereo vision. Just look at a game like Start the Party or EyePet. If you play that in 3D, and it can also show a video feed of yourself in 3D at the same time, you almost have the complete interactive virtual world experience, or if you don't include collisions, you can at the very least make it look that way. ;)

They could also in that case build in an option to filter for infrared only (either through hardware or software) to help with depth perception/bad lighting conditions, but newer lenses may be good enough by that time to get by with just two proper 'regular' cameras.
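The stereo idea above comes down to standard two-camera geometry: a feature that shifts more between the two views is closer. A minimal sketch, where the focal length, camera spacing, and disparity values are all made up for illustration (not PS Eye specs):

```python
# Sketch of how a two-camera (stereo) setup recovers depth, assuming an
# idealized pinhole model with a known focal length and camera separation.
# All numbers are hypothetical, not actual PlayStation Eye specs.

def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Depth Z = f * B / d: a feature shifted d pixels between the two
    views lies at distance Z in front of the camera pair."""
    if disparity_px <= 0:
        raise ValueError("feature must shift between the two views")
    return focal_px * baseline_m / disparity_px

# Example: 600 px focal length, cameras 6 cm apart, 18 px disparity
z = depth_from_disparity(600, 0.06, 18)
print(round(z, 2))  # 2.0 (metres)
```

Note the trade-off this exposes: depth resolution falls off with the square of distance, which is one reason depth from stereo alone gets coarse at living-room ranges.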

Sony has a one-lens 3D technology that they use in their stereo cameras. They even have a full-body motion controller like Kinect, called ICU; maybe they will use that for next gen.
 
I don't know, I only tried it with my hands, but Kinect has the same problem because some materials don't reflect IR light.
Kinect isn't dependent on the amount of light reflected, but on the distortions of a known pattern, which, although affected by different amounts of absorption, is a far more robust solution than a straight intensity evaluation. If sophisticated software could be developed to map and track areas of different intensity, e.g. calibrating to a standard pose, with a fast enough refresh it could work, but it would likely be prone to contrast issues like EyeToy. This is where projected patterns and pulsed light come in, which are probably covered completely by patents.
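The "distortions of a known pattern" idea can be sketched in one line of geometry: the sensor projects a dot pattern from a point offset from the camera, and a dot's sideways shift relative to where it would land on a calibrated reference plane encodes depth. A toy one-dot model with made-up numbers (the real pipeline involves pattern correlation and calibration, and is far more involved):

```python
# Toy model of structured-light depth: a projected dot shifted by
# `shift_px` pixels relative to its position on a reference plane at
# z_ref_m is closer than that plane. Focal length and projector-camera
# baseline below are hypothetical, not real Kinect parameters.

def depth_from_pattern_shift(z_ref_m, focal_px, baseline_m, shift_px):
    """1/Z = 1/Z_ref + shift / (f * B): triangulation against a
    calibrated reference plane instead of a second camera."""
    return 1.0 / (1.0 / z_ref_m + shift_px / (focal_px * baseline_m))

# A dot shifted 12 px from its 3 m reference position
# (575 px focal length, 7.5 cm baseline, both assumed)
z = depth_from_pattern_shift(3.0, 575, 0.075, 12)
print(round(z, 3))  # ~1.635 (metres)
```

The key point the post makes survives in this model: depth comes from *where* the dot lands, not *how bright* it is, so partial IR absorption only matters once the dot is too dim to locate at all.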
 
because I did this with just the PlayStation Eye tracking my hand, by seeing the reflections of the light coming from my laptop screen.

Yeah, Dr. Marks mentioned (in one of the interviews) that Sony filed for this patent while working on the PSEye. He discovered it accidentally when he saw his own video on a TV screen even though all the office lights were turned off. It turns out that the PSEye is sensitive enough to see him under low-light conditions (i.e., illuminated by light from the TV screen).

I presume you need to be quite close to the box, maybe like with a laptop?
 
Kinect isn't dependent on the amount of light reflected, but on the distortions of a known pattern, which, although affected by different amounts of absorption, is a far more robust solution than a straight intensity evaluation. If sophisticated software could be developed to map and track areas of different intensity, e.g. calibrating to a standard pose, with a fast enough refresh it could work, but it would likely be prone to contrast issues like EyeToy. This is where projected patterns and pulsed light come in, which are probably covered completely by patents.

They can track the depth by size like they do with the Move, and face tracking with the PlayStation Eye already works in the dark using only the light from the TV.

Edit: I said Kinect will have the same problem because some materials don't reflect IR light, and if something is black the system will just see it as part of the background.
 
They can track the depth by size like they do with the Move, and face tracking with the PlayStation Eye already works in the dark using only the light from the TV.
They're tracking the face there, which provides several points of contrast always present against a predictable background: eyes and mouth contrasting with skin tone. Full-body tracking a la Kinect would require 'seeing' the torso and limbs. A pale top against a similarly pale background throws up issues. EyeToy failed to detect my hand against a pale-blue wall in bright conditions, as there was not enough contrast to register the pixels as different within the threshold needed to accommodate noise. If PSEye were to read my hand as forwards, and I then move it to one side so that similarly coloured pixels are behind it, how does it know that the pixels occupying that position are background and not the hand moved back? Given human physiology, it would be possible to determine what's happened, as clearly the hand can't move back 3 metres to where the wall is in one frame, but you'd need some funky code-voodoo to pull off a robust full-body tracking system achieved optically from a single camera. The best I know of so far is an 8-camera motion capture system. There's a YouTube vid of a 3-camera system on PS3 Linux, but no skeleton to see how effective it actually is.
 
ShadowRunner said:
The sub-millimetre accuracy is only in reference to the x/y positioning, I believe. z positioning is accurate to a couple of centimetres, I think. The ultrasonic sensor was for detecting the z depth, and would likely have been much more accurate. Most likely they felt the visual method was sufficient in terms of accuracy and scrapped the ultrasonic sensor to reduce cost and complexity. There's no point in having sub-millimetre z accuracy if the end result doesn't affect the game much; x/y accuracy is much more important.

Okay, that makes sense. I posted some back-of-the-envelope calculations in the old thread stating I didn't think it was possible to get sub-millimetre depth accuracy without u/s, based on the camera resolution and the distance between camera and controller.

I was wondering how they were doing it, as I didn't think it was possible; if it turns out they aren't, then it all makes sense!
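A version of that back-of-the-envelope check: the Move's depth comes from the glowing ball's apparent size, Z ≈ f·D/s, so a one-pixel error in the measured diameter s maps to a depth error of roughly Z/s. The focal length and ball diameter below are rough guesses, not official specs:

```python
# Back-of-the-envelope depth-resolution estimate for size-based tracking.
# Z = f * D / s, where D is the real ball diameter and s its apparent
# diameter in pixels, so dZ/ds ~ -Z/s: one pixel of size error costs
# about Z/s of depth. Parameter values are assumptions.

def depth_error_per_pixel(focal_px, ball_diam_m, distance_m):
    size_px = focal_px * ball_diam_m / distance_m  # apparent diameter
    return distance_m / size_px                    # dZ for a 1 px size error

# ~600 px focal length, ~4.5 cm ball, player 2 m from the camera
err = depth_error_per_pixel(600, 0.045, 2.0)
print(round(err * 1000))  # ~148 mm per whole pixel of size error
```

Even generous sub-pixel fitting (say 1/10 px) only brings that down to roughly 15 mm at 2 m, which is consistent with centimetre-level z accuracy rather than sub-millimetre, matching the post's conclusion.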
 
They're tracking the face there, which provides several points of contrast always present against a predictable background: eyes and mouth contrasting with skin tone. Full-body tracking a la Kinect would require 'seeing' the torso and limbs. A pale top against a similarly pale background throws up issues. EyeToy failed to detect my hand against a pale-blue wall in bright conditions, as there was not enough contrast to register the pixels as different within the threshold needed to accommodate noise. If PSEye were to read my hand as forwards, and I then move it to one side so that similarly coloured pixels are behind it, how does it know that the pixels occupying that position are background and not the hand moved back? Given human physiology, it would be possible to determine what's happened, as clearly the hand can't move back 3 metres to where the wall is in one frame, but you'd need some funky code-voodoo to pull off a robust full-body tracking system achieved optically from a single camera. The best I know of so far is an 8-camera motion capture system. There's a YouTube vid of a 3-camera system on PS3 Linux, but no skeleton to see how effective it actually is.

This is why I said if they used an IR emitter & a lens cap that only lets IR in, it would only be seeing the IR light that's reflecting back from your body.

And even without that, they can use dynamic background extraction so it can tell your hands apart from something that's in the background.

I never said it would be perfect, I just asked whether it could be done.

edit:

See the part at about the 2-minute mark, when he reaches his hands into the scene. That's what I'm talking about, if they used an IR emitter to do that on a bigger scale.
 
And even without that, they can use dynamic background extraction so it can tell your hands apart from something that's in the background.
Background removal is tricky, as Sony themselves said. There's a good example in Kung Fu Live (starts at 2:30) that shows the blobbiness of background removal. Very similar to the point-cloud silhouettes of Kinect.
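"Dynamic background extraction" in its simplest form keeps a slowly updating running average of each pixel and treats large deviations as foreground. A minimal sketch (grey values, adaptation rate, and threshold are all illustrative; real systems like the one in Kung Fu Live are much more elaborate, and still blobby at the edges):

```python
# Minimal dynamic-background model: an exponentially weighted running
# average per pixel, with a threshold for the foreground decision.
# All constants are assumptions for illustration.

ALPHA = 0.05     # background adaptation rate (assumed)
THRESHOLD = 25   # foreground decision margin (assumed)

def update_background(background, frame, alpha=ALPHA):
    """Blend the new frame into the model: B <- (1 - a)*B + a*F,
    so the background slowly tracks lighting changes."""
    return [(1 - alpha) * b + alpha * f for b, f in zip(background, frame)]

def segment(frame, background, threshold=THRESHOLD):
    return [abs(f - b) > threshold for f, b in zip(frame, background)]

bg = [100.0, 100.0, 100.0]
frame = [100, 180, 100]           # player's arm enters the middle pixel
print(segment(frame, bg))         # [False, True, False]
bg = update_background(bg, frame)
print([round(b, 1) for b in bg])  # middle pixel drifts toward the arm
```

The weakness is visible even in this toy: if the player holds still, the model slowly absorbs them into the background, and low-contrast edges flicker in and out, which is the blobbiness the video shows.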

I never said it would be perfect I just asked could it be done.
Without a projected pattern, recreating Kinect with the PSEye and a filter cap as you suggest won't work, IMO. Light intensity alone won't cut it, which is why every company doing this has either multiple cameras or some projected pattern.
 
Thanks, great interview! It was a lot of work, but the information in that interview is so dense and it's one of the best I've seen so far, so I wrote it out (hopefully no one else has done that in English yet, or I'll feel silly; it took me an hour ;) ).

RichardMarks said:
Actually the PlayStation Move comes from the technology we've been developing for ten years, from EyeToy and other things we were doing with EyeToy that never got productized until now. And I know that some people make that connection, I think mostly because it's a one-handed controller, but it's got a very different set of capabilities than the Wii controller. I know there are some similarities, but there are many differences as well. The biggest difference is that our tracking system involves a camera, so it sees the person and it also sees where the controller is, so it knows very much about the spatial information.

I think one of the things that happened on the Wii is a lot of people didn't feel they could do some of the things they wanted to be able to do with the controller. Our goal on our side is a little bit different. In my work on user interfaces we've always had two goals: to be able to bring experiences to a wider audience, and also to enhance experiences even for the existing audience. So that's really what we're trying to do with the Move: make it so that people who already love PlayStation 3 can have a new set of experiences that they'll also like, and also so people who are maybe intimidated a little bit by the control scheme can have a simple-to-use control scheme. So we're balancing effectiveness of a controller with simplicity.

What you're saying happened with the Wii is very much the same thing that happened with EyeToy, actually. EyeToy was very popular for a casual, social experience, but when the game developers tried to make something deeper it was very difficult. The precision and what you could do with it were fundamentally limited. Having only the camera kind of hit a wall in terms of what was possible. So with PlayStation Move having so much more precision and fidelity, I think that the core games will actually be able to leverage it a lot better than anything we've seen before.

Right now, all the games that we're launching at launch you can play with one controller. A few of the games are two-player games, and then you'd want to have a controller for each person, just like you do on any system. And if you have two controllers because you ever want to play two players, that means you already have the ability to play a one-player, two-controller experience. So I think in a lot of homes there will be two, because you want to be able to play two-player with somebody, and then you can also play a single-player game that uses a more capable two-controller control system.

No, it's a limit of a total of four Move motion controllers. So two each, or four players with one each.

No, there's actually a limit of three and three [for Navigation controllers + Move controllers]. There is a fundamental limit of seven devices that can be connected, just like there are seven DualShocks; each one is a separate device. Because the Navigation controller is wireless and has its own connection, you don't have to plug it into the Motion controller; that's why there is a fundamental limit of seven devices.

We've done several different things with that. You can track the head for somebody and get some extra spatial tracking that way. Other schemes have you use one Move as a literal controller and the other Move kind of like an analog stick; you can do that. There's lots of research in the academic community; there's something called 'clutching', which is when you push a button and suddenly that becomes your motion instead, so it's literally your arm until you push a button and then it becomes your move-around-the-world kind of thing. So there are only so many things that people can do with one. Actually, if you ever try it, it is quite tricky to put an analog stick on here and be moving the controller while you move the analog stick; it is very hard to do.

People seem to have a limit of about two sets of spatial input they can do at a time, so you can already do that with two of these [Move controllers].

Yeah, we did a lot of 3D camera research before we ended up choosing not to pursue that as a product. I like the 3D camera technology, but it's a lot like what we had before with EyeToy: it gives you that spatial input of the body, but it doesn't give you the fidelity and precision, and it doesn't give you the ability to do some of the kinda core things that people expect to be able to do, like click on a button. So we ended up choosing just not to productize something like that, because it just didn't do all the things we wanted to be able to do on our platform. I don't want to say anything negative; I mean, it's neat to see new things happening in the interface.

It just enables a lot more experiences. We know that some of our core people really want to be able to do some of those experiences. I think there is a misconception that a controller like this is only for casual and nobody else will ever want it, and it's only motion, swinging your body all about, and that's all it is. But more and more we are starting to be able to communicate that that's not all it is: you can use it like a virtual reality controller and really reach into a scene and manipulate things, change things; you can use it for real-time strategy games. There are so many other things you can use this for than just motion. Motion is great, that's one great set of experiences, but it's not the only set.

Right, and you have to unlearn that a little bit. But actually it's funny, because the DualShock is a set of buttons and analog sticks. What game is that? It could be any game, right? You can make any game on top of that. Similarly with this, that is exactly what we want to see: you can make any kind of experience on top of this. It's a set of data; the data comes out, it works, now choose what you want to do with it. You can make an action-adventure game, you can make a shooter game, you can make a party game, sports games, whatever.

There are a lot of different possibilities with how to use the camera and the markers and sensors inside of it, and we're looking at what makes sense for the PlayStation 3 in the future. I'm pretty sure this will be the controller for a good number of years. This gives us very good tracking and a responsiveness that's matched to human capability, so you can move as fast as you want and it's really gonna be able to track that. In the future we'll look at other ways to change things and refine them, but this is our controller now.

That's difficult to speculate about. We haven't launched this yet, so we're definitely gonna take a lot of feedback from the people who play it and from developers, and incorporate that into what we'll make in the future. But I think that the DualShock and the Move offer two fundamentally different sets of possibilities of experience, so yeah, I guess it'll be interesting to see how that works out in the future.

Only if you wanted to send the video; the video is heavy. You can do it, it'd just take a lot of your system's bandwidth. But sending the data is exactly like DualShock; you're sending the position and the rotation data.

The Move uses less than 1 percent of the PlayStation's memory, and it uses a portion of one of the SPUs; there are seven SPUs and we use up a small portion of one of them. It's not very heavy. You can take the Move and add it to a triple-A game, and you know those teams try to use up as much of the system as they can, but there's enough room for us to slip in there.

Our group recently got access to working with 3D too. Because we've been so focused on the Move we haven't had any time, but we just started about a week ago and put all our demos into 3D. That modeling demo is just so much easier with the 3D display, because you know exactly where things are and you get this really strong effect of being able to reach into a 3D scene and grab things and pick them up. So the Move turns out to be like the perfect timing for 3D: 3D is the spatial display, and now we have the 3D input as well at the same time, so we can do a really cool set of stuff. But you know, we don't want to send the wrong message: you don't have to have a 3D TV to use Move. Some people will be confused, so we're not emphasizing it too much yet. They are separate, but they can be used together really well.
 
I can't see any of the above images and video, onQ. I'm on an iPad. YouTube videos should work.

EDIT:
Forgot that the Chinese government blocked YouTube.
 
There are other interviews kicking around with Dr. Marks and Anton that are interesting, although not necessarily very technical. (The hiphopgamer interview is funny, even for me; nowgamer (I think) have a more 'financial/strategic/less graphically interesting' interview.)

Looking at the GT racing wheel, does anyone else think that maybe those recent racers could be patched for Move support? (I think they were very high-budget, and a 'partial re-release' along with Sony marketing at a better time of the year might reduce their losses?)
 
For the wheel attachment, wouldn't the Eye lose track of the ball when the wheel is turned? If they put the trackball at the top instead of the side, at least you could turn a decent amount without it getting lost behind the base, but the way they have it now, wouldn't it lose track on left turns? Oh well... I love my DFGT.
 
For the wheel attachment, wouldn't the Eye lose track of the ball when the wheel is turned? If they put the trackball at the top instead of the side, at least you could turn a decent amount without it getting lost behind the base, but the way they have it now, wouldn't it lose track on left turns? Oh well... I love my DFGT.

The Eye tracks the glowing ball and finds the linear position of the Move controller: "you moved up 8 pixels, left 7.2, and back 1 cm".

For a steering wheel, you want the rotational position of the controller - provided by the gyroscope/magnetometer inside the Move.

i.e. you could unplug the Eye and technically the Move would still function as a steering wheel to the same degree of accuracy.
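The reason the camera is dispensable here: a steering wheel only needs the controller's rotation about one axis, which can be accumulated from the gyroscope's angular-rate readings alone. A minimal sketch with simple Euler integration and made-up sample values (a real implementation would also correct gyro drift using the magnetometer/accelerometer):

```python
# Sketch of camera-free steering: integrate gyroscope angular-rate
# samples about the wheel's rotation axis into a wheel angle.
# Sample rate and rate values below are hypothetical.

def integrate_wheel_angle(rates_deg_s, dt):
    """Accumulate gyro rate samples (deg/s) taken dt seconds apart
    into a total wheel angle in degrees (simple Euler integration)."""
    angle = 0.0
    for rate in rates_deg_s:
        angle += rate * dt
    return angle

# 10 samples at 100 Hz while the player turns at 90 deg/s
angle = integrate_wheel_angle([90.0] * 10, 0.01)
print(round(angle, 3))  # 9.0 degrees of wheel rotation
```

This also explains the earlier worry about the ball disappearing behind the wheel base: occlusion only hurts the positional tracking, while the rotational data a racing game actually uses keeps flowing from the inertial sensors.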
 