Sony's New Motion Controller

Another interview with Dr. Marks:
http://www.pcworld.com/article/1693...tions_about_playstation_3_motion_control.html

Game On: At what point did you settle on your current two-peripheral approach to motion-control? When did you say "this is it"?

Richard Marks: I’ll give you two very different answers to this.

The first answer is that we have been moving toward this solution for several years now. We learned a lot from our experience creating EyeToy, and also from other research we have done, and from the experiences we have observed for other products. We learned that while people definitely enjoy physical interaction and movement, they also want precise control and a simple, fast, reliable way to trigger actions. We designed our new control system to accomplish all of this. We believe the path we have chosen is an ideal combination of both spatial and action/button input, and of course we can combine that with voice and video data from the PlayStation Eye mic array and camera.

The second answer is much less complicated. The first time I pressed a button and saw a virtual light sword extend up out of the controller, and watched it move just as it should when I swung it, I thought “this is it”. Then, when I saw the reaction of my kids when they tried the same, I knew we had it right.

New controller with old games?

RM: The new controller is designed to provide new and innovative gameplay. At E3, we showed both one and two-handed experiences. We are currently looking into the possibility of incorporating many familiar characters and franchises with these new experiences. More details will be provided when we make the official announcement of the product.

Credit where credit is due:

RM: Of course EyeToy came out before Wii, but that does not diminish the contribution Nintendo made to game interfaces. I’m a gamer first, so the way I see it both EyeToy and the Wii controller represent advancements that broadened the gaming market and enabled new experiences.

I think Nintendo did a superb job with Wii. Microsoft learned a lot from Wii too, and Natal will bring something new to the table also.

RM: EyeToy was created to allow players to physically interact with games using their body. The unencumbered feeling of no wires and feeling free (instead of connected to your television) was very important, as was the simplicity of the controls. Everyone, even non-gamers, felt like they could just jump in and play, which was great.

We still believe that is the best interface for some experiences, but for other experiences, additional capabilities are important. We discovered during our research that some experiences demand precise control and a simple, fast, reliable way to trigger actions. We also found that some experiences just feel more natural when holding a tool, or a “prop”. Our new controller adds these new capabilities to those we already have from PlayStation Eye.

A rather intelligent interview. I don't want to quote the entire admittedly short article. Read it yourself :)
 
I re-read the interview. I think the basic problem is that the interviewer spoke his mind first and then asked Dr. Marks to expand on his view. It would be better if the interviewer asked open-ended questions.

I went through a similar thought process after playing with PSEye and Wii. There are pros and cons to both approaches. The controllers are only part of the solution. IMHO, the games are more important for putting the tech to good use.
 
From the style, it seems like an email Q&A rather than an interview. The interviewer asks questions that are already answered and doesn't follow any conversational path. The entire content boils down to:

"Sony found through previous experience and research that the best interaction is obtained with full motion tracking and some form of tactile prop, which accounts for the camera+controller solution. The new motion system is seen as a progression of technologies tracing back through the Wiimote and EyeToy before it. Accuracy was a principal design criterion."
 
Ha, more EyeToy history:
http://playstation.joystiq.com/2009...tion-controller-plans-included-teletubbies-x/
(GDC 2001)

A look back to GDC 2001 reveals a presentation from Richard Marks about "using video input for games." Noting that "simpler interfaces are needed to reach a broader audience," Marks wanted to create an interface for casual non-gamers, one that would be "intuitive, simple, enabling and enjoyable." Sound familiar to you?

Some of the prototypes developed by the EyeToy team include "Misho the Witch" (pictured above), a virtual pet simulation that has players using a ball-and-stick controller to play with the on-screen witch. Ideas from this demo have ended up in both EyePet and the upcoming Motion Controller itself. Other ideas thrown around included a magic duel, where players could write spells using gestures, and games inspired by Casper the Friendly Ghost and the Teletubbies.

While gamers have been spared a motion-sensing game based on the Teletubbies, another idea seems to have been left by the wayside: games based on various superhero properties. Marks' presentation revealed plans to recreate the powers of the Fantastic Four and the X-Men through the PlayStation camera.
 
One more patent on emotion recognition:
http://www.siliconera.com/2009/08/1...-laugh-detecting-emotional-tracking-software/

The application picks up on metadata, which includes laughter recorded by the microphone and a user’s expression from the camera. Both devices are linked to a “game console”, shown as a PlayStation 3 in the diagram, which identifies the user, notes emotions, and transfers the data over a network.

How will Sony identify emotions? The patent mentions identifying body gestures and tracking group interactions “such as when two individuals give each other a ‘High Five.’” Sony also developed smile-detecting software for their Cyber Shot W120 camera, which could be used too.

While the patent focuses on laughter, it can identify other emotions such as sadness, excitement, anger, joy, interest, and boredom. For example, boredom may be detected if a user is “looking away from the presentation, yawning, or talking over the presentation.”

The software isn’t limited to video games. It can also be used for TV shows, films, and other media presentations.
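The detection logic the patent describes reads like a rule-based mapping from observed cues to coarse emotion labels. A minimal sketch of that idea (the function name, cue strings and thresholds here are invented for illustration, not taken from the patent):

```python
def classify_emotion(cues):
    """Map a set of observed audio/video cue strings to a coarse emotion label.

    Hypothetical illustration of the patent's rule-based approach; real
    systems would score cues probabilistically rather than by lookup.
    """
    if "laughter" in cues or "smile" in cues:
        return "joy"
    # Patent example: boredom inferred from disengagement cues.
    if cues & {"looking_away", "yawning", "talking_over_presentation"}:
        return "boredom"
    if "frown" in cues or "raised_voice" in cues:
        return "anger"
    return "neutral"

print(classify_emotion({"yawning", "looking_away"}))  # boredom
```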

EDIT:
Similar R&D (Human activity recognition) for Cell here: http://emsys.denayer.wenk.be/T-cell/Presentaties/Austin/grauman_ibm_workshop.pdf

Source: http://emsys.denayer.wenk.be/?project=EmCel&page=cases&id=23
 
I honestly don't believe in this kind of... feature. How can this be efficiently implemented in a game, or be a good addition to the entertainment experience?

Ok, I smile and the PS3 detects that I'm smiling, so what? What should the response to that be? What if I'm smiling because somebody in the room is telling me something that has nothing to do with the game, TV show or movie?
 
I honestly don't believe in this kind of... feature. How can this be efficiently implemented in a game, or be a good addition to the entertainment experience?

Ok, I smile and the PS3 detects that I'm smiling, so what? What should the response to that be? What if I'm smiling because somebody in the room is telling me something that has nothing to do with the game, TV show or movie?

The same can be said about Nintendo's Wii Vitality Sensor...

But I guess there are some uses still.
 
Ok, I smile and the PS3 detects that I'm smiling

You're right. It's not for every game. It's like voice recognition. *IF* it works accurately, then it may add more emotional or fun elements to the game.

A few months ago, my wife and I went to BestBuy and saw a camera with smile recognition. At first, it made me smile a little (coz I was surprised how well it worked). Then the camera "saw" my bigger smiles and snapped even more pictures very quickly. It became a vicious cycle. My wife LOL'ed at me while the camera kept flashing away until I was dazed. Had to put the camera down to regain my vision. >_<

It's silly but it piqued people's interest.

so what? What should the response to that be? What if I'm smiling because somebody in the room is telling me something that has nothing to do with the game, TV show or movie?

Ultimately, it will depend on the game developers to design the game carefully. Giving you a special item while you're in a good mood helps to build a relationship. It doesn't have to be smile/laughter. If you're sad while playing a game, perhaps something can be done about it too (even if the game is not at fault).

The key lies in whether it's accurate first. If so, marketers and developers will be able to exercise their magic. My guess is the low-hanging fruit will be awards via Trophies, Home items and that "PS Thanks" program.

EDIT: I think a game like "Buzz!" is a suitable candidate for such technologies. The game can work the crowd by noting their reaction right after an answer is revealed.
 
patsu said:
You're right. It's not for every game. It's like voice recognition. *IF* it works accurately, then it may add more emotional or fun elements to the game.
Maybe, we'll see.

patsu said:
A few months ago, my wife and I went to BestBuy and saw a camera with smile recognition. At first, it made me smile a little (coz I was surprised how well it worked). Then the camera "saw" my bigger smiles and snapped even more pictures very quickly. It became a vicious cycle. My wife LOL'ed at me while the camera kept flashing away until I was dazed. Had to put the camera down to regain my vision. >_<

It's silly but it piqued people's interest.
Aw, such a hilarious 'Mr. Bean' moment... :mrgreen:

patsu said:
Ultimately, it will depend on the game developers to design the game carefully. Giving you a special item while you're in a good mood helps to build a relationship. It doesn't have to be smile/laughter. If you're sad while playing a game, perhaps something can be done about it too (even if the game is not at fault).

The key lies in whether it's accurate first. If so, marketers and developers will be able to exercise their magic. My guess is the low-hanging fruit will be awards via Trophies, Home items and that "PS Thanks" program.
TheWretched said:
The same can be said about Nintendos Wii Vitality Sensor...

But I guess there are some uses still.
Of course, it depends on whether developers imagine creative ways to implement it in a game and make the player feel a fun connection between this feature and the gameplay, but I still believe it will be difficult to achieve pleasurable results with it.
patsu said:
EDIT: I think a game like "Buzz!" is a suitable candidate for such technologies. The game can work the crowd by noting their reaction right after an answer is revealed.
Hm, it could be good, but you see, what else can it contribute?
Heck, it could be that I'm just too picky with this... :???:
 
It would be a dream come true if Sony could apply the emotion and human activity recognition to media search. My entire family's media is on the home PS3. It would be extremely useful if they extended the "Photo Gallery" app so that I can locate a photo/video using text input, e.g., "Lucas high 5".

The other complementary technique is sketch recognition (for landscape pictures). It doesn't have to be 100% precise. As long as it can narrow the search down to 10 or so pictures/videos, it'd be useful for many.

eloyc said:
Hm, it could be good, but you see, what else can it contribute?
Heck, it could be that I'm just too picky with this... :???:

For Buzz!? If the game senses that no one could get the answers right, and there's no laughter/noise, it may be time to switch out a quiz topic or change the format (I had one game like this and it was awkward for the host). As mentioned above, giving out relevant "gifts" at the right time can engage people too.
 
I honestly don't believe in this kind of... feature. How can this be efficiently implemented in a game, or be a good addition to the entertainment experience?
I see it principally as a low-key ancillary advantage, e.g. adjusting the game difficulty when the player is clearly getting frustrated. It'd be nice if, in FIFA, when the ref has made yet another moronic decision due to an AI glitch and we're shouting complaints at the screen, there were a degree of adaptation behind the scenes that we weren't aware of (so couldn't exploit) which adjusted ref calls to be fairer (that should of course be extended to real life too!).

It should also work well in RPGs, where a player who empathises with the plight of an in-game character has their game adjusted relative to one who doesn't care, as determined from facial response to cutscenes and dialogue. Yes, it could get messed up by background interference, like other input forms (camera). The game would have to filter to some extent (average of responses over time) and the player would have to be sensible too. If the game is supposed to be a moody RPG, listening to a funny podcast while playing isn't a sensible thing to do! As long as the game is labelled as such and people play it with the right expectations, it should work.
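The filtering idea above (an average of responses over time) could be as simple as an exponential moving average over a noisy per-frame score, so a one-off background laugh barely registers while a sustained reaction does. A hypothetical sketch, not anything Sony has shown:

```python
class EmotionFilter:
    """Smooth a noisy per-frame emotion score so transient background
    interference (someone talking, a laugh at a podcast) doesn't flip
    the game's response. Purely illustrative names and values."""

    def __init__(self, alpha=0.05):
        self.alpha = alpha  # small alpha = slow, stable response
        self.level = 0.0    # smoothed score in [0, 1]

    def update(self, raw_score):
        """Blend one noisy frame score (0..1) into the running average."""
        self.level += self.alpha * (raw_score - self.level)
        return self.level

f = EmotionFilter()
# A single spurious spike barely moves the smoothed level...
f.update(1.0)
print(round(f.level, 3))  # 0.05
# ...but a sustained signal gradually raises it past any threshold.
for _ in range(100):
    f.update(1.0)
print(f.level > 0.9)  # True
```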

Of course, the likelihood of developers actually writing software to make something positive out of it is low.
 
The combination of voice recognition and face recognition is much more important than you may realise, though. Every game that has AI-character vs player interaction will at least be able to benefit greatly from this technology. And much more than voice recognition, it's just about universal. Sure, there are nuances here and there - recent research, for instance, shows that Japanese people look more at someone's eyes than Westerners, who tend to look at the whole face. This sometimes makes it hard for them to distinguish anger from amazement in Westerners. But on the whole, emotions are fairly universal.

I think reactions to emotions from the player may well be a more important step towards immersion and realism than higher numbers of polygons. ;)

And yes, of course it's going to take a fair while before developers make use of it ... but it can be done now, so it will be used (we've seen Milo). How it goes from there depends largely on the success of the pioneers.
 
And yes, of course it's going to take a fair while before developers make use of it ... but it can be done now, so it will be used (we've seen Milo). How it goes from there depends largely on the success of the pioneers.
At the beginning of this generation we spelt out several great uses for sixaxis motion, none of which really came to anything.

What are the realistic applications for emotion reading that developers are actually going to use?
 
I see it principally as a low-key ancillary advantage, e.g. adjusting the game difficulty when the player is clearly getting frustrated. It'd be nice if, in FIFA, when the ref has made yet another moronic decision due to an AI glitch and we're shouting complaints at the screen, there were a degree of adaptation behind the scenes that we weren't aware of (so couldn't exploit) which adjusted ref calls to be fairer (that should of course be extended to real life too!).

Perhaps even more realistically, it would then have the crowd react in a similar way to how you are reacting, all the while the refs continue ignoring it, just like in real life. :D If you cheer, the crowd cheers. If you boo, the crowd boos.

Perhaps even to the point of rioting in the stands. :)

Regards,
SB
 
At the beginning of this generation we spelt out several great uses for sixaxis motion, none of which really came to anything.

What are the realistic applications for emotion reading that developers are actually going to use?

Ah, but I think that's partially because the SIXAXIS is only half developed. It has the technical system in place, but not the usability (controller shape) and marketing.

If Sony can perfect it in its own "Photo Gallery" and other games, I think more developers will visualize the technology working in their own products.


Besides marketing (relationship building) and difficulty adjustment, I do see more use in kids games (e.g., EyePet) and party games. The question is whether it's accurate enough.
 
I remain sceptical. EyeToy presented some great implementations for developers to follow but they shunned it. Of course the tide of development is turning now and perhaps the next New Thing will garner a bit more attention.

Still, regardless what happens, I'd be interested to hear what B3Ders think are the realistic possibilities that might actually find their way into the games we'll be playing.
 
I remain sceptical. EyeToy presented some great implementations for developers to follow but they shunned it. Of course the tide of development is turning now and perhaps the next New Thing will garner a bit more attention.

Yes, Nintendo turned the tide single-handedly. Microsoft's marketing dollars will help boost awareness and desirability for the entire segment further. It's up to Sony to capitalize and stake a claim here.

If it took so long and several iterations for user-generated content to gain some traction in the console space (LittleBigPlanet), then it will likely take a long time for new technologies like this to mature. But the cycle started a few years back.
 