Sony's New Motion Controller

Found more: http://www.gamasutra.com/php-bin/news_index.php?story=24456

The reveal was part of Hirani’s promise that developers wouldn’t need to develop their own tech to work with the new controllers. "If you are working with the Playstation Eye and think there is some new tech you’re going to have to develop for the motion controllers, just get in touch with us," he said.

"We have a wealth of libraries available, and the chances are you won’t have to develop any technology yourself."

Hirani also implied that libraries will include skeleton tracking, and expanded on the interaction between the camera and the motion controllers.


Great breadth of technologies. They need to work hard to make the tech less sensitive to lighting conditions though.
 
PS3 Motion Control

Rather interesting Edge Interview: http://www.edge-online.com/features/interview-ps3-motion-control

Is Sony on track to meet the spring 2010 release date announced at E3?
PH: I think the main thing is that there are several elements to a big release. You need to have the hardware side of things, and we’re in that phase of refining the hardware. We’re working with developer teams and with each iteration we ask for their feedback, so that’s why we’re hesitant about having the prototypes we have filmed because they will probably change because developers will say, ‘Actually, let’s have this button configuration’, so we’re working out what the optimal number of buttons is and where they should be… But on the hardware side of things, the manufacturing side of things, that’s our forte as a company and we’re on track for spring, so mainly it’s about ensuring there’s a good line of software to support that.

Sony has said that all genres of games will be compatible with PS3’s motion sensing controller. Is the system being designed with core and casual gamers equally in mind?
PH: All we can speak of is the experience we have with the game development community and we are seeing a spectrum of games. [Developers] are experimenting with it and looking to see what they can add to their games. We’ve got a lot of building blocks in the SDKs and it’s almost like a play box [in] that designers have to think about what they’re going to do and how it’s going to work for their game. Anything’s possible really. We’ve tried to make that 'anything is possible' as easy as possible. One of the things we pointed out is that you’ve got the motion control and you’ve also got the PlayStation Eye, and we have a lot of libraries in this space already to do with facial recognition, spatial recognition, gesture recognition and all of these little elements. All we can provide are these Lego blocks. At the end of the day game designers and developers have to think about how they put them together and how they make something that’s unique and compelling.
 
Skeleton tracking and facial recognition? Can they work efficiently and effectively with the PSEye?

I mean, for the 360 to use these features it demands a device with its own processing unit and memory. The PSEye is just a camera.
 
The PS Eye is a camera + Cell. :)

The skeleton tracking is likely an approximation (based on 2D perception). The facial recognition can be very quick depending on what they want to track. There was a recent video of head tracking using a regular CPU; it's almost instantaneous. In general, Sony mentioned that it may take more Cell power to track motion.

EDIT: This is not totally related, but even Blu-ray is now tapping into "Augmented Reality" as a buzzword: http://www.blu-ray.com/news/?id=3130

apart from other usual features, it will include a very novel one called “augmented reality,” allowing buyers to tour the Enterprise and even shoot enemies from the ship's deck, after holding the disc packaging in front of a webcam.

In general, I think the BDA and Sony should put BD-Live to good use. Since there is a large number of PS3s out there, Sony can deliver an extended Blu-ray feature set for players as powerful as the PS3.
 
Skeleton tracking and facial recognition? Can they work efficiently and effectively with the PSEye?

As Pat says, Cell is powerful. However, I think they say they do skeleton tracking using the camera together with the motion sensors?

I mean, for the 360 to use these features it demands a device with its own processing unit and memory. The PSEye is just a camera.

It's not necessarily that the 360 isn't powerful enough though - they also wanted to make Natal available to existing games 'as is' - e.g. basically Forza 3 or Burnout Paradise could get Natal support patched in, because Natal adds no additional overhead - it behaves exactly like any other controller in that respect, which is good.

For the PS3, some of the more demanding features may mean you have to sacrifice some memory and SPE power. To what extent this would be beyond the current 7th SPE already reserved for the OS and the 6th that can already be called upon by the OS, I don't know.
 
Skeleton tracking and facial recognition? Can they work efficiently and effectively with the PSEye?
I think it's done in association with the motion-tracking hardware, which handles 3D perception of the controller. This will position the arms, which can be mapped to a skeleton. I question that it'll be hugely versatile, but it may well cover 90% of desired uses, with perhaps a little adjustment of a game design to fit the control scheme. So "good enough".

Edit: Assuming it works. I'm thoroughly fed up of promises and expectations. There's little honesty in this world.
 
Had a short exchange with Pana on GAF. I didn't think that the new controller had a gyro. According to the Edge article above:

PH: I think we can just say that it’s very, very precise. People are going to be able to take games in this space forward because of the precision aspects.

KH: The classic example I give to people is that the most precise thing you can do is write your name using a [piece of] chalk on a blackboard. Try doing that with a mouse and it’s bloody difficult.

PH: The core elements are fairly straightforward. As I say they date back to the early ‘90s. It’s just having a really nice camera which works well with the PS3, which has the processing power to do some interesting things, and then combining that with the LEDs in the globes. The camera’s basically looking at where these globes are and what colour they are and doing things on that basis. Combine that with some gyro stuff we have in there and it can do the tracking and work out where things are going.

So it seems that the full set of technologies are:
PS Eye camera + Wand (accelerometer + gyro + colored LED ball) + Cell software

The accelerometer is mentioned in: http://www.develop-online.net/news/32415/Sony-motion-controller-is-true-interaction

I wonder what the final cost of the controller will be. With such a heavy investment in technology, they'd need to invest more in marketing too (e.g., publicly lining up and pre-announcing developers). Otherwise, they'd face the same traction problems as before.
 
As Pat says, Cell is powerful. However, I think they say they do skeleton tracking using the camera together with the motion sensors?

I think it's done in association with the motion-tracking hardware, which handles 3D perception of the controller. This will position the arms, which can be mapped to a skeleton. I question that it'll be hugely versatile, but it may well cover 90% of desired uses, with perhaps a little adjustment of a game design to fit the control scheme. So "good enough".

Edit: Assuming it works. I'm thoroughly fed up of promises and expectations. There's little honesty in this world.
Well I was thinking about the application in normal demanding games.


MS saw the need for a separate processing unit and memory to achieve this without any hit on the console's performance.

So I believe they need extra memory and processing power to achieve this with any game on the PS3 that tries to reach a high level of quality (such as the level of Uncharted and Killzone).

Otherwise the implementation of skeletal/facial/voice recognition could be atrocious or they would sacrifice some performance on the visuals and physics of the game.
 
Well I was thinking about the application in normal demanding games.


MS saw the need for a separate processing unit and memory to achieve this without any hit on the console's performance.

So I believe they need extra memory and processing power to achieve this with any game on the PS3 that tries to reach a high level of quality (such as the level of Uncharted and Killzone).

Otherwise the implementation of skeletal/facial/voice recognition could be atrocious or they would sacrifice some performance on the visuals and physics of the game.

Well sorta. The difference is more like this.

MS does the processing on the Natal unit so that the console is free to continue as it always has been. Thus developers don't have to adjust their engines to account for the additional overhead of motion tracking, skeletal tracking, depth tracking, facial recognition, voice recognition, etc.

So as far as the devs are concerned it's just another controller. So no adjustments have to be made in the design process or the engines that are used.

Sony will leverage the SPEs, but in doing so, developers will have to take into consideration any extra overhead that is brought about by the PS3 doing some of the things that the Natal unit takes care of. Thus it may not be possible to use it with something like KZ2, which uses most of the processing power (I think?) of the PS3.

So for devs they just have to design a game with the overhead for the system in question. So in that sense it isn't a limitation on visuals and performance per se, as they would design a game with that in mind.

So in absolute terms, yes it is a bit more limiting than MS's approach. But in relative terms, the games will be designed with the limitations in mind such that people playing the games shouldn't notice the limitations.

About the only sticking point would be trying to achieve feature parity in cross platform motion sensing games.

Regards,
SB
 
I find it hard to believe that the game will see Natal just as a controller.

The feedback to the game from the controller will have to be a lot more than left, right, A, B, X, Y.

From what I have seen in the E3 demos, there has to be more overhead/feedback than a regular controller to accomplish that.
 
Yap! Things that rely on hand/controller motion sensing should be no different from SIXAXIS today. Both have an accelerometer and gyro, so I expect the impact to be minimal or none. I am guessing the system probably doesn't have to do absolute positioning (distance sensing) all the time, since it can calculate from deltas using the built-in sensors. The PS Eye's color-LED-based distance sensing can be used to correct the "absolute" controller position on demand (or over a longer period).
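To make that concrete, here's a rough sketch of the kind of blend I mean, in C++. It's purely my own illustration - the type names, update rates and blend factor are all assumptions, not anything from a Sony SDK.

// Illustration only: dead reckoning from inertial deltas, with occasional
// absolute fixes from the camera pulling the estimate back in line.
// Hypothetical names throughout; nothing here is from a real SDK.
#include <cstdio>

struct Vec3 { float x, y, z; };

static Vec3 add(Vec3 a, Vec3 b)    { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 scale(Vec3 v, float s) { return {v.x * s, v.y * s, v.z * s}; }
static Vec3 lerp(Vec3 a, Vec3 b, float t) { return add(scale(a, 1.0f - t), scale(b, t)); }

struct WandState {
    Vec3 position {0.f, 0.f, 0.f};   // estimated absolute position (metres)
    Vec3 velocity {0.f, 0.f, 0.f};   // integrated from the accelerometer (m/s)
};

// High-rate step: integrate accelerometer deltas (gravity already removed).
void inertialStep(WandState& s, Vec3 linearAccel, float dt) {
    s.velocity = add(s.velocity, scale(linearAccel, dt));
    s.position = add(s.position, scale(s.velocity, dt));
}

// Low-rate step: whenever the camera has a fresh fix on the glowing ball,
// pull the drifting inertial estimate toward the absolute measurement.
void cameraCorrection(WandState& s, Vec3 cameraPosition, float blend) {
    s.position = lerp(s.position, cameraPosition, blend);
}

int main() {
    WandState wand;
    // Fake data: a constant 1 m/s^2 push for ten 10 ms steps, then one camera fix.
    for (int i = 0; i < 10; ++i)
        inertialStep(wand, {1.0f, 0.0f, 0.0f}, 0.01f);
    cameraCorrection(wand, {0.004f, 0.0f, 0.0f}, 0.2f);
    std::printf("x = %.4f m\n", wand.position.x);
    return 0;
}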

The more resource-consuming ones should be color, sketch, face, expression, voice, skeleton and full-body motion recognition. Some are already implemented (or being implemented) in full PS Eye games today, e.g., Eye of Judgment, EyePet, SingStar. The input would naturally be different from controller motion data. Not sure if the reserved OS memory will help in a few of these cases.

I remember Sony mentioned that full body motion is the one that will take up the most resources.


Eventually, if they want a totally separate unit for natural interface controls, SPURSEngine would be a good option to explore. All the existing SPU algorithms should be reusable there, and it's 30% smaller than a hypothetical 4-SPU part. Toshiba can use the tech for their Regza TV and high end laptops too.
 
I find it hard to believe that the game will see Natal just as a controller.

The feedback to the game from the controller will have to be a lot more than left, right, A, B, X, Y.

From what I have seen in the E3 demos, there has to be more overhead/feedback than a regular controller to accomplish that.

Yes, you'll get more data; however, there's no processing involved or resources consumed on the X360 itself.

So none of the X360 CPU/Memory resources will be used in order to process image information, track skeletal points and motion, compute voice recognition, track multiple people, etc...

The interesting thing is that there's more noise coming from MS recently about Natal coming to PC. If it does, then people will be able to get their hands on it and see just what data you get from the unit.

Regards,
SB
 
UIs are usually trigger/event based. For high-level data, the main program would be alerted of state changes and, if it needs more details, it could probably call something to get them. For low-level data (especially when implementing new ways to interpret the raw input), the main program will likely need to poll the latest state continuously, so CPU resources will be consumed.
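A trivial sketch of the two styles in C++, with every type and function made up for illustration (none of this is a real SDK interface):

// Illustrative only: event-driven vs. polled consumption of motion input.
#include <cstdio>
#include <functional>
#include <utility>
#include <vector>

struct GestureEvent { int playerId; int gestureId; };
struct RawMotionSample { float accel[3]; float gyro[3]; };

// Event style: the library detects high-level state changes and calls back,
// so the game only spends time when something actually happened.
class GestureDispatcher {
public:
    void subscribe(std::function<void(const GestureEvent&)> handler) {
        handlers_.push_back(std::move(handler));
    }
    void emit(const GestureEvent& e) {               // called by the input library
        for (auto& h : handlers_) h(e);
    }
private:
    std::vector<std::function<void(const GestureEvent&)>> handlers_;
};

// Polling style: the game pulls the latest raw state every frame and
// interprets it itself, paying that CPU cost continuously.
RawMotionSample pollLatestSample() {                 // stub standing in for the library
    return RawMotionSample{{0.f, 0.f, 9.8f}, {0.f, 0.f, 0.f}};
}

int main() {
    GestureDispatcher gestures;
    gestures.subscribe([](const GestureEvent& e) {
        std::printf("player %d performed gesture %d\n", e.playerId, e.gestureId);
    });

    RawMotionSample raw = pollLatestSample();        // low-level, polled every frame
    std::printf("raw accel z = %.1f\n", raw.accel[2]);

    gestures.emit(GestureEvent{0, 42});              // high-level, only on state change
    return 0;
}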

In one of his earlier presentations, Dr. Marks was able to fit an early prototype (libvision) into 1 SPU without DMA. Don't know how far it has evolved.
 
Yes, you'll get more data; however, there's no processing involved or resources consumed on the X360 itself.

So none of the X360 CPU/Memory resources will be used in order to process image information, track skeletal points and motion, compute voice recognition, track multiple people, etc...

Do you have a link handy?
Skeleton detection on camera is one thing; something like gesture (or voice) recognition is another.

The way you describe it sounds really inflexible, or they will have to let the host application upload code to the camera. Even then, I don't see a reasonably cheap and efficient solution around the memory limitations.
 
Eurogamer has an interview about the Sony motion controller
(I'm currently reading it)
EDIT
OK, I've read it. Nothing really new, but the article's tone is refreshing. The gaming press should give up on Sony CEx interviews and focus on engineers; they are a more interesting bunch :)
 
Some more details in liolio's Eurogamer interview.

Face recognition came from their camera division:
Paul Holman: We haven't made developers try to learn about this new technology, struggle with it and try to make something work. We've got the libraries and we've been able to leverage work in other parts of Sony. So in the camera space, they've been doing a lot of work on facial recognition for still cameras. We've got more processing power so we can actually put it in more easily. That's where you see what people are doing in front of the camera and not just their face, but also the way their body moves and their hand gestures.

The camera smile recognition worked really well when I tried it (it's instant). I wonder how well the Cell can run it via pure software.


About the mic array...
Kish Hirani: The mic is equally important. It can distinguish where you're sitting in a room, so the four of us could be sitting here and the mic could tell who's talking from where. That [technology] has previously been available, but designing your game becomes like cherry picking - grabbing what you want from these new technologies.
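As an aside, the core trick there is just the difference in arrival time between the mics. A toy sketch of the geometry (my own illustration; the 4 cm spacing is a guess, not a PS Eye spec):

// My own illustration of the basic idea behind locating a speaker with a
// mic array: the arrival-time difference between two mics gives a bearing.
#include <cmath>
#include <cstdio>

// Returns the bearing (radians, 0 = straight ahead) for a sound arriving
// deltaT seconds earlier at one mic than the other, with the mics
// micSpacing metres apart. Speed of sound ~343 m/s at room temperature.
double bearingFromDelay(double deltaT, double micSpacing) {
    const double speedOfSound = 343.0;
    double s = speedOfSound * deltaT / micSpacing;   // sin(angle)
    if (s > 1.0) s = 1.0;
    if (s < -1.0) s = -1.0;
    return std::asin(s);
}

int main() {
    // 4 cm mic spacing (a guess at this scale), 58 microsecond delay.
    double rad = bearingFromDelay(58e-6, 0.04);
    std::printf("bearing ~ %.1f degrees\n", rad * 180.0 / 3.14159265);
    return 0;
}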


About the color LED:
Paul Holman: You can programmatically set the colour as well. It's RGB, so there's the full spectrum of colour. And you can use up to four of these things at the same time, depending on your game design, so it's quite interesting.

Kish Hirani: I'm not a games designer, but if I was I might use it as a muzzle flash if you're using a gun, or use it as a paintbrush you can use to dip in and pick up different colours... There's always room for wizards...

I think someone should explore outdoor, screen-less group games for PSP. Something simple, like a toy. Do a make-up game for girls ^_^


About button configuration:
Eurogamer: The same number as currently are on the DualShock, or are you planning to simplify things and strip it down?

Paul Holman: I don't think it's fixed, as such. One of the aspects of the way we work is we tend to bring out prototypes early on and work with game teams - our own studios, external studios - and get their feedback. We're really in that phase where we're working with people to see what they want, so that when we have a final product it's actually going to meet the game designers' needs.


Comparison with mouse:
Kish Hirani: What I personally love is to be able to write your name. Grab a mouse and write your name; it's difficult. The mouse uses very old motion-tracking technology, and to be able to write your name on the screen - that's the precision you're getting. You've physically got a chalk in your hand, you're in front of the blackboard and you're writing. That's the level of precision involved.

I think one of the best solutions is to use the new controller combined with the mouse's usage model (moving stuff and writing on a 2D surface).
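A minimal sketch of what I mean, assuming you can already get a tracked wand position from somewhere; the plane size and placement are just made-up numbers:

// My own sketch only: turning a tracked wand position into 2D cursor
// coordinates by projecting onto a virtual "desk" plane in front of the
// player. All names and the plane placement are assumptions.
#include <cstdio>

struct Vec3 { float x, y, z; };
struct Cursor { float u, v; };       // normalised screen coordinates in [0, 1]

// Map wand movement inside a 40 cm x 30 cm virtual rectangle (centred at
// planeCentre, facing the camera) to the full screen, mouse-pad style.
Cursor projectToPlane(Vec3 wand, Vec3 planeCentre) {
    const float width = 0.40f, height = 0.30f;   // metres of physical travel
    float u = (wand.x - planeCentre.x) / width + 0.5f;
    float v = (wand.y - planeCentre.y) / height + 0.5f;
    if (u < 0.f) u = 0.f; if (u > 1.f) u = 1.f;
    if (v < 0.f) v = 0.f; if (v > 1.f) v = 1.f;
    return Cursor{u, v};
}

int main() {
    Vec3 centre = {0.0f, 1.0f, 1.5f};            // roughly 1.5 m from the camera
    Cursor c = projectToPlane({0.10f, 1.05f, 1.5f}, centre);
    std::printf("cursor at (%.2f, %.2f)\n", c.u, c.v);
    return 0;
}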
 
Comparison with mouse:
His point is incorrect though. There's nothing bad about a mouse's accuracy - just ask all the PC FPSers who hate console controllers! The problem with the mouse is not old moving-ball technology (and who uses that these days?) but learning new motor skills. Those same skills had to be learned to handle a pen and write your name, but that was a long time ago, at school, and people have forgotten how much effort it took. Thus PS3 motion is no more accurate than a mouse (in fact it's far less accurate, as the mouse is accurate to many hundreds of points per inch, tiny movements). It's just easier to use for most people.
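To put rough numbers on it (my own back-of-the-envelope, assuming an ordinary 800 dpi mouse and a 640-pixel-wide camera watching roughly 1.5 m of living room): the mouse resolves about 25.4 mm / 800 ≈ 0.03 mm per count, while the camera sees something like 1500 mm / 640 ≈ 2.3 mm per pixel before any sub-pixel fitting of the ball or fusion with the inertial sensors. The camera-plus-sensors approach can narrow that gap, but the mouse starts out a couple of orders of magnitude finer.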
 
I agree! I prefer mouse interaction personally. I too think that modern mouse tracking technologies are more accurate/sensitive than 3D motion controllers in general.

EDIT: Sony has been rather diligent about getting developer feedback and sharing technologies. Something like a natural-interface project will need a coordinated and consistent user view and experience. All these interviews mentioned nothing of that sort.

I believe only Shuhei briefly talked about Sony's effort to consolidate its controller strategy (e.g., how to use the new controller together with SIXAXIS? Is there a basic gesture vocabulary for common actions?). This is a huge area of work that I hope they reveal more about. It'd be like a "beta test" or "demo" of the alternate experiences.
 
His point is incorrect though. There's nothing bad about a mouse's accuracy - just ask all the PC FPSers who hate console controllers! The problem with the mouse is not old moving-ball technology (and who uses that these days?) but learning new motor skills. Those same skills had to be learned to handle a pen and write your name, but that was a long time ago, at school, and people have forgotten how much effort it took. Thus PS3 motion is no more accurate than a mouse (in fact it's far less accurate, as the mouse is accurate to many hundreds of points per inch, tiny movements). It's just easier to use for most people.

Yup, he's not explained the PSmote's advantage very accurately at all. Ironic really.

Some of the examples given in the interview of uses for the light are interesting. They mention using it as muzzle flare; that kind of implies some on/off lighting, doesn't it? Besides the epilepsy risk there, isn't the point that the ball is the chief source of the precision attained? Maybe he meant some sort of colour change, rather than on/off - starting from a dark colour.

It's not something I'd really thought about though, despite it actually being the first thing they showed the controller doing. I can imagine things like the ball glowing red when your weapon overheats (matron) or flashing when you need to reload, or a pulsing colour change speeding up when an unseen enemy approaches, or even just changing colour when you're pointing at something you can interact with.
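Just playing with the idea, something like this is what I imagine - a pure sketch, with setSphereColour() and all the numbers invented, since nothing of the real interface has been shown:

// Pure sketch: driving the glowing ball from game state.
// setSphereColour() is invented for illustration; the real API is unknown.
#include <algorithm>
#include <cmath>
#include <cstdio>

struct Colour { float r, g, b; };    // each channel in [0, 1]

void setSphereColour(const Colour& c) {             // stand-in for the real call
    std::printf("sphere -> (%.2f, %.2f, %.2f)\n", c.r, c.g, c.b);
}

// Fade from cool blue toward red as the weapon heats up, and pulse faster
// (but never fully off) as an unseen enemy gets closer, avoiding the hard
// on/off flashes that could upset tracking (or players).
Colour sphereColourFor(float weaponHeat, float enemyProximity, float timeSec) {
    weaponHeat     = std::max(0.0f, std::min(1.0f, weaponHeat));
    enemyProximity = std::max(0.0f, std::min(1.0f, enemyProximity));

    Colour c = { weaponHeat, 0.1f, 1.0f - weaponHeat };

    // Pulse rate rises with proximity; brightness stays between 0.5 and 1.0.
    float pulse = 0.75f + 0.25f * std::sin(timeSec * (2.0f + 8.0f * enemyProximity));
    c.r *= pulse; c.g *= pulse; c.b *= pulse;
    return c;
}

int main() {
    setSphereColour(sphereColourFor(0.2f, 0.0f, 1.0f));   // cool and calm
    setSphereColour(sphereColourFor(0.9f, 0.8f, 1.0f));   // hot, enemy nearby
    return 0;
}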
 
Yup, he's not explained the PSmote's advantage very accurately at all. Ironic really.

Some of the examples given in the interview of uses for the light are interesting. They mention using it as muzzle flare; that kind of implies some on/off lighting, doesn't it? Besides the epilepsy risk there, isn't the point that the ball is the chief source of the precision attained? Maybe he meant some sort of colour change, rather than on/off - starting from a dark colour.
Well, by switching the light off for a fraction of a second, they can emulate recoil as well. :)
 