Old Discussion Thread for all 3 motion controllers

CEDEC 09: Sony Demonstrates Photo and Facial Recognition Technology

http://gamasutra.com/php-bin/news_index.php?story=25108
...explains photo recognition and facial recognition.

Photo matching:
In the demonstration, they showed a close-up photo of a red flower in an open field. Using a convolution filter, they showed how an image can be softened (low-pass), sharpened (high-pass), or edge-detected (Sobel/Laplacian); a pyramid filter was also shown to demonstrate noise reduction.
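To make the convolution step concrete, here is a minimal sketch (my own illustration, not anything from Sony's library): the same 3x3 loop produces a blur or an edge map depending only on which kernel you pass in. The image size, kernel values and border handling are all made up for the example.

```c
/* Minimal 3x3 convolution sketch (hypothetical, not from Sony's libraries).
 * The same loop implements low-pass (box blur), high-pass (sharpen) or
 * edge detection (Sobel/Laplacian) just by swapping the kernel. */
#include <stdio.h>

#define W 8
#define H 8

/* Convolve a WxH grayscale image with a 3x3 kernel (borders left unchanged). */
static void convolve3x3(const float *src, float *dst, const float k[3][3])
{
    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x) {
            if (x == 0 || y == 0 || x == W - 1 || y == H - 1) {
                dst[y * W + x] = src[y * W + x];   /* skip borders */
                continue;
            }
            float acc = 0.0f;
            for (int j = -1; j <= 1; ++j)
                for (int i = -1; i <= 1; ++i)
                    acc += k[j + 1][i + 1] * src[(y + j) * W + (x + i)];
            dst[y * W + x] = acc;
        }
}

int main(void)
{
    /* Box blur = low-pass; Sobel X = horizontal edge response. */
    const float box[3][3]    = {{1/9.f,1/9.f,1/9.f},{1/9.f,1/9.f,1/9.f},{1/9.f,1/9.f,1/9.f}};
    const float sobelx[3][3] = {{-1,0,1},{-2,0,2},{-1,0,1}};

    float img[W * H], blurred[W * H], edges[W * H];
    for (int i = 0; i < W * H; ++i)
        img[i] = (i % W < W / 2) ? 0.0f : 255.0f;   /* hard vertical edge */

    convolve3x3(img, blurred, box);
    convolve3x3(img, edges, sobelx);
    printf("edge response at the boundary: %.0f\n", edges[3 * W + W / 2]);
    return 0;
}
```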

Next, they showed three images and ran a comparison match to see which two photos were identical; when two images match, the difference image essentially turns all black, giving a value of 0. All of the above concerns still images, but the same techniques are used to track movement: a girl stood in front of the PlayStation Eye moving her hands.
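The "screen turns black, value 0" comparison is just a per-pixel difference, and the same difference against the previous camera frame is the simplest way to flag motion. A rough sketch of the idea (entirely my own illustration, with made-up frame sizes):

```c
/* Sketch of the "difference image" idea: identical images subtract to all
 * zeros (a black screen), and the same per-pixel difference against the
 * previous camera frame flags motion. Purely illustrative. */
#include <stdio.h>
#include <stdlib.h>

#define NPIX (320 * 240)

/* Sum of absolute per-pixel differences; 0 means the images are identical. */
static long abs_diff_sum(const unsigned char *a, const unsigned char *b, int n)
{
    long sum = 0;
    for (int i = 0; i < n; ++i)
        sum += abs((int)a[i] - (int)b[i]);
    return sum;
}

int main(void)
{
    static unsigned char frame0[NPIX], frame1[NPIX];

    /* Identical frames -> difference image is all black (sum 0). */
    for (int i = 0; i < NPIX; ++i)
        frame0[i] = frame1[i] = (unsigned char)(i & 0xff);
    printf("identical frames: %ld\n", abs_diff_sum(frame0, frame1, NPIX));

    /* "Move a hand": brighten a small region in the new frame. */
    for (int i = 1000; i < 1100; ++i)
        frame1[i] = 255;
    printf("after motion:     %ld\n", abs_diff_sum(frame0, frame1, NPIX));
    return 0;
}
```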

...

Facial recognition:
As for facial recognition, the technology may be familiar to many, as it has now become a feature in most digital cameras. The process of facial recognition can be divided into four steps. The first, which also takes the most processing time, is face detection: a detection box of at least 20x20 pixels sweeps across the entire frame.
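As a rough picture of what that sweep looks like (nothing here is from libface; the classifier is a hypothetical stand-in, and a real detector would also repeat the sweep at multiple scales):

```c
/* Sketch of the 20x20 sliding-window sweep described above. The classifier
 * itself (looks_like_face) is a hypothetical stand-in -- a real detector
 * would run a trained cascade or template match at each window position. */
#include <stdio.h>

#define W 320
#define H 240
#define WIN 20    /* minimum detection box, per the article */

/* Hypothetical classifier: here, "face" = window brighter than a threshold. */
static int looks_like_face(const unsigned char *img, int x0, int y0)
{
    long sum = 0;
    for (int y = 0; y < WIN; ++y)
        for (int x = 0; x < WIN; ++x)
            sum += img[(y0 + y) * W + (x0 + x)];
    return sum > 200L * WIN * WIN;
}

int main(void)
{
    static unsigned char img[W * H];          /* dark frame... */
    for (int y = 100; y < 140; ++y)           /* ...with one bright 40x40 blob */
        for (int x = 60; x < 100; ++x)
            img[y * W + x] = 255;

    /* Sweep the whole frame with a WIN x WIN box, stepping 4 pixels at a time. */
    for (int y = 0; y + WIN <= H; y += 4)
        for (int x = 0; x + WIN <= W; x += 4)
            if (looks_like_face(img, x, y))
                printf("candidate face window at (%d, %d)\n", x, y);
    return 0;
}
```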

... [More about the 4 steps in the article]...

SPU implementation:
With libface, one or more SPUs can be used for face processing. For example, setting the parameter to 47 pixels takes 58 milliseconds with one SPU. With multiple SPUs, the process can be sped up, reducing it to 15 milliseconds.
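The speedup presumably comes from splitting the sweep across SPUs. A very loose sketch of the partitioning idea, using plain pthreads instead of SPU jobs (the real SDK's job/DMA management looks nothing like this):

```c
/* Rough sketch of the parallelisation idea only: split the sweep into
 * horizontal bands and give each band to a worker. On the PS3 the workers
 * would be SPU jobs with DMA'd image tiles; pthreads are used here purely
 * to illustrate why several workers cut 58 ms down towards 15 ms. */
#include <pthread.h>
#include <stdio.h>

#define H 240
#define NWORKERS 4

typedef struct { int y_start, y_end; long work_done; } band_t;

static void *scan_band(void *arg)
{
    band_t *b = (band_t *)arg;
    /* Placeholder for "run the face-detection sweep over rows y_start..y_end". */
    for (int y = b->y_start; y < b->y_end; ++y)
        b->work_done += y;
    return NULL;
}

int main(void)
{
    pthread_t tid[NWORKERS];
    band_t band[NWORKERS];
    int rows = H / NWORKERS;

    for (int i = 0; i < NWORKERS; ++i) {
        band[i].y_start = i * rows;
        band[i].y_end   = (i == NWORKERS - 1) ? H : (i + 1) * rows;
        band[i].work_done = 0;
        pthread_create(&tid[i], NULL, scan_band, &band[i]);
    }
    for (int i = 0; i < NWORKERS; ++i)
        pthread_join(tid[i], NULL);

    printf("all %d bands scanned in parallel\n", NWORKERS);
    return 0;
}
```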

For gaming:
Some applications such as avatar-linked facial recognition and pattern recognition were briefly shown. With the former, it would find the alignment of each user's face, so if the user smiles, the avatar on screen smiles as well.

The latter is used in mini-games such as "Smile Competition": users smile in front of the camera and whoever scores higher wins the match. Although the application focuses on facial recognition, the same approach can also be used to detect specific objects through higher-level detection algorithms.
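The scoring part of something like "Smile Competition" is trivial once the library hands you a per-face expression value; here's a toy version where smile_score() and its inputs are entirely made-up stand-ins for whatever libface actually reports:

```c
/* Toy version of the "Smile Competition" comparison. smile_score() is a
 * hypothetical stand-in for whatever per-face smile/expression value the
 * real library reports; only the scoring logic is illustrated. */
#include <stdio.h>

typedef struct {
    const char *name;
    float mouth_corner_lift;   /* hypothetical landmark-derived features */
    float mouth_width_ratio;
} player_face_t;

/* Combine a couple of (made-up) facial features into a 0..100 smile score. */
static float smile_score(const player_face_t *f)
{
    float s = 60.0f * f->mouth_corner_lift + 40.0f * f->mouth_width_ratio;
    if (s > 100.0f) s = 100.0f;
    if (s < 0.0f)   s = 0.0f;
    return s;
}

int main(void)
{
    player_face_t p1 = { "Player 1", 0.8f, 0.7f };
    player_face_t p2 = { "Player 2", 0.5f, 0.9f };

    float s1 = smile_score(&p1), s2 = smile_score(&p2);
    printf("%s: %.1f   %s: %.1f\n", p1.name, s1, p2.name, s2);
    printf("winner: %s\n", s1 > s2 ? p1.name : p2.name);
    return 0;
}
```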
 

great video!

They are really honest about the technology: no scripted 'behind the curtain' Peter Molyneux Natal fakery.
I expect a lot from the PS Eye technology; I have seen head-tracking demos that work with only the camera, and using that in a game would be awesome. It also has a directional microphone array, so players can identify themselves by waving their hand while saying their name: the system will then know whose motions to track.
This can be combined with the regular controller for added gameplay.
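If Sony ever did that, the matching logic could be as simple as comparing the voice's direction of arrival from the mic array with the image position of the detected hand wave. A back-of-the-envelope sketch (every name, angle and threshold below is invented):

```c
/* Sketch of the speculation above: match the direction a voice came from
 * (reported by the microphone array) with the horizontal position of a
 * detected hand wave, so the system knows whose motion to track.
 * Every name and number here is made up for illustration. */
#include <math.h>
#include <stdio.h>

#define FOV_DEG 75.0          /* rough horizontal field of view of the camera */
#define IMG_W   640.0

/* Convert a hand's x position in the image to an angle from the camera axis. */
static double pixel_to_angle(double x)
{
    return (x / IMG_W - 0.5) * FOV_DEG;
}

typedef struct { const char *player; double hand_x; } wave_event_t;

int main(void)
{
    /* Two players waved; the mic array reported the voice at ~+12 degrees. */
    wave_event_t waves[] = { { "Alice", 150.0 }, { "Bob", 520.0 } };
    double voice_angle_deg = 12.0;

    const char *speaker = NULL;
    double best = 1e9;
    for (int i = 0; i < 2; ++i) {
        double diff = fabs(pixel_to_angle(waves[i].hand_x) - voice_angle_deg);
        if (diff < best) { best = diff; speaker = waves[i].player; }
    }
    printf("track motions of: %s (angular mismatch %.1f deg)\n", speaker, best);
    return 0;
}
```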
Or they can go 'all Wii' and make games like KZ2 but with Metroid's control scheme :cool:
For Natal I don't see much potential, honestly.
Full-body movement, imo, will always be a gimmick. I cannot imagine it replacing the controller for (all) popular genres, so new gimmick-type games are where it will be aimed.
The Peter Molyneux 'realtime demo' ( :LOL: :rolleyes: ) is an example of that.
"kick away the ball" is another example. Not to mention "make a painting".
 
Err.... Natal has very good potential in many markets; it's just more difficult to map its control scheme to traditional video games. I have seen people lie on their bed/couch playing games for hours. That kind of gaming is more about mental stimulation (and thumb twiddling) than full-body exercise.

That's why I think Natal's 3D imaging is the more interesting/compatible part for core gaming. The rest (voice recognition, full-body motion) has been done by the Wii and PS Eye with varying degrees of success, so it's much harder to tell the difference between Natal and those games.
 
It looks like some of you guys got Sony prototypes, but what I find very strange is that no one here seems to be talking about Natal.

And I am pretty sure some of you DO have versions of Natal to work on. Are you hiding something????
 
I spoke to an nVidia manager over the Labor Day weekend. He got to try an early prototype too. Same old feedback we've heard before: it's very accurate.

EDIT: Kotaku mentioned a possible use for the GigaPan technology: the PlayStation Store. I think it'd be a useful application there, without resorting to scroll bars, but the zooming has to be fast.

Besides "Life with Playstation", the other possible application is Qore (Game and Blu-ray release schedules), and Playstation Home (Calendar of user-organized events).
 
Imagine a web browser based on that. As you focus on an area, it buffers the linked site and loads mip-mapped images of it as you zoom in. It would require a radical change in content storage and loading, but it'd be a much more interesting and fluid way of navigating the net. It could also be interesting to rotate to a side view of your browsing tree and follow the path you've branched along.
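The core of that idea is just level-of-detail selection plus prefetching: pick a mip level of the page snapshot from the current zoom factor and start loading whatever sits under the focus point before the user finishes zooming. A tiny sketch (fetch_in_background and all the numbers are placeholders I made up):

```c
/* Sketch of the level-of-detail idea in the post above: choose which mip
 * level of a page snapshot to draw from the current zoom factor, and start
 * prefetching whatever sits under the focus point before the user has fully
 * zoomed in. fetch_in_background() is a placeholder. */
#include <math.h>
#include <stdio.h>

#define MAX_MIP 5   /* mip 0 = full-resolution snapshot, 5 = tiny thumbnail */

/* Each halving of on-screen size moves one mip level down. */
static int mip_for_zoom(double zoom)       /* zoom 1.0 = full size */
{
    int level = (int)floor(-log2(zoom));
    if (level < 0) level = 0;
    if (level > MAX_MIP) level = MAX_MIP;
    return level;
}

static void fetch_in_background(const char *url)   /* placeholder */
{
    printf("  prefetching %s\n", url);
}

int main(void)
{
    const char *focused_link = "http://example.com/article";
    double zooms[] = { 0.05, 0.2, 0.6, 1.0 };

    for (int i = 0; i < 4; ++i) {
        int mip = mip_for_zoom(zooms[i]);
        printf("zoom %.2f -> draw mip level %d\n", zooms[i], mip);
        if (mip <= 2)                       /* close enough: start loading */
            fetch_in_background(focused_link);
    }
    return 0;
}
```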
 
Heh, very good!

EDIT: by the way, together with two motion controllers, you've got quite a lot to work with as a developer. :D
 
The fuzzy logic bars showed how it's 'seeing' the person. I wonder how this compares with MS's face recognition and how they use that to identify users?
 
They also showed the Physics Effects SDK at CEDEC 2009:


Develop has an article on it:
http://www.develop-online.net/news/32826/VIDEO-Sonys-PS3-optimised-physics-SDK
(Titanio posted the link on GAF)


Is this the Bullet Physics Library that was ported to Cell?

Although this has nothing to do with controllers, it shows how the PS3 will move forward as a platform. These calculations are basically 'for free', so even if the graphics will not improve that much (wait till you see the real GT5 or Uncharted 2), the physics will.
 
I posted it here for 2 reasons:
* It's part of the CEDEC 2009 show (where we took the face tracking and voice recognition videos from)
* It is related to the new controller scheme because augmented reality usually requires some form of physics simulation (e.g., for manipulating virtual objects that represent the real world).
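For the second point, even a toy example shows why: once the camera gives you a tracked hand position, any virtual object you overlay needs integration, collision response and so on every frame. A minimal sketch of that loop (this is not the Physics Effects SDK API, just the general idea):

```c
/* Minimal sketch of why an augmented-reality controller setup wants some
 * physics: a virtual ball is integrated each frame and gets an impulse when
 * the tracked hand position comes close to it. Not the Physics Effects SDK
 * API, just the general idea in a few lines. */
#include <math.h>
#include <stdio.h>

typedef struct { double x, y, vx, vy; } ball_t;

static void step(ball_t *b, double hand_x, double hand_y, double dt)
{
    /* Give the ball an impulse when the camera-tracked hand comes within 10 cm. */
    double dx = b->x - hand_x, dy = b->y - hand_y;
    double d = sqrt(dx * dx + dy * dy);
    if (d > 1e-6 && d < 0.1) {
        b->vx += 2.0 * dx / d;       /* push the ball away from the hand */
        b->vy += 2.0 * dy / d;
    }
    b->vy -= 9.8 * dt;               /* gravity */
    b->x  += b->vx * dt;
    b->y  += b->vy * dt;
    if (b->y < 0.0) { b->y = 0.0; b->vy *= -0.5; }   /* bounce on the floor */
}

int main(void)
{
    ball_t ball = { 0.0, 0.0, 0.0, 0.0 };   /* resting on the floor */
    for (int frame = 0; frame < 60; ++frame) {
        /* Pretend the tracked hand sweeps along the floor past the ball. */
        double hand_x = -1.0 + frame * (2.0 / 60.0);
        step(&ball, hand_x, 0.0, 1.0 / 60.0);
    }
    printf("ball after 1 s: pos (%.2f, %.2f), vel (%.2f, %.2f)\n",
           ball.x, ball.y, ball.vx, ball.vy);
    return 0;
}
```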
 