PlayStation Camera: What are the benefits?

If there were a gameplay element or a content market for stereoscopic video (say, if 3D TVs and monitors were actually all the rage), I could imagine something neat like using the camera to make little 3D videos, like a stop-motion LEGO short. Letting gamers make that content and post it to their social media and video site accounts would probably open a massive can of worms, though.

That, at least, is something where a more advanced, accurate, and reliable setup with a single color camera plus a depth sensor would be at a disadvantage.
Maybe if a VR headset actually became an accessory, it would make a difference for some kind of AR game.
The case where the camera's input can feed back as-is to a pair of human eyes is the only one I can stretch my imagination to entertain as a point where it is not seriously outclassed.
 
An odd thing would be if the auxiliary port's use of USB3 meant it also had internal support for USB3's host-to-host communication features. Since it probably feeds into hardware dedicated to the video stream, that level of flexibility doesn't seem likely to me.
We could debate how large the population of people willing to buy another PS4 or a PC for a local "cloud" compute resource would be relative to the population that is going to embrace a camera Sony isn't doing much to back.
 
That's good enough for computer vision and robots, which is where depth sensing has really been developed, but there's a load of noise and black spots for a game. Trying to isolate all the content at a distance between 45% and 55% of the range is going to find a load of samples that shouldn't be present, and miss a load of points that should be included.

Filtering the results to get solid depth data would likely lead to noisy margins instead of clean cutouts.
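To make the problem concrete, here is a minimal Python sketch (toy depth values, plain lists standing in for a depth map, all numbers made up for illustration) of isolating a depth band and then cleaning the noisy mask with a 3x3 majority filter. The dropout ("black spot") punches a hole in the raw mask that the filter fills:

```python
# Minimal sketch: isolate pixels in a depth band (45%-55% of the range),
# then clean the speckled binary mask with a 3x3 majority filter.
# Depth values are hypothetical; 0.0 marks a "black spot" (no reading).

def band_mask(depth, lo=0.45, hi=0.55, max_range=4.0):
    """Mark pixels whose depth falls inside [lo, hi] of the sensor range."""
    return [[1 if lo * max_range <= d <= hi * max_range else 0 for d in row]
            for row in depth]

def majority3x3(mask):
    """Majority vote over each 3x3 neighbourhood to suppress speckle noise."""
    h, w = len(mask), len(mask[0])
    out = [row[:] for row in mask]          # edges are left as-is
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = [mask[y + dy][x + dx]
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            out[y][x] = 1 if sum(window) >= 5 else 0
    return out

# A toy 5x5 depth map: a "player" at ~2.0 m with one noisy dropout (0.0)
depth = [
    [3.5, 3.5, 3.5, 3.5, 3.5],
    [3.5, 2.0, 2.0, 2.0, 3.5],
    [3.5, 2.0, 0.0, 2.0, 3.5],   # dropout in the middle of the player
    [3.5, 2.0, 2.0, 2.0, 3.5],
    [3.5, 3.5, 3.5, 3.5, 3.5],
]
raw = band_mask(depth)       # the dropout leaves a hole in the mask
clean = majority3x3(raw)     # the filter fills the isolated hole
```

Real filtering would of course be heavier than this, but even this toy shows the trade-off: the same filter that fills holes also erodes genuine thin features at the mask's edges.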

Perhaps the reason 3D hasn't been shown much is that it's really not ready. The dual-camera setup barely costs more and provides better 2D options, so that's why Sony went that route, and bunged in 3D because they could without too much trouble.
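For context on what the dual-camera setup buys you: depth from stereo comes from the standard pinhole relation depth = focal_length x baseline / disparity. A minimal sketch in Python, with illustrative numbers that are not Sony's actual specs:

```python
# Standard stereo relation: depth = focal_length * baseline / disparity.
# The focal length (pixels) and baseline (metres) below are illustrative.

def stereo_depth(disparity_px, focal_px=600.0, baseline_m=0.08):
    """Depth in metres of a point seen disparity_px pixels apart in the two views."""
    if disparity_px <= 0:
        raise ValueError("zero disparity: point at infinity or unmatched")
    return focal_px * baseline_m / disparity_px

# A feature matched 24 px apart sits at 600 * 0.08 / 24 = 2.0 m
d = stereo_depth(24)
```

The relation also explains why stereo depth gets coarse at distance: disparity shrinks with range, so a one-pixel matching error costs far more accuracy at 4 m than at 1 m.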
 
I suspect lighting is the key difference. Computational power may not be a show stopper.

Things like background removal will be flaky (see KungFu Live).

Large motion detection, like dancing and boxing, should be OK after calibration. During calibration, they should be able to estimate our limb lengths and joint locations, assuming good lighting.

Instant, precise control may require a controller like Move. Titles like Tumble and BeatSketcher may be tricky otherwise.
 

Yes, this is one of the questions in my mind: whether Sony will make a family of home theater products to take advantage of the PS4 (or augment it). The AUX port looks like a good "back door".

Both the consoles can also use the gigabit network for large granularity distributed processing, but that's less interesting to me. :p


YouTube and Sony still owe us 3D video channels.

I have a 3D camera. Would love to build up a good library of 3D family media.
 

Well, the PS4 isn't trying to navigate an arbitrary space or track arbitrary objects in 3D. It can calibrate with background removal, and you are most likely trying to track humans, which are pretty well understood. You can greatly simplify the problem with these constraints, and the big issues would be lighting and good calibration. I would think skeletal or limb tracking would be good enough for spastic motion games, ones where the players can hardly tell the difference between an interactive game and jumping around in front of a video.

Anything that needs precision probably needs a controller, as we learned with the Kinect.
 
That's true. You can easily calibrate a background by having the player stand on one side of the screen and then the other, and combine the two halves (preferably averaged over lots of frames). You could then compare only non-background depths, rather than trying to isolate a depth from a constant reading, and use anatomy to extrapolate skeleton info. I don't rate its chances of being great, though - nothing ever works as well in practise as the theory sounds!
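The two-halves calibration described above can be sketched in a few lines. Everything here is illustrative: tiny 2x4 "depth frames" as plain Python lists, a made-up tolerance, no real camera API:

```python
# Sketch: average depth frames while the player stands on each side,
# stitch the clear halves into one background map, then flag live
# pixels whose depth disagrees with the calibrated background.

def average_frames(frames):
    """Average depth over many frames to suppress per-frame noise."""
    n, h, w = len(frames), len(frames[0]), len(frames[0][0])
    return [[sum(f[y][x] for f in frames) / n for x in range(w)]
            for y in range(h)]

def stitch_background(avg_player_left, avg_player_right):
    """Left half is clear while the player stands right, and vice versa."""
    w = len(avg_player_left[0])
    return [pr[: w // 2] + pl[w // 2:]
            for pl, pr in zip(avg_player_left, avg_player_right)]

def foreground(depth, background, tol=0.1):
    """Mark non-background pixels: depth differs from the calibrated map."""
    return [[1 if abs(d - b) > tol else 0 for d, b in zip(dr, br)]
            for dr, br in zip(depth, background)]

# Player (depth ~2.0 m) blocks the left half, then the right half,
# of a room whose walls sit at ~3.0 m.
frame_player_left  = [[2.0, 2.0, 3.0, 3.0], [2.0, 2.0, 3.0, 3.0]]
frame_player_right = [[3.0, 3.0, 2.0, 2.0], [3.0, 3.0, 2.0, 2.0]]
avg_left  = average_frames([frame_player_left, frame_player_left])
avg_right = average_frames([frame_player_right, frame_player_right])

background = stitch_background(avg_left, avg_right)   # clean 3.0 m map
live = [[3.0, 2.0, 3.0, 3.0], [3.0, 2.0, 3.0, 3.0]]   # player at column 1
mask = foreground(live, background)
```

The per-pixel comparison is exactly where the earlier noise objection bites: a tolerance tight enough to catch the player near a wall will also flag sensor jitter, so the clean toy numbers here flatter the idea.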
 
Just pointing out the obvious that these are from 2009.
I think I'm missing your point. Are you saying that the tech can't be even better with the stereoscopic cameras and a higher resolution? Are you trying to say Sony (and others) have completely stopped work in these fields?
 
Thank you, Patsu! I could not find it under the incorrect search phrases I used. :oops: However, I did find some other videos in the process (done with the PS3's PS Eye).

Face-tracking Library (w/ Augmented Reality)

http://www.youtube.com/watch?v=gOtPVof2K94&feature=youtu.be&noredirect=1

Voice Recognition Library

http://www.youtube.com/watch?v=5TaUsSy-f0Y&feature=youtu.be&noredirect=1

Edit: will add to OP

There are a lot of camera-based R&D projects and libraries. I remember I gathered them in an SPU thread.
 
I think I'm missing your point. Are you saying that the tech can't be even better with the stereoscopic cameras and a higher resolution? Are you trying to say Sony (and others) have completely stopped work in these fields?

Here's precisely my problem: if you have new videos showing new tech for the PS4 camera, just post them. Instead you posted something from four years ago and implied things should be better by now.

To put it in perspective, it's like posting videos of Killzone 2 and saying Killzone: Shadow Fall will look better than this. That may well be true, but it says nothing about Shadow Fall itself.
 
If it's important enough, Sony will give or loan those researchers free PS4 devkits to port their libraries to PS4.
 
In terms of "silhouette extraction" or background removal, AFAIR there was a dance game at the PS4 event - so maybe some news will appear?
 
I think it's Dance Central? Unless the title shows player footage but swaps the background, Dance Central should not need background removal.

I am under the impression that most motion games track moving objects. They don't really care about the background.
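That approach, tracking whatever moves and ignoring the static background, can be as simple as differencing consecutive frames. A minimal sketch with toy brightness values (the frames and threshold are made up for illustration):

```python
# Sketch: detect motion by differencing consecutive frames; static
# background pixels cancel out and are ignored automatically.

def motion_mask(prev, curr, tol=10):
    """Pixels whose brightness changed by more than tol between frames."""
    return [[1 if abs(c - p) > tol else 0 for c, p in zip(cr, pr)]
            for cr, pr in zip(curr, prev)]

frame_a = [[50, 50, 50], [50, 50, 50]]
frame_b = [[50, 200, 50], [50, 50, 50]]  # something moved into one pixel
mask = motion_mask(frame_a, frame_b)
# mask == [[0, 1, 0], [0, 0, 0]]
```

The weakness is the flip side of the strength: a player who holds still vanishes from the mask, which is exactly why full-body tracking tends to fall back on background removal instead.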
 

For full-body tracking, AFAIR the idea is to remove the background, then analyze the silhouette. Sony are either very confident in the quality of the camera, or have found a reliable method for two cameras.

Anyway, the previous PS Eye was highly regarded as a camera, so it's going to be interesting to see what this unit can do.
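For a sense of what analyzing the silhouette might involve once the background is removed, here is a minimal sketch that pulls a bounding box and centroid out of a binary mask. This is toy code on made-up data, not any actual middleware API:

```python
# Sketch: basic silhouette statistics from a binary foreground mask.

def silhouette_stats(mask):
    """Bounding box (x0, y0, x1, y1) and centroid (cx, cy) of set pixels."""
    pts = [(x, y) for y, row in enumerate(mask)
           for x, v in enumerate(row) if v]
    if not pts:
        return None  # empty silhouette: nothing to track this frame
    xs = [p[0] for p in pts]
    ys = [p[1] for p in pts]
    bbox = (min(xs), min(ys), max(xs), max(ys))
    centroid = (sum(xs) / len(pts), sum(ys) / len(pts))
    return bbox, centroid

mask = [[0, 0, 0],
        [0, 1, 1],
        [0, 1, 1]]
bbox, centroid = silhouette_stats(mask)
```

Real pose estimation obviously goes far beyond this, but even bounding box plus centroid is enough to drive the coarse "lean left / lean right / jump" input that many camera games actually use.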
 