PlayStation 4 Camera driver released (PS4EYECam)

onQ

Veteran
bigboss, a member of the PS2Reality team, has released PS4EYECam, a driver for the PlayStation 4 Camera, which will of course allow you to use your PS4 Camera on your PC/Mac. Here is a sample image, followed by a quote from the developer:

[Image: sample camera capture]



What is PS4EYECam?
PS4EYECam is a reference implementation of a Sony PlayStation 4 Camera driver.
The driver uses libusb to handle USB communications.
Part of the code is based on PS3EYEDriver.
Payload parsing parts come from the Linux kernel source (gspca).
UVC video control comes from libuvc.
Boot and initialization come from the dumped PlayStation 4 Camera firmware (ps4eye).
Other parts and research are based on my experience adding support for different USB devices on the Sony PlayStation 2 and Sony PlayStation 3 consoles (ps2eyetoy.irx, ps2mic.irx, ps3kinect.sprx, etc.).
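For anyone curious what "using libusb to handle USB communications" means in practice, here is a minimal sketch of the first step such a driver has to take: finding and opening the device. The vendor/product IDs below are assumptions for illustration (verify against your own unit with lsusb), not values confirmed from the PS4EYECam source:

```cpp
// Minimal sketch: find and open the PS4 camera with libusb.
// The VID/PID values are assumptions -- check yours with `lsusb`.
#include <libusb-1.0/libusb.h>
#include <cstdint>
#include <cstdio>

int main() {
    const uint16_t kVendorId  = 0x05a9; // OmniVision (assumed)
    const uint16_t kProductId = 0x058a; // PS4 camera, post-firmware (assumed)

    libusb_context* ctx = nullptr;
    if (libusb_init(&ctx) != 0) return 1;

    libusb_device_handle* dev =
        libusb_open_device_with_vid_pid(ctx, kVendorId, kProductId);
    if (!dev) {
        std::fprintf(stderr, "Camera not found -- firmware may not be uploaded yet.\n");
        libusb_exit(ctx);
        return 1;
    }

    // A real driver would now claim the video interface, submit transfers
    // for the video payload, and parse frames out of the stream.
    if (libusb_claim_interface(dev, 0) == 0) {
        std::puts("PS4 camera opened and interface claimed.");
        libusb_release_interface(dev, 0);
    }
    libusb_close(dev);
    libusb_exit(ctx);
    return 0;
}
```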

Source

Follow bigboss on Twitter
@psxdev


A group/team/person known as PS4Eye also had a similar project, but it has not been updated in several months. That project did have help from bigboss; I am not quite sure how similar the two projects are, but if you want to check how to convert your PS4 Camera to USB, check their blog below:
http://ps4eye.tumblr.com/


http://playstationhax.it/released-ps4eyecam-a-ps4-camera-driver/
 
Followed the link to his Twitter feed. It suggests the Depth Stream is indeed a depth buffer coming from the camera. That'd suggest on-board processing of the stereo images for depth.
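For a sense of what "processing the stereo images for depth" involves, here is a sketch using OpenCV's block matcher. This is a generic illustration of stereo matching plus triangulation, not Sony's actual algorithm; the function name and parameter values are placeholders of mine:

```cpp
// Generic stereo-to-depth sketch (OpenCV), illustrating the kind of work
// that has to happen somewhere -- on the camera or on the console.
#include <opencv2/opencv.hpp>

cv::Mat depthFromStereo(const cv::Mat& leftGray, const cv::Mat& rightGray,
                        float focalPx, float baselineMeters) {
    // Block matching over a rectified 8-bit pair; 64 disparity levels,
    // 15-pixel blocks (both placeholder values).
    cv::Ptr<cv::StereoBM> bm = cv::StereoBM::create(64, 15);
    cv::Mat disp16;
    bm->compute(leftGray, rightGray, disp16);   // fixed-point disparity (x16)

    cv::Mat disp;
    disp16.convertTo(disp, CV_32F, 1.0 / 16.0); // back to pixel units

    // Triangulation: Z = f * B / d per pixel (invalid where d <= 0).
    cv::Mat depth;
    cv::divide(focalPx * baselineMeters, disp, depth); // metres
    return depth;
}
```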
 
Yeah, it has a bridge processor, the OmniVision OV00580, that I haven't been able to find much info on, connecting the two cameras (OV9713 or OV9714?).

Chipworks has die shots of the bridge processor for sale for $2,500, and I've seen a supplier selling the chips for $5, I think. That's all the information I have seen on the image processor.

[Image: OV00580-B21G-1C (OV580) die markings]


https://chipworks.secure.force.com/...artID=e14444e7-5a06-4f26-9217-326f5ff83d0c&g=


There is a 3D-scanning tablet using the OV9714 camera, which is close to or the same sensor that's in the PS4 camera, but there it's being used with an IR projector.

[Images: Mantis Vision Aquila 3D-scanning tablet]



http://www.tomsguide.com/us/mv4d-3d-scanning-tablet-avatars,news-19562.html
http://www.greenbot.com/article/268...googles-project-tango-tech-to-the-masses.html

It is a shame that the PS4 camera does not have a standard USB port.

There is a how-to guide on converting the cable to USB 3.
 
Yep. Just like Kinect 1.

But that's with the tablet using one camera and an IR projector. PS4 is using stereo vision without the IR projector, unless the red LED is somehow a near-IR projector (which is something I've been wondering about for a while).
 
Looks like the OV580 bridge image processor is also the bridge image processor in the new Leap Motion sensor, but Leap Motion is using two OV7251 cameras.




May 3, 2014
Leap Motion Camera

In Leap's Linux install folder (/usr/share/Leap) there is a binary file, ov580-0214.bin.

Opening it in the Bless hex editor turns up two useful strings, OV580 and OV7251, both OmniVision camera products. The camera in the Leap Motion should be one of those two.


About OV7251:

Searching for OV7251 turns up the official specification document:

http://www.ovt.com/download_document.php?type=sensor&sensorid=146

Leveraging the industry's smallest global shutter pixel, the black and
white OV7251 is capable of capturing VGA (640 × 480) resolution video at
100 frames per second (fps), QVGA (320 × 240) at 240 fps with binning, and
QQVGA (160 × 120) at 480 fps with binning and skipping. The OV7251's high
frame rates make it an ideal solution for low-latency machine vision
applications.

In other words: a black-and-white camera capturing VGA (640 × 480) at 100 fps, QVGA (320 × 240) at 240 fps, and QQVGA (160 × 120) at 480 fps.

http://www.mitgai.net/2014/05/human-computer-interaction/leap-motion-camera.html#more-352
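The string hunt described in that post is easy to reproduce without a hex editor. A quick sketch, same idea as the Unix `strings` tool (the file path is taken from the post):

```cpp
// Scan the Leap firmware blob for printable ASCII runs, the same way the
// quoted post found "OV580" and "OV7251" in a hex editor.
#include <cctype>
#include <fstream>
#include <iostream>
#include <iterator>
#include <string>
#include <vector>

int main() {
    std::ifstream f("/usr/share/Leap/ov580-0214.bin", std::ios::binary);
    std::vector<char> blob((std::istreambuf_iterator<char>(f)),
                           std::istreambuf_iterator<char>());

    std::string run;
    for (char c : blob) {
        if (std::isprint(static_cast<unsigned char>(c))) {
            run += c;
        } else {
            if (run.size() >= 5) std::cout << run << '\n'; // e.g. "OV580"
            run.clear();
        }
    }
    if (run.size() >= 5) std::cout << run << '\n';
    return 0;
}
```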
 
The red LED is just a red LED... It's far too weak to project anything these tiny consumer cameras can pick up (you'd probably need something on the sensitivity level of a supercooled FLIR to pick up the illumination of that tiny front LED...). It doesn't project a dot pattern either, so it doesn't really matter that it's really weak. :)
 
If it's near IR you wouldn't be able to see it with your eyes, so it could be sending out pulses of IR to be reflected back to the camera to give depth information. It doesn't have to be structured light. But I don't think that's the case; I think it's just stereo vision that's using the lighting from your room and sometimes the lighting from your TV.


I wonder if we will see any games using 3D scanning to insert real objects into the game, or even let you scan your room into the game.


NBA 2K will have face scanning.


 
How? How can intermittently filling a scene with flat, front-on illumination provide any depth data?

By measuring the time that it takes for the light to hit the object and come back.


Also, the two cameras can be set to different exposures or something, so there will be a difference in illumination from the pulse between the two cameras, giving you depth.
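For scale, here is the timing such a pulsed-light scheme would need. This is just the standard time-of-flight relation, given to show the magnitudes involved; nothing here is confirmed about the PS4 camera:

```latex
d = \frac{c\,\Delta t}{2}
\;\Rightarrow\;
\Delta t = \frac{2d}{c} = \frac{2 \times 2\,\mathrm{m}}{3 \times 10^{8}\,\mathrm{m/s}} \approx 13\,\mathrm{ns},
\qquad
\delta t = \frac{2 \times 0.01\,\mathrm{m}}{c} \approx 67\,\mathrm{ps}\ \text{per cm of depth resolution}
```

Resolving centimetres means resolving tens of picoseconds, which is why the replies below point out that this takes a specialized sensor and electronics.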
 
You're basically describing a TOF camera, which I'm pretty sure the PS4 Eye isn't. That needs a specialized sensor and electronics.
 
That's been covered before. You need a specific TOF sensor to do that, like Kinect 2's. It's not possible for the PS4's camera to use pulsed light for measuring depth. If it was a feature, Sony would have mentioned it (and sourced a TOF sensor from somewhere).

I'm not trusting the camera to output a depth buffer. Computer vision is a demanding process. Here's what Google's using : http://techcrunch.com/2014/02/20/in...-at-the-heart-of-googles-project-tango-phone/

and https://www.youtube.com/watch?v=vsQJ4qkCl1k

Claims 1 TF of processing power in the article. I doubt PS4's camera has anything like that, and the 'depth buffer' will be a preprocessed image saving some work for the system to do, but it'll still have to do a lot with it to get a sense of 3D vision.

Edit: the Google/Movidius solutions are for single-lens depth detection, so I guess with stereo images it could be a lot less demanding, but I still doubt we get a full depth buffer out the camera. Hopefully it won't be too long before that's (dis)proven!
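For reference, the per-pixel relation a stereo pair has to evaluate is plain triangulation. The focal length and baseline below are placeholder numbers for illustration, not measured PS4 camera values:

```latex
Z = \frac{f\,B}{d}
\qquad\text{e.g. } f = 800\ \mathrm{px},\; B = 0.08\ \mathrm{m},\; d = 16\ \mathrm{px}
\;\Rightarrow\; Z = \frac{800 \times 0.08}{16} = 4\ \mathrm{m}
```

The formula itself is cheap; the expensive part is finding the disparity d for every pixel, which is the matching work discussed above.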
 
That's already established. I said I already know that it's using stereo vision; I was just explaining how the beam of IR could be used if it were there.

I was still talking about using the two cameras for triangulation, but having the cameras at different exposures so that the images from the right and left cameras would show a difference from the time it takes the pulse of light to illuminate them.
 
Sony has a patent using pretty much the same tech as the Mantis Vision tablet and Project Tango for a PlayStation Move / pen-like controller. Seems that a normal IR camera and IR projector might be the way to go for small, low-power 3D scanning / tracking.

 
If it's near IR you wouldn't be able to see it with your eyes, so it could be sending out pulses of IR to be reflected back to the camera to give depth information.
It's a red LED (which we can visually identify as emitting the color red), so it's obviously not a near-IR LED. Also, it's far, far, FAR too weak to be a pulsed IR LED for a depth sensor. Look at the enormous LEDs in Kinect 2; they're so powerful they actually need active cooling.

Without really powerful output you can't gather enough light to create any meaningful structure on the input side. Remember the inverse square law? As weak as the output of that LED is (a few mW of CONTINUOUS power at most, much less pulsed), far less would return to a sensor after spreading out in the room, some of it scattering off or being partially absorbed by foreground objects, before reflecting back to a sensor.

So your wild theory is clearly unworkable. It's pretty well known and established that the PS4 Eye is just two standard webcams slung together in one plastic shell (with an array mic thrown in for good measure).
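Putting rough proportionalities on that inverse-square argument (a back-of-the-envelope sketch, not a measurement of the actual LED):

```latex
E_{\mathrm{return}} \;\propto\;
\underbrace{\frac{P_{\mathrm{LED}}}{4\pi r^{2}}}_{\text{LED to scene}}
\times
\underbrace{\frac{\rho}{r^{2}}}_{\text{diffuse bounce back}}
\;\sim\; \frac{P_{\mathrm{LED}}\,\rho}{r^{4}}
```

So moving a target from 1 m to 3 m cuts the returned signal by roughly 3^4 ≈ 81×, which is why dedicated depth illuminators are driven so hard.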
 
Just because you see red coming from the LED doesn't mean it's not near IR, because the LED could be a 770 nm NIR LED giving off a weak red glow. And just because Kinect has a big LED array that needs cooling doesn't mean every IR sensor needs that just to sense depth; you can see that from the Mantis Vision IR projector being used in Project Tango and the MV4D tablet. The sensor in the MV4D tablet is pretty much the same as what is suspected to be in the PS4 camera; the specs of the OV9713 and OV9714 are pretty much the same.

[Image: Mantis Vision IR projector]



I already know that the PS4 camera is using stereo vision. But sometimes things are more than they appear to be.

By the way, here is what I believe to be the patent for the PlayStation 4 camera's depth processing:

Image processing apparatus and image processing method

[Images: patent figures]
 