PS4 SDK 2.0 brings big improvements to Camera functions & more.

Doesn't Kinect also need GPU power?

Yes, but not 5-10% of the GPU resources just for computing the depth image. And even that 10% GPU reservation on XB1 wasn't entirely for Kinect (motion tracking):

Digital Foundry: Microsoft returning the Kinect GPU reservation in the June XDK made a lot of headlines - I understand you moved from a 900p to a 912p rendering resolution, which sounds fairly modest. Just how important was that update? Has its significance been over-played?

Oles Shishkovstov: Well, the issue is slightly more complicated - it is not like 'here, take that ten per cent of performance we've stolen before', actually it is variable, like sometimes you can use 1.5 per cent more, and sometimes seven per cent and so on. We could possibly have aimed for a higher res, but we went for a 100 per cent stable, vsync-locked frame-rate this time. That is not to say we could not have done more with more time, and per my earlier answer, the XDK and system software continues to improve every month.

http://www.eurogamer.net/articles/d...its-really-like-to-make-a-multi-platform-game
 
Why do you keep saying that Kinect 2 isn't capable of tracking at 60fps, when we know it can?

http://i.imgur.com/ZDQLITS.jpg

And how can you compare the quality of the detailed 512x424p depth image of Kinect 2, which is the direct result of a top-notch hardware solution, with a 320x200p depth image produced by triangulation?

And why did Sony use one (or two?) 640x400p image(s) as input to produce a 160x100p depth image as output? Is that due to restrictions in their calculation method, or did they choose it to reduce the cost of the calculations? Why do you think they can go for a higher quality depth image using more Gflops? What about latency?


How would I know that Kinect 2.0 for Xbox One is capable of 60FPS when it's listed at 30FPS? And I'm not comparing them; I just stated that using the GPU to get 60FPS at 160 x 100 could result in a better end product, meaning there could be situations where 60FPS is more important to the gameplay than the 512 x 424 dots of depth. And the answer to your last question is: because they said it was scalable.
 
How would I know that Kinect 2.0 for Xbox One is capable of 60FPS when it's listed at 30FPS?

We had this conversation before:

https://forum.beyond3d.com/posts/1801853/
https://forum.beyond3d.com/posts/1801856/

I'm not comparing them; I just stated that using the GPU to get 60FPS at 160 x 100 could result in a better end product, meaning there could be situations where 60FPS is more important to the gameplay than the 512 x 424 dots of depth.

Kinect 2 hardware is capable of 60fps, and they can enable it in software if they want to. Also, the last demo I saw from the PS4 camera wasn't smoother than Kinect 2 at all. Just watch this video (from 1:33-1:44) again:


It doesn't seem smoother than 30fps Kinect, probably due to its higher latency.

And the answer to your last question is: because they said it was scalable.

They said it's scalable in the sense of decreasing the resolution, not increasing it.

The output depth data generated has a resolution of 160 x 100 dots in 1.5 milliseconds. The process is scalable since it doesn't use dedicated hardware, so it can be made cheaper by lowering resolution.

http://www.dualshockers.com/2014/11...ew-tech-and-features-game-developers-can-use/
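As an aside, here is roughly what depth-by-triangulation at those resolutions might look like: a minimal sketch using OpenCV's StereoBM as a stand-in for Sony's proprietary matcher (the file names, focal length and baseline are invented for illustration, and none of this is LibDepth code):

```python
# Minimal depth-from-stereo sketch in the spirit of what the article
# describes. NOT Sony's code; StereoBM stands in for their matcher.
import cv2
import numpy as np

# Hypothetical rectified 640x400 grayscale pair from the two camera eyes.
left = cv2.imread("left_640x400.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right_640x400.png", cv2.IMREAD_GRAYSCALE)

# Block matching yields a disparity map; coarser blocks are cheaper,
# which is the "scalable" trade-off the article describes.
matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # fixed-point -> pixels

# Triangulation: depth = focal_length * baseline / disparity.
# Focal length (pixels) and baseline (metres) are made-up values here.
f_px, baseline_m = 540.0, 0.08
with np.errstate(divide="ignore"):
    depth_m = np.where(disparity > 0, f_px * baseline_m / disparity, 0.0)

# Downsample 4x per axis to get the 160x100 output grid.
depth_160x100 = cv2.resize(depth_m, (160, 100), interpolation=cv2.INTER_AREA)
```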
 
We had this conversation before:

https://forum.beyond3d.com/posts/1801853/
https://forum.beyond3d.com/posts/1801856/



Kinect 2 hardware is capable of 60fps, and they can enable it in software if they want to.
Reading that still doesn't paint a picture of Kinect 2 being able to track at 60 FPS; it says a max of 60FPS but typical 30FPS, as if it might spike up to 60FPS but average 30FPS for the most part.


Also, the last demo I saw from the PS4 camera wasn't smoother than Kinect 2 at all. Just watch this video (from 1:33-1:44) again:


It doesn't seem smoother than 30fps Kinect, probably due to its higher latency.

What you are looking at in that video is SoftKinetic's HTLib; it has nothing to do with Sony's LibDepth / LibHand / LibFace.

Also, that actually does show better real-time avatar free movement than what has been shown with Kinect 2.0. See how they are running back and forth, actually showing depth tracking?


They said it's scalable in the sense of decreasing the resolution, not increasing it. http://www.dualshockers.com/2014/11...ew-tech-and-features-game-developers-can-use/

Those were DualShockers' words; this is a direct translation:

"
The session, from the image data input of 640 × 400 dots per camera, and performs demonstration of outputting the depth data of 160 × 100 dots, it was shown that you are dropped into to 1.5ms.

 PS4 advantage of software processing by the APU, it is a flexible and scalable. Because you do not use a special depth sensing hardware, go up cheaper in cost. However, there is also a weakness.

 LibDepth of PS4 is, for calculating the depth from the images from the camera, unlike the depth measurement of the Microsoft Kinect, not irradiated from the camera side of the light source. Current Kinect is to use the reflection of infrared rays irradiated from the camera to the measurement of the depth, is not affected by the illumination location to be measured. In contrast, the process of libDepth of PS4 is to use only the image information from the camera, there is a weak point that the accuracy of the dark in the depth measurement falls.

 In early SDK, and that in the lighting of about 10lux, was not obtained a practical depth data. 10lux is very dark, in the living room of Europe and the United States is often for problems if you are playing the game in the darkness of this degree. However, in the current SDK, that algorithm is improved sufficient depth has to be calculated."
 
It's not really an advantage compared to Kinect. The advantage, in real terms, is Sony didn't need to spend on expensive hardware. In terms of functionality, PS4's solution is pretty much universally inferior. It has massively inferior resolution, massively inferior low-light performance (no matter how improved it is), massively inferior accuracy for skeletal tracking, and it is massively inferior for background removal and player composition with the game. Perhaps there's a framerate advantage allowing 60 fps versus 30, but the quality of the 3D data is likely going to result in lots of false interpretations for image recognition - we saw how much Kinect 1 struggled to match people to low-res 3D data. The amount of processing and smoothing that might be needed will add lag to the input. It's quite possible that Kinect at 30 fps will be lower latency and more responsive than LibDepth on PS4.

mosen said:
And why did Sony use one (or two?) 640x400p image(s) as input to produce a 160x100p depth image as output?
One advantage of downsampling is noise reduction. At 16 samples per pixel, this will be significant. And the smaller buffers mean faster processing.
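A toy numpy illustration of that point (my own example, nothing from either SDK): averaging non-overlapping 4x4 blocks of a 640x400 buffer down to 160x100 cuts uncorrelated noise by sqrt(16) = 4x.

```python
# Block-average a flat scene with simulated sensor noise: 640x400 -> 160x100.
import numpy as np

rng = np.random.default_rng(0)
noisy = 1.0 + rng.normal(0.0, 0.1, size=(400, 640))  # flat scene + noise

# Average each non-overlapping 4x4 block (16 samples per output pixel).
binned = noisy.reshape(100, 4, 160, 4).mean(axis=(1, 3))

print(noisy.std())   # ~0.100
print(binned.std())  # ~0.025 - a quarter of the noise
```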
 
Reading that still doesn't paint a picture of Kinect 2 being able to track at 60 FPS; it says a max of 60FPS but typical 30FPS, as if it might spike up to 60FPS but average 30FPS for the most part.

That's not how it works or what it means. Why should I or you suppose that Kinect might spike up to 60fps for short periods of time, or drop down to produce 30fps tracking on average? It just doesn't make sense.



What you are looking at in that video is SoftKinetic's HTLib; it has nothing to do with Sony's LibDepth / LibHand / LibFace.

Also, that actually does show better real-time avatar free movement than what has been shown with Kinect 2.0. See how they are running back and forth, actually showing depth tracking?

Yes, but they used the same hardware. Are you serious in saying that it shows better real-time avatar movement than Kinect 2.0? Because it seems even worse than Kinect 1.0 to me.

Those were DualShockers' words; this is a direct translation:

"
The session, from the image data input of 640 × 400 dots per camera, and performs demonstration of outputting the depth data of 160 × 100 dots, it was shown that you are dropped into to 1.5ms.

 PS4 advantage of software processing by the APU, it is a flexible and scalable. Because you do not use a special depth sensing hardware, go up cheaper in cost. However, there is also a weakness.

 LibDepth of PS4 is, for calculating the depth from the images from the camera, unlike the depth measurement of the Microsoft Kinect, not irradiated from the camera side of the light source. Current Kinect is to use the reflection of infrared rays irradiated from the camera to the measurement of the depth, is not affected by the illumination location to be measured. In contrast, the process of libDepth of PS4 is to use only the image information from the camera, there is a weak point that the accuracy of the dark in the depth measurement falls.

 In early SDK, and that in the lighting of about 10lux, was not obtained a practical depth data. 10lux is very dark, in the living room of Europe and the United States is often for problems if you are playing the game in the darkness of this degree. However, in the current SDK, that algorithm is improved sufficient depth has to be calculated."

I'm confused by one phrase in the raw machine translation, "go up cheaper in cost". Does it mean it gets cheaper, or less cheap (more expensive)?!
 
Lower resolution, massively lower quality, less consistent in dark rooms, more expensive, and higher latency.

Am struggling to see how this is supposed to be better than Kinect.
 
To be fair, onQ has said there are cases where lower latency is preferable -
some games might only need to keep track of your hand so being able to track your hand at 60FPS could be better than tracking it at 30FPS.
...
Can you really say that 512 x 424 at 30FPS will always be better than 320 x 200 at 60FPS for depth sensing?
...
...there could be situations where 60FPS is more important to the gameplay than the 512 x 424 dots of depth.
So yes, if devs can't get 60 fps on Kinect and can get 60 fps on PS4 and that makes a difference in whatever game they are creating (juggling simulator pro?) and PS4's resolution isn't a disadvantage, then PS4's implementation of computer vision could be better.
 
It's not really an advantage compared to Kinect. The advantage, in real terms, is Sony didn't need to spend on expensive hardware. In terms of functionality, PS4's solution is pretty much universally inferior. It has massively inferior resolution, massively inferior low-light performance (no matter how improved it is), massively inferior accuracy for skeletal tracking, and it is massively inferior for background removal and player composition with the game. Perhaps there's a framerate advantage allowing 60 fps versus 30, but the quality of the 3D data is likely going to result in lots of false interpretations for image recognition - we saw how much Kinect 1 struggled to match people to low-res 3D data. The amount of processing and smoothing that might be needed will add lag to the input. It's quite possible that Kinect at 30 fps will be lower latency and more responsive than LibDepth on PS4.

I guess we will have to wait to see real world results.



One advantage of downsampling is noise reduction. At 16 samples per pixel, this will be significant. And the smaller buffers mean faster processing.

Actually, I think it goes even deeper than that: with pixel binning, the 640 x 400 pixels are probably made up of 2x2 super-pixels from the 1280 x 800 pixels of the sensor, so with the 16 samples per pixel it's like each of the 160 x 100 pixels is getting data from 64 sensor pixels.
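If that guess about the sensor readout is right (the 1280 x 800 binning is pure speculation), the arithmetic works out like this:

```python
# Each 160x100 depth pixel would draw on an 8x8 patch of the 1280x800
# sensor: 2x2 binning on the sensor, then 4x4 averaging from 640x400.
binning = 2 * 2        # photosites per 640x400 pixel (speculative)
downsample = 4 * 4     # 640x400 pixels per 160x100 depth pixel
samples = binning * downsample
print(samples)         # 64
print(samples ** 0.5)  # 8.0 - ideal-case noise reduction factor
```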

That's not how it works or what it means. Why should I or you suppose that Kinect might spike up to 60fps for short periods of time, or drop down to produce 30fps tracking on average? It just doesn't make sense.

Maybe because the hardware processor isn't strong enough to process depth at 60FPS, but it's still able to run the sensor at 60 FPS for active IR mode.

Yes, but they used the same hardware. Are you serious in saying that it shows better real-time avatar movement than Kinect 2.0? Because it seems even worse than Kinect 1.0 to me.
It's a software solution, so different software can achieve different results.
Would you like to show me Kinect 2.0 doing better real-time avatar free movement than that? Because I really haven't seen it. Show me an example of Kinect 2.0 moving an avatar freely while showing full depth tracking. It seems like the PS4 camera is doing 3D avatar controls while Kinect 2.0 is doing more of a 2.5D avatar control. That could just be the difference between SoftKinetic's skeleton tracking solution and Kinect's standard skeleton tracking solution.


I'm confused by one phrase in the raw machine translation, "go up cheaper in cost". Does it mean it gets cheaper, or less cheap (more expensive)?!

It's machine-translated, but they seem to be talking about it being cheaper for Sony because they are using the GPU and not an extra hardware chip. And because they are not using dedicated hardware, they get the advantage of the tracking being more flexible and scalable in software, but it also has the disadvantage of only using camera data and not having its own light source like Kinect.
 
Lower resolution, massively lower quality, less consistent in dark rooms, more expensive, and higher latency.

Am struggling to see how this is supposed to be better than Kinect.

It doesn't have to be better than Kinect overall for there to be situations where it works better than Kinect.
 
To be fair, onQ has said there are cases where lower latency is preferable -
So yes, if devs can't get 60 fps on Kinect and can get 60 fps on PS4 and that makes a difference in whatever game they are creating (juggling simulator pro?) and PS4's resolution isn't a disadvantage, then PS4's implementation of computer vision could be better.

I think it's a sign of MS's TVTVTVTSPORTS focus at the beginning of the generation that they didn't expose the Kinect sensor's 60 fps abilities...

I'm not sure 60 fps automatically leads to lower latency in this case. It could, but that would seem to depend on how quickly Kinect outputs its depth info. If you just want 512 x 424 depth, then, assuming the ToF sensor in the Kinect is as fast as a regular sensor, you simply transfer that over USB and there it is. On the Eye you transfer two 640 x 400 images over USB, buffer them, wait for the appropriate point in your update to do the image processing, do it (another 1.5 ms), and then you can use it.

Depending on your scheduling, I could easily see that adding a frame of latency.
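To put rough numbers on that (a toy model; every stage cost below is a guess, not a measurement of either device):

```python
# Toy latency model for the pipelines described above. All stage timings
# are assumptions for illustration: USB, scheduling and compute costs.
def worst_case_latency_ms(fps, extra_stages_ms):
    exposure_ms = 1000.0 / fps  # wait for the next frame to complete
    return exposure_ms + sum(extra_stages_ms)

# Kinect 2: the depth map arrives ready-made; assume ~10 ms USB/driver cost.
print(worst_case_latency_ms(30, [10.0]))             # ~43.3 ms

# PS4 Eye: two images over USB, wait up to one 60 Hz update for the
# engine's image-processing slot, then the 1.5 ms depth compute.
print(worst_case_latency_ms(60, [10.0, 16.7, 1.5]))  # ~44.9 ms despite 60 fps
```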
 
Shame this thread quickly went the way it did. There's actually some interesting info in that link and the original Watch Impress article it was sourced from. They've apparently integrated some of what we saw in the AR Dynamic Lighting they previously demonstrated into one of the new libraries they've added. They also discuss some of the physics simulation libraries that have been added/updated (CPU and GPU).
 
I guess we will have to wait to see real world results.
We'll probably never see them. If a game is locked to 30 fps on XB1, there's no reason for the devs to target 60 fps on PS4. And that's only in the tiny handful of games that are going to implement motion controls at all.

It's a software solution, so different software can achieve different results.
Would you like to show me Kinect 2.0 doing better real-time avatar free movement than that? Because I really haven't seen it. Show me an example of Kinect 2.0 moving an avatar freely while showing full depth tracking. It seems like the PS4 camera is doing 3D avatar controls while Kinect 2.0 is doing more of a 2.5D avatar control. That could just be the difference between SoftKinetic's skeleton tracking solution and Kinect's standard skeleton tracking solution.
This sounds like mindless fanboy drivel. If I didn't know you better, I'd accuse you of being a SoftKinetic shill. 2.5D my ass! MS had decent 3D tracking with frickin' original Kinect!

The PS4 skeleton tracking (not using Sony's SDK 2.0 lib) is clearly no better than Kinect solutions and the depth data is clearly rubbish. Compare that to Kinect 2:

Even if you've never seen Kinect 2 managing 3D movement, if you can imagine Sony adding 120 Hz computer vision to their system based on very little, you should be able to see there are zero limitations on Kinect 2 handling 3D. But if you don't believe me, watch the vid. And importantly, if you understand the tech, Kinect 2 is going to be far better at tracking hands and 3D motions than PS4 is, especially when PS4 faces some lousy contrast situations.

Also, there's nothing stopping devs using Kinect's depth data however they want, so PS4 doesn't have a 'software advantage'. The software side of PS4's system is about extracting depth data from stereo cameras. The end result is a depth map, which Sony use to produce skeleton tracking. MS do the same, only they get their depth map from the camera's hardware. What can be done with that depth data is identical across machines (as long as the data is of sufficient quality; tracking fingers and a thumb on PS4 the way Kinect 2 can is unrealistic at such low res).
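To illustrate that separation, here's a minimal sketch: the skeleton tracker consumes a depth map and doesn't care where it came from. Every class and function name below is invented, and neither body reflects real SDK code.

```python
# Skeleton tracking is decoupled from depth acquisition: both sources
# produce a depth map, and the tracker works on either.
from typing import Protocol
import numpy as np

class DepthSource(Protocol):
    def depth_map(self) -> np.ndarray: ...  # HxW array of metres

class KinectToF:
    """Depth arrives ready-made from the camera hardware."""
    def depth_map(self) -> np.ndarray:
        return np.zeros((424, 512))          # placeholder for sensor readout

class StereoLibDepth:
    """Depth computed in software from a stereo pair (PS4-style)."""
    def depth_map(self) -> np.ndarray:
        return np.zeros((100, 160))          # placeholder for GPU output

def track_skeleton(source: DepthSource):
    depth = source.depth_map()
    # ...fit joints to the depth data; identical logic either way...
    return depth.shape

print(track_skeleton(KinectToF()), track_skeleton(StereoLibDepth()))
```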

I'm not sure 60 fps automatically leads to lower latency in this case...
Yeah, I mentioned that. All videos of skeleton tracking, no matter the system, seem to have the same general lag. However, there are some custom jobs that look really low latency, like this one on old Kinect, although it's using small movements so it's much harder to tell:
 
Maybe because the hardware processor isn't strong enough to process depth at 60FPS, but it's still able to run the sensor at 60 FPS for active IR mode.

Kinect 2 uses many samples per dot/pixel and chooses the best of them:

Remember the part about the high modulation rate of the light source? 10s of MHz vs a pixel count of .217Mp? See a disparity? Remember how the dual ported pixels can take an A and a B image for every frame? The sensor can actually do much more than that, it can take multiple A and B exposures per frame. O’Connor said that the sensor can determine exposure on a per-pixel basis, quite the technical feat. This allows the depth camera to have a dynamic dynamic range in a way that most regular cameras can only dream of. More importantly it solves the dynamic range requirements without throwing expensive hardware at the problem.

http://semiaccurate.com/2013/10/16/xbox-ones-kinect-sensor-overcomes-problems-intelligence/

It's far more capable than 30fps, and they can sacrifice pixel quality for better latency or a higher frame rate, which agrees far better with the "max 60fps (typical 30fps)" comment than your suggestion does.
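For anyone who wants to unpack that quote: a continuous-wave ToF sensor recovers depth from the phase shift between emitted and returned light. Below is the textbook four-phase formulation as a sketch; this is generic ToF math with an assumed modulation frequency, not Microsoft's actual per-pixel exposure logic.

```python
# Textbook continuous-wave ToF: depth from four phase samples.
import numpy as np

C = 299_792_458.0  # speed of light, m/s
F_MOD = 50e6       # assumed modulation frequency (tens of MHz, per the quote)

def tof_depth_m(s0, s90, s180, s270):
    """Depth from four samples of the return at 0/90/180/270 deg offsets."""
    phase = np.arctan2(s90 - s270, s0 - s180) % (2 * np.pi)
    return (C / (2 * F_MOD)) * (phase / (2 * np.pi))

# A target at ~1.5 m produces a round-trip phase of 2*pi * 2d * F_MOD / C.
d = 1.5
phi = 2 * np.pi * 2 * d * F_MOD / C
print(tof_depth_m(np.cos(phi), np.sin(phi), -np.cos(phi), -np.sin(phi)))  # ~1.5
```

Taking extra A/B exposures per frame, as the article describes, effectively gives the sensor more such samples to choose from per pixel, which is where the quality-versus-frame-rate trade-off comes from.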

It's a software solution, so different software can achieve different results.
Would you like to show me Kinect 2.0 doing better real-time avatar free movement than that? Because I really haven't seen it. Show me an example of Kinect 2.0 moving an avatar freely while showing full depth tracking.

So is Kinect 2, whose motion tracking and other features are software-based.
There are already numerous videos of Kinect 2 skeleton tracking. The movement with SoftKinetic's HTLib looks like slow motion to me, while the display latency (which is visible in the Kinect videos) isn't even discernible.


It seems like the PS4 camera is doing 3D avatar controls while Kinect 2.0 is doing more of a 2.5D avatar control. That could just be the difference between SoftKinetic's skeleton tracking solution and Kinect's standard skeleton tracking solution.

I played with Kinect 1 for years and it tracks players in 3D, and Kinect 2 shouldn't be different. How did you reach the point of saying it uses 2.5D avatar control?
 
Shame this thread quickly went the way it did. There's actually some interesting info in that link and the original Watch Impress article it was sourced from. They've apparently integrated some of what we saw in the AR Dynamic Lighting they previously demonstrated into one of the new libraries they've added. They also discuss some of the physics simulation libraries that have been added/updated (CPU and GPU).

I tried to include everything in the OP, but it was way too much and I had to keep removing info.
We'll probably never see them. If a game is locked to 30 fps on XB1, there's no reason for the devs to target 60 fps on PS4. And that's only in the tiny handful of games that are going to implement motion controls at all.

This sounds like mindless fanboy drivel. If I didn't know you better, I'd accuse you of being a SoftKinetic shill. 2.5D my ass! MS had decent 3D tracking with frickin' original Kinect!

The PS4 skeleton tracking (not using Sony's SDK 2.0 lib) is clearly no better than Kinect solutions and the depth data is clearly rubbish. Compare that to Kinect 2:

Even if you've never seen Kinect 2 managing 3D movement, if you can imagine Sony adding 120 Hz computer vision to their system based on very little, you should be able to see there are zero limitations on Kinect 2 handling 3D. But if you don't believe me, watch the vid. And importantly, if you understand the tech, Kinect 2 is going to be far better at tracking hands and 3D motions than PS4 is, especially when PS4 faces some lousy contrast situations.

Also, there's nothing stopping devs using Kinect's depth data however they want, so PS4 doesn't have a 'software advantage'. The software side of PS4's system is about extracting depth data from stereo cameras. The end result is a depth map, which Sony use to produce skeleton tracking. MS do the same, only they get their depth map from the camera's hardware. What can be done with that depth data is identical across machines (as long as the data is of sufficient quality; tracking fingers and a thumb on PS4 the way Kinect 2 can is unrealistic at such low res).

Yeah, I mentioned that. All videos of skeleton tracking, no matter the system, seem to have the same general lag. However, there are some custom jobs that look really low latency, like this one on old Kinect, although it's using small movements so it's much harder to tell:


I've seen that video over and over again, and it's still showing what I'm talking about when I say that the avatar controls seem more like 2.5D than what was shown in the SoftKinetic demo with them running back and forth. I know that Kinect is capable of 3D tracking; it just seems that your avatar is less mobile in the z plane. Maybe the SoftKinetic demo is just more exaggerated and the Kinect software is showing your free movements more accurately, but the SoftKinetic demo seems to show better movement around the room, like you can actually move all around in a game and not just up, down and side to side with a little depth.
 
Of course the SoftKinetic demo is just more exaggerated. How else do you explain it? Mosen's video shows punching into the screen nicely, and players can move and kick into the screen, or not. What even is '2.5D' as regards motion tracking?!
 
There is no way you can look at that SoftKinetic video you posted and tell me that you don't see the difference in how freely the skeletons move around in 3D compared to what we have seen from Kinect 2.0 so far.


Of course the SoftKinetic demo is just more exaggerated. How else do you explain it? Mosen's video shows punching into the screen nicely, and players can move and kick into the screen, or not. What even is '2.5D' as regards motion tracking?!

Little Big Planet is what I would call 2.5D. It's 3D, but it doesn't let you move much in the Z plane.
 
I see no difference. Just because the people are moving more on the forwards/backwards axis (if they are; it looks to me just like a wider-angle lens in effect) doesn't mean the tech is any more capable. You only need one example of Kinect tracking a person moving forwards to prove it can, and that exists in the posted videos.

Show me a SoftKinetic clip for PS4 of someone pale, wearing a light top against a light background, punching forwards with a decent measure of z-velocity...

And also, it doesn't matter. The skeleton tracking algorithm is separate from the depth acquisition method. SoftKinetic using Kinect 2 would have exactly the same abilities.
 