Controllers and every other aspect of Orbis and Durango

I mean, we have a faster USB connection, better sensors, faster processors, and a three-year advantage in assessing the development problems... and the best we get is a 1/3 decrease in latency, which was universally the biggest problem with the original Kinect??

It certainly looks like a Wii U design philosophy was going on with Kinect 2... OK, maybe not that bad, but we're looking at something like a $15 BOM for this thing...

We don't know yet if the sensor is all there is to it. Perhaps there is also better integration with the controller, about which we don't know anything.

Also, it seems there's a new sensor, which may help a lot. And things are far better integrated into the system, bus-wise etc.

I expect quite an improvement regardless.
 
You are looking at the wrong latency. The latency of the camera seeing the scene, plus processing and providing the game with the skeletal data, is one thing. Certain games use that data directly, while others smooth (filter) it or wait for gestures to be completed, which adds delay on top of that.

The first latency is relatively good (imo) even on the current Kinect. Here's a video I made back when the sensor launched (that NUI viewer was apparently used in the Kinect beta, but it was still available for a short time after the first Kinect dashboard was out, so I could download it): http://www.youtube.com/watch?v=yKdX0AeQDpU And from what I understand, that's the figure that has now been reduced to two frames at 30fps, which means Kinect 2.0 adds no extra delay (at 30fps) compared to a joystick, because no matter how fast the joystick is, you always have at least a two-frame delay (again, at 30fps).
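To put some rough numbers on that, here's a back-of-envelope sketch in Python. The two-frame pipeline figure is the rumoured one from above and the joystick floor is a simplifying assumption, so treat the output as illustrative only:

```python
# Back-of-envelope latency figures at 30 fps. Numbers are illustrative; the
# two-frame camera pipeline is the rumoured figure, not a measurement.
FPS = 30
FRAME_MS = 1000.0 / FPS                       # ~33.3 ms per frame

# Even a "zero latency" joystick is sampled once per frame and its effect
# shows up a frame or so later, so ~2 frames is the practical floor.
joystick_floor_ms = 2 * FRAME_MS              # ~66.7 ms

# If camera capture + skeletal processing also fits inside two frames, the
# skeleton arrives within the same window and adds nothing on top of that.
kinect2_pipeline_ms = 2 * FRAME_MS
extra_over_joystick_ms = max(0.0, kinect2_pipeline_ms - joystick_floor_ms)

print(f"frame time:             {FRAME_MS:.1f} ms")
print(f"joystick floor:         {joystick_floor_ms:.1f} ms")
print(f"extra from the camera:  {extra_over_joystick_ms:.1f} ms")
```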

The other latency, the one that on the current Kinect 1.0 can be a game-breaker, is that the data is super noisy (the joints move around a lot over time, even when you are standing still). So on top of the raw input you have to apply additional filtering/smoothing to get data good enough to present to the player, or, if your game is gesture based, you have no choice but to wait for the player to complete the whole gesture, identify it, and then apply the corresponding action. The leak also said the new sensor is more precise, which hopefully means the joints no longer jump around as much from frame to frame, so no filtering (or at least not as much) is necessary and the perceived latency is lower. The gesture delay, I guess, can be solved if the game is designed to track 1:1 at all times rather than watching for gestures (i.e. instead of looking for a punch gesture and making the character punch afterwards, map the whole punch movement to the character on screen and calculate whether it hits anything and at what force), but that brings its own design troubles.
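To make that jitter-versus-latency trade-off concrete, here's a minimal smoothing sketch. It's a plain exponential filter with made-up values, not the actual Kinect SDK filter, but it shows why noisy joints force you to accept extra lag:

```python
# A plain exponential filter over joint positions: the kind of smoothing a
# game might apply to noisy skeletal data. Not the real Kinect SDK filter.

class JointSmoother:
    def __init__(self, alpha: float = 0.3):
        # alpha near 1.0: trust the new sample -> low lag, but jittery
        # alpha near 0.0: heavy smoothing      -> stable, but laggy
        self.alpha = alpha
        self.state = None          # last smoothed (x, y, z) position

    def update(self, raw_xyz):
        if self.state is None:
            self.state = tuple(raw_xyz)
        else:
            self.state = tuple(
                self.alpha * r + (1.0 - self.alpha) * s
                for r, s in zip(raw_xyz, self.state)
            )
        return self.state

# A more precise sensor lets you raise alpha (or drop the filter entirely),
# which is exactly how the perceived latency would come down.
smoother = JointSmoother(alpha=0.3)
print(smoother.update((0.12, 1.05, 2.30)))     # metres, illustrative values
```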
 
PS4's stereo cameras appear to be about 3" apart. There was no showcase of body tracking though, and the press release doesn't mention it - it only talks about background removal.
 
There will likely be a lot more about that later. I'm sure they have a few things to figure out in that department still, also regarding SDK stuff. And they didn't consider this the moment for it - they estimated their audience as hardcore, not too interested in motion control stuff.
 
That light bar on the new DualShock can be motion tracked in the same fashion as the Move controller, I suppose.

edit: nvm, I just read the official description on the PlayStation Blog:

"DUALSHOCK 4 was developed alongside a second peripheral, a dual camera which can sense the depth of the environment in front of it and also track the 3D location of the controller via its light bar. "
 
A few more details from Sony:
http://www.engadget.com/2013/02/21/sony=playstation-4-eye-works/
Sony's Shuhei Yoshida has dished the dirt on how the company's latest camera accessory will work. The PlayStation 4 Eye comes with a pair of 1,280 x 800 cameras, four microphones and an 85-degree field of view. The two lenses are designed to be used in a variety of ways, including triangulating the 3D space, gesture recognition, Kinect-style body tracking, and in conjunction with accessories like the Wonderbook or DualShock 4 controller. "It's not just a way to identify your player number, it also works like a PS Move," Yoshida said of the new DualShock's light bar. "It's an extension of the PS Move technology that we incorporated into the DualShock so that the camera can see where it is."

The Sony Studios chief used a PS Eye-style AR game as an example, saying that with the original camera, one lens had to do everything. With the new unit, one camera will concentrate on capturing the action and ensuring good picture quality, while the other is dedicated to motion tracking. Another reason that the Move functionality was incorporated into the DualShock is to enable the console to know where you're sitting in relation to the TV (and your on-screen character). The company is also aiming to enable users to take 3D pictures and video and store them on the console. As for the microphones in the new Eye and how that'll impact interaction with the PlayStation 4 on a system level, Yoshida wasn't giving up any details. Though he said it'll be incorporated into games (a la Kinect voice commands on Xbox 360 games), he wouldn't say whether you could use your voice to control the PlayStation 4 on a system level.

We discussed the issue of the PS Eye being used to track the Move controller being antithetical to it being used to give a good picture of the player (tracking wants the light to stand out as much as possible), and I guess using one camera to track the Move and the other to film the player is one way of solving that ...

My guess would be that if you use the two cameras in conjunction for stereo/depth-of-field sensing of the full surroundings, you could still use Move but track it in a different (spatial/positional) way, which would probably have higher latency, however. So my guess is that they're doing these two things side by side to have a choice there depending on the application.
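To get a feel for what a ~3" baseline buys you, here's a toy depth-from-disparity calculation with a simple pinhole model. The focal length is derived from the 1280-pixel width and 85-degree field of view quoted above; everything else is an illustrative assumption, not a Sony spec:

```python
import math

BASELINE_M = 0.076       # ~3 inches between the two lenses (assumption)
H_RES_PX = 1280          # horizontal resolution per camera
H_FOV_DEG = 85.0         # quoted field of view

# Pinhole model: focal length in pixels from resolution and field of view.
FOCAL_PX = (H_RES_PX / 2.0) / math.tan(math.radians(H_FOV_DEG) / 2.0)

def disparity_at(depth_m: float) -> float:
    """Pixels of disparity produced by a feature at the given distance."""
    return FOCAL_PX * BASELINE_M / depth_m

def depth_error_per_px(depth_m: float) -> float:
    """Roughly how much the depth estimate shifts per pixel of disparity error."""
    return depth_m ** 2 / (FOCAL_PX * BASELINE_M)

for d in (1.0, 2.0, 3.0):
    print(f"{d:.0f} m: {disparity_at(d):.1f} px disparity, "
          f"~{100 * depth_error_per_px(d):.1f} cm depth error per pixel")
```

At living-room distances the per-pixel depth error runs from a few centimetres up to around fifteen-plus centimetres, which is one reason a big, bright, easy-to-match feature like the light bar stays attractive for precise tracking.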
 
They also have the flexibility of using one camera for motion tracking at 120 or 240Hz, low resolution, while at the same time having the other camera at 1280x800 60Hz for AR.
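Purely as a hypothetical sketch of that flexibility (this is not a real PS4 API; the 1280x800/60Hz and 240Hz figures are the ones discussed above, while the low tracking resolution is a guess):

```python
from dataclasses import dataclass

# Hypothetical description of running the two cameras in different modes.
@dataclass
class CameraMode:
    width: int
    height: int
    fps: int
    purpose: str

tracking_stream = CameraMode(320, 192, 240, "low-res, high-rate motion tracking")
ar_stream = CameraMode(1280, 800, 60, "full-resolution AR video")

# Once the cameras run different modes you no longer get a matched stereo
# pair on every frame, which is the trade-off raised a couple of posts down.
for s in (tracking_stream, ar_stream):
    print(f"{s.purpose}: {s.width}x{s.height} @ {s.fps} Hz")
```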
 
I think hands-free voice recognition will be harder to get right on a worldwide basis.

It may be easier if the new controller's speaker can be turned into a mic.

There is a rumor on bundled mono headset. I don't know if it's true.
 
Splitting the cameras like that (one tracking at a high frame rate, the other at 1280x800 60Hz for image quality) sounds like what they're talking about. Motion tracking needs a high frame rate. I hadn't thought about that; it's certainly different. It does kinda imply a distancing from 3D, though: you can't buy a camera-based game and be sure it'll have a 3D composite, since it may be using one camera for image quality and the other for fast Move tracking.
 
Using 3D cam input for games sounds interesting. Need to see what can be done.

I saw another Sony patent that displays 2D captures in a 3D world. Perhaps Sony folks are working along this line at the moment.
 
That split-camera flexibility would certainly provide a lot of options:

- 'background separation' with 2 cameras, then use the PS Eye libraries to do "2.5D" skeletal analysis... should be adequate for PS4-fit/DanceStar etc. (a rough sketch of the idea follows after this list).

- PS Move. As per the current system, but for "mirror" apps you can track the Move at a high frame rate whilst still allowing a high-resolution mirror image.

- track the gamepad. The controller supposedly has roughly the same sensors as PS move, although there doesn't seem to be any example usage.

- turn the camera off entirely.

Whilst 8GB was a surprise, I'd have preferred to see a demo of background separation :(.
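On the first bullet, here's the rough sketch referred to above: depth-threshold background separation, assuming a per-pixel depth map can be pulled out of the stereo pair. The function, array shapes and the 2.5 m cut-off are made-up illustrations, not any real PS4 library:

```python
import numpy as np

def separate_player(color_frame: np.ndarray,
                    depth_map_m: np.ndarray,
                    max_player_depth_m: float = 2.5) -> np.ndarray:
    """Zero out every pixel further away than the assumed player distance."""
    mask = depth_map_m < max_player_depth_m        # True where something is close
    return color_frame * mask[..., np.newaxis]     # broadcast the mask over RGB

# Illustrative call with fake data: an 800x1280 RGB frame plus a depth map
# where the room is 4 m away and a "player"-sized region sits at 1.5 m.
color = np.random.randint(0, 256, (800, 1280, 3), dtype=np.uint8)
depth = np.full((800, 1280), 4.0)
depth[300:600, 500:800] = 1.5
foreground_only = separate_player(color, depth)
```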
 
A 3D cam of this quality should at least be good enough to scan my face with enough detail to create an avatar that has a face that looks like mine, right?
 
On the background-separation option, I'd go one step further and try to model the background in 3D. :devilish:
 
Do you think that they will support the old controllers? There's no sense in buying totally new controllers when all the buttons are basically the same. It's a waste, really.
 
Regarding scanning your face for an avatar: I expect so. Set your face close to the cameras and they'll have plenty of detail available.
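As a rough sketch of why that should work: back-projecting a depth map of the face into a 3D point cloud needs nothing more than pinhole-camera maths. The focal length, resolution and depth values here are illustrative assumptions, not PS4 Eye specs:

```python
import numpy as np

def depth_to_points(depth_m: np.ndarray, focal_px: float,
                    cx: float, cy: float) -> np.ndarray:
    """Back-project an (H, W) depth map into an (H*W, 3) array of XYZ points."""
    h, w = depth_m.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m
    x = (us - cx) * z / focal_px
    y = (vs - cy) * z / focal_px
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# The closer the face, the more pixels (and hence 3D points) it covers, so
# detail improves a lot at short range. Fake data: a face ~0.5 m away.
face_depth = np.full((200, 200), 0.5)
points = depth_to_points(face_depth, focal_px=700.0, cx=100.0, cy=100.0)
print(points.shape)        # (40000, 3) candidate points for fitting an avatar mesh
```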
 