Xbox One (Durango) Technical hardware investigation

Status
Not open for further replies.
The absolute best it could possibly be with a 60 Hz camera is still worse than 33 ms.

It has to capture the frame (1/60 s), transmit it over whatever link is there, process the data, and hand the data to the game; the game then has to render the frame and swap the buffer (1/60 s), and that front buffer has to be transferred to your TV and displayed (1/60 s best case) before you see the result.

The first 1/60 s is really only half that on average.
If you have an analog connection to the TV and it isn't buffering the data (an old CRT), you technically get to see the frame as it's drawn, so the last 1/60 s is also only half that on average.

The processing probably isn't a significant part of the time. The best places to save time are higher camera framerates and a faster link to the console, but you pretty quickly hit the limitations there.
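The stage-by-stage budget above can be summed as a quick back-of-the-envelope calculation. This is a sketch: the capture and scan-out figures are the averages described above, and the link/processing time is a placeholder assumption, not a measured value.

```python
# Back-of-the-envelope motion-to-photon latency for a 60 Hz camera,
# following the stages described above. All figures in milliseconds.

FRAME_60HZ = 1000.0 / 60.0  # ~16.7 ms per frame at 60 Hz

stages = {
    # Capture: on average the motion happens mid-frame, so ~half a frame.
    "capture (avg)": FRAME_60HZ / 2,
    # Link + processing: placeholder assumption, not a measured value.
    "link + processing (assumed)": 5.0,
    # The game renders and swaps the buffer: one full frame.
    "render + swap": FRAME_60HZ,
    # Scan-out to the display: one frame in the best case.
    "scan-out (best case)": FRAME_60HZ,
}

total = sum(stages.values())
for name, ms in stages.items():
    print(f"{name:28s} {ms:5.1f} ms")
print(f"{'total':28s} {total:5.1f} ms")  # comfortably above 33 ms
```

Even with generous assumptions, the total lands well above a single 33 ms camera frame, which is the point of the post above.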

Thanks... so you're saying that with the limitations of a 60 Hz camera, that's not far off what can be achieved, right? So what stops them using a 120 Hz camera, apart from cost?
 

Processing? You'd have to process four times as many frames as they do at 30 Hz. Kinect seems to be processing-intensive.
 

It seems a netbook processor might not have been the wisest move, considering the processing necessary to realize the Kinect interface's potential. Not to mention the processing demands of next-gen...

But at least they have processing headroom on their monster GPU, with CUs to spare that they can offload to...

...
 
From the leak:

I think the question is what is being processed in that measurement.

Is it full skeletal + RGB, or just the depth cam?

If it's just the depth cam, Ubisoft was already able to get fairly lag-free performance from Kinect 1.0 by using that shortcut...
 
It seems a netbook processor might not have been the wisest move, considering the processing necessary to realize the Kinect interface's potential. Not to mention the processing demands of next-gen...

What about the helper processors? There are some of them.

But at least they have processing headroom on their monster GPU, with CUs to spare that they can offload to...

...

Sorry, but I don't think your comment is constructive. Sarcasm sometimes isn't good for a discussion.
 
Thanks... so you're saying that with the limitations of a 60 Hz camera, that's not far off what can be achieved, right? So what stops them using a 120 Hz camera, apart from cost?

Well, I think by "cost" it could mean that the entire Kinect processing pipeline would need to scale up (double?) in order to handle a 120 Hz camera, wouldn't it?

EDIT: Scott beat me to it...
 
It seems a netbook processor might not have been the wisest move, considering the processing necessary to realize the Kinect interface's potential. Not to mention the processing demands of next-gen...

But at least they have processing headroom on their monster GPU, with CUs to spare that they can offload to...

...

There are some leaps in logic here that I'm not understanding. Why do you think Durango does not have enough processing power to handle the Kinect unit they are going to bundle with it, after all the prolonged hypothesizing about the reserved CPU cores and 3 GB of RAM?
 
Well, I think by "cost" it could mean that the entire Kinect processing pipeline would need to scale up (double?) in order to handle a 120 Hz camera, wouldn't it?

EDIT: Scott beat me to it...

Cost; there would also be issues with low-light sensitivity, and you'd need to double the speed of the actual link to the camera to offset the increase in data, which may well be impossible given the interface involved (I would guess USB 3).
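To put rough numbers on that, here is a quick uncompressed-bandwidth sketch. The stream formats and resolutions below are assumptions chosen for illustration, not published Kinect 2.0 specs.

```python
# Rough uncompressed bandwidth figures, to show what doubling the camera
# rate does to the link. Stream formats and resolutions are assumptions.

def mbits_per_s(width, height, bytes_per_px, fps):
    return width * height * bytes_per_px * fps * 8 / 1e6

rgb_60    = mbits_per_s(1920, 1080, 2, 60)   # 1080p RGB as YUV 4:2:2 (assumed)
depth_60  = mbits_per_s(512, 424, 2, 60)     # 16-bit depth (assumed resolution)
depth_120 = mbits_per_s(512, 424, 2, 120)    # same stream at double the rate

print(f"RGB 1080p @ 60 Hz: {rgb_60:7.0f} Mbit/s")
print(f"depth @  60 Hz:    {depth_60:7.0f} Mbit/s")
print(f"depth @ 120 Hz:    {depth_120:7.0f} Mbit/s")
# USB 3.0 signals at 5000 Mbit/s raw; usable throughput is considerably
# lower, so doubling every stream starts to crowd the link quickly.
```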
 
The framerate is 30 fps at 1080p resolution (RGB camera), but we don't know the framerate at lower resolutions (for RGB), or the framerate of the depth/IR stream.

EDIT: What is the difference between the depth stream and the infrared stream?
 
Oh cool! A device sold in any box and used by default that will barely work in my room setup...

With the wider FOV (assuming both vertical and horizontal), you presumably wouldn't need to stand as far from Kinect 2.0 as you did from Kinect 1.0. So it is entirely possible that it might be usable in smaller spaces than is currently possible with Kinect 1.0.

Regards,
SB
 
Stupid question, but does anyone know, or rather, is anyone willing to say, whether MS is requiring Win8 for the Durango development environment?
Or is Win7 still supported?
 
There are some leaps in logic here that I'm not understanding. Why do you think Durango does not have enough processing power to handle the Kinect unit they are going to bundle with it, after all the prolonged hypothesizing about the reserved CPU cores and 3 GB of RAM?

Oh, I'm sure it can process Kinect 2.0.

Just like the Xbox 360 can process Kinect 1.0.

That doesn't mean it is doing it optimally (lag-free finger tracking) or with enough processing left over for next-gen games that are on par with the competition...

The point is that with more processing power, it could have done better. Sure, there are limits in TDP and die size, but I suspect they are somewhere north of a netbook processor and south of 250 mm² @ 28 nm.

That's a lot of wiggle room to find processing power that can accommodate the "killer app" of their new system.
 
But Kinect is all about software; the current Kinect is better today than it was in 2010 thanks to software (algorithms?) evolution. It is not only hardware.
 
I think the question is what is being processed in that measurement.

Is it full skeletal + RGB, or just the depth cam?

If it's just the depth cam, Ubisoft was already able to get fairly lag-free performance from Kinect 1.0 by using that shortcut...

Probably that's for the whole skeletal tracking. Even on Kinect 1 that's not very big; most if not all of the games that have too much delay suffer from it due to filtering of the data. Kinect is not very precise, because the position of the same joint varies a lot over time, so you have to filter that data (which adds lag) if you want a character on screen mimicking the player's moves, instead of using the data straight from the tracking. The leak also says the precision of the sensor is much improved (a ToF camera, perhaps?), which should also play a role in reducing perceived lag.
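The filtering tradeoff described here can be sketched with a simple exponential (low-pass) filter. This is just one common smoothing scheme, chosen for illustration; the actual Kinect filtering is not public.

```python
import random

def smooth(samples, alpha):
    """Exponential smoothing: lower alpha = smoother output, but laggier."""
    out = []
    y = samples[0]
    for x in samples:
        y = alpha * x + (1 - alpha) * y
        out.append(y)
    return out

random.seed(0)
# Simulated joint coordinate: a step move at frame 50, plus sensor noise.
raw = [(0.0 if t < 50 else 1.0) + random.gauss(0, 0.05) for t in range(100)]

heavy = smooth(raw, alpha=0.1)  # steady output, slow to reach the new pose
light = smooth(raw, alpha=0.6)  # responsive output, but noisier

def lag(series):
    """Frames after the step until the filtered value passes 90% of it."""
    return next(t for t, v in enumerate(series) if t >= 50 and v > 0.9) - 50

print("heavy filter lag:", lag(heavy), "frames")
print("light filter lag:", lag(light), "frames")
```

The heavy filter produces a much steadier track but takes many more frames to catch up with the move: exactly the lag-versus-jitter tradeoff the post describes.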
 
Oh, I'm sure it can process Kinect 2.0.

Just like the Xbox 360 can process Kinect 1.0.

That doesn't mean it is doing it optimally (lag-free finger tracking) or with enough processing left over for next-gen games that are on par with the competition...

The point is that with more processing power, it could have done better. Sure, there are limits in TDP and die size, but I suspect they are somewhere north of a netbook processor and south of 250 mm² @ 28 nm.

That's a lot of wiggle room to find processing power that can accommodate the "killer app" of their new system.

You and I have no idea how much processing power this thing requires, so we have no idea what's "left over." Is "lag-free" finger tracking really the bar that is set?

Kinect 1 used a fraction of the processing power of the Xbox 360. I'm guessing Kinect 2 is going to use an even smaller fraction of the processing power of the Xbox "720." Just a guess, but I think it's probably reasonable considering the moderate increase in the Kinect specs relative to the rumoured specs for Durango.

Like all things, it's best to reserve judgement until you actually use something.
 
From vgleaks:

New features:

  • Detection of hand states, for example, open or closed hands.
  • Detection of extra joints, and rotations for some joints.

Improved features:

  • Tracking of six, rather than two, active players.
  • Tracking of occluded joints, for example, an elbow occluded by a hand.
  • Detection of joint positions.
  • Detection of sideways poses.

I think it is a good improvement: 3x the tracking.
What if a game with only two active players (not six) could track more things, such as fingers?
 
From vgleaks:

I think it is a good improvement: 3x the tracking.
What if a game with only two active players (not six) could track more things, such as fingers?

That depends on whether full skeletal tracking + fingers is held back by processing requirements or by memory requirements.

Bklian said an extra joint on the skeleton makes the memory requirement of the database grow exponentially, so it could be too costly to have it enabled at all times...

The good news is that, from the Kotaku leak, Kinect will track the player's thumbs at any play distance, so if there is enough resolution in the depth image, enabling a special mode for the titles that require such functionality should not be an issue... Considering that when you are using something as fine as finger tracking you probably won't need more "grotesque" actions like full-body motion, finger tracking could still be available decoupled from the main skeleton, like an upper-body tracking mode or so, to keep the memory requirements in check.
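One illustrative way to see why an extra joint could grow a pose database exponentially (nothing about the real training database is public, so the numbers below are purely hypothetical): if each joint's configuration is quantized into k discrete states, every added joint multiplies the number of distinct poses by k.

```python
k = 8  # hypothetical quantization levels per joint, for illustration only

for joints in (20, 21, 25):
    print(f"{joints} joints -> {k ** joints:.2e} distinct poses")
# Each added joint multiplies the pose space by k.
```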
 
Oh, I'm sure it can process Kinect 2.0.

Just like the Xbox 360 can process Kinect 1.0.

That doesn't mean it is doing it optimally (lag-free finger tracking) or with enough processing left over for next-gen games that are on par with the competition...

The point is that with more processing power, it could have done better. Sure, there are limits in TDP and die size, but I suspect they are somewhere north of a netbook processor and south of 250 mm² @ 28 nm.

That's a lot of wiggle room to find processing power that can accommodate the "killer app" of their new system.
Really? You think they can process faster? An excellent controller-based game can hope for button-press-to-action-on-screen of 3 frames, 50 ms. This supposed Kinect goes from motion to reaction on screen in ~60 ms (4 frames). Considering it's probably a 30 fps camera, you're talking 33 ms to get the frame, plus 2 frames. In other words, the processing time from receipt of the action to stuff on screen is the same as with a hard controller: 2 frames. What that says to me is that processing capacity is not the bottleneck.
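The arithmetic here can be checked quickly. This sketch assumes the "4 frames" are 60 Hz display frames, which is what the ~60 ms figure implies.

```python
# Sanity-check the numbers in the post above, assuming a 60 Hz display
# (one display frame is ~16.7 ms) and a 30 fps camera.

display_frame = 1000.0 / 60.0    # ~16.7 ms
camera_frame = 1000.0 / 30.0     # ~33.3 ms

motion_to_screen = 4 * display_frame             # the ~60 ms "4 frames" claim
after_capture = motion_to_screen - camera_frame  # budget once the frame exists

print(f"motion to screen: {motion_to_screen:.1f} ms")
print(f"after capture:    {after_capture:.1f} ms "
      f"(~{round(after_capture / display_frame)} display frames)")
```

Subtracting the camera frame leaves about two display frames for everything else, matching the controller comparison in the post.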
 
Finger tracking can be done on the RGB (YUV :p) feed. The skeletal tracking will help direct the hand tracking, I imagine. And unless someone really wants a virtual air-piano game, is perfect finger tracking really needed? Honestly? Open hand, closed hand, fist, spread fingers, cupped hands, pointing finger, and some gestures for interfacing. A completely virtual hand with 1:1 tracking of the player carefully picking up small beads and rolling them down their hand into a jar is just overkill. The most immersive sorts of games, like Datura and Heavy Rain, can be experienced very naturally with coarse gestures rather than finger tracking.
 