The hardware in Kinect 2.0 looks to be amazing, so where is the software to show it off?

[... stuff about Kinect 1, PS3, Kinect 2, thesis...]
Okay, you mix and match random sources from all over the place, on different platforms, and I'm not sure they can be compared to each other.

Yes, when they talked about Kinect-1 resolution it was a ridiculously inflated number, because the useful data is only the resolution of the pattern, not the sensor. OTOH, the new ToF camera is real resolution; that's one of the reasons it feels incredibly improved. On paper it's not as apparent, because they were being *cough* generous *cough* about Kinect-1 specs, but now they are very honest about Kinect-2 specs. It's an amazing ToF camera no matter how you measure it.

Move was also real resolution; it was even conservative, because it was capable of sub-pixel accuracy, and on PS3 it was 22ms with the camera at 60Hz. So on PS4 it should be much, much faster to compute (maybe 5 to 10 times), and the two cameras make it possible to get the same performance with a quarter of the resolution, so 120Hz or 240Hz instead of 60Hz. That would be easily under 10ms total.
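To make that arithmetic concrete, here's a minimal latency-budget sketch in Python. The 22ms/60Hz figures are the ones above; the processing split, the 5x CPU speedup and the 4x pixel reduction are my assumptions for illustration, not measured values.

```python
# Rough latency-budget sketch for camera-based tracking.
# All specific numbers are assumptions taken from this post, not measurements.

def tracking_latency_ms(camera_hz, processing_ms):
    """One frame of capture/transfer lag plus CPU processing time."""
    frame_ms = 1000.0 / camera_hz      # exposure + readout ~ one frame period
    return frame_ms + processing_ms

# PS3 + PS Eye: 60 Hz camera plus ~5 ms of processing gives ~22 ms total.
ps3 = tracking_latency_ms(60, 5.3)

# PS4 guess: CPU ~5x faster, quarter resolution via stereo, camera at 240 Hz.
ps4 = tracking_latency_ms(240, 5.3 / (5 * 4))   # speedup x fewer pixels
print(f"PS3 ~{ps3:.1f} ms, PS4 ~{ps4:.1f} ms")  # ~22 ms vs ~4-5 ms
```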

To do what you propose with Kinect-2, it would need to group pixels together, and I don't think there's any precedent of a ToF sensor being able to change its sensitivity, resolution and frame rate dynamically. Also, I'm not sure how they can be significantly under 40ms with a 30fps input, but it's certainly improved enough from Kinect 1 to be almost undetectable in games. I expect the TV lag, and the rendering pipeline of the game, to be the biggest contributors to lag.

For VR though, I have doubts, if only because of what John Carmack has said about lag in VR. We'll see...
 
Okay, you mix and match random sources from all over the place, on different platforms, and I'm not sure they can be compared to each other.

I didn't compare them with each other (at the end of my post); I used those links as examples (1) to show PS Eye latency (latency test) and (2) to show that processing more objects increases the latency.

Move was also real resolution; it was even conservative, because it was capable of sub-pixel accuracy, and on PS3 it was 22ms with the camera at 60Hz. So on PS4 it should be much, much faster to compute (maybe 5 to 10 times), and the two cameras make it possible to get the same performance with a quarter of the resolution, so 120Hz or 240Hz instead of 60Hz. That would be easily under 10ms total.

Why should it be faster on PS4, and why 5 to 10 times? It's the same mechanism; even that Kickstarter project's latency was around 20ms for fewer than 10 objects. And the two RGB cameras on PS4 could only be used for stereoscopic rendering (which needs more processing than simple tracking); you can't use both of them for tracking, since each of them sees movement from a different angle, and if you want to use both you need to coordinate their views to compute the correct depth/distance/movement of the controller/headset from the camera. On PS3, measuring the depth was possible by measuring the size of the light ball from the PS Eye, so it's better for them to use only one camera for VR, not both, since they are using the same mechanism as PS Move.
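For what it's worth, here's a minimal sketch of the two depth mechanisms being discussed: single-camera depth from the apparent size of the Move ball, versus two-camera depth from stereo disparity. The focal length and pixel measurements are made-up numbers; only the 1.5-inch ball (~38mm) and 3-inch baseline (~76mm) figures come from this thread.

```python
# Two ways to recover depth from a camera, sketched with assumed numbers.

F_PX = 800.0  # assumed focal length in pixels

def depth_from_ball_size(ball_diameter_px, known_ball_mm=38.0):
    """PS3/Move style: one camera, depth from apparent size of the glowing ball."""
    return F_PX * known_ball_mm / ball_diameter_px

def depth_from_stereo(disparity_px, baseline_mm=76.0):
    """PS4 camera style: two cameras, depth from disparity between the views."""
    return F_PX * baseline_mm / disparity_px

print(depth_from_ball_size(20))  # ball appears 20 px wide -> ~1520 mm away
print(depth_from_stereo(40))     # 40 px disparity         -> ~1520 mm away
```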

One is in charge of capturing the setting and ensuring a quality picture, and the other handles motion tracking.
http://www.engadget.com/2013/02/21/sony-playstation-4-eye-works/

But with a higher frame rate and the same resolution (640x400 @120fps compared to 640x400 @60fps on PS Eye), the PS4 camera should be faster for tracking, if they don't need higher resolution for higher precision. This should also be possible on Kinect's VGA camera, right? But a ToF camera has lower latency for tracking.

To do what you propose with Kinect-2, it would need to group pixels together, and I don't think there's any precedent of a ToF sensor being able to change its sensitivity, resolution and frame rate dynamically. Also, I'm not sure how they can be significantly under 40ms with a 30fps input, but it's certainly improved enough from Kinect 1 to be almost undetectable in games. I expect the TV lag, and the rendering pipeline of the game, to be the biggest contributors to lag.

Kinect can see infrared light (for example, the infrared LEDs on the XB1 controller), so this should allow Kinect to track a few discrete points on the screen, and it should be possible to do the same thing with VR/AR headsets. I don't know if the ToF camera on Kinect is able to change its resolution or frame rate, but I think it could be a possibility, since Microsoft bought 3DV Systems, the company that made a ToF camera with VGA, QVGA and QQVGA resolutions and up to 60fps framerate.
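As an aside, tracking a few bright IR points is computationally trivial next to skeleton tracking. A minimal sketch, assuming a thresholded intensity-weighted centroid (a generic technique, not Microsoft's actual pipeline), which also shows where the sub-pixel accuracy comes from:

```python
import numpy as np

def led_centroid(ir_image, threshold=200):
    """Return (x, y) of a bright IR blob with sub-pixel precision."""
    mask = ir_image >= threshold               # keep only bright LED pixels
    ys, xs = np.nonzero(mask)
    weights = ir_image[ys, xs].astype(float)
    x = (xs * weights).sum() / weights.sum()   # weighted mean -> sub-pixel
    y = (ys * weights).sum() / weights.sum()
    return x, y

# Tiny synthetic IR frame with one LED blob centred at (320, 100).
img = np.zeros((480, 640), dtype=np.uint8)
img[99:102, 319:322] = [[210, 250, 210], [250, 255, 250], [210, 250, 210]]
print(led_centroid(img))  # -> (320.0, 100.0)
```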
 
Why should it be faster on PS4, and why 5 to 10 times? It's the same mechanism; even that Kickstarter project's latency was around 20ms for fewer than 10 objects. And the two RGB cameras on PS4 could only be used for stereoscopic rendering (which needs more processing than simple tracking); you can't use both of them for tracking, since each of them sees movement from a different angle, and if you want to use both you need to coordinate their views to compute the correct depth/distance/movement of the controller/headset from the camera. On PS3, measuring the depth was possible by measuring the size of the light ball from the PS Eye, so it's better for them to use only one camera for VR, not both, since they are using the same mechanism as PS Move.
Whether Kinect or Move, or even Wii, there's a lag caused by the frame rate: the time to expose the sensor, then the time to transfer it out of the sensor to a frame buffer in the console. It's usually about one frame (if it were faster, they could do more frames per second). This lag is the same whether it tracks a hundred points or a single one.

Then there's the time required to process this information by the CPU. It goes up with the number of objects it attempts to track. With PS4/XB1 the CPU is 5 to 10 times faster than PS360, so wouldn't this lag be reduced by that much?

For Move, I was thinking that they could now use the cameras' offset to triangulate the distance; the size reference is no longer the 1.5 inch width of the ball, it's the 3 inches between the cameras. So that halves the required resolution for the same precision: 240Hz QVGA with PS4 would be the same precision as 60Hz VGA with PS3.
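A quick back-of-envelope check of that claim, assuming a simple pinhole model where depth error scales as Z² x pixel error / (focal length x size reference); all the specific numbers here are illustrative assumptions:

```python
# Depth error ~ Z^2 * pixel_error / (f_px * ref_mm), where ref_mm is the size
# reference: ball width (1.5" ~ 38 mm) or camera baseline (3" ~ 76 mm).

def depth_error_mm(z_mm, f_px, ref_mm, pixel_error=0.5):
    return z_mm ** 2 * pixel_error / (f_px * ref_mm)

Z = 2000.0                                        # player 2 m from the camera
print(depth_error_mm(Z, f_px=800, ref_mm=38))     # PS3: 1.5" ball   -> ~66 mm
print(depth_error_mm(Z, f_px=800, ref_mm=76))     # PS4: 3" baseline -> ~33 mm
print(depth_error_mm(Z, f_px=400, ref_mm=76))     # half resolution  -> ~66 mm again
```

So doubling the size reference does buy back a halving of the resolution, at least in this idealized model.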
Kinect can see infrared light (for example, the infrared LEDs on the XB1 controller), so this should allow Kinect to track a few discrete points on the screen, and it should be possible to do the same thing with VR/AR headsets. I don't know if the ToF camera on Kinect is able to change its resolution or frame rate, but I think it could be a possibility, since Microsoft bought 3DV Systems, the company that made a ToF camera with VGA, QVGA and QQVGA resolutions and up to 60fps framerate.
This was a list of 3 models, and it was "up to 60fps". The Z-Cam was 60fps at QVGA resolution. When did they make a VGA product?

I guess it all depends on whether the Kinect camera can do more than 30fps. If it did, Microsoft would have said so. But even if the sensor could do 60fps, they would still have the problem of illumination; they would need twice the light from those lasers to compensate, all else being equal.
 
I think even in your standard genres Kinect could bring interesting things to the table.

For example, voice and hand gestures used for squad commands in Gears, Halo, COD or BF, similar to last gen when Kinect let you control squad powers in ME3 (which was faster than navigating the squad power menus and eschewed the break in the action you would get by pausing the game to issue commands).

Added efficiency would seem to be low-hanging fruit, but obviously it wasn't enough. Were players who were Kinect-equipped kicking everybody else's ass? If there wasn't a big win in terms of demonstrable gaming performance, then there wouldn't be much point. I get the "break in the action" bit, but what kind of time saving are we talking about?

Now if you could create "macros" that were triggered by user-chosen gestures, to mimic PC access to the keyboard, that might do, but the added complexity might have been problematic.

Or imagine being able to pick up and manipulate objects with your hands in the next Elder Scrolls game (e.g. so you can place items in your virtual house exactly how you want, or examine all the intricacies of a weapon or artifact far more naturally). Or in other first-person games, being able to interact with the world naturally (i.e. pulling a lever, pushing an object, solving puzzles or entering a code with your hand rather than just pressing X to interact).

Of course, NATURAL in the console space means using a controller. How much did the Kinect's value rely on rewiring the gamer's brain, and what did the user gain from it? Using your hand and fingers when punching in codes or picking locks might not be the best mapping scheme. Even though you don't have to make major gestures, they are more major than just using a stick and a button or two. I think when it comes to progressing through stages, folks just wanna keep moving unless there is a "puzzle" to be solved, and maybe some 3-dimensional gesturing could make something like that more interesting.

Or in Star Wars games, being able to use force push/pull by letting go of the controller with your right hand and making a gesture (with the direction and type of movement affecting the force power). What about force lightning, where you can direct the power with your right hand?

Indeed, like the Harry Potter thing, but where were these demos? What was the problem with these obvious ways of mapping gestures to such telekinetic functions? Aiming the reticle (or, if you are sporting, doing without the reticle) was obviously either not the easiest thing to do with the Kinect, or didn't add sufficient value, it would seem. The eye-tracking Morpheus demo may be the solution there. Looking at what you want to pull or push would seem to be a bit more natural.

I think one issue is, as you say, taking your hand off of the controller, and that can be its own issue in many instances. It's adding complexity as opposed to removing it, because you have to reposition your hand and fingers on the controller when before you didn't, unless you had to scratch somewhere; it's kind of a nuisance. How many times have folks used their shoulder to scratch their face rather than move a hand from the controller :)

Yeah, I am not convinced Kinect had nothing to add to the core gaming experience. Perhaps a Kinect-only (or primarily Kinect) interface would not have been workable for core game genres, but a controller-based interface augmented with Kinect definitely seems to have had some tantalising possibilities (especially with the improved capabilities of Kinect v2).

Traversing a world with just the Kinect is a major thing to work around, and why try to work around things when solutions that have already been mapped into people's brains, and are already in their hands, exist? The "world" would need to change to make the Kinect work, as opposed to the Kinect working better in the "world". If you are going to make the tail wag the dog, there has to be a bigger win than MS and gamers were seeing.

With VR, gestures may come back as a valuable thing, and maybe Kinect just works/looks better on, say, a HUGE screen rather than a 40-inch one; I don't know. The thing needs to sell itself as opposed to being forced on people, and for gaming, at this point in time, it doesn't. It's a grey area, because it seems like so much promise, but once you get into the weeds the ethereal nature of promises becomes apparent.

Hey, lots of great ideas though, and I hope at some point some of them will come to fruition.
 
Doesn't the Kinect 2 have its own CPU on board? Seems like a lot of processing can get done before the info even leaves the Kinect itself.
 
Whether Kinect or Move, or even Wii, there's a lag caused by the frame rate: the time to expose the sensor, then the time to transfer it out of the sensor to a frame buffer in the console. It's usually about one frame (if it were faster, they could do more frames per second). This lag is the same whether it tracks a hundred points or a single one.

Then there's the time required to process this information by the CPU. It goes up with the number of objects it attempts to track. With PS4/XB1 the CPU is 5 to 10 times faster than PS360, so wouldn't this lag be reduced by that much?

PS4/XB1 CPUs aren't 5-10 times faster than PS3/360 CPUs; you can say that about their GPUs. Kinect uses both the CPU and GPGPU for skeleton tracking and other stuff, and it has its own processor that sends a packet of data (the color image, depth data, and sound) to the console in less than 14ms (65ms for the original Kinect); after that there's about 20ms of latency to the software. The Kinect sensor can take multiple exposures per frame and choose the best of them, and it determines exposure on a per-pixel basis. Actually, I watched this video, and the latency of the RGB camera looked similar to that of the skeleton, orientation and muscle tracking.
http://www.eurogamer.net/articles/digitalfoundry-what-the-xbox-one-dash-means-for-gamers
http://semiaccurate.com/2013/10/15/long-look-microsofts-xbox-one-kinect-sensor/
http://semiaccurate.com/2013/10/16/xbox-ones-kinect-sensor-overcomes-problems-intelligence/
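To put those figures in context, here's a rough end-to-end budget. The 14ms and 20ms numbers are from the links above; the render and display figures are just my assumptions for a typical 30fps game on an HDTV:

```python
# Rough end-to-end latency budget; only the first two figures are sourced,
# the last two are assumed typical values.
stages_ms = {
    "sensor -> console packet": 14,   # vs 65 ms for the original Kinect
    "runtime -> software":      20,
    "game render @30fps":       33,   # assumed: one frame
    "TV display lag":           33,   # assumed: typical HDTV
}
print(sum(stages_ms.values()), "ms end to end")  # ~100 ms
```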

I can't speak about PS4 camera latency or software latency on PS4. Eurogamer suggests 133ms latency for PS Move on PS3 (this includes controller and camera input, processing, display of the completed frame, plus the additional latency from the LCD). There is no number for PS4.
http://www.eurogamer.net/articles/playstation-move-controller-lag-analysis-blog-entry

For Move, I was thinking that they could now use the cameras' offset to triangulate the distance; the size reference is no longer the 1.5 inch width of the ball, it's the 3 inches between the cameras. So that halves the required resolution for the same precision: 240Hz QVGA with PS4 would be the same precision as 60Hz VGA with PS3.

I don't think triangulation is a good choice for tracking VR or PS Move, but it should be good for free-hand gameplay, since its accuracy should be lower and its latency higher (I'm not sure about the latency).

[image: ps4.png]

This was a list of 3 models, and it was "up to 60fps". The Z-Cam was 60fps at QVGA resolution. When did they make a VGA product?

I guess it all depends on whether the Kinect camera can do more than 30fps. If it did, Microsoft would have said so. But even if the sensor could do 60fps, they would still have the problem of illumination; they would need twice the light from those lasers to compensate, all else being equal.

I can't find a source for that information. The max sensor resolution of the Z-Cam is 320x240 and its max framerate is 60fps.

As I said above, the Kinect sensor can take multiple exposures per frame and choose the best of them; they went with 30fps for its quality. Maybe they don't need 60fps, since a ToF camera needs less time for data processing.
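Here's how I picture that per-pixel multi-shutter selection working; this is purely my sketch of the idea, not the actual sensor logic:

```python
import numpy as np

SATURATED = 1023  # assumed 10-bit pixel full scale

def best_shutter(captures, shutter_ms):
    """Per-pixel multi-shutter selection sketch.

    captures: list of (H, W) arrays of the same scene, shortest exposure first.
    Returns signal-per-ms from the longest exposure that did not saturate.
    """
    best = np.zeros_like(captures[0], dtype=float)
    for img, t in zip(captures, shutter_ms):  # longer exposures overwrite...
        ok = img < SATURATED                  # ...but only where not saturated
        best[ok] = img[ok] / t                # normalise to signal per ms
    return best
```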

Doesn't the Kinect 2 have its own CPU on board? Seems like a lot of processing can get done before the info even leaves the Kinect itself.

Yes and Yes.
 
Move controller latency is well documented. It ranges from 4-21ms depending on how many you track; 1 or 2 generally stay well within 1 frame at 60fps. I really, really enjoyed the Move, and really hope motion controls will return to the forefront eventually. Sports Champions and some of the other games, including playing Infamous: Festival of Blood and Killzone 3 with Move, were my favorite last-gen experiences. At least I do like that the DS4 controller integrated some of the features and added the touchpad, because a surprising number of games already manage to make something out of that. Spray painting in Infamous works well; I would have liked to see what you could do with that with more freedom.
 
Move controller latency is well documented. It ranges from 4-21ms depending on how many you track; 1 or 2 generally stay well within 1 frame at 60fps. I really, really enjoyed the Move, and really hope motion controls will return to the forefront eventually. Sports Champions and some of the other games, including playing Infamous: Festival of Blood and Killzone 3 with Move, were my favorite last-gen experiences. At least I do like that the DS4 controller integrated some of the features and added the touchpad, because a surprising number of games already manage to make something out of that. Spray painting in Infamous works well; I would have liked to see what you could do with that with more freedom.

I absolutely agree, Arwin. Socom 4 with Move was superb for me, and Resistance 3 played with Move was equally enjoyable. The Move controller even gave MAG a whole new lease of life for me, as I'd long since abandoned it prior to the device's launch.

I was hoping Sony could have implemented Move controls in KZ:SF. Would have really loved that.
 
I absolutely agree, Arwin. Socom 4 with Move was superb for me, and Resistance 3 played with Move was equally enjoyable. The Move controller even gave MAG a whole new lease of life for me, as I'd long since abandoned it prior to the device's launch.

I was hoping Sony could have implemented Move controls in KZ:SF. Would have really loved that.

I love the Move; it makes FPS games more immersive, but I have some issues with it. Why on earth do developers insist on mapping melee to Move swipes instead of button presses?
It really pissed me off numerous times, trying to execute fast melees in tight spaces only to hurt my wrist and not be able to deliver fast enough hits. Not to mention that sometimes my melee intentions wouldn't register on screen.
Another issue is when you are in tight spaces or surrounded too closely by a crapload of enemies. Trying to turn accurately and fast is a pain in the ass; you end up overturning/underturning, and it's impossible to keep the cursor where you want it and turn at the same time, because the cursor has to be pointed to the sides. But Move against long-distance enemies is simply awesome.
In addition, for some peculiar reason some games tend to lose the cursor's initial calibration point. I was recalibrating the thing dozens of times in Bioshock Infinite.
 
I love the Move; it makes FPS games more immersive, but I have some issues with it. Why on earth do developers insist on mapping melee to Move swipes instead of button presses?
It really pissed me off numerous times, trying to execute fast melees in tight spaces only to hurt my wrist and not be able to deliver fast enough hits. Not to mention that sometimes my melee intentions wouldn't register on screen.
Another issue is when you are in tight spaces or surrounded too closely by a crapload of enemies. Trying to turn accurately and fast is a pain in the ass; you end up overturning/underturning, and it's impossible to keep the cursor where you want it and turn at the same time, because the cursor has to be pointed to the sides. But Move against long-distance enemies is simply awesome.
In addition, for some peculiar reason some games tend to lose the cursor's initial calibration point. I was recalibrating the thing dozens of times in Bioshock Infinite.

Totally agree about the melee. Although I never had all that much issue with turning whilst aiming, especially with games like Infamous and KZ3 that gave you sufficient control over the deadzone and acceleration area.
Bioshock Infinite, however, was badly designed for Move. It was the only Move-playable game that I just couldn't hack; I ended up putting it down and picking up my DS3. Very badly implemented.
 
Totally agree about the melee. Although I never had all that much issue with turning whilst aiming, especially with games like Infamous and KZ3 that gave you sufficient control over the deadzone and acceleration area.
Bioshock Infinite, however, was badly designed for Move. It was the only Move-playable game that I just couldn't hack; I ended up putting it down and picking up my DS3. Very badly implemented.

My latest experience was with Infinite, so that's probably why I feel so pissed. It's been a long time since I played KZ3 and R3 with it, but if I recall, I wasn't that frustrated with those. I think I enjoyed them a lot with Move.
 
Someone at another forum gave me the link.

Kinect's ToF camera is among the image sensor technological breakthroughs.
Technical paper from ISSCC 2014:

Kinect 2, Digest 7.6
http://i.imgur.com/BjKOdg3.jpg
http://i.imgur.com/6bEtzrh.jpg
http://i.imgur.com/ZDQLITS.jpg
http://i.imgur.com/zyvxGX9.jpg


Interest in 3D depth cameras has been piqued by the release of the Kinect motion sensor for the Xbox 360 gaming console [1,2,3]. This paper presents the pixel and 2GS/s signal paths in a state-of-the-art Time-of-Flight (ToF) sensor suitable for use in the latest Kinect sensor for Xbox One. ToF cameras determine the distance to objects by measuring the round-trip travel time of an amplitude-modulated light from the source to the target and back to the camera at each pixel. ToF technology provides an accurate, high-pixel-resolution, low-motion-blur, wide field-of-view (FoV), high-dynamic-range depth image as well as an ambient-light-invariant brightness image (active IR) that meets the highest quality requirements for 3D motion detection.

Depth and active IR images are produced by combining multiple images that are captured at different phase relationships of the clocks provided to the light source and pixel array. The captures are taken in rapid temporal succession to avoid motion blur. In addition, high differential dynamic range is necessary to simultaneously render high-reflectivity objects near the camera and low-reflectivity objects far from the camera. High dynamic range is realized by allowing each pixel to independently select the best shutter time (multi-shutter) and the best amplifier gain setting (multi-gain) at each capture.
 
That is a good idea that would work, since latency/accuracy ain't really that important.

It could be like a gunfight, Harry Potter style.
You'd be ducking attacks and attacking with spells, with other spells for blocks, like a shield.

Thus 2 ideas so far:
#1 boob manipulator
#2 wizard battle

I believe the first version did also.
My point is, it's always been thus. All the ideas don't suddenly come into being with the creation of a new input method; there are some at the start and the rest during its lifetime.
With Kinect, it was first revealed at E3, June 1, 2009 (doubtless worked on a few years beforehand), so MS have been working on it for 5+ years. However you look at it, that's a bloody good crack to come up with no more than the party/fitness/dancing games that are 90+% of what's appeared. My point, which I've been droning on about for the last 5 years here, is that because of the limits of the tech (latency/accuracy), you are limited in what you can make for it. I know some people think 5 years ain't enough ("give them another 5 years, you'll see good stuff then, I promise"), but really, 5 years is a lifetime in tech.
More ideas:

- Play horizontal scrollers (beat 'em ups) with Kinect. You could play while sitting, and move using your fingers, drawing the typical motion of two legs walking or running.

- As you say, sexy and porn games for couples would sell like hotcakes, but of course, as Shifty pointed out, how can you avoid the censorship?

Humorous games hinting at it could make it work, like this:


It's all about the fun.

- Side scrollers where you can play sitting, punching using your hands; kicks would have to be performed by bending your elbow.

Picking items up could be done by stretching your arm out and clenching your fist.

Shoryukens and hadokens could be performed, too.

- My dream Kinect game: :p :smile2: A celebrities game where you can see films and clips of your favourite celebrities ever, historical or modern, have their bios in the game, and photos. :smile2:

Have Sophie Ellis Bextor, Laura Jackson and Jill Flint there :love::love::love:, make it worthwhile, and I (along with millions of others) would be in!!!

Take photos with them in films using Kinect and some smart software integration, etc. Let people save and upload the photos to other places.

Thanks mosen and kotakaja for the links.
 
Whatever happened to the Kinect mech game?

I thought for sure we'd have seen the sequel for Kinect 2, but I'm basing that on reading the reviews of the game, which were pretty much summed up as:

when it works, this game is brilliant and a lot of fun. When it doesn't work, it breaks the game and ruins the immersion that Kinect is attempting to build.

Or things along those lines. Essentially, the Kinect wasn't accurate enough, and using the wrong movement (or Kinect incorrectly "guessing" what you were trying to do) really hurt the game more than the fun that was created otherwise.

I thought Kinect 2 would solve those problems, especially since I think a mech game is a perfect fit for the interface. Mechs are slow, plodding machines, so the nature of the gameplay should provide cover for lag issues, etc. I'm really shocked they weren't able to have a sequel available at launch, to be quite frank.
 
Steel Battalion: Heavy Armor got some of the worst reviews of the Kinect titles. It has a MetaCritic score of 38. I'm not surprised it didn't get a sequel, but I was quite surprised they didn't do some kind of other Kinect sequel like Adventures, Dance Central, Kinectimals, etc.

Tommy McClain
 
Check out Crabitron Kinect coming to Xbox One via the ID@Xbox program...

[image: Crabitron-Announce.jpg]

http://twolivesleft.com/games/officially-announcing-crabitron-kinect-for-xbox-one/

TwoLivesLeft said:
Just to give you guys some background on the game, the original was on iOS and has a unique control scheme where you pinch to control a Giant Space Crab's claws. Some people at the time mentioned that it would be really cool on Xbox with the Kinect. Unfortunately there wasn't a viable route for us to bring the game to Xbox and the original Kinect was too unreliable for this type of gameplay.
Skip forward to the Kinect 2 and ID@Xbox and we had a viable route to getting the game on Xbox One. We built a prototype using the old Kinect hardware and used that to secure a place in ID@Xbox and get Dev Kits. Here is what the prototype looked like to give you an idea of the type of gameplay we are working on: Prototype Video
TLDR: Giant Space Crab Shenanigans with Kinect
http://www.reddit.com/r/xboxone/comments/2d3zic/crabitron_kinect_coming_to_xbox_one_in_2015/


Granted it's a prototype video using the original Kinect in order to secure entry into the ID@Xbox program, but I think it should play even better with Kinect 2. Can't wait to see it in action.

Tommy McClain
 
@johntwolives said:
Wrote algorithm to determine thumb and index finger angles / positions for claw gestures. @CRABITRON #Kinect #TheClaw pic.twitter.com/ZkUXTgjpyq

[image: Bs-Xn-UCIAIl8Hv.png]


He's also going to write a blog series on his road to Xbox One & on the algorithm he created for the claw.
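The tweet doesn't say how the claw angle is actually computed, so as pure guesswork on my part, a simple version could measure the opening angle between the hand-to-thumb and hand-to-index vectors from the tracked joint positions:

```python
import math

# Hypothetical claw-angle sketch; joint positions would come from the Kinect
# hand tracking, but everything here is an illustrative assumption.

def claw_angle_deg(hand, thumb, index):
    """Angle between hand->thumb and hand->index vectors, in degrees."""
    v1 = (thumb[0] - hand[0], thumb[1] - hand[1])
    v2 = (index[0] - hand[0], index[1] - hand[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1 = math.hypot(*v1)
    n2 = math.hypot(*v2)
    return math.degrees(math.acos(dot / (n1 * n2)))

print(claw_angle_deg((0, 0), (1, 1), (1, -1)))  # 90.0 -> claw wide open
```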

Tommy McClain
 