The hardware in Kinect 2.0 looks to be amazing, so where is the software to show it off?

The problem is that MS themselves haven't figured out Kinect in the gamespace. Utility applications like medicine, searches and voice commands, sure. Beyond that, they don't really have any idea how to make it work either.
 
The notion that ppl are entrenched in the standard way of controlling things?
Please read arguments properly if you don't want to come across as a chump. I was talking about console/mainstream professional devs.

Why did the waggle wand succeed
Nintendo invested in it, because it was the only control scheme on their device.
Why did the finger touch screen devices succeed
Independent developers invested in it, because it was the primary control scheme on that device. Also, look how many different games it took to develop the ideas. Look how new game ideas are coming out (Blek, Threes!) despite touch input being around for years, because it took people time to experiment and rearrange their brains to come up with new concepts, and with each new concept inspiring the next wave of ideas.
Why did the accelerometer devices succeed
Independent developers invested in it. Look at how the accelerometers in the Sixaxis didn't do anything by comparison. IIRC there was only one tilting game, the rubber ducky game, which wasn't a particularly great game, and a few useless add-ons like tilt-to-balance in Uncharted, despite balance as a mechanic depending on one's sense of balance and just being awkward when you can't feel how off-balance you are.

I'm still waiting for these game ideas that can save Kinect
If you followed the discussions on this forum about alternative input methods, you'd have heard plenty of ideas.
..., come on people, you can resurrect the product.
Not if MS pull it from the product...
Here's one I just thought of, 'boob fondler': you've gotta manipulate the onscreen lady's breasts, getting her excitement levels to the limit:
Sure to be a hit :)
That's just as suited to a touch interface. It's also unlikely to pass console QA.
 
That's just as suited to a touch interface. It's also unlikely to pass console QA.

Unless it's SCEJ :devilish:

On a more serious note though, I don't believe the blame for Kinect's failure to garner any meaningful development support on XB1 lies with anyone or anything but the device itself.

It's easy enough to say that clever game design can work around its limitations, but that's not strictly true for the majority of currently popular game types. I mean, you only need to look at the current game development landscape to understand that the prevailing majority of devs aren't making games off the back of some overarching and meaningful creative vision anymore. The majority of modern published games are made to follow current trends, popular genres and game mechanics, in order to sate the desires of the core gamer and generate as much income for the dev/pub as possible.

Kinect is simply anathema to that strategy, as most if not all of the popular game genres simply don't work with it. And so, any meaningful game design that requires even an ounce of creativity to develop (and develop well) instantly disqualifies the majority of console game developers in the industry today; not through a lack of creative ability, but through a lack of any desire to try, due to a prevailing belief that the greater gaming audience is looking for the next COD, Mass Effect or GTA, not the next Kinect animal petter, boob fondler or surgeon simulator.

Imho, Kinect has no business trying to masquerade as a videogame input device. It shows far more potential as a control interface for dedicated non-gaming applications.
 
Maybe console devs just can't figure out new ideas suitable for Kinect, or are just not willing to take the risk. On Steam, developers always try new things, which leads to cool, funky and fresh games like The Stanley Parable, Gunpoint, Gone Home, etc., whereas current console titles mostly look to me like rehashes of the same old stuff. It's the status quo all over again, with fresh graphics. Who knows, maybe that's all they know how to do: bring out revision 3, 4 or 5 of existing titles with new graphics. I think one of MS's biggest mistakes in that regard was not fully opening up Kinect to indies well before launch. Indies may have ended up with far better game ideas than the AAA studios could have. To me at least, it seems like indies are where the new ideas I play are coming from, whereas AAA studios so far just seem to want to reuse old ideas to push new visual tech.

Or, worded much more simply: maybe Kinect was just a poor fit for typical console developers.

I think the problem is that it's very practical for some genres and ideas and completely impractical for many others, regardless of how good the technology is.
It can't replace previous and familiar experiences without sacrificing something that worked very well before. It's a completely new experience for a different kind of game that can only appeal to a different kind of person.
People who bought the XB1 bought it for a certain experience. Shoving Kinect into that experience and expecting those people to accept it, when the thing is "incompatible" with the experience, is bad business practice. On the other hand, those who might find appeal in Kinect aren't interested in the console, or in a console. They weren't intrigued. It's partially their "fault" for having a particular perception of consoles, partly the limitations of the control method (which IMO limits what can be done in gameplay rather than frees it), and partly MS's. You can't blame the "core" gamer for wanting what he experienced and liked in the past to get better with the next-gen console. They bought the 360 for certain reasons, and they wanted an XB1 because those positive expectations based on past experience were carried over. It's the legacy that keeps people wanting your product.
One of the reasons the Wii succeeded was that it started fresh, without mixing the expectations and needs of the core with those of the casuals. It didn't aim high on specs, and it gave the other market exactly what they wanted: a new, very approachable experience at the right price. And that was a one-off success.
 
It's easy enough to say that clever game design can work around its limitations, but that's not strictly true for the majority of currently popular game types.
Right. Developers still producing the same game types won't find a way to make Kinect work for them. However, that's the problem. Like the mouse: it took a while for developers to develop mouse-based games. Lemmings was only possible due to the mouse, yet it didn't appear until six years after the Amiga's release, during which time devs were making joystick-based games. It took six years from the possibility of that game control and format before someone made it happen.

I mean, you only need to look at the current game development landscape to understand that the prevailing majority of devs aren't making games off the back of some overarching and meaningful creative vision anymore. The majority of modern published games are made to follow current trends, popular genres and game mechanics, in order to sate the desires of the core gamer and generate as much income for the dev/pub as possible.
That's what I mean by lack of courage. They are happy to go with the tried and tested rather than something new. However, I'm talking as much about Kinect-augmented games as Kinect-exclusive games. The lack of head tracking to capture the autonomous head movements players make in so many games is pretty mind-boggling to me. Since PS1 days I recall myself and others craning up to look over a hill in a driving game, yet no one's got a decent implementation of that in a game. When was the impressive head-tracking YouTube vid that got us all excited? I think this one, from 6 years ago:


Why doesn't that input feature in any games? I recall one lousy, unnatural implementation of head tracking in GT.
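For what it's worth, the core of lean-to-look is trivial to sketch. Here's a toy version assuming some tracker that reports head position in metres relative to a calibrated rest pose; all the names and constants are illustrative, not from any real SDK:

```python
# Toy sketch: map a tracked head position to a camera offset for
# "lean to look" in a driving game. Assumes the tracker reports head
# position in metres relative to a calibrated rest pose; the gain and
# clamp values are illustrative guesses, not tuned numbers.

def head_to_camera_offset(head_x_m, head_y_m, gain=0.5, max_offset_m=0.3):
    """Scale the player's lean into an in-game camera translation,
    clamped so tracking noise can't fling the view around."""
    def clamp(v, lo, hi):
        return max(lo, min(hi, v))
    cam_x = clamp(head_x_m * gain, -max_offset_m, max_offset_m)
    cam_y = clamp(head_y_m * gain, -max_offset_m, max_offset_m)
    return cam_x, cam_y

# Leaning 20 cm to the right moves the camera 10 cm with a 0.5 gain.
print(head_to_camera_offset(0.2, 0.0))  # (0.1, 0.0)
```

The hard part isn't the mapping, it's getting the tracking latency low enough that the offset feels attached to your head rather than trailing it.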

Imho, Kinect has no business trying to masquerade as a videogame input device. It shows far more potential as a control interface for dedicated non-gaming applications.
What defines a videogame input device? Is your keyboard, designed for work and typing, not a videogame input device? I'm sure many an early dev said the mouse shouldn't have aspirations to be a game controller either, back when the prevailing game experiences were platformers, beat-'em-ups and side-scrollers. Devs use input devices as they see fit to make games. There's no such thing as a videogame interface, save perhaps as a definition of a control format that's good for little else. Pioneering devs create new experiences. They did it before, they're doing it now (mobile), and they'll do it again once motion input is ubiquitous and the barrier to entry is low enough. But the same ideas from indies could come from established devs if only they weren't so conservative.
 
Right. Developers still producing the same game types won't find a way to make Kinect work for them. However, that's the problem. Like the mouse: it took a while for developers to develop mouse-based games. Lemmings was only possible due to the mouse, yet it didn't appear until six years after the Amiga's release, during which time devs were making joystick-based games. It took six years from the possibility of that game control and format before someone made it happen.

Agreed: seeing meaningful use cases that convince gamers of the value of Kinect as a game controller takes both time and effort. But that is something neither third-party developers nor (evidently) Microsoft were prepared to give it. It also needs to be a standard input device, which was true of the mouse on PC but only halfway the case for Kinect on XB1, given that the XB1 wasn't the only videogame console on the market and Sony's console (thankfully) wasn't shipped with any reasonable analogue.

That's what I mean by lack of courage. They are happy to go with the tried and tested rather than something new. However, I'm talking as much about Kinect-augmented games as Kinect-exclusive games. The lack of head tracking to capture the autonomous head movements players make in so many games is pretty mind-boggling to me. Since PS1 days I recall myself and others craning up to look over a hill in a driving game, yet no one's got a decent implementation of that in a game. When was the impressive head-tracking YouTube vid that got us all excited? I think this one, from 6 years ago:

Why doesn't that input feature in any games? I recall one lousy, unnatural implementation of head tracking in GT.

Whilst I agree that devs could have done some very interesting things with Kinect as a device to augment traditional controller-based games, there's really next to no incentive for devs to brainstorm, test and implement these ideas unless they are directly paid by MS to do so, or are one of MS's own first-party devs in the first place. MS should have ensured that every XB1 launch game shipped with some meaningful Kinect augmentation. Instead they thought they could sell the value of the $100 pack-in with just OS voice controls and Skype functionality, both of which clearly don't require a device as expensive as Kinect.

What defines a videogame input device? Is your keyboard, designed for work and typing, not a videogame input device? I'm sure many an early dev said the mouse shouldn't have aspirations to be a game controller either, back when the prevailing game experiences were platformers, beat-'em-ups and side-scrollers. Devs use input devices as they see fit to make games. There's no such thing as a videogame interface, save perhaps as a definition of a control format that's good for little else. Pioneering devs create new experiences. They did it before, they're doing it now (mobile), and they'll do it again once motion input is ubiquitous and the barrier to entry is low enough. But the same ideas from indies could come from established devs if only they weren't so conservative.

I would argue that a controller, a joystick, a racing wheel and a Move/Wiimote are all videogame input devices, since they are all clearly designed and dedicated to specific gaming control uses, and practically all end up being pretty useless outside those cases (e.g. try using a web browser with a DS3).

Kinect is more of a multipurpose interface controller, with actually quite limited applicability in games, since its glaring flaws (both implementational and conceptual) make it damn near useless for the majority of existing game types, genres and design mechanics.

Almost every successful controller in the history of gaming has proven both functional and reliable as input for the majority of existing games; the mouse, control pad and dual-stick controller being solid examples that took over almost entirely from the wheel and joystick controllers of yesteryear. Kinect is the total opposite. It's a solution looking for a problem to solve, almost entirely inapplicable to existing game input use cases. Hence why, IMHO, it would never be accepted as a standard input device on a console. Even MS understood that if they had shipped the XB1 without the controller and just with Kinect, it would have been DOA (in every country, not just outside the US), and yet that is the only way devs could have been forced into thinking up new and creative ways to use it as a control device.

Really, I think that if game arcades still existed and hadn't died when they did, a Kinect unit embedded in an arcade cabinet would have been a better platform for pushing it as a gaming device. Unfortunately for MS, that wasn't an option on the table.
 
Last edited by a moderator:
Whilst I agree that devs could have done some very interesting things with Kinect as a device to augment traditional controller-based games, there's really next to no incentive for devs to brainstorm, test and implement these ideas unless they are directly paid by MS to do so, or are one of MS's own first-party devs in the first place.
I don't disagree. I'm not even arguing that devs should have supported it, only that Kinect in itself isn't flawed tech, as Zed suggested. For the right games, it's fabulous. Sadly, no one created those games.

I'm reminded of another original, fabulous idea that never got off the ground: a Wii game for children where the Wiimote was put inside a teddy bear, and moving the teddy controlled the character. Kinect could do amazing things with puppetry and AR. For RPGs too, it (with face recognition) would have been incredible in requiring the player to actually play a role, including expressions and voice mannerisms. EG recently had an article on a pen-and-paper D&D game where the journalist found himself playing a role, and on the fun emergent gameplay that came of that, instead of the videogame implementation of an RPG where you play the same action-based identity just dressed in different garb and skills. I'm also impressed by some of the latest social mobile games that are clearly going to overtake Nintendo's niche, which would be great between Kinect-connected consoles, but of course there are too few connected Kinect consoles to justify such games. If Kinect were in every home by default, like mobile devices, it'd spawn a whole host of new games for which it's ideally suited. I suppose the next step for it will be in Facebook's new Oculus Rift-based VR universe, as the control scheme for VR avatars.
 
Right. Developers still producing the same game types won't find a way to make Kinect work for them. However, that's the problem. Like the mouse: it took a while for developers to develop mouse-based games. Lemmings was only possible due to the mouse, yet it didn't appear until six years after the Amiga's release, during which time devs were making joystick-based games. It took six years from the possibility of that game control and format before someone made it happen.

That's what I mean by lack of courage. They are happy to go with the tried and tested rather than something new. However, I'm talking as much about Kinect-augmented games as Kinect-exclusive games. The lack of head tracking to capture the autonomous head movements players make in so many games is pretty mind-boggling to me. Since PS1 days I recall myself and others craning up to look over a hill in a driving game, yet no one's got a decent implementation of that in a game. When was the impressive head-tracking YouTube vid that got us all excited? I think this one, from 6 years ago:


Why doesn't that input feature in any games? I recall one lousy, unnatural implementation of head tracking in GT.

What defines a videogame input device? Is your keyboard, designed for work and typing, not a videogame input device? I'm sure many an early dev said the mouse shouldn't have aspirations to be a game controller either, back when the prevailing game experiences were platformers, beat-'em-ups and side-scrollers. Devs use input devices as they see fit to make games. There's no such thing as a videogame interface, save perhaps as a definition of a control format that's good for little else. Pioneering devs create new experiences. They did it before, they're doing it now (mobile), and they'll do it again once motion input is ubiquitous and the barrier to entry is low enough. But the same ideas from indies could come from established devs if only they weren't so conservative.


The fact that Kinect is still 30 frames per second is probably a big reason why it's not being used for head tracking.

It seemed so strange to me that Microsoft would do all that research for Kinect but still choose to limit motion tracking to 30 FPS. To me this says that, for Microsoft, gaming isn't the real focus of Kinect.
 
The notion that ppl are entrenched in the standard way of controlling things?

Why did the waggle wand succeed

How relevant is waggle now?

Why did the finger touch screen devices succeed?

When talking mobile gaming apps, why are names like Zynga, Rovio and King more relevant than EA and Activision? Ubisoft is probably the only major pub that embraced iOS and Android in their early days, and that's through its Gameloft arm, a company that basically offered ripoffs of popular console IPs in the early days of iPhones and Google phones.

Why did the accelerometer devices succeed?

Not because of well established publishers.

These are all recent new input methods that found instant success, thus the idea that ppl/companies won't accept new input methods is false.

Outside of Nintendo, big pubs had little to do with the initial success of the techs you are talking about. Furthermore, Nintendo's inability to build upon the success of Wii Sports and Wii Fit is the reason Wii sales fell off a cliff, and why waggle is not a focal point of the Wii U.

We are talking about a sequel-driven industry where you can hardly get pubs to offer new IPs, never mind ones based on unproven technology with limited evidence of driving game sales.
 
The notion that ppl are entrenched in the standard way of controlling things?
Why did the waggle wand succeed

Because there's absolutely nothing serious about Nintendo, so the lack of accuracy and precision didn't matter. It didn't matter if it resulted in a 6-year-old (or 60-year-old) playing for the first time beating a 20-30-something who spends hours playing the game every day.

What mattered was that 6- and 60-year-olds were now able to share an experience with that gamer.

Why did the finger touch screen devices succeed

Because touch screens are incredibly precise and accurate, closer to a mouse and keyboard than a controller. The biggest rift in "gaming" is between consoles and PCs, and it all comes down to accuracy and precision (KB/M vs controllers): the lack of accuracy inherent in controllers leads to ease of access, and therefore to a smaller learning curve and (it is argued) a reduction in the ability to truly "master" the craft.

Why did the accelerometer devices succeed

I know nothing of these devices you speak of. The only one I'm aware of is the one included in the Sixaxis (or whatever it is called these days), which is rarely, if ever, used and would hardly be considered a success.

Adding a device that lacks precision and increases the randomness of events to a gaming console is never going to be accepted in any "serious" application. The larger the room for uncertainty, the less expertise factors into the equation.

Which is why Kinect has only been found acceptable as an input device for "party" games and as a TV remote control, where the time between wanting to do something, interfacing with the device, and the device doing what you wanted doesn't matter.
 
The fact that Kinect is still 30 frames per second is probably a big reason why it's not being used for head tracking.

It seemed so strange to me that Microsoft would do all that research for Kinect but still choose to limit motion tracking to 30 FPS. To me this says that, for Microsoft, gaming isn't the real focus of Kinect.
It's still a high frame rate for a ToF camera at that resolution. There are a lot of companies making ToF cameras, and they are all bound by the same limitations. There is probably a limit caused by the light requirement: the higher the frame rate, the stronger the illumination has to be. The higher the resolution, the smaller the cells are, so that needs even more illumination too. For 30fps Microsoft needed 3 high-power lasers. I can only imagine the death ray needed for 120fps :LOL:
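To put rough numbers on that: if you want the photon count per pixel per frame to stay constant, illumination power has to scale with both frame rate and pixel count. This is purely illustrative, back-of-envelope scaling; real ToF sensors have many more variables (fill factor, modulation frequency, ambient light).

```python
# Back-of-envelope sketch: illumination power scales linearly with both
# frame rate and pixel count if photons-per-pixel-per-frame is held
# constant. Baseline numbers are illustrative, not a real spec.

def required_power(base_power_w, base_fps, base_pixels, fps, pixels):
    """Scale illumination power with frame rate and pixel count."""
    return base_power_w * (fps / base_fps) * (pixels / base_pixels)

base = dict(base_power_w=1.0, base_fps=30, base_pixels=512 * 424)
# Same resolution at 120fps needs ~4x the light:
print(required_power(**base, fps=120, pixels=512 * 424))  # 4.0
```

So quadrupling the frame rate at the same resolution already quadruples the laser power, before you even think about raising the resolution.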
 
It's still a high frame rate for a ToF camera at that resolution. There are a lot of companies making ToF cameras, and they are all bound by the same limitations. There is probably a limit caused by the light requirement: the higher the frame rate, the stronger the illumination has to be. The higher the resolution, the smaller the cells are, so that needs even more illumination too. For 30fps Microsoft needed 3 high-power lasers. I can only imagine the death ray needed for 120fps :LOL:

True, but Canesta (the company Microsoft bought) already had a 60fps ToF sensor.
 
True, but Canesta (the company Microsoft bought) already had a 60fps ToF sensor.
They did, but Canesta's was 320x240 (I think?). At that resolution some sensors go as high as 100fps, and then there are other companies making 1280x900 sensors at 15fps. High frame rate AND high resolution seems to be problematic for ToF cameras.

For $250 retail, SoftKinetic's DepthSense camera does 60fps, 320x240 depth and 720p colour, but its range is only 1m. To me, that indicates that illumination power is costly. It's practically the same technology as Kinect 2, including a diffused pulsed laser, but built with an off-the-shelf chipset/sensor from Texas Instruments (or is it the opposite? I think SoftKinetic developed it and TI is manufacturing it, or something).

EDIT: good summary on the TI website... http://www.ti.com/ww/en/analog/3dtof/index.shtml?DCMP=hpa_contributed_article&HQS=3dtof-ca
They claim the frame rate (scanning speed) is limited by the sensor speed. Maybe it's as simple as scanning more pixels taking more time.
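If readout really is the bottleneck, the shape of the limit is just max_fps = pixel_clock / pixels_per_frame. The pixel clock below is an assumed figure chosen to roughly match Kinect 2's 30fps, not a published spec:

```python
# Toy model of a readout-limited sensor: frame rate ceiling is the
# pixel clock divided by pixels per frame. The 6.5 Mpixel/s clock is a
# hypothetical number picked so 512x424 lands near 30fps.

def max_fps(pixel_clock_hz, width, height):
    return pixel_clock_hz / (width * height)

clock = 6.5e6  # assumed readout speed, pixels per second
print(round(max_fps(clock, 512, 424), 1))  # ~29.9 fps at Kinect 2's depth res
print(round(max_fps(clock, 320, 240), 1))  # ~84.6 fps at QVGA
```

Which would be consistent with the pattern above: drop the resolution by ~3x and you buy yourself ~3x the frame rate from the same sensor.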
 
I think Kinect ended up in its own version of an uncanny valley. It worked fine most of the time, even in crappy lighting (can't overstate how important that could be), but for the extra cost it should have just worked, even if the consumer hadn't calibrated it well enough. It wasn't crazy accurate and responsive, and while that wasn't all that was needed for many games, I think something astoundingly good at tracking you would have impressed folks and made the premium seem justified.

As a touchless way to move around a screen it was fine (doctors in sterile environments), and as a feedback device it was good enough (see dance and exercise games, physical therapy), but gameplay based solely on that doesn't give you a lot of options. If reasonably precise gesticulations can be turned into a game (I suggested a wizard sim where movements and chanting could replicate a sorcerer's conjuring), then great, but I don't know what the audience would be. If you could turbocharge your actions in a first-person shooter with added movements, that might have worked, but then taking your hands off the controller can be a problem under those circumstances.

And forget about trying to make a game scarier by monitoring heart rates. Can you say heart attack and lawsuit?
 
I was talking about console/mainstream professional devs.
Plenty of console developers are small indie teams; are they conservative?
Is Thatgamecompany conservative? I may be wrong, but don't their three games, flOw, Flower(*) and Journey, all use the accelerometer?

(*) Currently the highest-rated PS4 game, and it uses the accelerometer. Who'd have thought! ;)
http://au.ign.com/articles/2013/11/13/flower-ps4-review
Best of all, though, is the motion controls: Dual Shock 4 is a huge step ahead of its predecessor. With lighter gestures you can perform more precise movements, making it easier and more enjoyable to turn on a dime and collect more petals.

it took a while for developers to develop mouse-based games. Lemmings was only possible due to the mouse, yet it didn't appear until six years after the Amiga's release, during which time devs were making joystick-based games. It took six years from the possibility of that game control and format before someone made it happen.
It's possible to play that with a joystick/keyboard (like Missile Command); OK, the experience is not as good. I do hear what you're saying though.
My counter-argument is that sometimes it takes time for game ideas to originate even if they're really basic (ahem, Flappy Bird).
Example: the most played game of all time, Tetris.
Absolutely nothing WRT input/hardware was stopping someone making that game in the 1970s, yet it took until 1984 before someone did.

I know nothing of these devices you speak of.
(accelerometer) All phones/tablets have them inside; in my game 'rolly bolly' it's the only method of controlling the game (nowadays these devices often include a gyroscope as well, for more accuracy)
 
I suggested a Wizard Sim where movements and chanting could replicate some sorcerers conjuring
Arx Fatalis used mouse gestures to create spells; it was an absolutely terrible idea even when it worked.

Example = Most played game of all time. Tetris
Absolutely nothing WRT input / hardware was stopping someone making that game in the 1970s, yet it took until 1984 before someone did.
Tetris used a keyboard (at least the version I played)
 
Originally Posted by zupallinere
I suggested a Wizard Sim where movements and chanting could replicate some sorcerers conjuring
That is a good idea that would work, since latency/accuracy ain't really that important.

It could be like a gunfight, Harry Potter style.
You duck attacks and attack with spells, with other spells like Shield for blocking.

Thus, two ideas so far:
#1 boob manipulator
#2 wizard battle

Tetris used a keyboard (at least the version I played)
I believe the first version did also.
My point is, it's always been thus. The ideas don't all suddenly come into being with the creation of a new input method; there are some at the start and the rest during its lifetime.
Kinect was first revealed at E3 on June 1, 2009 (and doubtless worked on for a few years beforehand), so MS have been at it for 5+ years. However you look at it, that's a bloody good crack to have come up with little more than the party/fitness/dancing games that are 90+% of what's appeared. The point I've been droning on about for the last 5 years here is that the limits of the tech (latency/accuracy) limit what you can make for it. I know some ppl think 5 years ain't enough, give them another 5 years and you'll see good stuff then, I promise, but really 5 years is a lifetime in tech.
 
Kinect for Windows (Kinect 1/360) could support these modes (with PrimeSense PSDK 5.0):

Image output mode for the color/grayscale image. Possible values are:
SXGA_15Hz (1): 1280x1024@15Hz
VGA_30Hz (2): 640x480@30Hz
VGA_25Hz (3): 640x480@25Hz
QVGA_25Hz (4): 320x240@25Hz
QVGA_30Hz (5): 320x240@30Hz
QVGA_60Hz (6): 320x240@60Hz
QQVGA_25Hz (7): 160x120@25Hz
QQVGA_30Hz (8): 160x120@30Hz
QQVGA_60Hz (9): 160x120@60Hz

Depth output mode. Possible values are:
SXGA_15Hz (1): 1280x1024@15Hz
VGA_30Hz (2): 640x480@30Hz
VGA_25Hz (3): 640x480@25Hz
QVGA_25Hz (4): 320x240@25Hz
QVGA_30Hz (5): 320x240@30Hz
QVGA_60Hz (6): 320x240@60Hz
QQVGA_25Hz (7): 160x120@25Hz
QQVGA_30Hz (8): 160x120@30Hz
QQVGA_60Hz (9): 160x120@60Hz
http://wiki.ros.org/openni_camera
http://www.mathworks.com/hardware-support/kinect-simulink.html?sec=start

The PS4 camera supports three modes (for now):

1)1280x800p @60fps
2)640x400p @120fps
3)320x192p @240fps

So is it possible for Kinect 2 to support these modes?

1) VGA (1920x1080p @30fps) | ToF (512×424p @30fps)
2) VGA (960x720p @60fps) | ToF (256×212p @60fps)
3) VGA (480x360p @120fps) | ToF (256×212p @120fps)
4) VGA (240x180p @240fps) | ToF (128×106p @240fps)
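As a crude sanity check (arithmetic only, ignoring exposure and illumination limits), none of those proposed modes asks for more depth-pixel throughput than the stock 512x424@30 mode already delivers:

```python
# Depth-pixel throughput (pixels x fps) of each proposed mode, as a
# ratio of the stock 512x424@30 mode. A ratio <= 1.0 means the mode
# doesn't demand a faster readout than the sensor already does.

def throughput(w, h, fps):
    return w * h * fps

stock = throughput(512, 424, 30)  # 6,512,640 depth pixels/s
proposed = [(512, 424, 30), (256, 212, 60), (256, 212, 120), (128, 106, 240)]
for w, h, fps in proposed:
    print(f"{w}x{h}@{fps}: {throughput(w, h, fps) / stock:.2f}x stock rate")
```

So on raw readout alone the modes look plausible; whether the illumination and ToF timing constraints allow the shorter exposures is a separate question.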

According to this article the effective resolution of Kinect 2 depth camera is ~10 times higher than Kinect 1.

The bottom line is that in Kinect1, the depth camera’s nominal resolution is a poor indicator of its effective resolution. Roughly estimating, only around 1 in every 20 pixels has a real depth measurement in typical situations. This is the reason Kinect1 has trouble detecting small objects, such as finger tips pointing directly at the camera. There’s a good chance a small object will fall entirely between light dots, and therefore not contribute anything to the final depth image.
This is Kinect 1's actual resolution:
Slide3.gif

In a time-of-flight depth camera, the depth camera is a real camera (with a single real lens), with every pixel containing a real depth measurement. This means that, while the nominal resolution of Kinect2's depth camera is lower than Kinect1's, its effective resolution is likely much higher, potentially by a factor of ten or so. Time-of-flight depth cameras have their own set of issues, so I'll have to hold off on making an absolute statement until I can test a Kinect2, but I am expecting much more detailed depth images, and if early leaked depth images (see Figure 2) are not doctored, then that's supported by evidence.
And this matches Microsoft's claim that Kinect 2 is 10 times more accurate than Kinect 1. So if they could reach ~128×106p @240fps on the ToF camera, the quality of the depth image should be comparable to Kinect 1, BUT at 240fps.
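Putting the quoted numbers side by side (simple arithmetic on the claims above, not measurements):

```python
# Arithmetic on the quoted claims: Kinect 1's nominal depth grid vs an
# effective ~1-in-20 real measurements, compared with Kinect 2 where
# (per the article) every pixel is a real measurement.

k1_nominal = 640 * 480           # 307,200 nominal depth pixels
k1_effective = k1_nominal // 20  # ~15,360 real measurements
k2_effective = 512 * 424         # 217,088 real measurements

print(k1_effective)                           # 15360
print(k2_effective)                           # 217088
print(round(k2_effective / k1_effective, 1))  # ~14.1x
```

Which lands right in the "factor of ten or so" ballpark the article estimates.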

Also, I think the ~60ms end-to-end processing time on Kinect 2 is a direct result of the amount of processing needed for its 24x6 tracked points (and the other stuff it tracks simultaneously), so by reducing that processing it should be a good companion for AR or VR headsets. For example, you can read this:

Charmed Labs and Carnegie Mellon say:

Hi Joseph,
The lag time (latency) is between 20 and 40 ms depending on the number of
objects being tracked. Less than 10 and the latency is very close to 20
ms. The latency is the amount of time between the event happening (ball
moves) and the updated information appears on the communication port.

thanks for your interest!
--rich
http://www.mtbs3d.com/phpBB/viewtopic.php?f=138&t=18610#p143312

The answer is from Carnegie Mellon, and this is the project's Kickstarter page:
https://www.kickstarter.com/projects/254449872/pixy-cmucam5-a-fast-easy-to-use-vision-sensor

There is a thesis that uses the PS Eye and reports 68ms+ latency for tracking up to 5 controllers (PS Move) in real time:
http://thp.io/2012/thesis/

This is the latest research I found that tested PS Eye latency (without much processing); the average result across all conditions is ~1.5 frame times:
http://codelaboratories.com/forums/viewthread/129/P10/#357

So I think Kinect 2 isn't as bad for VR/AR as some of you think.
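For convenience, the figures quoted above side by side. The PS Eye raw-capture entry converts ~1.5 frame times to milliseconds assuming a 60fps capture mode; everything else is taken straight from the linked sources:

```python
# The latency figures quoted in this post, sorted for comparison.
# The PS Eye raw entry assumes a 60fps capture mode when converting
# ~1.5 frame times to milliseconds.

latencies_ms = {
    "Kinect 2 (end-to-end, full tracking)": 60,
    "Pixy/CMUcam5 (<10 objects)": 20,
    "PS Eye + PS Move tracking (thesis)": 68,
    "PS Eye raw capture (~1.5 frames @ 60fps)": round(1.5 * 1000 / 60, 1),
}
for name, ms in sorted(latencies_ms.items(), key=lambda kv: kv[1]):
    print(f"{ms:>5} ms  {name}")
```

Seen that way, a good chunk of Kinect 2's 60ms is tracking computation sitting on top of a capture latency that's in the same league as the PS Eye's.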
 
I think that, even in your standard genres, Kinect could bring interesting things to the table.

For example, voice and hand gestures used for squad commands in Gears, Halo, COD or BF, similar to last gen when Kinect let you control squad powers in ME3 (which was faster than navigating the squad power menus and avoided the break in the action you'd get by pausing the game to issue commands).
Using Kinect for dragon shouts in Skyrim was also a great idea.

Or imagine being able to pick up and manipulate objects with your hands in the next Elder Scrolls game (e.g. so you can place items in your virtual house exactly how you want, or examine all the intricacies of a weapon or artifact far more naturally). Or in other first-person games, being able to interact with the world naturally (i.e. pulling a lever, pushing an object, solving puzzles, or entering a code with your hand rather than just pressing X to interact).

Or in Star Wars games, being able to use force push/pull by letting go of the controller with your right hand and making a gesture (with the direction and type of movement affecting the force power). What about force lightning, where you can direct the power with your right hand?

Yeah, I am not convinced Kinect had nothing to add to the core gaming experience. Perhaps a Kinect-only (or Kinect-primary) interface would not have been workable for core game genres, but a controller-based interface augmented with Kinect definitely seems to have had some tantalising possibilities (especially with the improved capabilities of Kinect v2).

But we're not going to see any of that now that MS has kiboshed Kinect and relegated it to being just another unnecessary peripheral.
 