Old Discussion Thread for all 3 motion controllers

I'm really curious, which one of you here actually got what you were hoping for after hearing zdepth camera rumors?
Me! I thought it would be useless for games I would want to play before the E3 demos and I still think it's useless for games I would want to play now... Tycho says it best:

"Absent anything beyond minigames and puppet shows, I don't know how to contextualize this technology. I can't be certain that it has ramifications of any kind for the games I like to play, the ones my friends like to play, or for the games that built this industry."

I don't see how you could have expected them to do anything more with it than they did ... there is nothing more they can do.

Nice bit of kit though, will be fun to see what people hijack it for.
 
Having not worked in this field or anything like it, you're obviously way more clued up than me and I can only go by what I hear from other sources. In this case, why is "In The Movies" so poor and why do the EyeToy people say it can't be done? Are they missing something?

This was a long time ago, and I didn't work on the motion detection stuff directly (I was doing CV on static images), so I would take this with a pinch of the hazy memory salt :smile:

I probably didn't clarify that I was talking about two very different techniques - background removal (In The Movies, etc.), and the motion calculation technique whose name I forget (used for gait recognition).

As I understand it, background subtraction is difficult because even a supposedly stationary camera can change focus or exposure (especially as the foreground object you are trying to capture moves around). Also the object moving will affect the light falling on various parts of the background - the most obvious example of this is a shadow or a reflection but there are indirect effects too.

So when comparing the pixels of frame n to the reference image, you need a threshold of values to cover these variations. Too low and you incorrectly pick up background as foreground, too high and you incorrectly label foreground as background. There is a sweet spot where these errors are minimal (but usually it's not possible to get this to zero). The real problem is that the sweet spot on frame n may not be the same as the sweet spot on frame n+1, because of the changes in camera and scene characteristics I mentioned above. So using the same threshold across a video means almost every frame is non-optimal. You really need to do it manually to get decent results.
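To put the threshold problem in concrete terms, here's a minimal sketch (assuming OpenCV and a reference frame grabbed at startup; this is an illustration, not the actual EyeToy / In The Movies code):

```python
import cv2
import numpy as np

def foreground_mask(frame, reference, threshold=25):
    """Label a pixel as foreground if it differs from the reference image
    by more than `threshold` grey levels. Too low and background noise leaks
    in; too high and parts of the player get eaten away - and the best value
    drifts from frame to frame as exposure and lighting change."""
    grey_frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    grey_ref = cv2.cvtColor(reference, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(grey_frame, grey_ref)
    return (diff > threshold).astype(np.uint8) * 255
```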

Compare this to greenscreen, where you can remove the background with a great degree of accuracy because you determine from the outset that the background is a single colour which is significantly different to any of the foreground colours, so you can use a wide range in the removal and still not lose any foreground, and no manual intervention is required.
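For contrast, a chroma-key pass is almost trivial; a rough sketch assuming OpenCV and a roughly green backdrop (the HSV bounds here are illustrative, not tuned values):

```python
import cv2
import numpy as np

def remove_green_background(frame_bgr):
    """Mask out anything in a wide green hue/saturation range. The range can
    be generous because no foreground colour is allowed to be that green."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    green = cv2.inRange(hsv, np.array([40, 60, 60]), np.array([85, 255, 255]))
    return cv2.bitwise_and(frame_bgr, frame_bgr, mask=cv2.bitwise_not(green))
```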

The motion detection (gait recognition) system doesn't use a reference image, it just compares frame n with frame n-1, and where there are differences it computes the likeliest direction and speed of motion of that pixel between frames. So you get a kind of "velocity map" if you like, of pixel movements in image space. Of course there are erroneous results, but it is detecting objects covering hundreds of pixels, so averaging techniques can work against this. It also uses edge detection and suchlike IIRC to help clean up the results.
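Something along these lines, using dense optical flow as a stand-in for whatever the original system actually did (a sketch, assuming OpenCV; the parameter values are just the usual starting points, not anything from that system):

```python
import cv2

def velocity_map(prev_grey, curr_grey):
    """Estimate per-pixel motion between consecutive frames and return
    per-pixel speed and direction - the 'velocity map' described above.
    Averaging over an object's hundreds of pixels smooths out bad estimates."""
    flow = cv2.calcOpticalFlowFarneback(prev_grey, curr_grey, None,
                                        0.5,   # pyramid scale
                                        3,     # pyramid levels
                                        15,    # window size
                                        3,     # iterations
                                        5,     # poly_n
                                        1.2,   # poly_sigma
                                        0)     # flags
    speed, direction = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    return speed, direction
```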

To give an idea of the precision here, it had a 3D awareness (so it could pick up people walking towards/away from the camera etc), and could calculate stride characteristics, joint angles and rotations etc with enough precision to identify individuals in the sample sets by the way they walk.

As I said, I didn't work on it directly so this is all info I acquired from colleagues and saw in demos/videos, and has subsequently been rattling around in the back of my brain for years. I had a quick look for a video of the intermediary stage but I couldn't find one, which is a shame because it's illuminating and very interesting.
 
Having not worked in this field or anything like it, you're obviously way more clued up than me and I can only go by what I hear from other sources. In this case, why is "In The Movies" so poor and why do the EyeToy people say it can't be done? Are they missing something?

It could be something as simple as just not having enough processing resources or memory resources for this type of application when taking a simple feed from a web cam/video camera.

Then again, perhaps catisfit could clue us in on what processing hardware was being used for the work he was doing. I'm extremely curious as it's very interesting technology. It's always interesting (to me) to see the progress made and the methods used to attempt to replicate the image recognition that the human brain manages to do.

Additionally the US military has systems available that can track distant objects based on shape/movement. Although in their case, they also have absolutely massive computing resources to assist.

Anyway, until Sony shows that the system is capable of something more than simple party games, or perhaps more intricate games with simple environments, I'm going to continue to conclude that the PS3 just doesn't have enough computing/memory resources to handle anything remotely like Natal in a demanding 3D game with advanced AI.

And yes, I realize Natal hasn't shown anything of that sort either. However, considering that Natal does all the processing internally and then just sends positional tracking data to the X360, it's certainly closer to the low system usage of a common controller than a combination of EyeToy + wands doing 3D positional tracking and image/voice recognition.

Although having the positional tracking data limited to 30 updates per second will obviously limit which applications this is entirely suited for, IMO.

Regards,
SB
 
Anyway, until Sony shows that the system is capable of something more than simple party games, or perhaps more intricate games with simple environments, I'm going to continue to conclude that the PS3 just doesn't have enough computing/memory resources to handle anything remotely like Natal in a demanding 3D game with advanced AI.

Just out of curiosity, but what exactly has Microsoft shown that would suggest Natal is capable of such?

We've seen a person make extremely broad movements for "paint" (none of which were accurate) and we've seen very general "limb" tracking in Ricochet. We've also seen her play with fish (again, nothing accurate required).

Lastly, we saw one character (Milo) interact with a person; however, the experience, as many press members have pointed out, is potentially all puppet strings, as a developer was always connected to the system with a laptop, so Milo wasn't running on his own.

Right now, I'm far more tempted to consider nearly everything about Natal smoke and mirrors, as far as the "promises" go.
 
I showed my cat pictures of all three motion controllers today. She purred the most when shown the Wii Motion Plus. Does this mean it's better? Discuss.

Edit:

Reading through this link that someone posted previously http://en.wikipedia.org/wiki/Time-of-flight_camera, it looks like they most likely use the Z/depth camera to do the object detection. These cameras are very fast, up to 100 fps. They may say it is 30 fps functionally for games because they lose frames to their skeletal algorithms or error correction. Or maybe they are using a slower camera to make it cheaper for consumer electronics. Anyway, it seems that the time-of-flight camera is also an easy way to do object detection, so are we wrong in thinking that the RGB camera will be used for X,Y information? Maybe all of that is done with the time-of-flight camera, and the RGB camera is just there to map color info. It seems to me like they'd be using the time-of-flight camera almost entirely to do the skeletal tracking.
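That would make sense to me: with a per-pixel depth map, picking the player out of the scene is roughly a thresholding job. A hedged sketch (depth_mm is a hypothetical depth image in millimetres; the distances are made-up values, not anything from the Natal spec):

```python
import numpy as np

def player_mask(depth_mm, near=500, far=2500):
    """Keep only pixels between 0.5 m and 2.5 m from the sensor. Walls and
    furniture further back are rejected by distance alone, no colour needed,
    which is exactly what background subtraction on RGB struggles with."""
    return (depth_mm > near) & (depth_mm < far)
```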
 
Just out of curiosity, but what exactly has Microsoft shown that would suggest Natal is capable of such?

We've seen a person make extremely broad movements for "paint" (none of which were accurate) and we've seen very general "limb" tracking in Ricochet. We've also seen her play with fish (again, nothing accurate required).

Lastly, we saw one character (Milo) interact with a person; however, the experience, as many press members have pointed out, is potentially all puppet strings, as a developer was always connected to the system with a laptop, so Milo wasn't running on his own.
I've seen this comment made several times, mostly from you. Where are you guys getting this from?

I think you are getting confused by the terminology. Molyneux mentioned he let a developer 'drive' Milo... that's what Claire was doing. There was no man behind the curtain; all he meant was that Claire knew how Milo behaved and knew how to 'trigger' certain events for him to respond to.

We've seen plenty of demos, including impressions from people who have used it, of a retail 360 running a fast-paced and demanding game with Natal without any performance hit. So yes, we've seen it. But let's please stop with the accusations that Milo was a puppet show; it's absolute nonsense. There are legitimate complaints and concerns about Natal, and you'd be best served by focusing on those.
 
Right now, I'm far more tempted to consider nearly everything about Natal smoke and mirrors, as far as the "promises" go.
For all we know, both the Sony and the MS systems could be smoke and mirrors at this point. No one but the engineer got to touch the Sony system, the same engineer who knows the limitations and would only show what is working. Or it could just have been a well-rehearsed hand/arm-sync performance to a pre-recorded video. At least MS let the Gizmodo/Engadget guys try the old "let's see if it still works if I do this" test.
 
The live demos on stage should be real. Even if it's someone "rigging" it, it's probably to prevent failures. I have done such demos and had to jump in at the last minute behind the scenes to tide things over when an unexpected event happened. I'd still consider it a working prototype though.

The concept videos may not be.
 
I think he took it from the IGN impression.
This, I assume:
http://xbox360.ign.com/articles/991/991348p1.html

I'm also curious to know what level of influence a nearby Lionhead rep had directly over the demo itself -- as attached to the Xbox 360 running Milo, was a laptop with a read-out/ eyes-on view of people using the camera. It was never really clarified what the nearby Lionhead rep was doing to and with the Laptop and the information being sent to it, by the game -- outside of verifying new people entering the camera range and initializing some of the various moments in the demo.

A development app hooked up with a development controller needs a laptop to set things up... that's not surprising, and a far cry from saying it is a puppet master.
 
Please stop kidding yourself. They have demoed nothing, nothing at all that is as impressive as a responsive 2.5D camera deserves. (Note that I don't even care about resolution and accuracy.)

Hell, let's forget impressiveness; how about some obvious implementations like better XMB controls, 3D object scanning, or an actual full-body motion sports game, be it yoga or boxing? Even a jump counter would suffice.

And this is from the fricking biggest software company in the world, with a pretty good R&D department.

Driving controls behind closed doors? Why is it behind closed doors for selected press members? Seriously, why would a working, responsive control scheme require so much secrecy and information filtering? It's not like they are not allowed to write about the system, which is the case for most behind-closed-doors showings.

Total bullshit. Go ahead and be as optimistic as you want about your favorite console's gadget, or find idiotic press articles with no technical proficiency, only official press release language, whatever. But please, please, for God's sake, stop trying to make people believers based on nothing but imagination (aka potential).

You know, people have a right to be a little skeptical and wait for slightly convincing demos or even technical specs.

I'm really curious, which one of you here actually got what you were hoping for after hearing zdepth camera rumors?

The only advantage MS has right now is that their hard tech is not out, thus not fixed, and can evolve in time, unlike, say, Sony, which is stuck with the PSEye.

But what MS demoed is nothing but a joke, in addition to ultra-bullshit "concept videos", mind you.
Yay for imagination!
So, uh, why didn't Sony demo real games then? It should have been child's play to modify some games and show them off with their motion controller.

Again, just because they _didn't_ do something doesn't mean they _couldn't_. Maybe there were rights issues with Burnout that meant they couldn't show it in public. Maybe it crashes sometimes and they didn't want to take the chance during the demo. Maybe they're kinda busy, I dunno... too busy tweaking the technology to spend a lot of time hacking controls into games.

We'll find out more about the tech in time, but being so negative about it simply because they didn't show what you thought they should is as illogical as believing it will supersede controllers altogether.
 
This, I assume:
http://xbox360.ign.com/articles/991/991348p1.html



A development app hooked up with a development controller needs a laptop to set things up... that's not surprising, and a far cry from saying it is a puppet master.

Yeah, in that impression... he basically confirmed that the Milo video is a target concept. The real one is far from it, though it shows potential for progress. The smooth speech recognition and meaningful exchange part is yet to be done, which is the main reason why people called bluff on it.
 
http://www.youtube.com/watch?v=K0-4-FObaRU

A YouTube video of a gesture-controlled TV at CES this year. It provides a "mouse-like" pointer that with some refinement could probably be used to control an FPS-style game. (Not that anyone would want to, but there have been some comments that this would be impossible in the Natal world.)
 
Also, note that it recognised two independent hands. Dual wielding, shooting two enemies simultaneously, FTW. (Sony's tech could also do this; it's not a specific tech observation, it's a "this would be insane to try with today's controller" observation.)
 
I think they may be talking about more subtle points (i.e., something else). I'll leave it up to those folks to clarify.
 
Natal has the potential to really offer something new, allowing dancing games and exercise games to mimic your body for instance, something which I'm sure could really be a hit with the casuals. Sure, some apps could be gimmicky, but if the Wii and Wii board proved anything, it's that people want novel experiences. But why would anyone already downplay this technology when the ideas are just brewing? You know it's almost certain that Sony and Nintendo are working on or plan to work on their own body motion sensing technology. And it'll be great to see their offerings. Why play the console wars here? MS was just first, that's all. This is just the beginning and it's pretty exciting.
 
http://www.youtube.com/watch?v=K0-4-FObaRU

A YouTube video of a gesture-controlled TV at CES this year. It provides a "mouse-like" pointer that with some refinement could probably be used to control an FPS-style game. (Not that anyone would want to, but there have been some comments that this would be impossible in the Natal world.)
It's not the same as the pointing thing, which appears very difficult. The whole hand is moving laterally. This could be calibrated to map position with the screen, just like a mouse's small movements are mapped to screen position. It could also be clever and take the calibration as relative to the shoulder, so the user can change position without screwing up the aiming. It still doesn't demonstrate fine rotation detection as obtained with MEMS devices. The most surprising thing IMO is another 3D cam in operation. I didn't think these were set to go into the wild. This is good for the tech, and potentially encourages the adoption of Cell or similar solutions, as a powerhouse to drive both input tech and image/video processing tech. The alternative is lots of ASICs all doing specific jobs.
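Roughly what I mean by shoulder-relative calibration, as a minimal sketch (camera-space coordinates in metres; the reach value and screen size are arbitrary assumptions, not anything from the demoed systems):

```python
def hand_to_cursor(hand_xy, shoulder_xy, reach=0.6, screen=(1920, 1080)):
    """Map the hand's offset from the shoulder to a screen position, so the
    user can move about the room without the 'pointer' calibration breaking."""
    # Normalise the offset to -1..1 over an assumed comfortable reach.
    nx = max(-1.0, min(1.0, (hand_xy[0] - shoulder_xy[0]) / reach))
    ny = max(-1.0, min(1.0, (hand_xy[1] - shoulder_xy[1]) / reach))
    x = int((nx + 1) / 2 * (screen[0] - 1))
    y = int((1 - (ny + 1) / 2) * (screen[1] - 1))  # camera y-up, screen y-down
    return x, y
```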

Incidentally, via a link from the above video, this 2008 clip from Samsung...

It appears every man and their dog will have this tech!
 
Natal has the potential to really offer something new, allowing dancing games and exercise games to mimic your body for instance, something which I'm sure could really be a hit with the casuals. Sure, some apps could be gimmicky, but if the Wii and Wii board proved anything, it's that people want novel experiences. But why would anyone already downplay this technology when the ideas are just brewing? You know it's almost certain that Sony and Nintendo are working on or plan to work on their own body motion sensing technology. And it'll be great to see their offerings. Why play the console wars here? MS was just first, that's all. This is just the beginning and it's pretty exciting.

Good point, but I think people are digesting all the motion technologies in this thread (what they are and what they're not). E.g., what do you think about the Vitality sensor? Most responses may be negative, but it doesn't necessarily mean we are fighting a console war, or that we want Nintendo to die.

All 3 techs can bring in new audiences.


EDIT: bkilian and Shifty, if I need to perform that much action on a TV, I may prefer a simpler abstraction where I don't have to hang my arms out for so long.
 
So, uh, why didn't Sony demo real games then?
Because real games are not ready?
It should have been child's play to modify some games and show them off with their motion controller.
I doubt that, considering most of their games use a totally different control scheme which would look nothing but stupid with gestures only. They could of course demo Burnout Paradise with tilt sensing, but then BP already works with SixAxis tilting. ;)

It's not that Sony's tech demos were impressive, but at least they showed things that work properly and are useful.

Again, just because they _didn't_ do something doesn't mean they _couldn't_.
It's nice to be a faithful believer, but I personally like to think "just because they _didn't_ do something doesn't mean they _could_". Yep, I know, obvious.
Maybe there were rights issues with Burnout that meant they couldn't show it in public. Maybe it crashes sometimes and they didn't want to take the chance during the demo.
Of course there may be a lot of reasons for not wanting to show things.
Maybe they're kinda busy, I dunno... too busy tweaking the technology to spend a lot of time hacking controls into games.
Who cares about hacking controls into games? I understand a couple of people here like to tilt their heads while playing an FPS with a regular controller, but come on. You have a 2.5D camera, possibly with a built-in skeletal detection system; what would you show?
My original question stands: what were you expecting to see after the rumors, and what did you see?
We'll find out more about the tech in time, but being so negative about it simply because they didn't show what you thought they should is as illogical as believing it will supersede controllers altogether.

It's not illogical at all that full motion body control will supersede controllers altogether.
It may not happen soon, but saying it won't happen is being negative, and short-sighted.

Attempts to defend MS like "they didn't have time to show off what it can do", "there may be licensing issues" (as if they had to demo BP as opposed to two MGS racers), and "hacking games takes time" are, however, quite frankly wishful thinking.

It's the responsibility of MS to convince people, not the responsibility of people to be convinced.
 
http://www.youtube.com/watch?v=K0-4-FObaRU

A YouTube video of a gesture-controlled TV at CES this year. It provides a "mouse-like" pointer that with some refinement could probably be used to control an FPS-style game. (Not that anyone would want to, but there have been some comments that this would be impossible in the Natal world.)

This is what we were saying IS possible on Natal. It is totally different to how the Wii and PSMC do it. In this, the whole hand/arm must move on the x/y axis; what you want from a pointing device is the ability to keep the device at the same point on the x/y axis and use simply the angle and rotation of the device for pointing.
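To make the distinction concrete, a small sketch of angle-based pointing (the kind of thing a gyro/wand pointer does): the device can stay at one spot and only its yaw/pitch matters. The angle ranges here are arbitrary assumptions for illustration.

```python
def angles_to_cursor(yaw_deg, pitch_deg, fov_x=40.0, fov_y=25.0, screen=(1920, 1080)):
    """Map device orientation to a screen position. Compare with mapping the
    hand's x/y position, which is what the CES TV demo above appears to do."""
    nx = max(-1.0, min(1.0, yaw_deg / (fov_x / 2)))    # -1..1 across +/-20 degrees
    ny = max(-1.0, min(1.0, pitch_deg / (fov_y / 2)))
    x = int((nx + 1) / 2 * (screen[0] - 1))
    y = int((1 - (ny + 1) / 2) * (screen[1] - 1))      # pitching up moves the cursor up
    return x, y
```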
 