Old Discussion Thread for all 3 motion controllers

I think they meant throughout the entire demo. The z camera technology has a solid foundation. They should be able to do some cool things with it! The 3D imaging portion would be useful for PlayStation Home :)

The unknowns are its response time and resolution. We'll have to wait patiently for MS to release that info. They should be pretty competitive given that Sony and Nintendo are old hands at this.
 
If the avatar is not a 1-to-1 mapping of the actual data, you may not be able to judge Natal's performance based on the avatar animation. You'll have to look at the raw data (or processed data within Natal itself).

One of the articles mentioned that the avatar didn't jump even though he could see the analysis screens where the system saw him jumping just fine.

So basically the avatar was just not allowed to go vertical. I'm also pretty sure if he had tried to run side to side his avatar wouldn't have left the rough middle of the screen. Basically to keep you in the boundaries of the breakout "game."

Regards,
SB
 
So you make a full body motion "breakout game" but decided against jumping because of boundaries.

Yep that makes total sense.
 
So you make a full body motion "breakout game" but decided against jumping because of boundaries.

Yep that makes total sense.

yea.... everything that is and will ever be possible with this system was shown to us in a 20-minute controlled demonstration of possibilities before SDKs were even in developers' hands.
 
yea.... everything that is and will ever be possible with this system was shown to us in a 20-minute controlled demonstration of possibilities before SDKs were even in developers' hands.

Please stop kidding yourself. They have demoed nothing, nothing at all as impressive as a responsive 2.5D camera deserves. (Note that I don't even care about resolution and accuracy.)

Hell, let's forget impressiveness; how about some obvious implementations like better XMB controls, 3D object scanning or an actual full-body motion sports game, be it yoga or boxing? Even a jump counter would suffice.

And this is from the fricking biggest software company in the world, with a pretty good R&D department.

Driving controls behind closed doors? Why is it behind closed doors for selected press members? Seriously, why would a working, responsive control scheme require so much secrecy and information filtering? It's not like they are not allowed to write about the system, which is the case for most behind-closed-doors showings.

Total bullshit. Go ahead and be as optimistic as you want about your favorite console's gadget, or find idiotic press articles with no technical proficiency, only official press-release language, whatever. But please, please, for God's sake, stop trying to make people believers based on nothing but imagination (aka potential).

You know, people have a right to be a little skeptical and wait for slightly convincing demos or even technical specs.

I'm really curious: which one of you here actually got what you were hoping for after hearing the z-depth camera rumors?

The only advantage MS has right now is that their hardware tech is not out, thus not fixed, and can evolve in time, unlike, say, Sony, which is stuck with the PSEye.

But what MS demoed is nothing but a joke, in addition to ultra-bullshit "concept videos", mind you.
Yay for imagination!
 
One of the articles mentioned that the avatar didn't jump even though he could see the analysis screens where the system saw him jumping just fine.

So basically the avatar was just not allowed to go vertical. I'm also pretty sure if he had tried to run side to side his avatar wouldn't have left the rough middle of the screen. Basically to keep you in the boundaries of the breakout "game."

What I meant was: Regardless of what they do with the avatar, if the dataset has been fudged by the avatar developer, we may not be able to tell Natal's full capability by looking at the avatar. You want to look at the original data instead.

betan said:

Wow, that's too harsh. The z camera and 3D mesh system should be working under lab conditions. It might crash and have limited scope, but the basic principle should be verifiable now.
 
The elephant thing was just detecting his silhouette at that point. Hardly amazing from a technology standpoint.
It was perfect background removal. Stand in front of a webcam with all the gubbins of a room behind and have it cut out your silhouette, then come back and say it's still no big deal! Imaging with the depth dimension is an amazing technology. It's like bluescreening without the blue screen.
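To make concrete how little work a depth map leaves you here, a minimal numpy sketch of silhouette extraction by a simple near/far depth test; the resolution, distances and threshold are made-up illustration values, not anything from the Natal spec.

```python
import numpy as np

def depth_silhouette(depth_mm, near_mm=500, far_mm=2500):
    """Cut a silhouette out of a depth map with a near/far test.

    depth_mm: HxW array of per-pixel distances in millimetres, the
    kind of image a depth camera reports. Returns a boolean mask:
    True where a foreground subject stands, False for the room behind.
    No reference frame, lighting model or motion is needed.
    """
    return (depth_mm >= near_mm) & (depth_mm <= far_mm)

# Toy scene: a wall 3 m away with a "person" 1.5 m from the camera.
scene = np.full((240, 320), 3000)   # background wall at 3000 mm
scene[60:200, 130:190] = 1500       # subject at 1500 mm
mask = depth_silhouette(scene)
print(mask.sum(), "foreground pixels")  # exactly the subject's pixels
```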
 
It was perfect background removal. Stand in front of a webcam with all the gubbins of a room behind and have it cut out your silhouette, then come back and say it's still no big deal! Imaging with the depth dimension is an amazing technology. It's like bluescreening without the blue screen.

I see what you are saying, but you could do the same by imaging the background beforehand, if the background is stationary. For a much more advanced example, you can do the same by calculating motion between frames (I have forgotten the name of this technique). I worked alongside a team many years ago who were doing gait recognition - being able to identify people based on the way they walk. Regardless of background (including moving backgrounds), they were picking up the motion of people as they moved across the frame, with enough accuracy that they were talking about using it for CCTV identification of people in balaclavas.

The point is, even when the subjects stopped, they knew where they were based on the last recorded motion. The elephant could be done using this technique quite easily, and this was a long time ago using a standard camera.

I'm well aware that 2D depth imaging, as Natal does it, has other benefits over this old technique, but I'm just pointing out that the elephant is not one of them. As for greenscreen, well, yes if you're talking about weather presenters, but cinematic greenscreen often has objects with which the actors interact, which would throw up issues for a 2D depth camera. Plus of course there are resolution issues with 2D depth imaging.
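For anyone who hasn't met it, the motion-between-frames technique described above is, in its simplest form, frame differencing. Here is a toy numpy sketch of that idea, assuming greyscale frames; this is the generic textbook method, not the gait-recognition team's actual system.

```python
import numpy as np

def motion_mask(prev_gray, curr_gray, thresh=25):
    """Classic frame differencing: flag pixels that changed between two
    consecutive greyscale frames. Moving subjects show up regardless of
    how cluttered the background is, but a subject who stops disappears,
    which is why such trackers keep the last recorded position."""
    diff = np.abs(curr_gray.astype(np.int16) - prev_gray.astype(np.int16))
    return diff > thresh

# Toy frames: a bright blob shifts four pixels to the right.
f0 = np.zeros((120, 160), dtype=np.uint8)
f0[40:80, 50:70] = 200
f1 = np.roll(f0, 4, axis=1)
print("changed pixels:", motion_mask(f0, f1).sum())
```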
 
I see what you are saying, but you could do the same by imaging the background beforehand, if the background is stationary.
Yes, but that's generally not very effective, as voiced by the developers of EyeToy over "In The Movies", and then evidenced in "In The Movies"!

The point is, even when the subjects stopped, they knew where they were based on the last recorded motion. The elephant could be done using this technique quite easily, and this was a long time ago using a standard camera.
Well, if it can, no one has got it working well in the living-room environment with a webcam. Although this wasn't in the living-room environment either, actually. There was a lot of space behind them. Throw in a settee at closer range and the resolving power becomes reduced, though I expect it would still work very well. It was interesting seeing the 'warp' of the silhouette though.
 
It was perfect background removal. Stand in front of a webcam with all the gubbins of a room behind and have it cut out your silhouette, then come back and say it's still no big deal! Imaging with the depth dimension is an amazing technology. It's like bluescreening without the blue screen.

Was thinking about possible solutions to this with standard 2D cams. Would it not be possible to take an image of the room before anyone is in front of the camera? Then, when someone is there, you could remove the background by comparing the image of the empty room with the current view. In theory, any changes relative to the original empty-room image could be detected, and you would have a silhouette of any changes.

EDIT: Never mind, catisfit already made that point.
 
That's how "In The Movies" tried it, and it failed. The EyeToy people said they thought it would fail. It's not as easy as just subtracting the background. You need a completely stable environment, or some serious processing to determine what is background and what isn't by some form of context.
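A minimal sketch of the empty-room subtraction both posts describe, with a global lighting change added to show why an unstable environment defeats it; all pixel values and the threshold are arbitrary illustration numbers.

```python
import numpy as np

def subtract_background(empty_room, live, thresh=30):
    """Naive background subtraction: flag pixels that differ from a
    reference shot of the empty room. Only works while the scene and
    lighting stay exactly as they were when the reference was taken."""
    diff = np.abs(live.astype(np.int16) - empty_room.astype(np.int16))
    return diff > thresh

rng = np.random.default_rng(0)
empty = rng.integers(60, 180, (120, 160)).astype(np.uint8)  # cluttered room
live = empty.copy()
live[30:90, 60:100] = 220                        # a person steps in
print(subtract_background(empty, live).sum())    # just the person's pixels

dimmed = np.clip(live.astype(np.int16) - 40, 0, 255).astype(np.uint8)
print(subtract_background(empty, dimmed).sum())  # a lamp goes off: almost
                                                 # the whole frame is now
                                                 # misread as "foreground"
```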
 
That's how "In The Movies" tried it, and it failed. The EyeToy people said they thought it would fail. It's not as easy as just subtracting the background. You need a completely stable environment, or some serious processing to determine what is background and what isn't by some form of context.

But not for that reason I think? I think they pointed out that the Vision camera (which has no additional features, but is just a basic webcam) in combination with the light-based processing they were doing wasn't the right way to do things.
 
I think Obonicus is right. I can't see MS wasting resources on fingers themselves, but it looks like they do keep track of hands:
Alex Kipman said:
We graph 48 joints in your body and then those 48 joints are tracked in real-time, at 30 frames per second. So several for your head, shoulders, elbows, hands, feet...
In any case, I feel like the points being tracked for the wrist and the hand are tracked together.
For fingers, I think (if MS dares to implement it) they should rely on 2D shape recognition; the PSP can do this. Even in a case where the Natal hardware has its hands full, it wouldn't be that hard (or costly performance-wise) to have the Xenon process the data retrieved from Natal (say, the part of the screen supposed to contain your hands).
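For illustration only, a hypothetical sketch of what a per-frame skeleton feed and the proposed hand-off to 2D shape recognition might look like; the joint names, the SkeletonFrame type and the hand_screen_region helper are all invented here, since Natal's actual SDK and data format were not public at the time. Only the "48 joints at 30 fps" figure comes from Kipman's quote.

```python
from dataclasses import dataclass
from typing import Dict, Tuple

Joint = Tuple[float, float, float]   # (x, y, z) in camera space, metres

@dataclass
class SkeletonFrame:
    """One frame of a hypothetical 48-joint skeleton feed (~30 fps)."""
    timestamp: float                 # seconds; frames arrive ~1/30 s apart
    joints: Dict[str, Joint]         # e.g. "hand_right" -> position

def hand_screen_region(frame: SkeletonFrame, joint: str = "hand_right",
                       fov_scale: float = 0.5, size: int = 64):
    """Project a tracked hand joint into image coordinates and return a
    crop rectangle around it: the patch you would hand to a cheap 2D
    shape recogniser (e.g. running on Xenon) for finger poses."""
    x, y, z = frame.joints[joint]
    # Crude pinhole projection onto a 640x480 image; fov_scale stands
    # in for real camera intrinsics, which were never published.
    u = int(320 + fov_scale * 320 * x / z)
    v = int(240 - fov_scale * 240 * y / z)
    half = size // 2
    return (u - half, v - half, u + half, v + half)

frame = SkeletonFrame(0.033, {"hand_right": (0.3, 0.1, 1.8)})
print(hand_screen_region(frame))     # crop box around the right hand
```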
 
But not for that reason I think? I think they pointed out that the Vision camera (which has no additional features, but is just a basic webcam) in combination with the light-based processing they were doing wasn't the right way to do things.

How does this contradict Shifty? The simple method they are discussing is image based only.
 
Yes, but that's generally not very effective, as voiced by the developers of EyeToy over "In The Movies", and then evidenced in "In The Movies"!

I know, that's why I gave a more advanced and realistic example.

Well, if it can, no one has got it working well in the living-room environment with a webcam.

This was research for "worthy" applications like catching criminals, but it was using standard cameras and rudimentary CV techniques. I wasn't working on it directly and I haven't done any CV since, so I'm hardly current, but neither is the technology that it was running on. They were deliberately testing it in external environments with trees blowing in the wind, cars moving in the background etc and it held up very well.
 
Are we assuming that if the camera can pick out the silhouettes of someone's fingers, it can therefore track them? Aren't we talking about entirely different techniques?
 
This was research for "worthy" applications like catching criminals, but it was using standard cameras and rudimentary CV techniques. I wasn't working on it directly and I haven't done any CV since, so I'm hardly current, but neither is the technology that it was running on. They were deliberately testing it in external environments with trees blowing in the wind, cars moving in the background etc and it held up very well.
Having not worked in this field or anything like it, you're obviously way more clued up than me, and I can only go by what I hear from other sources. In that case, why is "In The Movies" so poor, and why do the EyeToy people say it can't be done? Are they missing something?
 
Please stop kidding yourself. They have demoed nothing, nothing at all as impressive as a responsive 2.5D camera deserves. (Note that I don't even care about resolution and accuracy.)

Hell, let's forget impressiveness; how about some obvious implementations like better XMB controls, 3D object scanning or an actual full-body motion sports game, be it yoga or boxing? Even a jump counter would suffice.

And this is from the fricking biggest software company in the world, with a pretty good R&D department.

Driving controls behind closed doors? Why is it behind closed doors for selected press members? Seriously, why would a working, responsive control scheme require so much secrecy and information filtering? It's not like they are not allowed to write about the system, which is the case for most behind-closed-doors showings.

Total bullshit. Go ahead and be as optimistic as you want about your favorite console's gadget, or find idiotic press articles with no technical proficiency, only official press-release language, whatever. But please, please, for God's sake, stop trying to make people believers based on nothing but imagination (aka potential).

You know, people have a right to be a little skeptical and wait for slightly convincing demos or even technical specs.

I'm really curious: which one of you here actually got what you were hoping for after hearing the z-depth camera rumors?

The only advantage MS has right now is that their hardware tech is not out, thus not fixed, and can evolve in time, unlike, say, Sony, which is stuck with the PSEye.

But what MS demoed is nothing but a joke, in addition to ultra-bullshit "concept videos", mind you.
Yay for imagination!

Software and hardware get demoed behind closed doors all the time. They do that because the stuff is under development and is not ready to be shown on the show floor.

Demoed nothing at all? Nothing but a joke?

I'm not kidding myself. It won't be absolutely perfect. But by the time it's out, I think it'll be fairly functional. Microsoft wouldn't invest huge money into a potentially expensive peripheral and then actually show it to people if they weren't confident they'd be able to get it working at a reasonable level.

Getting what we want from it is relative. Some people are going to expect way too much. To me, it doesn't really matter. The technology is interesting, so I want to play with it and see how it works. I'm expecting the initial games to be very basic, along the lines of Wii Sports, but down the road maybe we'll get something a bit more challenging. The interface elements are something I'm interested in as well.
 