Old Discussion Thread for all 3 motion controllers

This is something I've been questioning for a long time... The Wii has done excellently in producing and selling appealing videogame and non-videogame software to over 50 million households worldwide, with a controller that is arguably a little more complex than a SNES controller.

Most of the casual folks who bought the Wii were previously non-gamers, and they consistently express their gaming preferences through their purchases of pseudo-games like Wii Fit, Just Dance, etc...

However, many are very happy to continue with their Wii gaming system and do not see the Wiimote, balance board, etc. as a barrier to entry into video gaming.

The balance board is probably closest to the demographic that MS is trying to create/exploit with Natal.

It's a controller that requires no learning. You don't have to remember what button does what, you don't need to look in the manual or the button config screen to figure out which button does jump/shoot/select/deselect/options/whatever.

You just start the game and use it. I know a few households where certain members of the family only use Wii Fit (or similar) and rarely ever touch the regular Wii controls.

Now, will that translate over to Natal? And more importantly is there actually a large demographic of people who are disinclined to using a console controller but are interested in console gaming? That's something we won't know until Natal launches.

And that's only MS's target. 3rd parties obviously have a lot more latitude in what they choose to target with the system (for example, Capcom with their rejuvenation of a past game). As such, I wouldn't be surprised to see a game using the standard X360 controller and using Natal to enhance the experience, although I'm fully expecting most of the 14 launch titles to use Natal exclusively, hopefully in innovative game types.

I'm trying my best not to think too much on this until E3. :) But I always find the possibilities fascinating. And I'm still more interested in Natal as it relates to the PC than the console. heh.

Regards,
SB
 
I'm still pissed that face mapping hasn't been done more. It's such an awesome feature. Imagine if in Fallout 3 or Oblivion they used the camera to make your face in the game as your character.

I hope Natal allows for this. Otherwise I'm a sad man.
 
I'm still pissed that face mapping hasn't been done more. It's such an awesome feature. Imagine if in Fallout 3 or Oblivion they used the camera to make your face in the game as your character.

I hope Natal allows for this. Otherwise I'm a sad man.

I'm sure it will be; it's been 'on the cards' since the first EyeToy, hasn't it!?
 
I'm still pissed that face mapping hasn't been done more. It's such an awesome feature. Imagine if in Fallout 3 or Oblivion they used the camera to make your face in the game as your character.

I hope Natal allows for this. Otherwise I'm a sad man.
Actually Rainbow Six has facial mapping.
 
Back on topic please....

A light projector? Is it projecting visible light to light up the room?
I hope that means infrared light; otherwise this is just another camera device that won't work well with front projectors and darkened rooms.

My understanding is that time-of-flight cameras use pulsed light in non-visible parts of the spectrum. The pulses are generally very, very bright, but incredibly short.

Basically...

You fire a very bright pulse of light, at the same time activating the sensor. Each pixel in the depth sensor is effectively a timer, rapidly counting up. When the reflected light pulse arrives back, the energy it deposits in the pixel stops the timer.
The hardware then reads the timers (pixels).
The times involved are literally nanoseconds, meaning the timers have to be running at crazy high speeds (well into the gigahertz range).

When I first heard about them my instinctive reaction was "that can't be possible". But when you actually do the calculations, you realise it is possible. It's easy to forget just how fast processors run nowadays :mrgreen:
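
To put rough numbers on that (my own back-of-envelope arithmetic, not figures from any spec sheet): light covers about 30 cm per nanosecond, so the round trip to a player 3 m away takes only ~20 ns, and a timer tick has to correspond to tens of picoseconds to resolve depth to a centimetre.

```python
# Back-of-envelope time-of-flight numbers (my own arithmetic, not from any spec).
C = 299_792_458.0  # speed of light, m/s

def round_trip_time(distance_m):
    """Time for a light pulse to reach the target and bounce back."""
    return 2.0 * distance_m / C

def depth_resolution(timer_hz):
    """Depth step corresponding to one tick of a timer at the given rate."""
    return C / (2.0 * timer_hz)

print(round_trip_time(3.0) * 1e9)     # ~20 ns round trip for a player 3 m away
print(depth_resolution(1e9) * 100)    # a 1 GHz timer gives ~15 cm depth steps
print(depth_resolution(15e9) * 100)   # ~15 GHz is needed for ~1 cm steps
```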



I know they are old, but those are really fun to read :yes:
 
My understanding is that time-of-flight cameras use pulsed light in non-visible parts of the spectrum. The pulses are generally very, very bright, but incredibly short.

Basically...

You fire a very bright pulse of light, at the same time activating the sensor. Each pixel in the depth sensor is effectively a timer, rapidly counting up. When the reflected light pulse arrives back, the energy it deposits in the pixel stops the timer.
The hardware then reads the timers (pixels).
The times involved are literally nanoseconds, meaning the timers have to be running at crazy high speeds (well into the gigahertz range).

When I first heard about them my instinctive reaction was "that can't be possible". But when you actually do the calculations, you realise it is possible. It's easy to forget just how fast processors run nowadays :mrgreen:

Initially the unit had (or was rumoured to have) processors on board to help out.

WRT the tech, how will this translate (in layman's terms) into working better than the current cameras out there? Does it mean you can play with the lights off?
 
WRT the tech, how will this translate (in layman's terms) into working better than the current cameras out there? Does it mean you can play with the lights off?
Yes, and it should be very robust in any lighting conditions. More importantly (although perhaps that's arguable!) it can perceive depth, which means it has a better source of data to determine the outline of a person. Ignoring the 3D skeleton tracking, the accuracy of the 2D placement could be better than camera based solutions, and require less processing.
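
To illustrate that last point (a toy sketch of my own, not how Natal actually segments people): with a depth map, separating the player from the background is close to a simple threshold, whereas an RGB-only camera has to infer it from colour under whatever lighting it gets.

```python
import numpy as np

def player_mask(depth_mm, near=500, far=2500):
    """Crude player silhouette from a depth image: anything between
    `near` and `far` millimetres is taken to be the player. An RGB-only
    camera would need background subtraction and stable lighting to
    produce a mask this clean."""
    return (depth_mm > near) & (depth_mm < far)

# Fake 4x4 depth frame: 3000 mm = back wall, 1200 mm = person.
depth = np.array([[3000, 3000, 3000, 3000],
                  [3000, 1200, 1200, 3000],
                  [3000, 1200, 1200, 3000],
                  [3000, 1200, 1200, 3000]])
print(player_mask(depth).astype(int))  # the 1s trace the person's outline
```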
 
Fingers crossed that it's accurate with little lag then; there's nothing more frustrating than games without direct tactile input (controllers) being let down by inconsistent results (Wiimote/EyeToy)... It would be great to finally get a robust system in place that the gamer can trust! :)
 
My understanding is that time-of-flight cameras use pulsed light in non-visible parts of the spectrum. The pulses are generally very, very bright, but incredibly short.

Basically...

You fire a very bright pulse of light, at the same time activating the sensor. Each pixel in the depth sensor is effectively a timer, rapidly counting up. When the reflected light pulse arrives back, the energy it deposits in the pixel stops the timer.
The hardware then reads the timers (pixels).
The times involved are literally nanoseconds, meaning the timers have to be running at crazy high speeds (well into the gigahertz range).

When I first heard about them my instinctive reaction was "that can't be possible". But when you actually do the calculations, you realise it is possible. It's easy to forget just how fast processors run nowadays :mrgreen:

If Natal is indeed based on the PrimeSense tech, and I believe it is, it doesn't use time of flight. Instead, an IR pattern is projected into the room, and it's the distortion of this pattern, viewed by a standard CMOS sensor, that gives the depth information once processed. That's my understanding of it anyhow.

From PrimeSense website:
PrimeSense technology for acquiring the depth image is based on Light Coding™. Light Coding works by coding the scene volume with near-IR light. The IR Light Coding is invisible to the human eye. The solution then utilizes a standard off-the-shelf CMOS image sensor to read the coded light back from the scene. PrimeSense’s SoC chip is connected to the CMOS image sensor, and executes a sophisticated parallel computational algorithm to decipher the received light coding and produce a depth image of the scene. The solution is immune to ambient light.
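
For a sense of the general principle behind pattern-based depth, here's a hedged sketch of textbook structured-light triangulation (not PrimeSense's actual, proprietary algorithm). The projector and sensor sit a known baseline apart, so how far a projected dot appears shifted in the image encodes its depth:

```python
# Textbook structured-light triangulation: the general idea behind
# pattern-based depth sensing, NOT PrimeSense's actual algorithm.
# The focal length and baseline below are made-up illustration values.

FOCAL_PX = 580.0    # camera focal length in pixels (assumed)
BASELINE_M = 0.075  # projector-to-sensor baseline in metres (assumed)

def depth_from_disparity(disparity_px):
    """A dot falling on a nearer surface appears shifted further
    (larger disparity) in the sensor image; depth is inversely
    proportional to the shift: Z = f * b / d."""
    return FOCAL_PX * BASELINE_M / disparity_px

print(depth_from_disparity(29.0))   # ~1.5 m
print(depth_from_disparity(14.5))   # half the disparity -> twice the depth (~3 m)
```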
 
I've just pruned back the game genre distribution branch of this thread. Although there's a valid debate about latent interest in the current install bases and future perceptions of the consoles, and the impact on future adoption of the motion controllers, we don't really need to go into the nitty-gritty details of genre presence across the platforms in this motion controller thread. Please keep these branches to a higher level and if ideas need exploring further, start a new thread. Thanks.
 
If Natal is indeed based on the PrimeSense tech, and I believe it is, it doesn't use time of flight. Instead, an IR pattern is projected into the room, and it's the distortion of this pattern, viewed by a standard CMOS sensor, that gives the depth information once processed. That's my understanding of it anyhow.

From PrimeSense website:

Well that's interesting. I had no idea. :mrgreen:
I went hunting for info on depth cameras a while ago and never came across PrimeSense. I had simply assumed Natal would be very similar to the ZCam, given Microsoft's purchase of 3DV.

The PrimeSense reference design certainly looks eerily similar to Natal. If their reference specs are similar too, then it's quite the powerful device. The 1600x1200 RGB camera in particular caught my attention; while it's seriously unlikely it'll manage video at that resolution, it does open up a lot of possibilities for high-quality UGC and the like (especially combined with the depth cam).

Interestingly, their website seems to imply some level of skeletal tracking too ("Gesture API demo", etc.).

[edit]

Yeah, looks likely it is PrimeSense:

“Microsoft chose to purchase 3DV because it had an interesting set of patents on the hardware side and they chose Prime Sense as a supplier because their technology is going to get to the market in one year, whereas 3DV is more like two to three years,” said Mr MacDougall.

....

As for the Arc,
I was wondering (and this may be public knowledge I'm not aware of) how the controller senses depth from the camera. I wouldn't imagine the resolution of the 2D image would be enough to calculate it (especially if it's partially occluded), so I'm wondering if it emits an ultra-high-pitched sound, which the PSEye can then pick up. Using pulses or phase change you can do very accurate depth calculation (assuming no major sound reflection).
Given an initial approximate guess based on the video stream, that could work. I guess.
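
For what it's worth, the arithmetic on that idea (purely hypothetical, since nothing official says the Arc does this): sound travels at roughly 343 m/s in air, so it's the same time-of-flight calculation as the light-based one above, just about a million times slower and therefore far easier to time.

```python
# Hypothetical ultrasonic ranging for the Arc controller: speculation on
# the idea above, not anything Sony has described.
SPEED_OF_SOUND = 343.0  # m/s in air at ~20 degrees C

def one_way_distance(delay_s):
    """Controller emits a chirp; the PSEye microphone hears it `delay_s`
    seconds later. Assumes the console knows the emission time (e.g. it
    scheduled the chirp over the controller's wireless link)."""
    return SPEED_OF_SOUND * delay_s

print(one_way_distance(0.0087))  # an ~8.7 ms delay puts the controller ~3 m away
```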
 
Because of the simple geometric shape you can do a subpixel-accurate measurement of the size of the ball. I haven't done the math, but I doubt they need more than that for the depth calculation.
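
The pinhole-camera relation behind that (generic textbook optics, not Sony's actual code; the focal length and ball diameter below are assumptions): the sphere's apparent diameter shrinks in proportion to its distance, so one subpixel-accurate diameter measurement yields depth directly.

```python
# Depth from the apparent size of the glowing ball: standard pinhole
# projection, not Sony's implementation. Numbers are illustrative guesses.
FOCAL_PX = 600.0     # PSEye focal length in pixels (assumed)
BALL_DIAM_M = 0.044  # physical ball diameter in metres (assumed)

def depth_from_ball(apparent_diam_px):
    """Pinhole model: apparent size scales as 1/distance, so
    Z = f * D / d. Subpixel accuracy in d translates directly
    into accuracy in Z."""
    return FOCAL_PX * BALL_DIAM_M / apparent_diam_px

print(depth_from_ball(13.2))  # ball spanning ~13 px -> ~2 m away
print(depth_from_ball(26.4))  # twice the pixels -> half the distance (~1 m)
```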
 
If Natal is indeed based on the PrimeSense tech, and I believe it is, it doesn't use time of flight. Instead, an IR pattern is projected into the room, and it's the distortion of this pattern, viewed by a standard CMOS sensor, that gives the depth information once processed. That's my understanding of it anyhow.

From PrimeSense website:

I wouldn't say it was based on PrimeSense, but rather similar to it: parallel but independent research tracks. When it came time to implement, MS had to license from, cross-license with, or purchase anyone that held patents relevant to what they were planning to market.

Now, it's possible that once they got to that point and saw how similar PrimeSense was they shortened R&D on the camera by leveraging what PrimeSense had done up to then.

Regardless, the camera is far less interesting to me than the image recognition, tracking, and prediction software they have been developing to drive the whole thing. In a sense, the camera is the easy part and the only challenge there is how to reduce cost.

For example, the ability of Natal to accurately track occluded body parts from the motion and posture of the rest of the body is particularly impressive, even more so the fact that it can do it across a range of body sizes, shapes, and individual differences in subtle body motion. I.e., not everyone's body moves in exactly the same way when putting an arm behind their back, for example.

Now compound that with multiple bodies in the camera's FOV, and the software being able to not only differentiate, but also continue to track multiple occluded body parts belonging to multiple different people.

And that's just one aspect that I find fascinating. :) Camera is completely boring in comparison. :)

Regards,
SB
 
Now compound that with multiple bodies in the camera's FOV, and the software being able to not only differentiate, but also continue to track multiple occluded body parts belonging to multiple different people.
Once you've got a lock, it's not so hard to follow a point, as human motions are speed-limited to a readily predictable scale. What will prove the power of their skeleton evaluation is how the device copes in more complex scenarios than people walking into frame, which is all that's been demo'd so far AFAIK. E.g. if two people entered stage right at the same time, would the skeleton tracking find and lock onto them? I expect not. It'd be an incredible piece of software if it did! Instead it'll need participants to be spatially isolated so that it can find the limbs and get a lock. I expect one could confuse it quite readily if one wanted to.

But once it has a lock, following the limbs should be accurate. E.g. if I start with my left arm fully extended out to my side, Natal will lock onto elbow and wrist. If I then bend my arm forwards so the hand ends on my shoulder, Natal will be able to follow that. However, if I enter frame in that end position, my elbow sticking out but my hand invisible, it should get confused. If it doesn't, that'll be an absolutely stellar piece of software design! Of course the system shouldn't be expected to deal with troublemakers trying to mess it about, aiming instead to lock onto and track normal use, so it wouldn't be wrong for Natal to get bamboozled in such situations.

The point of all this waffle of mine is just to highlight that once you have a lock, tracking where a limb is shouldn't be too hard, even when occluded, as the sketch below illustrates.
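
To make the 'speed-limited, hence predictable' point concrete, here's a toy sketch (entirely my own, nothing like Microsoft's actual skeleton code): once a joint is locked, you can extrapolate its position from recent velocity, clamp each step to a plausible human speed, and coast through brief occlusions.

```python
# Toy joint tracker illustrating why a lock makes occlusion survivable.
# My own sketch under assumed numbers, not Microsoft's tracking method.
MAX_SPEED = 4.0  # m/s: a generous cap on hand speed (assumption)
DT = 1.0 / 30.0  # frame time at 30 fps

def predict(pos, vel):
    """Constant-velocity guess at next frame's position, with the
    step clamped to a plausible human speed."""
    step = tuple(max(-MAX_SPEED * DT, min(MAX_SPEED * DT, v * DT)) for v in vel)
    return tuple(p + s for p, s in zip(pos, step))

def update(pos, vel, measurement, alpha=0.5):
    """Blend the prediction with a fresh measurement when the joint is
    visible; when occluded (measurement is None), coast on the
    prediction, which only works because a lock already exists."""
    guess = predict(pos, vel)
    if measurement is None:
        return guess, vel
    new_pos = tuple(g + alpha * (m - g) for g, m in zip(guess, measurement))
    new_vel = tuple((n - p) / DT for n, p in zip(new_pos, pos))
    return new_pos, new_vel

pos, vel = (0.0, 1.0), (0.5, 0.0)          # wrist moving right at 0.5 m/s
pos, vel = update(pos, vel, (0.018, 1.0))  # visible frame: refine the lock
pos, vel = update(pos, vel, None)          # occluded frame: coast on prediction
print(pos)
```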
 