Sony's New Motion Controller

Another question that surfaced in my mind: it seems Microsoft has been very careful with the patent issue; they acquired 3DV to protect their creation. Do you think Sony would be able to mimic the 3D camera without violating any patents? They will surely try it for the PS4.

One more thing. Nintendo has now lost a lot of its "momentum", or what people like to call the "I am cool" perception. What could they do to catch up again? I can only think of 3D glasses or a walking device that lets you walk virtually without really advancing... (you know ;) )
 
Actually, adoption of sixaxis isn't even that poor. But both the PS Eye and sixaxis had too many limitations.
I disagree. Most of Milo could be done with PSEye. Sixaxis has a truckload of possibility - you and I both listed opportunities when the technology was announced. It certainly isn't the hardware technology holding these devices back.
 
From what I can tell from the patents, the system will be using sound waves to measure distance. I know a PhD student who did his doctorate on exactly this, and while this type of technology is *exceedingly* accurate (this will be where the sub-mm claim came from), it *does* have limitations, and it appears it's only being used for distance (you need triangulation otherwise).
The distance will be calculated from the change in phase of the received signal (given the camera has a mic).
The trouble comes from rapid changes in movement (if you aren't sampling fast enough you can momentarily lose accuracy), but far more significant are echoes. Large flat surfaces (walls!) and sometimes occlusion of the emitter (the remote) can make processing the audio much harder.
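To make the phase idea concrete, here's a minimal sketch (the carrier frequency, sample rate and function names are made up for illustration, and it assumes a clean, echo-free signal) of how the phase shift on a known tone maps to a distance, and why the raw estimate is only unambiguous within one wavelength:

Code:
import numpy as np

SPEED_OF_SOUND = 343.0   # m/s at roughly room temperature
TONE_HZ = 40_000.0       # hypothetical ultrasonic carrier from the remote
SAMPLE_RATE = 192_000.0  # hypothetical mic sample rate

def tone_phase(signal, freq_hz=TONE_HZ, sample_rate=SAMPLE_RATE):
    # Phase of one frequency component, measured by correlating the
    # samples against a complex reference tone (a single-bin DFT).
    t = np.arange(len(signal)) / sample_rate
    return np.angle(np.sum(signal * np.exp(-2j * np.pi * freq_hz * t)))

def distance_from_phase(rx_phase, tx_phase=0.0):
    # A phase difference of 2*pi corresponds to one wavelength of extra
    # path (~8.6 mm at 40 kHz), which is why sub-mm resolution is
    # plausible -- but the result is ambiguous modulo one wavelength,
    # so it has to be tracked continuously or fused with a coarser
    # estimate (e.g. the camera image).
    wavelength = SPEED_OF_SOUND / TONE_HZ
    dphi = (rx_phase - tx_phase) % (2 * np.pi)
    return dphi / (2 * np.pi) * wavelength

An echo adds a second, delayed copy of the same tone, which corrupts that single measured phase -- which is exactly the wall problem mentioned above.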

Of course they cannot rely on distance measurement by sound with a high sample rate all the time, just as they cannot assume that the bulb will be visible all the time when you are swinging your virtual golf club. The controller will contain accelerometers just like the sixaxis, the nunchuk and the Wii-mote (probably six of them), which will make it possible to calculate the position with good precision by dead reckoning over shorter periods.

It's probably the accelerometers that also make it possible to detect small movements, changes in pitch and roll, etc.
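As a rough illustration of the dead-reckoning part (not how Sony actually does it), double-integrating gravity-compensated, world-frame acceleration samples gives position, with error growing roughly as t², which is why it only holds up for short gaps between optical/acoustic fixes:

Code:
import numpy as np

def dead_reckon(accels, dt, v0=(0.0, 0.0, 0.0), p0=(0.0, 0.0, 0.0)):
    # accels: (N, 3) world-frame accelerations in m/s^2 with gravity
    # already removed (orientation from the gyros is needed for that step).
    # dt: sample period in seconds.
    v = np.array(v0, dtype=float)
    p = np.array(p0, dtype=float)
    path = []
    for a in np.asarray(accels, dtype=float):
        v += a * dt          # integrate acceleration -> velocity
        p += v * dt          # integrate velocity -> position
        path.append(p.copy())
    # Any accelerometer bias is integrated twice, so position error grows
    # roughly with t^2 -- fine for a fraction of a second, hopeless for long.
    return np.array(path)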
 
The really worrying part for me is that the E3 '09 showing is the same tech as the YouTube vid from '07. It seems to have made no progress in two years! Why don't they have some actual game WIPs to show? It looks like Sony have been sitting on this R&D tech for years, and I have little hope that they have real products in the works. I'm sure this week there'll be SDKs going out to first-party studios with orders to 'do something amazing with this!', but Sony should be in a far stronger software position. Not only should they be, they should be demonstrating it with more than cheap, ancient demos. With all the time they've had, this E3 they should have had some real games (Resistance 2 patched), head tracking, etc. The fact they haven't can only really mean they haven't been working on it.

Which 07 Youtube vid? Is it the one where the guy is showing off spells with the glowing orb?

Either way, I think showing some tech demos was a good way to illustrate its accuracy over the competition, and it really did well to sell the product.

You'll have to link me to this youtube vid so I can comment further.
 
Having used similar setups in the past, I've found these things are often much less useful than they appear.
The 'building blocks' demo is perhaps the best example. The accuracy was very good, but the body simply isn't used to the lack of physical sensation in a system like this (especially when placing objects). I haven't seen a system like this where there isn't a more accurate conventional alternative. He was right that it's a very tough challenge - but that's not due to accuracy, it's due to how the brain expects things to react in the hand. Unless you get a full haptic system it just isn't that practical.
Basically, it's not so much a limitation of the tech as a limitation of the application.

The big issue is that these sorts of systems tire the body *amazingly* quickly because of the confused state the brain gets into from the lack of feedback, and the result is very tense muscles and jittery reactions. High-accuracy tasks like this are *very* hard to keep doing for even a modest length of time.

I have the same concerns. These challenges affect all motion-sensing solutions though; it's worst for purely hands-free products like the PS Eye and Natal. It also affects all use cases, not just the super-sensitive one Sony demoed. I guess this is why they added a physical controller after the PS Eye product.

As for how the brain expects things to react in hand, I agree too (having played Operation Creature Feature many times). That's why I cup two hands to create a sense of weight when carrying the critters. It's easier to control that way. Otherwise, my hands are too floaty and hard to control. It makes the whole exercise even more tiring though.

I also think that the sensing is only half a product. The other half is feedback and sense of weight. With PS Eye and Natal, you could grab onto something (or use both hands) to simulate that manually (Ha ha, no choice). With an in-hand controller, it will inherently convey the sense of weight, but will rely on the PS3 to deliver some feedback. That's why I think there should be a pressure sensitive button to help the console know how hard the feedback should be (basically simulate grip).
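Just to sketch that grip idea (the inputs and scaling are entirely hypothetical, not anything Sony has described): an analog "grip" reading could simply scale whatever feedback the game wants to deliver.

Code:
def rumble_strength(grip_pressure, impact_force):
    # grip_pressure: hypothetical analog button reading, 0.0 (untouched)
    # to 1.0 (squeezed hard).
    # impact_force: how hard the game says the virtual object was hit,
    # 0.0 to 1.0.
    grip = min(max(grip_pressure, 0.0), 1.0)
    force = min(max(impact_force, 0.0), 1.0)
    return grip * force  # 0.0 = no rumble, 1.0 = full rumble

# A loose grip on a hard hit gives only mild feedback:
# rumble_strength(0.3, 0.9) -> 0.27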

Finally, Sony needs to determine the use case for a virtual object manipulation application. I'm not sure they can find one. The precision may be better suited to core gaming activities like aiming and firing.

The bow and arrow demo was impressive. The directional accuracy was noticeably better here (I'd expect due to using the two points of reference instead of the system's motion trackers). However, when they were close together you could see the system losing accuracy, and when aiming, the system seemed to lose sight of the rear tracker (his aim went all over the place even though he wasn't moving in the video). This is just a physical limitation, but it's still interesting to note.
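As a small illustration of why two reference points help (assuming the system can report both as 3D positions, which is my guess rather than anything confirmed): the aim is just the normalised vector between them, and a short baseline turns millimetre-level position noise into large angular error, which matches the wobble in the demo.

Code:
import numpy as np

def aim_direction(front_point, rear_point, min_baseline_m=0.05):
    # front_point / rear_point: tracked 3D positions in metres
    # (e.g. bow hand and string hand).  Returns a unit aim vector,
    # or None if the points are too close for a stable direction.
    front = np.asarray(front_point, dtype=float)
    rear = np.asarray(rear_point, dtype=float)
    baseline = front - rear
    length = np.linalg.norm(baseline)
    if length < min_baseline_m:   # arbitrary threshold for illustration
        return None
    return baseline / length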

From what I can tell from the patents, the system will be using sound waves to measure distance. I know a PhD student who did his doctorate on exactly this, and while this type of technology is *exceedingly* accurate (this will be where the sub-mm claim came from), it *does* have limitations, and it appears it's only being used for distance (you need triangulation otherwise).
The distance will be calculated from the change in phase of the received signal (given the camera has a mic).
The trouble comes from rapid changes in movement (if you aren't sampling fast enough you can momentarily lose accuracy), but far more significant are echoes. Large flat surfaces (walls!) and sometimes occlusion of the emitter (the remote) can make processing the audio much harder.

This is probably why they haven't released it yet.

My final thought is also rather depressing. The PS3 already has two motion control options (sixaxis and EyeToy), yet these have seen very poor adoption. While this system is clearly more accurate than either, it's also the most expensive and, importantly, the least complementary to existing control methods.
I worry the competition has far better incentives for developers to take advantage of their technology.

Yes! If and when they drop the price, people will pay more attention. They are not after the casual crowd yet, so we won't see any consumerish push from Sony -- which is, as you said, $^#&*^$&#@v depressing. :)

I won't write off MS making a waggle controller too.
 
I guess what I will say is that in order to succeed in this game, you need to take risks. People respond to things that are new and innovative. For me, this system lacks that spark of imagination. As technically proficient as it is, I just don't see the market responding to it.

I think you're framing it in the wrong light, though. Sony (and MS, really, but mostly MS) are bringing this out as a test, because they know who they're going to have to chase next-gen. They certainly don't want to be left out of third-party efforts because they lack the hardware.
 
I don't see how that holds; Microsoft's solution has the least compatibility with the standard (Wii-mote). Natal games will have to be designed very specifically for it, and control schemes for Wii-mote/waggle-targeted games will simply not fit it ... unless they compromise and include a wand.

PS. The PS3's solution is the most expensive? It has the same major component costs as a Wii-mote (a CCD and a wireless controller). Microsoft will definitely have the highest hardware costs.
 
I don't see how that holds; Microsoft's solution has the least compatibility with the standard (Wii-mote). Natal games will have to be designed very specifically for it, and control schemes for Wii-mote/waggle-targeted games will simply not fit it ... unless they compromise and include a wand.

PS. The PS3's solution is the most expensive? It has the same major component costs as a Wii-mote (a CCD and a wireless controller). Microsoft will definitely have the highest hardware costs.


Nintendo's is the cheapest because it doesn't include any camera functions. Eye-toy is part of the Sony solution, so you can't count the cost separately.
 
I disagree. Most of Milo could be done with PSEye. Sixaxis has a truckload of possibility - you and I both listed opportunities when the technology was announced. It certainly isn't the hardware technology holding these devices back.

Well, sorta, assuming you have the PS3 do all the processing that is done within the Natal system itself.

But then if you have that, the PS3 will have fewer resources for an actual application. I think part of the genius of the Natal system isn't only the fact that it can do visual motion tracking, etc., but that it apparently offloads almost all of the work from the console itself.

From what I got from the presentations and articles that have been written, the Natal device itself does all the work with regards to emotion detection, image recognition, depth perception, separating multiple audio sources, separating multiple "bodies," etc.

I find it questionable that a combination of Eye-Toy + Wand would be able to accomplish what Natal does and still leave a similar amount of resources available on the console for applications.

Regards,
SB
 
I don't know how expensive the EyeToy really is though. And from reading the Iwata interview on the genesis of WiiMotionPlus, I think the price of that can be underestimated a little as well!

By the way, someone elsewhere posted this link to a BBC video (which by the way seems to feature Ellie from Eurogamer?) where she gets to test EyePet a little live. I think that if you watch carefully, you can still see why just using the PS Eye is so much more limited than using the Natal camera. It's also definitely not lag-free. Mind you it's still very cool and cute, but if you look at it from a technical point of view and think about the work you have to do as a programmer to get this done right, it's easy to see why this just isn't good enough for more than a few games.

http://news.bbc.co.uk/1/hi/entertainment/8083046.stm

By the way, Patsu, when you play operation creature feature (which is a very cool EyeToy game by the way, one of my favorites after Eye of Judgment, from which EyePet obviously takes a cue card ;) ), you'll also see that the software is mostly just sensitive to movement. So a simple trick is to wiggle your hand or even just your fingers a little, and the creatures will stick to your hand like glue.
 
I don't see how that holds; Microsoft's solution has the least compatibility with the standard (Wii-mote). Natal games will have to be designed very specifically for it, and control schemes for Wii-mote/waggle-targeted games will simply not fit it ... unless they compromise and include a wand.

That's why I think it's mostly Sony. I do think that MS will revamp this solution by next-gen -- I'm not convinced that whatever we see this gen will be their full commitment to motion controls. Again, this is more for Sony than MS. (I'm not saying they'll change up the technology, but they may change the packaging depending on how the wind blows.)

Nintendo's is the cheapest because it doesn't include any camera functions. Eye-toy is part of the Sony solution, so you can't count the cost separately.

I think he's just counting the components, since the retail price is probably nowhere near the actual price of the product.
 
Well, sorta, assuming you have the PS3 do all the processing that is done within the Natal system itself.

But then if you have that, the PS3 will have fewer resources for an actual application. I think part of the genius of the Natal system isn't only the fact that it can do visual motion tracking, etc., but that it apparently offloads almost all of the work from the console itself.
Right, but it can't be that hard if a tiddly little processor is handling it. Assuming it is a tiddly little processor. I don't think processor consumption is going to be such an issue that the PS3 would struggle to keep up with all the jobs asked of it. The PSP is managing some good augmented reality now at 300 MHz ;)

From what I got from the presentations and articles that have been written, the Natal device itself does all the work with regards to emotion detection, image recognition, depth perception, separating multiple audio sources, separating multiple "bodies," etc.
I haven't seen those articles. You'll have to link them! I don't know what the details are.

I find it questionable that a combination of Eye-Toy + Wand would be able to accomplish what Natal does and still leave a similar amount of resources available on the console for applications.
:oops: You think Natal is packing that much CPU power?!
 
I don't know how expensive the EyeToy really is though. And from reading the Iwata interview on the genesis of WiiMotionPlus, I think the price of that can be underestimated a little as well!
Non-recurring costs might be high, but the production costs are almost certainly very small ... and I doubt they developed the sensor entirely in-house; presumably other industries will benefit from better MEMS gyroscopes as well.
 
I think, as silent budha said, MS states that Natal could work with some "pre-Natal" games: the games can be patched to handle the new input, but it's unlikely they'd have the CPU resources to deal with the amount of data generated by Natal.
engadget said:
The first thing to note is that Microsoft is very protective of the actual technology right now, so they weren't letting us film or photograph any of the box itself, though what they had was an extremely rough version of what the device will look like (not at all like the press shot above). It consisted of a small, black box aimed out into the room -- about the size of a Roku Player -- with sensors along the front. It almost looked a bit like a mid-size (between pico and full size) projector.
[a photo of a Roku player: Netflix_Player_by_Roku_1.jpg]


If MS want to make the most of their take, they have to free up the 360 CPU as much as they can.
Natal may end up with quite some work on its hands, all together:
generate a 3D skeleton and track its motion
2D shape recognition
sound processing
I would be surprised if the system packs a tiny/cheap ARM/MIPS tied to some kind of DSP plus a tiny amount of RAM and ROM.
 
:oops: You think Natal is packing that much CPU power?!

Well, as to CPU power, who's to say? But conventional speech recognition still requires a fair bit of CPU power to recognize and track just one person speaking.

Natal goes a bit further in that it can recognize and keep track of multiple speakers, presumably speaking at the same time. And not only that, it adds in background noise removal and presumably echo cancellation so that you can use it instead of a headset for voice chat while gaming. I wish the PC had something similar. Using Teamspeak without a headset is begging for some horrendous background bleed that's enough to make your ears bleed, as well as everyone else's in the Teamspeak channel.

Then additionally it has to record and process two distinctly different video streams and integrate them into something that can represent what it sees in three dimensions. And not only that, the claim is that it can recognize and differentiate between multiple people, as well as track items/people/appendages that may pass out of sight of the camera due to being occluded by the movement of other people or objects.

And on top of all of the above, it also has to be able to recognize facial features (enough to distinguish one person from another) as well as track changes to those facial features (smiling/frowning). And likewise do it for each and every person in the field of view.

I dunno, maybe all of that takes virtually no memory or CPU power. But it at least appears as if it would take a non-trivial amount of processing power and memory. Memory, BTW, that both consoles are fairly short on.

Considering that to do something similar with the EyeToy + wands, the PS3 would have to do all the work as well as use system memory.

Perhaps that's why it hasn't been demoed with a game that could possibly stress the system, i.e. retrofitting it for use in a conventional game. After all, neither a painting application nor EyePet appears to be graphically challenging or memory intensive.

Regards,
SB
 
It would be nice if some sharp minds who know a lot about the embedded space could give us some hints about what kind of embedded products could be used (CPU, DSP).
(I'm thinking of Arun and the other supa high level guys who have a lot of interest in and knowledge of this industry segment.)
What kind of performance can currently be achieved at a reasonable/low price?

I'm sure they could give us an idea of what hardware could fit into, say, MS's plan for a BOM of $100.
The cost of a standard cam is known, the cost of the Z-cam can be known, then you have the mobo, USB, etc.
What could you pack, and what would you want to pack, when designing a system intended for 2D/3D image and sound processing?

Patsu, you're indeed right; if a moderator sees this, it might be more relevant to move these posts to the Natal tech thread.
Sorry for the disturbance, and good night everybody, my bed is calling me :)
 
Ha ha, wrong thread though. :)

I don't know how expensive the EyeToy really is though. And from reading the Iwata interview on the genesis of WiiMotionPlus, I think the price of that can be underestimated a little as well!

By the way, someone elsewhere posted this link to a BBC video (which by the way seems to feature Ellie from Eurogamer?) where she gets to test EyePet a little live. I think that if you watch carefully, you can still see why just using the PS Eye is so much more limited than using the Natal camera. It's also definitely not lag-free. Mind you it's still very cool and cute, but if you look at it from a technical point of view and think about the work you have to do as a programmer to get this done right, it's easy to see why this just isn't good enough for more than a few games.

http://news.bbc.co.uk/1/hi/entertainment/8083046.stm

The PS Eye is definitely not lag free. That's why they want to complement it with a true 1-to-1 mapping pointer.

They should continue to work on it though (e.g., using a transducer to include distance info like Natal does, and working on finger movement and contact).

By the way, Patsu, when you play operation creature feature (which is a very cool EyeToy game by the way, one of my favorites after Eye of Judgment, from which EyePet obviously takes a cue card ;) ), you'll also see that the software is mostly just sensitive to movement. So a simple trick is to wiggle your hand or even just your fingers a little, and the creatures will stick to your hand like glue.

Yes, Operation: Creature Feature detects motion. In some narrow paths, you'll need more fine-grained control to squeeze through (fast!). That's where two hands can come in, er... handy. On a big screen, we play it with multiple adults too. That was a ball.
 