Old Discussion Thread for all 3 motion controllers

I think it would be quicker to just dismiss the hands-on comments about Natal and accept its doom-and-gloom future...
 
It can augment normal controllers very well, unlike the waggle controllers. E.g. you play a team-based shooter with the dual-stick controller, you tell your teammates "regroup over there" over voice comm, and point at the screen for a second with your hand; your teammate sees a flashing light marker in the game world. You can activate star power by throwing the horns or headbanging. You can cast a few predefined spells with several simple gestures, etc.

How would it know where you're pointing? Calibration beforehand?
 

:LOL:
Proven Stevie Wonder accuracy: eventually you will hit something.
 
Did anyone else get the feeling that after Kutaragi left, Kaz decided to shelve development of the PS Eye (along with other creative solutions), which drove Phil to resign, but once Kaz got wind of Microsoft developing a motion "controller", he got nervous and decided to re-route funds to Richard Marks' division? That could be why Marks' presentation this time felt a little nervy and even raw, considering the time gap since the E3 where he and Phil first showed the PS Eye.

While I am more impressed by Microsoft's presentation of their motion "controller", I have more faith in Sony's being the better-performing retail product; there is a feeling inside me that the Microsoft "controller" is a little too "beautified". I have to see both, sure, but I am inching towards the PS Eye. As the posters above have noted, the Sony MC appears to be more suited to the gaming crowd. Wasn't there a rumor of a break-away motion-sensing DS3?
 
Here are some of my first thoughts on the three controllers (for the technological background also see the OP posts of the individual threads for these controllers in the Console Technology forum):

I think that generally, the Microsoft device is more impressive. It apparently generates a 3D field much like those laser tools that can 3D-scan an environment. That allows a great deal of cool things to be done with it. Also, it can apparently 'focus' on certain objects. So its precision capabilities can be used to scan the whole room with a certain number of 3D points, and then the next time you can tell the software to focus on the area where you located hands, for instance, and scan them in more detail. Now scanning the whole field of vision of the camera apparently takes 5 frames, and then focusing on something in particular can take more or less time, I'm sure.

From the demonstrations it's clear that they can take that 3D field and recognise and map objects, like a simplified skeleton of a human body (not sure how detailed they can get in practice straight up). They can focus on a face after some facial recognition and read emotions. And they can really scan in objects you hold up to the camera, which is pretty cool also. They can make a 3D map, and then read the actual RGB light to texture or paint the resulting 3D model - this was demonstrated a little already in the demo with the skateboard. The potential is pretty darn huge and awesome.
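To make the '3D field' idea concrete, here is a minimal sketch of how a depth frame is typically back-projected into a point cloud. The resolution and camera intrinsics below are made-up placeholder values for illustration, not the actual Natal/3DV specs:

```python
import numpy as np

# Hypothetical intrinsics for a 320x240 depth sensor; the real device's
# parameters aren't public, these are just plausible placeholders.
FX, FY = 285.0, 285.0   # focal lengths in pixels
CX, CY = 160.0, 120.0   # principal point

def depth_to_point_cloud(depth_mm: np.ndarray) -> np.ndarray:
    """Back-project an (H, W) depth image in millimetres to an (N, 3) point cloud."""
    h, w = depth_mm.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))   # pixel coordinates
    z = depth_mm.astype(np.float32)
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]                  # drop pixels with no depth reading

# e.g. a flat wall 2 m away filling the whole frame:
cloud = depth_to_point_cloud(np.full((240, 320), 2000))
print(cloud.shape)   # (76800, 3)
```

Once you have a cloud like that, fitting a simplified skeleton or segmenting a face is 'just' software on top of it.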

However, there's going to be a bit of lag - if the device needs 5 frames to scan the whole area, then more time is needed to interpret the resulting data. I don't know how fast they can do that, and whether this analysis can be done on the camera (which would make it a little more expensive) or whether the data is sent to the 360 for analysis there, but this could add some more frames. Let's be optimistic and say that they can keep it within 7 frames. If we assume 60 frames per second, then we're talking about 117ms of lag.
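For reference, the back-of-the-envelope arithmetic behind that figure (assuming, as above, 5 scan frames plus an optimistic 2 frames of analysis at a 60fps tick):

```python
FRAME_RATE = 60.0               # assumed tick rate, frames per second
frames_of_lag = 5 + 2           # 5 frames to scan the field of view + ~2 assumed for analysis
latency_ms = frames_of_lag / FRAME_RATE * 1000
print(f"{latency_ms:.0f} ms")   # ~117 ms
```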

This is probably the main risk for some types of games, but it doesn't have to be a real problem, and people's impressions of the live thing seem positive. A second potential problem is multiplayer: online, the additional lag could become annoying; offline, multiple players make it harder to analyse each player and will definitely decrease the resolution that each player can be scanned with.

The Sony motion controller discussed here is, apart from obtaining the 1:1 motion control, focussed primarily on reducing lag to a minimum. I wouldn't be surprised if response is practically instantaneous and lag-free - think well within a frame. Also, gesture recognition processing is probably something that can run on a fraction of an SPU, maybe even the system reserved or shared one, and isn't going to have any noticeable performance impact on games.

There's an interesting additional advantage to Sony's approach though - it's very likely to be nearly 100% compatible with WiiMotionPlus. While this was a thought I've had earlier, just today I noticed an article on how the developer that provides the library for motion control on the Wii has already released a version for the PS3's controller, which includes the ability to simply 'record' a gesture that can then be recognised by the library and connected to a function. This means that developers who've invested time into developing a game for the Wii and particularly WiiMotePlus can very, very easily also release this for the Playstation (not to mention vice versa of course, but with WiiMotionPlus out now ... ).
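To illustrate the 'record a gesture, then recognise it' workflow described above, here is a toy template-matching sketch. It is not the actual middleware API - the class, the resampling scheme and the nearest-template matching are all assumptions for illustration only:

```python
import numpy as np

class GestureRecognizer:
    """Toy recogniser: record one example trace per gesture, then classify new
    accelerometer/gyro traces by nearest template. Real motion middleware is far
    more robust than this, but the workflow is the same: record, then recognise."""

    def __init__(self, samples_per_gesture: int = 32):
        self.n = samples_per_gesture
        self.templates = {}   # gesture name -> normalised template trace

    def _resample(self, trace: np.ndarray) -> np.ndarray:
        # trace: (T, 3) sensor readings; resample to a fixed length and normalise
        idx = np.linspace(0, len(trace) - 1, self.n)
        fixed = np.stack([np.interp(idx, np.arange(len(trace)), trace[:, k])
                          for k in range(3)], axis=1)
        return fixed / (np.linalg.norm(fixed) + 1e-9)

    def record(self, name: str, trace: np.ndarray) -> None:
        self.templates[name] = self._resample(trace)

    def recognise(self, trace: np.ndarray) -> str:
        query = self._resample(trace)
        return min(self.templates, key=lambda g: np.linalg.norm(self.templates[g] - query))

rec = GestureRecognizer()
rec.record("slash", np.cumsum(np.random.randn(50, 3), axis=0))   # stand-in for a recorded swing
rec.record("thrust", np.cumsum(np.random.randn(50, 3), axis=0))  # stand-in for a recorded stab
print(rec.recognise(np.cumsum(np.random.randn(40, 3), axis=0)))  # picks the closest template
```

The point is just that once recording and matching are abstracted away like this, moving gestures between WiiMotionPlus and the PS3 controller becomes largely a data problem rather than a code problem.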

This is particularly important because none of these technologies are included with the console by default, so the initial market is going to be very, very small. In that respect, Microsoft's camera may be going up against WiiMotionPlus and PS3MC combined. In reality this may not be as strict - I can imagine that there will be applications where the camera will be 'compatible' with what the PS3 and WiiMote are doing - MS could even release empty sticks, or sticks with just rumble, that you hold and that the camera focuses on exclusively - but still it's going to be interesting. I can definitely see the camera as the default control method for the next gen though ... the question is how successful WiiMotionPlus and PS3MC (which by the way should be more precise and less laggy than WiiMotionPlus) will be in the meantime.
 
Good read mate :)

My thoughts are that the Sony solution offers the advantages of both the other solutions in one package.

People say the PSMC is a Wiimote+, but it looks to be better, with fewer light and line-of-sight issues. It's also really accurate. But in addition you have the camera functionality, so we have a bit of the MS solution in there... a bit of 'best of both worlds'.

I need to look at more MS footage (all I've seen so far is Milo) but so far I'm not 'blown away' - I've seen similar stuff (and promises) from the EyeToy, so excuse my negativity - but the footage I've seen looks staged, and nothing that couldn't have been reproduced on the EyeToy - in fact I thought the bit where Claire (IIRC) rippled the water looked very suspect.

Either way, think again: the Sony offers similar features but with added button functionality - how would the MS solution work in an FPS? Would you point your finger? How would you shoot? I don't think it could be any better than the Wiimote (without 1:1) and not enough better than the EyeToy. I find the Wiimote really frustratingly inconsistent - like the EyeToy... too often subtle movements are missed and you end up overcompensating, whereas the PSMC seemed very accurate... and has buttons!

Just my thoughts. I must try to find the other MS demos.
 
I agree with this. Sony's system is more (and probably better) geared towards games. Microsoft's is intended to change the way one interfaces with consumer electronics: browsing through menus, playing videos or playing simple games. It is intended to enable those who are alienated by even the simplest controller to use such machines (and hence become customers).

Yes, that should be the goal for a "no controller" interface. I read somewhere that MS claimed the device is sub-$100 (comfortable enough to be released alongside a $69 game).

Perhaps they don't need an HD camera or true 1-to-1 mapping. Start with a cheap, robust SD solution first, and then upgrade to HD later. I think they may be able to detect certain types of finger movements even with only an SD camera and a high-power IR source.
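As a rough illustration of how crude that could be, here is a toy sketch that just thresholds a bright IR frame and takes the topmost pixel of each blob as a 'fingertip'. The resolution, threshold and blob-labelling approach are all assumptions for illustration, not how Natal actually works:

```python
import numpy as np
from scipy import ndimage

def find_fingertips(ir_frame: np.ndarray, threshold: int = 200, min_area: int = 20):
    """Toy fingertip finder for a single 8-bit SD infrared frame.
    Assumes the IR illuminator makes the hand the brightest thing in view."""
    mask = ir_frame > threshold                  # bright pixels = close to the IR source
    labels, count = ndimage.label(mask)          # group them into connected blobs
    tips = []
    for blob in range(1, count + 1):
        ys, xs = np.nonzero(labels == blob)
        if len(ys) < min_area:
            continue                             # ignore specks of noise
        top = ys.argmin()                        # topmost pixel of the blob ~ fingertip
        tips.append((int(xs[top]), int(ys[top])))
    return tips

frame = np.zeros((240, 320), dtype=np.uint8)
frame[100:180, 150:160] = 255                    # fake a single raised finger
print(find_fingertips(frame))                    # [(150, 100)]
```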

Did anyone else get the feeling that after Kutaragi left, Kaz decided to shelve development of the PS Eye (along with other creative solutions), which drove Phil to resign, but once Kaz got wind of Microsoft developing a motion "controller", he got nervous and decided to re-route funds to Richard Marks' division? That could be why Marks' presentation this time felt a little nervy and even raw, considering the time gap since the E3 where he and Phil first showed the PS Eye.

Nah... Kaz admitted that augmented reality is his personal interest during one of the earlier interviews.

I think that generally, the Microsoft device is more impressive. It apparently generates a 3D field much like those laser tools that can 3D-scan an environment. That allows a great deal of cool things to be done with it. Also, it can apparently 'focus' on certain objects. So its precision capabilities can be used to scan the whole room with a certain number of 3D points, and then the next time you can tell the software to focus on the area where you located hands, for instance, and scan them in more detail. Now scanning the whole field of vision of the camera apparently takes 5 frames, and then focusing on something in particular can take more or less time, I'm sure.

Generating a 3D map of the environment is rad! Would be great if we knew what the resolution is. That would tell us what types of applications are suitable (since we already know some basic parameters for the lag/detection time).

And don't forget, Microsoft sent out SDKs to publishers - this is much further along than some people seem to think.

The 3DV camera was ready to go to market, so it should be usable and robust. It's the Milo concept video (especially the speech recognition part) that gave people the impression that the thing is still far away.
 
My thoughts:

I was initially most blown away by the Microsoft solution; however, I cannot see how applicable it will be in replacing the controller for current games. The more I look at the Sony solution, the more I can't help but find it the more practical option for current gaming. I certainly hope they will include an analog joystick on it, or else it will be more limited in its ability to replace current joysticks.

I wonder if the MS solution is mostly implemented in software; if so, since the PSEye's processing runs on the PS3, could it not be updated to improve accuracy on the software side of things? This 3D map that the MS solution creates, is that not done in software?
 
I got the feeling that the two very nervous engineers (I get that same voice quaver when facing huge crowds of people :)) demoing Sony's motion controller were on the plane to LA last night with their tech demos, having gotten a phone call that morning. Their demos were excellent for showing the capabilities of the system, but were obviously not polished, like they would have been normally (Sony is generally pretty slick on stage). I suspect Sony wasn't originally planning on showing the device now.
I have this feeling too; things don't add up at all. On one side Joker hints that the stuff has been worked on for a while; on the other side the demos shown are completely lacking lustre.
The tech works properly and its precision is unmatched, but the thing still looks like a weird prototype. I wonder if Sony was undecided about showing it before the MS conference.
 
Looking at the video again, the sword is 1:1 in Red Steel 2 before the player slashes with it, and it's not as if the graphics change when you slash at someone. Personally I think it probably just comes down to how the developers wanted the swordplay to work.

If you look at Wii Sports Resort, having the sword totally 1:1 ends up looking quite clumsy at times due to most people not really being very good with swords :) Maybe they just wanted to make sure the sword fighting looked good by giving the player full control of where to hit but using an animation for the actual slash.

I'm sure IGN will ask this question when they do their Red Steel 2 interview so no doubt we'll find out then.

You don't really need 1:1 mapping in sword fighting, especially since blocking by the AI can be very problematic and can require constantly breaking such precise mapping to keep the gameplay fluid.
 
I have this feeling too; things don't add up at all. On one side Joker hints that the stuff has been worked on for a while; on the other side the demos shown are completely lacking lustre.
The tech works properly and its precision is unmatched, but the thing still looks like a weird prototype. I wonder if Sony was undecided about showing it before the MS conference.

As archie4oz pointed out, they were prepared to demo but the confirmation came late. The 3DV camera was (close to) going to market, so they should be very ready. That's why I was surprised to hear a 2010 release date instead of a 2009 one. My sense is MS will push to release it in 2009 with partial functionality.

The Sony demos are actually pretty refined given that they work very well. It's the packaging that s*cks.

Remember the Batarang? They'd need some time to finalize the shape.

You don't really need 1:1 mapping in sword fighting, especially since blocking by the AI can be very problematic and can require constantly breaking such precise mapping to keep the gameplay fluid.

I think they need 1:1 mapping for direct manipulation of virtual objects as if you're there (e.g., writing, fine-grained targeting, constructing virtual objects) so that the brain won't get confused. Large movements like sword swings should be ok.

I'm sure companies like Sony are also looking into ultrasonic 3D imaging. This Christmas is going to be fun.
 
As archie4oz pointed out, they were prepared to demo but the confirmation came late. The 3DV camera was (close to) going to market, so they should be very ready. That's why I was surprised to hear a 2010 release date instead of a 2009 one. My sense is MS will push to release it in 2009 with partial functionality.

My understanding is that Natal (not the final name) will not be released this year, and may be released late next year.

Can't recall where I picked that up from, so salt++ as needed.
 
The tech demo showed 1:1 tracking/rendering but how likely are developers to implement it?

For instance, in the PS demo when he shot that bow "gangster-style", was that gesture recognition or was it true 1:1? Could he have shot it at a 3/4 angle, that is, holding the bow somewhere between 0 and 90 degrees to the horizon?

Sports game developers like EA have invested a lot in motion capture. Would they really drop all those animations of signature styles meant to represent famous athletes in favor of 1:1?

That is, will they let users, most of them with horrible golf swings, make the virtual Tiger Woods swing like an amateur?

Or in their new tennis game, will they let Nadal hit one-handed backhands all the time and have Federer play lefty or two-handed backhands if that's what the user is doing?

Or if a user just can't get the serving form down, will he or she double-fault all the time?

More than likely, even MotionPlus and these other controllers will use gesture recognition to trigger canned animations.

So it seems there will always be a gap between what's possible on paper and what developers trying to sell games do.
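To make that gap concrete, here is a toy sketch of how a sports game might map a detected swing onto a small set of canned mocap clips instead of playing the motion back 1:1. The Swing fields, the thresholds and the clip names are all hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Swing:
    speed: float   # controller speed at impact, metres per second (hypothetical unit)

# Hypothetical canned mocap clips a golf game might ship with, each guarded by
# a crude gesture test; order matters, first match wins.
CANNED_CLIPS = [
    ("putt",  lambda s: s.speed <= 3.0),
    ("chip",  lambda s: s.speed <= 8.0),
    ("drive", lambda s: s.speed > 8.0),
]

def pick_animation(swing: Swing, use_one_to_one: bool) -> str:
    """Gesture-triggered canned animation vs. literal 1:1 playback."""
    if use_one_to_one:
        return "replay the raw controller trajectory on the avatar, warts and all"
    for clip, matches in CANNED_CLIPS:
        if matches(swing):
            return f"play pro mocap clip '{clip}'"
    return "play pro mocap clip 'generic_swing'"

print(pick_animation(Swing(speed=9.5), use_one_to_one=False))  # -> pro mocap clip 'drive'
```

Either branch is cheap to implement; which one ships is a design call rather than a technical one, which is exactly the gap described above.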
 
My thoughts:

I was initially most blown away by the Microsoft solution; however, I cannot see how applicable it will be in replacing the controller for current games. The more I look at the Sony solution, the more I can't help but find it the more practical option for current gaming. I certainly hope they will include an analog joystick on it, or else it will be more limited in its ability to replace current joysticks.
It's a given.
I wonder if the MS solution is mostly implemented in software; if so, since the PSEye's processing runs on the PS3, could it not be updated to improve accuracy on the software side of things? This 3D map that the MS solution creates, is that not done in software?
One of my concerns is the software side of things; whatever the case, I think MS is head and shoulders above Sony in this regard.
For the MS solution we know really little; in fact we've only just learnt today that it is based on 3DV tech. Clearly MS have been working a lot on the software layer, but how Natal accelerates this software is in the dark. It could be some cheap CPU from the embedded space + a custom DSP + some RAM/ROM, but that doesn't say much.
On the PS Eye side of things, educated members here have hinted that the possibilities of the PS Eye are mostly unexplored, which makes me think that MS has a significant lead on the software side of things (the hardware side, outside of the cameras, is not that relevant, as Cell has processing power to spare whereas Xenon could not afford the extra work).
 
My understanding is that Natal (not the final name) will not be released this year, and may be released late next year.

Can't recall where I picked that up from, so salt++ as needed.

*If* this is the case, it may be more because the developers and the logistics need time (e.g., the price needs to drop further). Technically speaking, a partial Natal (just the original 3DV functionality) should work.

The tech demo showed 1:1 tracking/rendering but how likely are developers to implement it?

Depends on whether they can come up with an interesting application that sells. In a similar vein, a precise 3D mouse may work for the PS3 too. It's useful for web browsing and, most importantly, not tiring to use.

Both approaches are useful, but the "no controller" market looks wide open now. The secret has been out of the bag for a few months, so you can bet that several companies are already trying to do this.
 
I think they need 1:1 mapping for direct manipulation of virtual objects as if you're there (e.g., writing, fine-grained targeting, constructing virtual objects) so that the brain won't get confused. Large movements like sword swings should be ok.

I'm sure companies like Sony are also looking into ultrasonic 3D imaging. This Christmas is going to be fun.

Actually, your brain is not easily tricked or confused. Your brain does just fine manipulating virtual objects with a few buttons and a couple of analog sticks. Adding motion detection doesn't suddenly force a more stringent set of requirements when controls become more natural and mimic the actions of the user. Response time is more important than precise mapping.
 