Old Discussion Thread for all 3 motion controllers

Status
Not open for further replies.
While I was impressed with what Sony did, I think it is too little, too late. You can't just copy the market leader and expect to win; you have to significantly outdo them. Sure, the Sony wand is better than the Wii one, but the problem is it's not so much better that someone would buy a PS3 instead of a Wii. Natal, if it works, would outdo the Wiimote for the casual gamer. Bundle it with an arcade system next year and you have an interesting battle, if Natal works. I would love for Natal to work, but I still have serious doubts; it seems almost too good to be true. I think Natal is the next step in waggle; the question is whether MS can pull it off. Sony went the safe route, MS went for it. It will be fun to see if they pull it off or crash and burn.
 
I got the feeling that the two very nervous engineers (I get that same voice quaver when facing huge crowds of people :)) demoing Sony's motion controller were on the plane to LA last night with their tech demos, having gotten a phone call that morning. Their demos were excellent for showing the capabilities of the system, but were obviously not polished, like they would have been normally (Sony is generally pretty slick on stage). I suspect Sony wasn't originally planning on showing the device now.

:LOL:

I got the same sense!

Hey, they didn't do badly though, and at least it didn't crash!
 
AI: "Hi 22psi, what game would you like to play today?"
me: "I feel like some Halo 5"
AI: "Sure let me load it up for you. Also you have some messages waiting. Would you like to view them now or later?"

Stop Dave ... Please stop Dave.
 
I know all about PS3MC, a bunch of friends have been working on it for ages, but I know next to nothing about Natal aside from the video today. But if it can detect where limbs are, and depth as well, then it should be adaptable to all types of games, including FPSes. In other words, if it can position your hands in 3D space, then you should be able to do just about anything with it. It may be dependent on software support for that, but that's Microsoft's strong suit anyway.

But you still need a button to shoot.

Or do you think the Natal camera is going to be able to detect small finger movements and manipulations like that precisely? It seems to only track the major limb joints.
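If Natal really only tracks major joints, a game could still map "shoot" onto a coarse motion instead of a finger press. A minimal sketch of that idea, assuming a hypothetical stream of wrist-depth samples (the joint name, threshold, and sample data are all invented for illustration, not anything from Natal's actual SDK):

```python
# Hypothetical sketch: inferring a "shoot" action from coarse joint tracking
# (a forward push of the hand toward the camera), since per-finger tracking
# isn't available. Thresholds and sample values are made up.

def detect_push(wrist_z_history, threshold_m=0.15, window=5):
    """Return True if the wrist moved toward the camera by more than
    threshold_m metres over the last `window` depth samples."""
    if len(wrist_z_history) < window:
        return False
    recent = wrist_z_history[-window:]
    # Depth decreases as the hand moves toward the camera.
    return (recent[0] - recent[-1]) > threshold_m

# Example: wrist depth samples (metres from camera), thrust forward at the end
samples = [2.00, 1.99, 1.98, 1.80, 1.75]
print(detect_push(samples))  # True: 2.00 - 1.75 = 0.25 > 0.15
```

The point is just that a trigger doesn't have to be a button if gross limb motion is trackable reliably, though it would obviously feel very different in a fast FPS.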
 
Here's a blog post by one of the guys working on this tech:

http://procrastineering.blogspot.com/2009/06/project-natal.html

Betan: While a "3d mouse" demo might have been a good tech demo, I think they were out to show casual games, and chose the two demos they used in the time they had allotted for that purpose. For instance, they could have showed Burnout, since it's apparently working. Just because it wasn't shown in the very limited demo during the keynote does not imply that doing it is impossible.

I got the feeling that the two very nervous engineers (I get that same voice quaver when facing huge crowds of people :)) demoing Sony's motion controller were on the plane to LA last night with their tech demos, having gotten a phone call that morning. Their demos were excellent for showing the capabilities of the system, but were obviously not polished, like they would have been normally (Sony is generally pretty slick on stage). I suspect Sony wasn't originally planning on showing the device now.

It's cool that they could put up a quick, perfectly working demo suite overnight though. This tells me that the tech is functional and has been working (for the tested use cases) for some time now.

The motion sensing project has been leaked ahead of time. They may also have prepared the team for a possible E3 demo ahead of time.

Foot tracking, I remember this image from the patent.

[Patent image: figure from US 2008/0261693 showing the foot-tracking controller]


I hope they aren't serious with that :)

Ha ha, the controller shape will obviously change. They'd probably try standard concepts like a PS3 controller, a pair of gloves, etc.

Again for tracking "casual" kicks, see the EyeToy video.
 
I think discussion of Milo's AI deserves its own thread.

This would be fantastic. :D

Edit: So back to Natal, I can see foreseeable games utilizing both motion and normal controls. Say, using normal controls in an FPS, then coming up to an area where you have to solve a puzzle or turn a wheel using Natal. Just really cool possibilities.
 
This would be fantastic. :D

Edit: So back to Natal, I can see foreseeable games utilizing both motion and normal controls. Say, using normal controls in an FPS, then coming up to an area where you have to solve a puzzle or turn a wheel using Natal. Just really cool possibilities.

Isn't that exactly the kind of Sixaxis feature people like to shit all over?
 
I got the feeling that the two very nervous engineers (I get that same voice quaver when facing huge crowds of people :)) demoing Sony's motion controller were on the plane to LA last night with their tech demos, having gotten a phone call that morning. Their demos were excellent for showing the capabilities of the system, but were obviously not polished, like they would have been normally (Sony is generally pretty slick on stage). I suspect Sony wasn't originally planning on showing the device now.

Poor presentation skills alone (and yeah, it was raw) don't necessarily indicate a lack of preparedness. Let's wait to hear whether anyone or any announcements were bumped, like what happened with last year's Bungie announcement. Plus, Sony is the 'historically-accurate giant enemy crab' offender, so I'm not sure we can really say they're so 'slick'.

But I mean, Kudo's own presentation wasn't totally without flaws. :D And we certainly know that MS came prepared.
 
To be frank, I am actually scared that devs have to add so many things with dubious benefits to their games. Some can't even keep their released games stable today. If they want to do this, it _may_ be best to build a game from the ground up for motion sensing.

I shudder to think of FF and GT5 extending their schedules to add motion-sensing features. Small, beneficial enhancements are probably good and fruitful. They may want to wait for guinea pigs to try the highly exploratory ones, or let MS and Sony invest their own resources :p.
 
I don't know. They could do some interesting things with facial recognition, voice recognition and motion in NPC interactions and use the controller for the rest of the game.

I guess, but will it be worth diverting that much of your processing budget to trying to detect facial expressions in an otherwise controller-driven game? I suspect the lag in the live demos had a lot to do with how processor-intensive it is to take a cloud of 3D data points and turn it into usable information. And it's not like games couldn't all support voice recognition right now if they cared to.
 
I guess, but will it be worth diverting that much of your processing budget to trying to detect facial expressions in an otherwise controller-driven game? I suspect the lag in the live demos had a lot to do with how processor-intensive it is to take a cloud of 3D data points and turn it into usable information. And it's not like games couldn't all support voice recognition right now if they cared to.

Well, the actual Natal unit has a processor on it, so maybe it won't use much.

I'm thinking of games like RPGs, where usually it cuts to an NPC interaction view where it cuts away from the normal controls and you get dialog options and stuff. Those are the situations where I think they'll use those features.
 
Well, the actual Natal unit has a processor on it, so maybe it won't use much.

It sounds to me like the Natal unit has a processor to combine the data from the two cameras into a cloud of 3D points, but it then passes that data to the host system, along with the video and audio streams, where it's up to the software to decide how to use it. I can't imagine the algorithm that fits the underlying skeletal structure and then tries to detect recognized motions is anything but expensive.
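For a sense of what "generating the cloud of 3D points" involves, here's a minimal back-projection sketch using a standard pinhole camera model. This is a generic illustration, not Natal's actual pipeline: the intrinsics (fx, fy, cx, cy) are invented placeholder values, not real calibration data.

```python
# Generic sketch: back-projecting a depth image into camera-space 3D points
# with a pinhole model. Intrinsics are made-up values for illustration.

def depth_to_points(depth, fx=285.0, fy=285.0, cx=160.0, cy=120.0):
    """depth: 2D list of per-pixel depths in metres (0 = no reading).
    Returns a list of (x, y, z) points in camera space."""
    points = []
    for v, row in enumerate(depth):
        for u, z in enumerate(row):
            if z <= 0:
                continue  # skip pixels with no valid depth reading
            # Standard pinhole back-projection per pixel
            x = (u - cx) * z / fx
            y = (v - cy) * z / fy
            points.append((x, y, z))
    return points

# A tiny 2x2 "depth image" with one valid pixel at 2 metres
cloud = depth_to_points([[0.0, 0.0], [0.0, 2.0]])
print(len(cloud))  # 1
```

Even this trivial version is one multiply-heavy pass over every pixel per frame; fitting a skeleton to the resulting cloud on top of that is plausibly where the real cost (and the demo lag) comes from.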
 
Here's a blog post by one of the guys working on this tech:

http://procrastineering.blogspot.com/2009/06/project-natal.html

Oh yay, Johnny Chung Lee is collaborating eh?

I got the feeling that the two very nervous engineers (I get that same voice quaver when facing huge crowds of people :)) demoing Sony's motion controller were on the plane to LA last night with their tech demos, having gotten a phone call that morning. Their demos were excellent for showing the capabilities of the system, but were obviously not polished, like they would have been normally (Sony is generally pretty slick on stage). I suspect Sony wasn't originally planning on showing the device now.

Richard and Anton have been on stage before (and have posted on the PlayStation blog), so it's not like they don't have any experience with this. Despite the informality of the demo, it was a little too organized to be a last-minute, ad hoc effort. Rather, they were probably told several weeks back to be ready to demo it, on the contingency that MS would show something off.
 
It sounds to me like the Natal unit has a processor to combine the data from the two cameras into a cloud of 3D points, but it then passes that data to the host system, along with the video and audio streams, where it's up to the software to decide how to use it. I can't imagine the algorithm that fits the underlying skeletal structure and then tries to detect recognized motions is anything but expensive.

From the blog that was linked, it seems the software algorithms that define the skeletal structure run on the camera unit itself. There is some kind of software running on a specialized processor in the unit.
 
How much would one of those body-suit setups used for motion-capture cost?

Would something like that be viable, or maybe even better than these approaches, for collecting motion input and then rendering that data in real time?

Presumably, all the reflection points allow key points to be tracked, so that articulation of limbs is detected.

How about a mesh suit that you throw over your clothes, with enough reflection points to track all the limbs?
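Once you can track marker positions like that, recovering limb articulation is mostly geometry. A hedged sketch, assuming hypothetical markers at the shoulder, elbow, and wrist (the positions below are invented sample data): the elbow's bend is just the angle between the two bone vectors meeting at it.

```python
# Sketch: recovering joint articulation from three tracked marker positions.
# The angle at the middle point is computed from the two vectors meeting there.
import math

def joint_angle(a, b, c):
    """Angle at point b (in degrees) formed by 3D points a-b-c."""
    v1 = [a[i] - b[i] for i in range(3)]  # vector from joint to first marker
    v2 = [c[i] - b[i] for i in range(3)]  # vector from joint to second marker
    dot = sum(v1[i] * v2[i] for i in range(3))
    n1 = math.sqrt(sum(x * x for x in v1))
    n2 = math.sqrt(sum(x * x for x in v2))
    return math.degrees(math.acos(dot / (n1 * n2)))

# Invented sample: upper arm pointing up, forearm pointing forward
shoulder, elbow, wrist = (0, 1, 0), (0, 0, 0), (1, 0, 0)
print(round(joint_angle(shoulder, elbow, wrist)))  # 90
```

So a mesh suit with enough reflective points would give you exactly the inputs this kind of calculation needs, frame by frame; the hard part is reliably identifying which blob in the camera image is which marker.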
 

Archie, I have a question for you regarding Sense Me, since Kaz's presentation wasn't clear. Is it a server-based solution or a client-based one? Kaz made it sound like the latter, but I believe most technologies in this area want to be server-based, so that the operator can use the aggregated behaviour/preference data for better recommendations.

Sorry for the off-topic question, but I don't want to create a separate thread for it (since archie may check this thread again for responses :))
 