Prophecy2k
Veteran
I just think that this is genius!
I would love to play a game based around such a control scheme.
My god, that's awful.

This video shows the Move steering wheel peripheral in more detail. It actually looks a lot more impressive than initially expected:
http://www.youtube.com/watch?v=e3mtQrcACS0&feature=youtu.be
I'd never even thought of using two nav cons as a split controller, and that goes even further.
Edit: Found an interview at iwaggle http://www.iwaggle3d.com/2012/12/dualplay-officially-unveiled-its.html
All sounds good, but I would use the analogue nature of the triggers to control both holding and shooting. So you'd be squeezing the triggers to hold on to the guns, but pressing them all the way to shoot. This would give you the sensation of having a gun primed to shoot.
Or maybe, if you've holstered your gun, R1 is squeezing your hand/grip, while once you've pulled your gun it's squeezing the trigger?
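Something like this, as a minimal sketch of the squeeze-to-grip, full-press-to-fire idea - the thresholds, the hysteresis slack and the trigger_state helper are all my own illustrative choices, not anything from a real SDK:

```python
# Map a 0.0-1.0 analogue trigger reading to released/gripping/firing.
# Thresholds are illustrative; a real game would tune them by feel.
GRIP_THRESHOLD = 0.25   # squeeze past this and you're holding the gun
FIRE_THRESHOLD = 0.95   # press (almost) all the way and the gun fires
RELEASE_SLACK  = 0.05   # hysteresis so the state doesn't flicker at the edges

def trigger_state(value, previous_state):
    """Return 'released', 'gripping' or 'firing' for this frame's reading."""
    if previous_state == "firing":
        # Keep firing until the player clearly backs off the full press.
        return "firing" if value > FIRE_THRESHOLD - RELEASE_SLACK else "gripping"
    if previous_state == "gripping":
        if value > FIRE_THRESHOLD:
            return "firing"
        return "gripping" if value > GRIP_THRESHOLD - RELEASE_SLACK else "released"
    return "gripping" if value > GRIP_THRESHOLD else "released"

# Squeeze in, fire, ease off, let go:
state = "released"
for reading in (0.0, 0.3, 0.97, 0.92, 0.5, 0.1):
    state = trigger_state(reading, state)
    print(f"{reading:.2f} -> {state}")
```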
I'm not sure whether we'll see either; it's just my wishful-thinking-Nintendo-and-Sony-fanboy side.

MS can easily add a prop to enable everything Sony could do. Plus, having a patent doesn't mean having a working, viable product, nor that it doesn't conflict with someone else's patent (despite that being the whole purpose of the patent office).
Myself, I have a few issues with the Move that I wish they'd improve in a new version:
- Face tracking: necessary for a "virtual window". Eye tracking and facial-feature tracking are useful for real-time mapping of facial expressions and a precise "look at" in an MMORPG. They're already doing this for Everquest with a webcam, but a better technology would be welcome.
- Z resolution: the way they get Z distance is from the size of the ball, and it seems to lack resolution; a depth camera could solve it. It's an extreme case, but playing Tumble you can feel the Z limitations (there's a rough sketch of the maths after this post).
- Size of the ball: it's in your face. With a depth camera they could shrink it to a small point; it's only this big so the cam can measure its size and compute the Z distance.
Speaking of that, I have a 3D projector, and Tumble was quite an experience - it really feels like you're playing with an imaginary space in front of you.
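To put that Z-resolution complaint in numbers, here's a back-of-the-envelope pinhole-camera sketch. The focal length is an assumed figure and the ball diameter is only roughly right, so treat the output as illustrative rather than Sony's actual calibration:

```python
# Depth from the glowing ball's apparent size, pinhole camera model:
#   depth = focal_length_px * real_diameter / diameter_in_pixels
BALL_DIAMETER_M = 0.044   # PS Move ball is roughly 4.4 cm across (assumed)
FOCAL_LENGTH_PX = 800.0   # assumed focal length of the camera, in pixels

def depth_from_ball(diameter_px):
    """Estimate the distance to the ball from its on-screen diameter."""
    return FOCAL_LENGTH_PX * BALL_DIAMETER_M / diameter_px

# The resolution problem falls out of the maths: at ~2 m the ball is only
# ~18 px wide, so a one-pixel error in the measured diameter shifts the
# depth estimate by ~12 cm, while up close the same pixel is worth ~7 mm.
for px in (70.4, 17.6, 16.6):
    print(f"{px:5.1f} px -> {depth_from_ball(px):.3f} m")
```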
For me, the Move control system is completely fine as is, and doesn't need a 3D depth sensor, because you already get 3D positioning info with the cam and wands.

Face tracking on the Kinect uses the RGB camera. The depth camera has too much noise to do good face tracking. As a result it's less than stellar and only works in high-light scenarios, but there's no reason the Move camera couldn't get the same result.

The ball system is far more accurate than Kinect. You'd need a very good depth system to rival it. I've never experienced Z limitations in Tumble. Movement into/out of the screen is perfectly fine and linear, with the only limitation I've felt being awkwardness when the blocks get stuck on other blocks - probably best solved with a 3D TV!
Also pricey! The solution needs to be consumer-level, especially if supported as standard, which it really needs to be. $50 of camera tech in each box I can agree to - it could explain some of the cost savings in the hardware that are rumoured.
If they used two cameras, then they'd have two registered positions of the balls and their sizes, and could determine z-depth more accurately from their relative distance from each other. They could also then make the display of the camera's video feed 3D. And I also think that eventually you'd be able to get 3D body recognition from two normal cameras, much like the human brain does. Not easy though - that's getting into robotics territory.
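For what it's worth, the two-camera version of the maths is standard stereo triangulation. The baseline and focal length below are made-up figures, just to show how the between-camera disparity converts to depth and why it beats the single-camera ball-size method:

```python
# Depth from stereo disparity: depth = focal_length_px * baseline / disparity
BASELINE_M      = 0.10    # assumed 10 cm between the two camera lenses
FOCAL_LENGTH_PX = 800.0   # assumed focal length, in pixels

def depth_from_disparity(x_left_px, x_right_px):
    """Depth from the ball centre's x position in each camera's image."""
    disparity = x_left_px - x_right_px   # in pixels; grows as the ball nears
    if disparity <= 0:
        raise ValueError("the ball must be matched in both images")
    return FOCAL_LENGTH_PX * BASELINE_M / disparity

# At 2 m the disparity is 40 px, so a one-pixel matching error only moves
# the estimate by ~5 cm - versus ~12 cm per pixel for ball size at 2 m.
print(f"{depth_from_disparity(360.0, 320.0):.2f} m")   # -> 2.00 m
```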
A lot of that can be done with a depth camera + optical camera rather than stereo cameras. I believe it's much harder to get depth info from stereo cameras than from a depth camera, too. Ultimately I'd go with whatever is the cheapest option to get depth perception in the camera solution. I wouldn't go with stereoscopic vision specifically to provide better AR.

I could think of a few more applications than that ... But AR wouldn't be the only application - it would also be for stereo imaging (object scanning, auto-avatar applications), body tracking, sound detection, controllerless interfaces, and so on.