The non-standard game interfaces discussion thread (Move, voice, Vitality, etc.)

I'd never even thought of using two nav cons as a split controller, and that goes even further.

Edit: Found an interview at iWaggle3D: http://www.iwaggle3d.com/2012/12/dualplay-officially-unveiled-its.html

All sounds good, but I would use the analogue nature of the triggers to control both holding and shooting. So you'd be squeezing the triggers to hold on to the guns, but pressing them all the way to shoot. This would give you the sensation of having a gun primed to fire.

Or maybe, if you've holstered your gun, R1 is squeezing your hand/grip, while once you've pulled your gun, it's squeezing the trigger?
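To make that concrete, here's a minimal sketch of the squeeze-to-hold / press-all-the-way-to-fire mapping; the normalized trigger range, thresholds, and names are all made up for illustration:

```python
# Toy mapping of one analogue trigger to hand actions (illustrative values).
GRIP_THRESHOLD = 0.25   # squeeze past this to keep hold of the gun
FIRE_THRESHOLD = 0.95   # press (almost) all the way down to shoot

def hand_action(trigger: float, holding_gun: bool) -> str:
    """Map a 0.0..1.0 trigger reading to a hand action."""
    if not holding_gun:
        return "empty_hand"
    if trigger < GRIP_THRESHOLD:
        return "drop_gun"   # relaxed grip: the gun can slip or be tossed
    if trigger >= FIRE_THRESHOLD:
        return "fire"       # fully squeezed: shot goes off
    return "hold"           # primed: gripping the gun, ready to shoot

# Easing from a relaxed grip up to a full squeeze:
for t in (0.1, 0.3, 0.6, 0.97):
    print(f"{t:.2f} -> {hand_action(t, holding_gun=True)}")
```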
 

Depends on how they treat picking up other objects. You don't want a disparity between holding a gun and holding, say, an apple; otherwise you're training people to make mistakes in the least common situation. If you want travel in holding objects (either to squeeze them, or so that a small object requires a tighter grip), then a trigger is a good fit and more one-to-one with the action of the hand.

edit: Sorry, I think I misunderstood. You're making R1 control the trigger finger, and RT control the hand. Yeah, that would probably work. Actually, that's how they had it working. Hmm, I'd have to test it a lot to decide. You should be able to toss a gun if you want; a logical control of hands that's as one-to-one as possible allows all sorts of things.
 
Sony has a patent for the next EyeToy to include depth sensing like the Kinect. Maybe they'll get ports of Kinect games, and ports from Nintendo too; they'd have tech that can do everything their competitors' can, so 3rd-party ports would come their way. Any motion game would be able to target 2 out of 3 consoles. That'd be a much better market than the current situation.
 
MS can easily add a prop to enable everything Sony could do. Plus having a patent doesn't mean having a working, viable product, nor that it doesn't conflict with someone else's patent (despite that being the whole purpose of the patent office :rolleyes:).
 
I'm not sure whether we'll see either; it's just my wishful-thinking-Nintendo-and-Sony-fanboy side ;)

I agree Microsoft can add a Move-like peripheral... but I doubt it would be "easy"; it would be 180 degrees from their position of not having anything in your hands. They'd have to match the Move's current low lag and position/orientation accuracy, and the orientation precision is the hard part. There's no prior art for what the Move does, nor any company licensing anything comparable (as far as I know? Maybe ELF mag-field positioning?). The missing link was the 3D compass that gave a vector always 90 degrees from gravity; it's not easy to find a different way to do that without stepping on Sony's patents. Microsoft would also have to use visible RGB light to avoid interfering with the Kinect's infrared field. Basically they'd need to implement almost everything Sony did, in a different way, with no prior art and no willing licensing company like PrimeSense.
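For what it's worth, the gravity-plus-compass construction being described works out to something like this (a toy sketch with illustrative axes and numbers, not Sony's actual method):

```python
# Build an absolute orientation frame from two measured vectors:
# the accelerometer's gravity axis and the magnetometer's field vector.
# Projecting the field onto the plane perpendicular to gravity gives a
# stable horizontal reference ("a vector always 90 degrees from gravity").
import numpy as np

def orientation_basis(accel, mag):
    g = accel / np.linalg.norm(accel)   # gravity axis (sign convention assumed)
    east = np.cross(mag, g)             # perpendicular to both vectors
    east /= np.linalg.norm(east)
    north = np.cross(g, east)           # horizontal, 90 degrees from gravity
    return np.vstack([north, east, g])  # rows of a rotation matrix

# Example: device level, magnetic field pointing north and dipping downward.
print(orientation_basis(np.array([0.0, 0.0, 9.81]),
                        np.array([0.2, 0.0, 0.4])))
```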

OTOH, Sony just has to add the depth channel they already planned years ago. Their method is different from Kinect/PrimeSense, and I'd guess it could be more precise but less reliable (I think time-of-flight has reliability issues unless the source is very powerful). They made the Move with visible light instead of infrared to avoid a conflict with the IR time-of-flight source. It's not a coincidence; depth was SUPPOSED to be there but was too expensive (they wanted the EyeToy under $40, the tech wasn't ready, and PrimeSense was too expensive).
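Some back-of-the-envelope numbers on why time-of-flight needs such a strong, fast source:

```python
# Depth from round-trip time of light: d = c * t / 2. Millimeter precision
# means resolving picosecond-scale timing (real sensors use phase shifts).
C = 299_792_458.0  # speed of light, m/s

def tof_depth(round_trip_seconds: float) -> float:
    return C * round_trip_seconds / 2.0

print(tof_depth(13.34e-9))   # a ~2 m target: light returns in ~13.3 ns
print(2 * 0.001 / C)         # 1 mm of depth ~= 6.7 picoseconds of timing
```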
 
For me, the Move control system is completely fine as is, and doesn't need a 3D depth sensor because you already get 3D positioning info from the cam and wands.

Hands-free gaming is overrated, and actually has more interesting applications and potential in non-gaming functions like UI navigation, fitness and dance apps, and things like keyhole surgery and controlling remote-controlled robots (which would unfortunately be too expensive to pack in with a console :devilish:).

For me, the more appealing features of Kinect are things like voice control (which Sony could have done with the Move, yet missed a trick by not implementing; puzzling, considering they already had some voice tech in the SingStar games, iirc).

For me, the system in the iWaggle video, with the dual nav + wand setup, provides the complete range of controller button input plus dual analogue for movement, together with accurate and precise 3D motion tracking for both hands via the wands - the very best of both worlds. Whilst yeah, it may look a little cumbersome (remember: it's a hack), the flexibility of control and the potential for truly innovative game design it affords are absolutely beyond reckoning.

If Sony were to do something like that, i.e. the break-apart DS controller (with sensors and glowing nads) as seen in the most recent Sony patent posted on NeoGAF, as the default PS4 controller, I would buy the console in a heartbeat. Even if it ended up being weaker than the next Xbox.
 
For me, the Move control system is completely fine as is, and doesn't need a 3D depth sensor because you already get 3D positioning info from the cam and wands.
I have a few issues with the Move myself that I wish they'd improve in a new version:

  • Occlusion: with archery-type games you often put one wand behind the other or behind your arm; it gets occluded and the computed position starts drifting. Could be solved with two cams.
  • Z resolution: the way they get Z distance is from the size of the ball, and it seems to lack resolution; a depth camera could solve it (see the sketch after this list). It's an extreme case, but playing Tumble you can feel the Z limitations.
  • Size of the ball: it's in your face. If they had a depth camera they could make it a small point; it only needs to be this big so the cam can measure its size and compute the Z distance.
  • Color of the ball: it's way too bright when used with a projector. They should find a way to use infrared, maybe a dual IR camera (900 nm and 1300 nm) so they can make multiple "colors" of IR.
  • Full body tracking: there's a lot of fun stuff to do; dancing games are a million times better on the Kinect than anywhere else.
  • Face tracking: necessary for a "virtual window". Eye tracking and facial-feature tracking are useful for real-time mapping of facial expressions and a precise "look at" in an MMORPG; they're already doing this for EverQuest with a webcam, but better technology would be welcome.
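To put a number on the Z-resolution point above: under a simple pinhole model, Z comes from the apparent ball diameter, so one pixel of diameter change covers more and more depth the farther out you are. The focal length here is assumed and the ~44 mm sphere size is approximate:

```python
# Z from apparent ball size, pinhole model: Z = f_px * D / d_px.
F_PX = 600.0       # focal length in pixels (assumed, not the real PS Eye value)
BALL_D = 0.044     # Move sphere diameter in meters (approx.)

def z_from_diameter(d_px: float) -> float:
    return F_PX * BALL_D / d_px

# One pixel of diameter is ~3 cm of depth up close, ~13 cm farther out.
for d_px in (30.0, 29.0, 15.0, 14.0):
    print(f"{d_px:5.1f} px -> {z_from_diameter(d_px):.3f} m")
```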
 
I have a few issues with the Move myself that I wish they'd improve in a new version:

  • Face tracking: necessary for a "virtual window". Eye tracking and facial-feature tracking are useful for real-time mapping of facial expressions and a precise "look at" in an MMORPG; they're already doing this for EverQuest with a webcam, but better technology would be welcome.
Face tracking on the Kinect uses the RGB camera; the depth camera has too much noise to do good face tracking. As a result it's less than stellar and only works in bright lighting, but there's no reason the Move camera couldn't get the same result.
 
I think if Sony is going to make people buy a camera, they'd better have depth in there as well to stay competitive with MS. Depth not only has all the potential applications MrFox cited, but could have many others we haven't thought of yet. Body tracking is pretty silly, but it's still great for party games and casual/children's games; Sony shouldn't stay away from these products.
 
  • Z resolution: the way they get Z distance is from the size of the ball, and it seems to lack resolution; a depth camera could solve it. It's an extreme case, but playing Tumble you can feel the Z limitations.
  • Size of the ball: it's in your face. If they had a depth camera they could make it a small point; it only needs to be this big so the cam can measure its size and compute the Z distance.
The ball system is far more accurate than Kinect; you'd need a very good depth system to rival it. I've never experienced Z limitations in Tumble. Movement into/out of the screen is perfectly fine and linear, with the only limitation I've felt being awkwardness when the blocks get stuck on other blocks - probably best solved with a 3D TV!
 
Speaking of that, I have a 3D projector, and Tumble was quite an experience; it really feels like you're playing with an imaginary space in front of you :D
So maybe a depth cam wouldn't help... I do agree it's light years ahead of Kinect in Z resolution, but the Move's planar position was amazingly instant and precise, while I felt the Z had some smoothing in it, making it feel less precise because the planar res was so perfect. Maybe I'm imagining things; I haven't played for over a year.

I think it's reasonable to expect they'll use the Exmor R sensor that is already being used in the Action Cam: small, low cost, tiny lens, MUCH more sensitive in darkness, and it can also do 1080p@60 and 720p@120. That's a huge step up from the PS Eye, which was 240p@120fps. The exact same Move system would be even more precise, really sub-millimeter, and the ball could be proportionally smaller thanks to the improved resolution (same number of pixels in diameter, it could be a centimeter ball).
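Rough arithmetic behind that, with all numbers illustrative:

```python
# If tracking precision scales with pixel resolution, tripling the rows
# (240p -> 720p at the same 120 fps) lets the ball shrink to a third of
# its diameter for the same pixel coverage, or keep its size for ~3x
# finer positioning - which is where sub-millimeter becomes plausible.
OLD_ROWS, NEW_ROWS = 240, 720
BALL_D_MM = 44.0                 # current Move sphere, approx.

scale = NEW_ROWS / OLD_ROWS
print(BALL_D_MM / scale)         # same pixel coverage from a ~15 mm ball
```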
 
If they used two cameras, they'd have two registered positions of the balls and their sizes, letting them determine Z depth more accurately from the offset between the two views. They could also then display the video feed from the camera in 3D. And I also think that eventually you'd be able to get 3D body recognition from two normal cameras, much like the human brain does. Not easy though; that's getting into robotics territory.
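The two-camera depth idea is standard stereo disparity; a toy sketch with an assumed baseline and focal length:

```python
# Depth from stereo disparity: Z = f_px * baseline / disparity.
F_PX = 600.0       # focal length in pixels (assumed)
BASELINE = 0.06    # 6 cm between the two lenses (assumed)

def z_from_disparity(disparity_px: float) -> float:
    return F_PX * BASELINE / disparity_px

# The ball's horizontal offset between the two views pins down its depth.
for disp in (40.0, 20.0, 10.0):
    print(f"{disp:4.1f} px disparity -> {z_from_disparity(disp):.2f} m")
```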
 
Last edited by a moderator:
From what I remember, Sorcery definitely does use the navcon. I haven't gotten around to trying the game yet.
 
If they used two cameras, they'd have two registered positions of the balls and their sizes, letting them determine Z depth more accurately from the offset between the two views. They could also then display the video feed from the camera in 3D. And I also think that eventually you'd be able to get 3D body recognition from two normal cameras, much like the human brain does. Not easy though; that's getting into robotics territory.
Also pricey! The solution needs to be consumer-level, especially if supported as standard, which it really needs to be. $50 of camera tech in each box I can agree to - it could explain some of the rumoured cost savings in the hardware.
 
If they used two cameras, they'd have two registered positions of the balls and their sizes, letting them determine Z depth more accurately from the offset between the two views. They could also then display the video feed from the camera in 3D. And I also think that eventually you'd be able to get 3D body recognition from two normal cameras, much like the human brain does. Not easy though; that's getting into robotics territory.

I agree though that stereo cameras would greatly increase the immersion for Augmented Reality applications. It may not even be the most efficient way to improve perception (I think Kinect's infra-red is a smart solution in principle), but it would be the only really good solution for AR in combination with 3D displays.

I really do hope that Sony can afford to do this - it really shouldn't be that expensive. Also, if they have two cameras and put two microphones in there, that should greatly improve their sound-recognition capabilities, which would be a huge advantage for voice-recognition applications (the good stereo separation would help immensely with isolating the correct source to listen to, just as Kinect already has three microphones on the left of the camera and one on the far right for that same purpose, iirc).
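A quick sketch of why wide mic separation helps isolate a voice: the arrival-time difference between two mics gives the bearing of the source, and the wider the spacing, the more timing difference per degree. The spacing here is an assumption:

```python
# Bearing from time-difference-of-arrival (TDOA) for a two-mic array:
# angle = asin(c * tdoa / spacing).
import math

SPEED_OF_SOUND = 343.0   # m/s at room temperature
MIC_SPACING = 0.20       # 20 cm between the mics (assumed)

def bearing_deg(tdoa_seconds: float) -> float:
    """Source angle off the array's broadside axis."""
    return math.degrees(math.asin(SPEED_OF_SOUND * tdoa_seconds / MIC_SPACING))

# A voice ~30 degrees off-axis reaches the near mic ~0.29 ms earlier.
print(bearing_deg(0.29e-3))   # ~30 degrees
```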
 
Yep. You need stereo vision for stereo AR. However, I consider those AR uses in the home so niche as to not be worth pursuing for a console. Best case, you have either EyePet with 3D object interaction, hiding behind table legs, or Kinect style full-body action with you in the game wearing virtual suits of armour etc. For the mainstay, 3D AR really hasn't got much appeal IMO.
 
I can think of a few more applications than that ... But AR wouldn't be the only use - there's also stereo imaging (object scanning, auto-avatar applications), body tracking, sound detection, controllerless interfaces, and so on.

Would be great if they could at least get the Bloggie 3D lenses in there. Surely the price of making the lenses has to be relatively small compared to everything else in these cameras these days? (They're basically portable computers with a display, processors, output ports, etc.)

But I'm still not convinced any of them would dare to include something like this in the standard package rather than making it optional to keep the price of entry low.
 
I can think of a few more applications than that ... But AR wouldn't be the only use - there's also stereo imaging (object scanning, auto-avatar applications), body tracking, sound detection, controllerless interfaces, and so on.
A lot of that can be done with a depth camera plus an optical camera rather than stereo cameras. I also believe it's much harder to get depth info from stereo cameras than from a depth camera. Ultimately I'd go with whatever is the cheapest option that gets depth perception into the camera solution; I wouldn't go with stereoscopic vision specifically to provide better AR.
 