3D tv with sensor

Creedisking

Newcomer
I'm new to the forums, but this looked like the place to post my question. My goal is to use a camera on my 3D TV or monitor to make a projection-like touchscreen in the air, similar to the hologram screens they use in the movies. I realize it'll look nothing close to that, but it gives you something to relate my idea to.

The problem is I have no idea where to start on the programming, nor equipment. Is there anyone who can shed some light on this for me, or at least point me in the right direction?
 
The hard part of that project would be the object tracking.

A library like OpenCV would save you a lot of time with face detection, image segmentation, and the other things you need to work out where the user is pointing and looking:
http://opencv.willowgarage.com/wiki/
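OpenCV would do all of this for you properly, but just to give a feel for what "segmentation" means here, a toy sketch in plain Python (hypothetical 5x5 frames, made-up pixel values): subtract a background frame, threshold the difference, and take the centroid of the changed pixels as a crude "where is the hand" estimate.

```python
# Toy illustration of background-subtraction segmentation: compare the
# current frame against a stored background frame and average the
# coordinates of the pixels that changed.

def segment_centroid(background, frame, threshold=30):
    """Return the (row, col) centroid of pixels that differ from the
    background by more than threshold, or None if nothing changed."""
    rows, cols, count = 0, 0, 0
    for r, (bg_row, fr_row) in enumerate(zip(background, frame)):
        for c, (bg, fr) in enumerate(zip(bg_row, fr_row)):
            if abs(fr - bg) > threshold:
                rows += r
                cols += c
                count += 1
    if count == 0:
        return None
    return (rows / count, cols / count)

# 5x5 "empty white room" (uniform grey), then a bright hand appears.
background = [[100] * 5 for _ in range(5)]
frame = [row[:] for row in background]
frame[2][2] = 200
frame[2][3] = 200

print(segment_centroid(background, frame))  # → (2.0, 2.5)
```

A real pipeline would work on full camera frames and deal with lighting changes and noise, which is exactly the drudgery OpenCV takes off your hands.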

Hardware-wise, you might find that something like the Xbox Kinect camera is more useful than a straight-up webcam or stereo camera feed, as it gives you depth at a reasonably good resolution and a low cost... (the Kinect can be used on PCs as well as the Xbox)
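To show why the depth data matters: with a per-pixel depth map (assumed here to be in millimetres, as some Kinect drivers report), "find the hand reaching toward the screen" roughly becomes "find the pixels closer than everything else". A minimal sketch with a hypothetical 3x4 depth frame:

```python
# Sketch of depth-based segmentation: pick out the region nearest the
# camera, within a margin of the closest measured depth. A value of 0
# stands for a missing reading (a common kind of sensor dropout).

def nearest_region(depth_frame, margin_mm=150):
    """Return (row, col) pixels whose depth is within margin_mm of the
    closest valid depth in the frame."""
    valid = [d for row in depth_frame for d in row if d > 0]
    if not valid:
        return []
    nearest = min(valid)
    return [(r, c)
            for r, row in enumerate(depth_frame)
            for c, d in enumerate(row)
            if 0 < d <= nearest + margin_mm]

# Hypothetical frame: wall at ~2000 mm, a hand at ~600 mm.
depth = [
    [2000, 2000, 2000, 2000],
    [2000,  600,  650, 2000],
    [2000,    0, 2000, 2000],   # 0 = no reading
]
print(nearest_region(depth))  # → [(1, 1), (1, 2)]
```

No colour or texture analysis needed, which is a big part of why depth cameras made this kind of interface so much more approachable than a plain webcam.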
 
Thank you, but I'm still not sure about the coding itself: the languages required, the general guidelines for this. Is there a guide or tutorial? Or is it a bit less commercialized right now?
 
Probably need to know your programming experience...

If you can already program in some language and are comfortable reading code, then that's one thing. You could use a language like C, which is very commonly used for applications like this as it's fast and well suited to low-level hardware access. Microsoft's Kinect comes with (afaik) an SDK which will let you use the camera output without needing to worry about many of the low-level details of machine vision. There's lots of example code around, and samples from hobbyists that you could modify for your task. The caveat is that getting it largely working in cases where the user always faces the camera and doesn't do anything silly like cross their arms should be doable in a sensible time; making it robust to multiple users and occlusions, with decent precision, while being fast enough to avoid annoying lag, is a job for a really dedicated coder or a team of decent programmers.

If you're comfortable with what you want and with the technical ideas but don't have the programming skills... you may be better off picking up some general skills but paying someone else to code it for you, through something like rent-a-coder. You can then focus on the high-level decisions and direct the coder to exactly what you want. Just think of the programmer as staff filling in a gap in your skill set.

If you've no coding skills at all and aren't sure exactly what you want, you're taking on a huge project. It's nowhere near as easy as you might imagine to do this, even today. Having a camera and a TV isn't the hard part of building a camera interface for a TV. You may get lucky and find someone who's already done what you require that you can use as-is. Otherwise you're really looking at skilling up - many computer science graduates pop out the other end of university with enough gaps in their knowledge that your project wouldn't be trivial for them.

Without knowing which category you fall into it's hard to know what resources you require.
 
I have some programming experience (not with C). I'm pretty good in Perl, and above average in Pascal, VB, Java, and basic web coding (HTML, XHTML, DHTML, CSS, JavaScript). I'm happy to brush up on C and get to the point where I can work on this. It's for personal use right now, in an empty white room, so that should make it a bit easier than making it usable for everyone.
 
So basically you want to do something like this but with a stereoscopic 3D display?

The source code for their solution is available. Adapting it for a stereo display shouldn't be too difficult... adding hand tracking so you can interact with it is slightly harder... but there is code out there for that too.

PS. the effect will only be accurate for a single viewer, and will require calibration for the viewer's interpupillary distance.
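The geometry behind that PS can be sketched with similar triangles (hypothetical numbers below): to float a virtual point z in front of the screen for a viewer at distance D with interpupillary distance IPD, the left- and right-eye images must be separated on screen by s = IPD * z / (D - z), crossed. Since s depends on IPD, a viewer with different eye spacing sees the object at a different depth, hence the calibration.

```python
# Crossed-disparity calculation for an out-of-screen point, by similar
# triangles. All distances in millimetres; all values hypothetical.

def crossed_disparity(ipd_mm, viewer_dist_mm, out_of_screen_mm):
    """On-screen separation of the left/right images needed to float a
    point out_of_screen_mm in front of the display for a viewer sitting
    viewer_dist_mm from the screen."""
    if out_of_screen_mm >= viewer_dist_mm:
        raise ValueError("the point cannot sit behind the viewer's eyes")
    return ipd_mm * out_of_screen_mm / (viewer_dist_mm - out_of_screen_mm)

# ~63 mm IPD, viewer 600 mm (~2 ft) away, object floating 75 mm (~3 in) out:
print(round(crossed_disparity(63, 600, 75), 1))  # → 9.0
```

Note how the disparity blows up as the object approaches the viewer, which also previews the comfort problem raised further down the thread.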
 
One problem is that to achieve a holographic interface that your hands/fingers can interact with, you're going to have to float stuff out of the screen. You're going to be quite limited as to how far out of the screen you can go with this stuff before it becomes very uncomfortable for viewing. The brain fights out-of-screen 3D effects tenaciously.

There are methods to ease people into it, but I find that just a few inches out is comfortable, and no more. That's sitting about 2 feet away from the monitor.

This means that you are asking people to almost touch the screen every time they do some operation, and it might also cause fatigue as the finger passing through the object gives the brain another thing to fight against.

That gives you another issue: how are you going to track the 'pointer' when it's so close to the screen? You might need to mount the camera underneath, looking up.
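Wherever the camera ends up, you'll need to map the tracked fingertip from camera coordinates to screen coordinates. A minimal sketch (made-up calibration values) using a two-corner calibration with linear interpolation; a real setup would use a full homography (e.g. OpenCV's getPerspectiveTransform) to handle the camera looking at the screen from an angle, but the idea is the same.

```python
# Two-corner calibration: the user points at the top-left and bottom-right
# screen corners once; every other camera position is linearly interpolated
# between them and clamped to the screen bounds.

def make_mapper(cam_top_left, cam_bottom_right, screen_w, screen_h):
    (x0, y0), (x1, y1) = cam_top_left, cam_bottom_right
    def to_screen(cam_x, cam_y):
        sx = (cam_x - x0) / (x1 - x0) * screen_w
        sy = (cam_y - y0) / (y1 - y0) * screen_h
        # Clamp so a wobbly track can't point off-screen.
        return (min(max(sx, 0), screen_w), min(max(sy, 0), screen_h))
    return to_screen

# Hypothetical calibration: screen corners seen at camera pixels
# (80, 60) and (560, 420), driving a 1920x1080 display.
to_screen = make_mapper((80, 60), (560, 420), 1920, 1080)
print(to_screen(320, 240))  # → (960.0, 540.0)
```

The clamping matters more than it looks: near the screen edges the tracking gets noisiest, and an unclamped pointer jittering off-screen is exactly the kind of lag-adjacent annoyance mentioned above.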

In the end you're making a glorified touchscreen that you don't quite touch. It might still be very cool, and people definitely should be prototyping them, but you should be aware of the general problem: deep out-of-screen effects are very tiring.
 
You know, if you're doing this just because you want to control a PC without getting up from the couch, voice control would be easier.
 