It doesn't help that LEAP is demonstrated in a way that most people won't be comfortable doing for long periods of time. Try typing on your keyboard and mouse without resting your forearms or elbows on your desk and see how long you can comfortably work at your computer before you feel muscle stress and fatigue.
I'm pretty sure that's only to showcase its accuracy and multi-finger tracking. Typing in mid-air is a stupid idea, and I'm sure the Leap developers aren't that clueless.
It's the primary reason light pens were never successful in the UI space.
Light pens hold the hand at an awkward position. Just bending the wrist up that way and moving it is completely wrong ergonomically. Shift the light pen to a stylus on a touch screen held more horizontally and people can use it all day long, and not because the entire weight of the arm is resting on the stylus nib.
Leap is more like a magician, or someone stacking shelves in a supermarket, or a casino croupier, all of whom use their arms and hands all day long and don't pass out. Our arms aren't so atrophied that we are incapable of supporting their weight for any length of time. Even while typing on a keyboard, my arms aren't supported at all. As long as I can move them around, I can keep active all day long.
As an additional tech for computers, this is everything Kinect is being celebrated for. Three-dimensional interaction is a very good addition. Being able to reach into the workspace and select or interact with elements is definitely going to be a productivity improver. Something like having a KB+M for fine control, but then being able to select and access elements with the hands freely, like reaching behind the open window to select the window behind. Or, for the 3D modellers this was invented for, select the object behind. Then grab and spin the world-view. Then get your mouse and select a specific vertex for changing. Then select a brush tool and draw with your hand freely on the virtual model in the Leap space. For close work like that, I'm not seeing an equal.

In the console space, though, I'm not really seeing a use. It feels to me like it's about close-quarters interaction, designed for the space just in front of the user, and thus wanting close proximity with the visual feedback. The idea of virtually modelling an object a foot in front of me that's visible 6 feet away seems a little too detached. Maybe it'd feel fine after a little experience, but I'd rather the display was close so I could use my hands close to it.