Lecram25 said:
Chalnoth said: But hell, it sure does beat pretty much any UI we've seen in a sci-fi movie/show to date.
umm, LCARS pwns j00!1!!!1

oh god, NOT!
Squeak said: The desktop metaphor with overlapping windows is ~ thirty-five years old; isn't it about time that better alternatives were considered?
Chalnoth said: I think you're going to have to explain what a ZUI is.

Zooming User Interface.
Then combine that with a hand gesture interface for moving windows around and, well, that'd be my dream UI.
Mariner said: Then combine that with a hand gesture interface for moving windows around and, well, that'd be my dream UI.

For hand gesture recognition to work well you would need some kind of high-resolution AI cam. Otherwise it would be way too imprecise, or require too much movement for general use.
The only problem here being that you'd look like some kind of vogueing prick whilst using your computer!
Minority Report had some of this as I remember...
nutball said:
Squeak said: The desktop metaphor with overlapping windows is ~ thirty-five years old; isn't it about time that better alternatives were considered?
Well, I guess that begs the question -- what's wrong with the window metaphor?
There's a saying: "if it ain't broke, don't fix it." Which bit of the windowing metaphor is broken so badly that it needs a radical redesign?

Well, for one: can you think of just one good reason for having partially overlapping windows?
Squeak said: For hand gesture recognition to work well you would need some kind of high-resolution AI cam. Otherwise it would be way too imprecise, or require too much movement for general use.

Nah, it would be much easier to just have a glove-type interface. All you'd need is a series of markers on the glove, placed in strategic locations, whose positions an external device could accurately determine (the military has been using this technology for some time, but the devices I've seen to date are a bit too big... though I see no reason why they can't get smaller, and it has been a while, so perhaps they already have).
Squeak said: For hand gesture recognition to work well you would need some kind of high-resolution AI cam. Otherwise it would be way too imprecise, or require too much movement for general use.

Nah. Check out Intel's open-source computer vision library (OpenCV) and the demos that come with it. 3D object tracking already works surprisingly well with a stereo pair of regular crappy webcams.
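For what it's worth, the depth you can recover from a stereo pair like that comes straight from the disparity between the two images: Z = f·B/d. A minimal sketch of the arithmetic (the focal length, baseline, and disparity numbers below are made-up examples, not from any actual setup):

```python
def stereo_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth (in metres) of a point seen by a calibrated stereo pair.

    focal_px     -- focal length in pixels (from camera calibration)
    baseline_m   -- distance between the two camera centres, in metres
    disparity_px -- horizontal shift of the point between the two images
    """
    if disparity_px <= 0:
        raise ValueError("point must have positive disparity")
    return focal_px * baseline_m / disparity_px

# e.g. two webcams 10 cm apart, ~700 px focal length, 35 px disparity:
# stereo_depth(700, 0.10, 35) -> 2.0 metres
```

Note how precision falls off with distance: the same one-pixel disparity error matters far more at small disparities (far objects) than at large ones, which is why close-range hand tracking is the easy case for cheap webcams.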
Interesting. That sounds quite a lot like the computer interface described in the book Jurassic Park. I'm just not sure I would really like it for real-world applications.
Chalnoth said: The ZUI would be another interface that would probably work best with much larger displays, though.

On the contrary, it would work very well even on handhelds, because if something is too small on the screen you just zoom in.
You could have a legacy interface running in a little pen on the workspace, for older software.

I do have a hard time seeing how this would be developed to encompass the entire user interface, though. I mean, we already do have icons, minimized windows, and the like. To really use this approach you'd need new software, not just a new OS interface.
Squeak said: On the contrary, it would work very well even on handhelds, because if something is too small on the screen you just zoom in.

Rather, it may work better for handhelds, but I don't think it would be all that great for more traditional displays.
Chalnoth said: Rather, it may work better for handhelds, but I don't think it would be all that great for more traditional displays.

Sounds like you haven't really understood the basic idea.
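The ZUI primitive being argued about is just a zoom that keeps the point under the cursor fixed on screen, which is why "if it's too small, zoom in" works the same on a handheld as on a desktop. A sketch of that transform (the 1-D simplification and the names are mine, not from the thread):

```python
def zoom_at(pan: float, scale: float, cursor: float, factor: float):
    """Zoom a 1-D viewport by `factor`, keeping the world point under
    `cursor` (a screen coordinate) stationary on screen.

    Screen mapping assumed: screen = (world - pan) * scale
    """
    world = cursor / scale + pan          # world point currently under the cursor
    new_scale = scale * factor
    new_pan = world - cursor / new_scale  # re-anchor so that point stays put
    return new_pan, new_scale

# Zooming in 2x around screen x=100, starting from pan=0, scale=1:
# zoom_at(0.0, 1.0, 100.0, 2.0) -> (50.0, 2.0)
# The world point 100 now maps to (100 - 50) * 2 = 100 -- unchanged.
```

The same two-line re-anchoring, done per axis, is all a 2-D ZUI canvas needs; everything else (legacy pens, icons as zoomed-out documents) is content layered on top of this transform.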
Squeak said: For hand gesture recognition to work well you would need some kind of high-resolution AI cam. Otherwise it would be way too imprecise, or require too much movement for general use.
Archie4oz said (on Sony's EyeToy, http://www.beyond3d.com/forum/viewtopic.php?p=229384#229384): It just "watches" you twiddle away on a linkless dummy controller and derives the user input from that? I guess that opens the door to a whole range of user input devices that are just dummy devices in reality...
Actually, you can now; one of my coworkers did a silly little demo playing marbles... However, it's not very precise, so controller input would be immensely tricky. While tracking velocity for rate of input wouldn't be too difficult, measuring pressure would require users to express pressure through a range of motion (it would work for things like gas pedals, but would require some serious thought for things like grip).
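The "pressure through a range of motion" idea boils down to mapping a tracked displacement onto a clamped 0..1 analogue axis. A sketch, with entirely made-up calibration numbers (camera-relative z in metres):

```python
def motion_to_pressure(position: float, rest: float, full: float) -> float:
    """Map a tracked hand coordinate onto a 0..1 'pressure' axis.

    rest -- tracked coordinate at no pressure
    full -- tracked coordinate at maximum pressure
    """
    t = (position - rest) / (full - rest)
    return min(max(t, 0.0), 1.0)   # clamp: tracking noise can overshoot the range

# Gas-pedal style: rest pose at z=0.30 m from the camera, floored at z=0.10 m.
# motion_to_pressure(0.20, 0.30, 0.10) is ~0.5 (pedal half-depressed)
```

This is exactly why grip is the hard case the post mentions: a pedal has centimetres of travel to spread the axis over, while a squeezing hand barely moves, leaving almost no range of motion for the mapping to resolve.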
Link (http://www.abs-cbnnews.com/NewsStory.aspx?section=INFOTECH&oid=38752) said: Two summer interns, fresh from seeing the film, decided to mimic that interface on EyeToy. Instead of juggling images or data by gesturing with the sleek black gloves worn by Mr. Cruise's Detective Anderton, Dr. Marx holds a pair of spongy balls known as "the clams."
To grab a photo and drag it across the TV screen, he squeezes a clam and swings his arms through the air. Positioning both hands on the corners of the photo, Dr. Marx clicks the clams, expands the frame and rotates it sideways on screen.