Project Looking Glass screenshots

It looks very gimmicky to me.
Why doesn't anyone try something really new, like doing a commercial ZUI?
The desktop metaphor with overlapping windows is about thirty-five years old; isn't it about time that better alternatives were considered?
 
Squeak said:
The desktop metaphor with overlapping windows is about thirty-five years old; isn't it about time that better alternatives were considered?

Well, I guess that begs the question -- what's wrong with the window metaphor?

There's a saying "if it ain't broke, don't fix it". Which bit of the windowing metaphor is broken so badly it needs a radical re-design?

NB. I'm asking this question about the windowing concept, not the individual implementations of windowed interfaces. All the various implementations have their good points and bad points, and that sort of discussion gets very religious very quickly!
 
Btw, I think this user interface would be amazing if my display area was, oh, 10x what it is now. Then combine that with a hand gesture interface for moving windows around and, well, that'd be my dream UI.
 
Then combine that with a hand gesture interface for moving windows around and, well, that'd be my dream UI.

The only problem here being that you'd look like some kind of voguing prick whilst using your computer! :p

Minority Report had some of this as I remember... ;)
 
Mariner said:
Then combine that with a hand gesture interface for moving windows around and, well, that'd be my dream UI.

The only problem here being that you'd look like some kind of voguing prick whilst using your computer! :p

Minority Report had some of this as I remember... ;)
For hand gesture recognition to work well you would need some kind of high-resolution AI cam. Otherwise it would be way too imprecise, or require too much movement for general use.
 
nutball said:
Squeak said:
The desktop metaphor with overlapping windows is about thirty-five years old; isn't it about time that better alternatives were considered?

Well, I guess that begs the question -- what's wrong with the window metaphor?

There's a saying "if it ain't broke, don't fix it". Which bit of the windowing metaphor is broken so badly it needs a radical re-design?
Well, for one, can you think of just one good reason for having partially overlapping windows?
Windows are heavily modal too, which is never a good thing.
And if care is not taken when creating folders, GUIs with windows can get very maze-like and opaque.
 
I wonder if one could use a Theremin-type interface for hand gestures? Obviously not high-res, but it should be sufficient for handling a basic GUI.
 
Squeak said:
For hand gesture recognition to work well you would need some kind of high-resolution AI cam. Otherwise it would be way too imprecise, or require too much movement for general use.
Nah, it would be much easier to just have a glove-type interface. All you'd need is a series of markers on the glove, placed in strategic locations, whose positions an external device could accurately determine (the military has been using this technology for some time, but the devices I've seen to date are a bit too big... though I see no reason why they can't get smaller, and it has been a while, so perhaps they already have...).

And as long as the glove is light and doesn't impede movement any, then it would be just an awesome interface. Imagine:
I'm typing this post in one window on the left side of a huge screen. For some reason, I decide I should research something about the next statement I'm going to make. So, I bring my right hand up to the screen, make a "water flicking" gesture (let's say this is Mozilla's gesture for opening a new window, or a gesture I've chosen) at my current Mozilla window. A new window opens. I "grab" the window, and drag it over to the right, so that the two windows are not overlapping. I then point at the address bar and bring my hand down to start typing in the proper web address.

Oh, and btw, yes, I did get this idea from Minority Report. Wouldn't be much use without a huge screen in my opinion, though. Perhaps it'll be a reality in 5-10 years if LCD-type technology becomes much cheaper :)
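Just to make that scenario a little more concrete, here's a rough, purely hypothetical sketch of the glue layer such a UI would need: recognized gestures come in from whatever tracker you have, and get dispatched to ordinary window-manager actions. Every name below (GestureEvent, WindowManager, the gesture names) is made up for illustration, not any real API.

[code]
# Hypothetical gesture-to-window-manager dispatch layer (illustration only).
from dataclasses import dataclass

@dataclass
class GestureEvent:
    name: str        # e.g. "flick", "grab", "drag", "point" -- assumed tracker output
    x: float         # screen position of the hand
    y: float

class WindowManager:
    # Placeholder actions; a real WM would actually create/move windows.
    def open_window(self, x, y):  print(f"new window at ({x}, {y})")
    def grab_window(self, x, y):  print(f"grabbed window under ({x}, {y})")
    def move_window(self, x, y):  print(f"window dragged to ({x}, {y})")
    def focus_widget(self, x, y): print(f"focus widget under ({x}, {y})")

def dispatch(event: GestureEvent, wm: WindowManager):
    handlers = {
        "flick": wm.open_window,   # the "water flicking" gesture opens a window
        "grab":  wm.grab_window,
        "drag":  wm.move_window,
        "point": wm.focus_widget,  # pointing at the address bar focuses it
    }
    handler = handlers.get(event.name)
    if handler:
        handler(event.x, event.y)

# The scenario from the post, replayed as events:
wm = WindowManager()
for ev in [GestureEvent("flick", 200, 300), GestureEvent("grab", 220, 310),
           GestureEvent("drag", 900, 310), GestureEvent("point", 920, 40)]:
    dispatch(ev, wm)
[/code]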
 
Squeak said:
For hand gesture recognition to work well you would need some kind of high-resolution AI cam. Otherwise it would be way too imprecise, or require too much movement for general use.
Nah. Check out Intel's open-source computer vision library (OpenCV) and the demos that come with it. 3D object tracking already works surprisingly well with a stereo pair of regular crappy webcams.
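For anyone curious what that library makes possible, here's a minimal single-camera sketch of coloured-marker tracking using OpenCV's modern Python bindings (this assumes an OpenCV 4.x cv2 build and a green marker; it's not one of the demos that ship with the library, and a stereo pair plus triangulation would be needed to get actual 3D positions as described above):

[code]
# Minimal coloured-marker tracking sketch with OpenCV (cv2), one webcam, 2D only.
import cv2
import numpy as np

cap = cv2.VideoCapture(0)                      # first attached webcam
lower = np.array([40, 80, 80])                 # HSV range for a green marker (tune for your object)
upper = np.array([80, 255, 255])

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, lower, upper)      # binary image: marker pixels only
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        blob = max(contours, key=cv2.contourArea)
        m = cv2.moments(blob)
        if m["m00"] > 0:
            cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
            print(f"marker at ({cx:.0f}, {cy:.0f})")
    cv2.imshow("mask", mask)
    if cv2.waitKey(1) == 27:                   # Esc quits
        break

cap.release()
cv2.destroyAllWindows()
[/code]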
 
Squeak said:
Interesting. That sounds quite a lot like the computer interface described in the book Jurassic Park. I'm just not sure I would really like it for real-world applications.

The ZUI would be another interface that would probably work best with much larger displays, though. I do have a hard time seeing how this would be developed to encompass the entire user interface, though. I mean, we already do have icons, minimized windows, and the like. To really use this approach you'd need new software, not just a new OS interface.
 
Chalnoth said:
The ZUI would be another interface that would probably work best with much larger displays, though.
On the contrary, it would work very well even on handhelds, because if something is too small on the screen you just zoom in.
I do have a hard time seeing how this would be developed to encompass the entire user interface, though. I mean, we already do have icons, minimized windows, and the like. To really use this approach you'd need new software, not just a new OS interface.
You could have a legacy interface working in a little pen on the workspace, for older software.
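For what it's worth, the core of a ZUI really is just a coordinate transform, which is why the display size hardly matters. A tiny sketch of the usual math (this isn't taken from Looking Glass or any particular ZUI): zooming keeps the point under the cursor fixed while everything else scales around it.

[code]
# Minimal zoomable-viewport math: the heart of any ZUI, independent of display size.
class Viewport:
    def __init__(self):
        self.zoom = 1.0          # screen pixels per world unit
        self.ox = 0.0            # world coordinate shown at screen (0, 0)
        self.oy = 0.0

    def world_to_screen(self, wx, wy):
        return (wx - self.ox) * self.zoom, (wy - self.oy) * self.zoom

    def screen_to_world(self, sx, sy):
        return sx / self.zoom + self.ox, sy / self.zoom + self.oy

    def zoom_at(self, sx, sy, factor):
        # Keep the world point under the cursor (sx, sy) fixed while zooming.
        wx, wy = self.screen_to_world(sx, sy)
        self.zoom *= factor
        self.ox = wx - sx / self.zoom
        self.oy = wy - sy / self.zoom

v = Viewport()
v.zoom_at(400, 300, 2.0)            # zoom in 2x around the middle of an 800x600 screen
print(v.world_to_screen(400, 300))  # -> (400.0, 300.0): that point stayed put
[/code]

The same transform drives a cellphone screen or a Jumbotron; only the amount of world you can see at once changes.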
 
Saw this at a Sun presentation last month. It's pretty, and very interesting, but is definitely a gimmick. However, Sun does have some very interesting hardware coming ...... ;)
 
Squeak said:
Chalnoth said:
The ZUI would be another interface that would probably work best with much larger displays, though.
On the contrary, it would work very well even on handhelds, because if something is too small on the screen you just zoom in.
Rather, it may work better for handhelds, but I don't think it would be all that great for more traditional displays.
 
Chalnoth said:
Squeak said:
Chalnoth said:
The ZUI would be another interface that would probably work best with much larger displays, though.
On the contrary, it would work very well even on handhelds, because if something is too small on the screen you just zoom in.
Rather, it may work better for handhelds, but I don't think it would be all that great for more traditional displays.
Sounds like you haven’t really understood the basic idea.
I can see no reason why it shouldn’t work on either a Jumbotron or a cellphone.
Could you explain why you think it wouldn’t work on a big screen?
 
I don't think it'd work on a normal monitor-sized screen any better than today's windowing does, because today's tech already works very well there. But it may be a good organizational technique for larger displays, or smaller ones.
 
Squeak said:
For hand gesture recognition to work well you would need some kind of high-resolution AI cam. Otherwise it would be way too imprecise, or require too much movement for general use.

Archie4oz said (on Sony's EyeToy, http://www.beyond3d.com/forum/viewtopic.php?p=229384#229384):
It just "watches" you twiddle away on a linkless dummy controller and derives the user input from that? I guess that opens the door to a whole range of user input devices that are just dummy devices in reality...

Actually you can now; one of my coworkers did a silly little demo playing marbles... However it's not very precise, so controller input would be immensely tricky. While tracking velocity for rate of input wouldn't be too difficult, measuring pressure would require users to express pressure through a range of motion (it would work for things like gas pedals and such, but would require some serious thought for things like grip).

I don't care that my first post was totally glanced over, but much of what you guys are talking about is either possible today or will be within 2 years. Chalnoth's suggestion of a glove is a perfect example; nearly that level of accuracy has already been demonstrated on a PS2 and a $50 EyeToy if you hold two foam balls in your hands:

Link: http://www.abs-cbnnews.com/NewsStory.aspx?section=INFOTECH&oid=38752
Two summer interns, fresh from seeing the film, decided to mimic that interface on EyeToy. Instead of juggling images or data by gesturing with the sleek black gloves worn by Mr. Cruise's Detective Anderton, Dr. Marx holds a pair of spongy balls known as "the clams."

To grab a photo and drag it across the TV screen, he squeezes a clam and swings his arms through the air. Positioning both hands on the corners of the photo, Dr. Marx clicks the clams, expands the frame and rotates it sideways on screen.

And this is with a $50 toy and 4 year old console. I'd assume the PC and PS3 will do a much better job as much of this is computationally limited. It's really unfortunate that outside of Sony, nobody is actively pushing and commercializing such a means of input.
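On the velocity point above: estimating rate of input from a position track really is the easy part. A trivial sketch of how it could be done (the samples would come from whatever tracker you're using, and the smoothing constant is a guess; nothing here is EyeToy-specific):

[code]
# Rough velocity estimation from a stream of tracked hand positions.
# Each sample is (timestamp_seconds, x, y); exponential smoothing tames tracker jitter.
def velocities(samples, alpha=0.5):
    vx = vy = 0.0
    out = []
    for (t0, x0, y0), (t1, x1, y1) in zip(samples, samples[1:]):
        dt = t1 - t0
        if dt <= 0:
            continue
        vx = alpha * ((x1 - x0) / dt) + (1 - alpha) * vx
        vy = alpha * ((y1 - y0) / dt) + (1 - alpha) * vy
        out.append((t1, vx, vy))
    return out

track = [(0.00, 100, 200), (0.03, 112, 201), (0.06, 130, 203), (0.09, 155, 206)]
for t, vx, vy in velocities(track):
    print(f"t={t:.2f}s  vx={vx:.0f} px/s  vy={vy:.0f} px/s")
[/code]

Pressure is the genuinely hard part, since a camera only sees motion, which is exactly the grip problem mentioned above.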
 
I just want an interface that does what I think. No fancy hand waving. No mouse. Of course, if this is even possible, we're a long way from such an interface. Besides the obvious difficulty of reading someone's mind, the computer AI needs to be intelligent enough to know how to react to the user's thoughts.

Of course once I have this interface the computer can present my data with some nice fancy 3D graphics.
 