Sunglasses-thin VR/AR (planar lens)

cheapchips

Some Harvard researchers have managed to use 0.002 mm titanium dioxide fins to create a totally flat lens. The fins act as waveguides, bending light as it passes through.

I saw this research last year when it was still limited to the non-visible spectrum, but they've moved it on really quickly.

They're not suggesting any downsides that I could see. If anything, metalenses can be more precise than traditional ones.

Production can use fairly cheap existing lithography techniques. It suggests that VR with a more glasses-like form factor is not that far away. Pretty surprised, as I thought lenses would be the real sticking point for at least a decade.

Actual article, rather than my naff abstract:

https://www.seas.harvard.edu/news/2...pectrum-sees-smaller-than-wavelength-of-light


(The days of camera bumps are also numbered.)
 
The "problem" is the massive chromatic dispersion.

It might be possible to interleave the lens arrays and put color filters in front, then project the image with monochromatic light (i.e. lasers). Because the interleaved lenses and color filters aren't in focus, this shouldn't screen-door too badly.

This was all possible with diffractive optical elements (i.e. holograms) already, BTW.
 
So apparently Magic Leap uses an FLCOS microdisplay and, supposedly, temporal color.
http://www.businessinsider.com/magic-leap-could-launch-product-in-2017-2016-10

They are basically now using the same technology (LCOS + Waveguides) as Microsoft in the HoloLens.

http://www.kguttag.com/2018/01/03/magic-leap-2017-display-technical-update-part-1/

That isn't to say that things won't change yet again for them in the future, as they've been steadily moving away from their original tech patents (FSD, the fiber scanning display).

Regards,
SB
 
Hi, it's a necro'd thread; my post is dated Oct. 2016...
This is hardly newsworthy stuff at this point in time.

As it turns out, FSD is not a display technology; it's an inverted endoscope.

Karl's mostly right in his diligence, as usual.
 
I think this is more state of the art.

That said, I'm not sure if this is really necessary ... I wonder if you couldn't simply put a color filter across a holographic plate and record the lens for each color; that seems far cheaper to me than creating eyeglass-sized structures with nm resolution. Who cares if holograms are low efficiency? You're projecting onto a tiny surface, so you don't need a lot of energy even with losses.
 
Now, VR is about wide-angle orthostereo; it's the "LeepVR" premise:
http://www.leepvr.com/spie1990.php
and that also means wide-angle vertical FOV, >90°.
There is no way back for VR, as it won't take a U-turn toward form over function, and >90° VFOV in see-through AR is sort of hard (they'd rather have multiple depth planes). So the way "VR/AR" gets thrown around is comical: VR isn't a really good prefix for AR, and AR isn't a good prefix for VR either.
 
If you can make a flat lens, I don't see why AR would be hard. Just screendoor the VR lens (and obviously take the display out of the line of sight).
 
AR is about correct depth cues below 2 meters; in VR, on the other hand, they focus beyond 2 m.

At least 4 depth planes are needed in an AR multifocal display to qualify as a "depth-fusing display", and as it stands the Magic Leap One won't quite make it. It has temporal depth planes (2), which can be tricky because of the frame rate needed to prevent flicker (if it can be prevented at all).

Non-temporal depth planes need something like a 32 MP display (4K ×4) per eye for wide HFOV, and that's not counting wide VFOV. That's an already large and expensive set of displays even without subpixel color (and subpixel color is going to limit the minimum pixel size a lot).
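For scale, the pixel math behind that claim (the 4K panel and 4-plane figures are from the paragraph above; the rest is plain arithmetic):

```python
# Pixel budget for non-temporal (simultaneous) depth planes, per eye.
# Panel resolution and plane count are the figures quoted above.

PANEL_W, PANEL_H = 3840, 2160   # one 4K plane
PLANES = 4                      # minimum for a "depth fusing display"

per_plane_mp = PANEL_W * PANEL_H / 1e6
print(f"per plane: {per_plane_mp:.1f} MP")
print(f"{PLANES} planes, one eye: {PLANES * per_plane_mp:.1f} MP")
print(f"both eyes: {2 * PLANES * per_plane_mp:.1f} MP")
```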
 
Didn't think about that. I wonder if you could find a way to use the electrowetting lenses to adjust focus fast enough to use a single display. Have the electrowetting lens cycle through the focus range at 120 Hz, with a special monochrome masking display to only let through light at the right depth.

There are also various ways to project images always in focus on the retina, of course. All that's old is new again.
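A minimal timing sketch of the swept-focus idea (the 120 Hz sweep is from the post above; the plane count is an assumption of mine):

```python
# Swept-focus timing: an electrowetting lens sweeps the focal range at
# 120 Hz while a fast monochrome mask gates one depth slice at a time.
# PLANES is a made-up example; only the 120 Hz sweep is from the post.

SWEEP_HZ = 120   # full focus sweeps per second
PLANES = 4       # depth slices exposed per sweep (assumed)

mask_rate = SWEEP_HZ * PLANES   # the mask flips once per slice
print(f"mask refresh needed: {mask_rate} Hz "
      f"({1000 / mask_rate:.2f} ms per slice)")
```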
 
Didn't think about that. I wonder if you could find a way to use the electrowetting lenses to adjust focus fast enough to use a single display.

Multifocal can use the same light path as an ordinary single-plane counterpart, or one nearly as good, which is a seemingly insurmountable advantage.
Besides, it makes the most sense to use a varifocal element below 2 meters and a fixed plane beyond that, so it's most likely a subtype of multifocal.
Yet there are 3 main electronic varifocal types; it's all at the research stage, and bulky (see the rate check after the list):

- liquid lens: slow even when using high voltages (10-30 Hz range). Electronic varifocal glasses aren't yet sunglasses-thin, and people have wanted to bring them to market for a while now in various forms.
- LCOS phase modulator: slow, but an ordinary voltage range (~30 Hz). It acts like, and takes up as much space as, a panel display. Oculus uses this in a desk-sized test mule.
- mechanical varifocal element: fast, in the ~1 kHz range, but relatively high voltage (probably 10x the LCOS at least). "Needs space" is an understatement; it needed a whole compartment last time I read about it.
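A quick sanity check on those rates, assuming (my assumption, not a hard vision-science figure) that each temporally multiplexed plane must refresh at 60 Hz or better to avoid visible flicker:

```python
# How many time-multiplexed depth planes each varifocal element could
# drive, if every plane must refresh at >= 60 Hz (assumed threshold).
# Switching rates are the rough figures from the list above.

FLICKER_FREE_HZ = 60
element_rates_hz = {
    "liquid lens": 30,             # top of the 10-30 Hz range
    "LCOS phase modulator": 30,
    "mechanical varifocal": 1000,  # ~1 kHz
}

for name, rate in element_rates_hz.items():
    planes = rate // FLICKER_FREE_HZ
    print(f"{name:22s} @ {rate:4d} Hz -> {planes} flicker-free plane(s)")
```

Under that assumption, only the mechanical element is fast enough to multiplex several planes; the two slow ones could at best track accommodation as a single moving plane.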
 
There are also various ways to project images always in focus on the retina, of course. All that's old is new again.

The solution proposed by the "VRD team" was a 10 mm diameter mechanical element with hundreds of volts of drive voltage. It can't keep up with the beam, so it's a per-frame varifocal:


A retinal scanning display system that produces multiple focal planes with a deformable membrane mirror
...
Previously we have described the virtual retinal display (VRD) developed at the University of Washington Human Interface Technology Lab [11,12]. The VRD scans low-power laser light into the pupil in an x–y raster pattern. While the beam is scanned, the laser modulation ‘paints’ a picture onto the retina. The optics of the eye focus the beam onto the retina and under ideal situations can produce a diffraction limited spot on the retina. The VRD, when integrated with a MEMS deformable membrane mirror, can vary the divergence of light being scanned onto the retina to produce various focus planes. Thus, a binocular VRD incorporating a deformable mirror allows normal coupling between accommodation and vergence and can generate realistic blur cues for objects at different distances in the 3D scene. In this paper we describe the conversion of the VRD from a fixed focal-plane display to a variable focus display.
....

In practice, the sinusoidal scan artifacts and limited refresh rate don't make this a worthwhile display tech; there's also the issue of color perception vs. narrow RGB primaries.
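The reason the mirror ends up per-frame rather than per-pixel is just the rate gap; a sketch with illustrative display timings (the ~kHz mirror bandwidth is my rough order of magnitude, not a figure from the paper):

```python
# Per-pixel focus would require the focus element to track the pixel
# clock; compare that with a plausible deformable-mirror update rate.
# Resolution/refresh and mirror bandwidth are illustrative assumptions.

W, H, FPS = 1920, 1080, 60
pixel_rate = W * H * FPS    # focus updates/s needed for per-pixel focus
MIRROR_BW_HZ = 1_000        # assumed MEMS membrane update rate

print(f"per-pixel focus needs ~{pixel_rate / 1e6:.0f} MHz updates")
print(f"mirror manages ~{MIRROR_BW_HZ} Hz -> per-frame ({FPS} Hz) at best")
```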
 
There's more than one way to skin a cat; I don't really see the need for the classical scanned-laser VRD. AFAICS something like this should be equally possible for always-in-focus projection.

Ideally it would also have an autorefractometer ... but that would probably be quite complex; I assume the only accurate reading of lens accommodation is near the fovea.
 
The patent seems to specify a 'point source' display, but gives no example of one that can support wide FOV. The one mentioned (with mechanical steering) currently has modulation rates only up to ~150 MHz per color, so they're at FHD 60 Hz. In practice it's much worse because of the sinusoidal scanning necessitated by the resonant drive of the micromirrors (inertia). I'm not sure even some sort of hypothetical field-emission display without inertia can keep up with this demand, e.g. a 600 MHz modulation rate.
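Rough numbers behind that paragraph. The π/2 factor is my approximation for the peak-vs-average beam velocity of a sinusoidal (resonant) scan; the resolutions are illustrative:

```python
import math

# Back-of-envelope modulation-rate requirement for a scanned display.
# A resonant (sinusoidal) scan must clock pixels at the peak beam
# velocity, roughly pi/2 times the linear-scan rate (my approximation).

def mod_rate_mhz(w, h, fps, sinusoidal=False):
    rate = w * h * fps
    if sinusoidal:
        rate *= math.pi / 2
    return rate / 1e6

print(f"FHD @ 60 Hz, linear scan:     {mod_rate_mhz(1920, 1080, 60):.0f} MHz")
print(f"FHD @ 60 Hz, sinusoidal scan: {mod_rate_mhz(1920, 1080, 60, True):.0f} MHz")
print(f"4K  @ 60 Hz, linear scan:     {mod_rate_mhz(3840, 2160, 60):.0f} MHz")
```

So ~150 MHz per color just covers FHD 60 Hz with a linear scan, falls short once the sinusoidal penalty applies, and a wide-FOV 4K raster is already pushing toward the ~600 MHz figure above.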

Per-pixel or even multi-pixel-level focus capability would be sort of good; presently it takes up a whole desk at Oculus, in their "Focal Surface Display". AFAIK no one does that with a 'point source'. Sci-fi.
 
Well, from a large enough distance, sure, a virtual "point source".
At close distance an LCD would become a planar/plane source; the Grating Light Valve licensed by Sony is a line source. At present LCDs have scattering/diffuse backlights, so they can't even become a plane source.

The compound modulation rate of a line source can be very high; it's still used in printing as far as I know, maybe even in UV lithography. It's a good concept: a line array doing the heavy lifting instead of resonant scanning pushed to its limits.
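A sketch of why the line source wins on modulation rate (element count and timings are illustrative, not Grating Light Valve specs):

```python
# A line source modulates a whole column in parallel, so each element
# only needs the *line* rate while the compound throughput stays high.
# Figures below are illustrative, not GLV specs.

LINE_ELEMENTS = 1080     # elements in the line array (one pixel column)
COLUMNS, FPS = 1920, 60  # columns swept per frame, frames per second

line_rate = COLUMNS * FPS                  # updates/s per element
compound_rate = line_rate * LINE_ELEMENTS  # effective pixel throughput

print(f"per-element rate: {line_rate / 1e3:.0f} kHz")
print(f"compound pixel rate: {compound_rate / 1e6:.0f} MHz")
```

Each element only has to run at ~100 kHz to match the ~124 MHz throughput a single resonant-scanned beam would need.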

A point source could be ideal for doing per-pixel focus in a small size, but I don't have the slightest idea how to pair it with a non-mechanical scanner.
 
It can be focused down to a point after reflecting off, or transmitting through, the display; that's the advantage of laser light.
 