Apple Vision Pro

LetinAR patent (bold by me):
Furthermore, the conventional devices have a limitation in that a virtual image becomes out of focus when a user changes focal length when gazing at the real world. To overcome this problem, there have been proposed technologies such as a technology using a configuration such as a prism capable of adjusting focal length for a virtual image and a technology for electrically controlling a variable focal lens in response to a change in focal length. However, these technologies also have a problem in that a user needs to perform a separate operation in order to adjust focal length or in that hardware such as a separate processor for controlling focal length and software are required.

In order to overcome the above-described problems of the conventional technologies, the present applicant has developed a device capable of implementing augmented reality by projecting a virtual image onto the retina through the pupil using a reflective unit having a size smaller than that of a human pupil, as described in patent document 1.
 
LetinAR's use of tiny mirrors as virtual pinholes solves a lot of problems of traditional NEDs, but the multiple pinholes cause their own problems (dropped resolution, for one).
 
They have content on YouTube; afaict it's not a resolution loss but gaps in between "eye-box" positions. There's stuff on Karl's blog too. This is the closest VRDs have been to market yet.
 
NED = near-eye display :D
By restricting the effective pupil of the NED like a pinhole camera, a virtual image with a deep depth of field (DoF) can be presented. The observed image is always in focus regardless of the eye lens power, which helps to mitigate the distance discrepancy. Although this technique makes NED users feel more comfortable, there is a limitation in that the eye-box size is very small due to the restriction of the exit pupil. This interrupts the smooth experience of the AR NED during rotation of the eyeball or dislocation of the device.
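A back-of-the-envelope sketch of that claim. The formula is the standard small-angle approximation (blur angle ≈ pupil diameter × defocus in diopters); the pupil sizes and the 2 D defocus are my own example numbers, not from the quoted paper:

```python
import math

def blur_arcmin(pupil_mm: float, defocus_diopters: float) -> float:
    """Geometric retinal blur-disc diameter, in arcminutes.

    Small-angle approximation: blur [rad] ~= pupil diameter [m] * defocus [D].
    """
    blur_rad = (pupil_mm / 1000.0) * defocus_diopters
    return math.degrees(blur_rad) * 60.0

# Example: virtual image at infinity while the eye focuses at 0.5 m (2 D off).
print(blur_arcmin(4.0, 2.0))  # ~27.5' with an ordinary 4 mm pupil: visibly blurred
print(blur_arcmin(0.5, 2.0))  # ~3.4' with a 0.5 mm effective pupil: near in-focus
```

With the full pupil the virtual image smears over roughly half a degree; restrict the effective pupil to half a millimetre and the blur drops close to the eye's ~1 arcmin acuity limit, i.e. "always in focus".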

They're trying to solve that; it's the bleeding edge in displays, and supposedly there isn't much good engineering going into ordinary displays, luckily.
 
AFAICS they have no scanning mirror on top to aim light from the display at an individual pin mirror. If so, each pinhole takes a part of the display's resolution, the retinal images overlap, and a lot of them will be contributing nothing at all at any one time.
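A toy sketch of that geometry. All numbers here are my own assumptions (square grid, ~4 mm footprints, 3 mm pupil), just to show that with fixed beams most pins are idle at any given pupil position while the few active ones overlap:

```python
import math
from itertools import product

# Assumed toy geometry: fixed beams on a square grid at the pupil plane.
PIN_PITCH_MM = 4.0    # spacing between beam centres at the pupil plane
BEAM_DIAM_MM = 4.0    # beam footprint diameter
PUPIL_DIAM_MM = 3.0   # typical indoor pupil

def beams_hitting_pupil(pupil_x_mm: float, pupil_y_mm: float, grid: int = 5) -> int:
    """Count beams whose footprint circle overlaps the pupil circle."""
    hits = 0
    for i, j in product(range(-grid, grid + 1), repeat=2):
        bx, by = i * PIN_PITCH_MM, j * PIN_PITCH_MM
        # two circles overlap when the centre distance is under the sum of radii
        if math.hypot(bx - pupil_x_mm, by - pupil_y_mm) < (BEAM_DIAM_MM + PUPIL_DIAM_MM) / 2:
            hits += 1
    return hits

total = (2 * 5 + 1) ** 2  # 121 pins in the toy grid
print(beams_hitting_pupil(0.0, 0.0), "of", total)  # 1 of 121: most pins contribute nothing
print(beams_hitting_pupil(2.0, 2.0), "of", total)  # 4 of 121: several overlapping images
```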
 
It's an ordinary waveguide that's also a "spatial filter".

By acting as a spatial filter, it behaves as if the source were a scanning mirror with a "narrow pencil beam" / thin waist,

because you only need the scanner for that effect in the first place: "By using a narrow pencil beam, Maxwellian displays reduce the effective pupil size and thereby significantly increase the DOF—the image appears in focus at all depths. The ability of creating this all-in-focus image thus provides a simple solution to eliminate the VAC because the eye lens does not need to accommodate the virtual object."

(Source: "Computational holographic Maxwellian near-eye display with an expanded eyebox", Nature: https://www.nature.com › ... › articles)
 

And you "pay" with eye-box; hence Intel gave up, and those North glasses had to be pre-fitted per person at purchase. Solving that is bleeding edge, just like commercializing varifocal is (esp. the electro-optic variant).
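One reason a small eye-box is so unforgiving; a quick sketch. The ~10.5 mm distance from the entrance pupil to the eye's centre of rotation is a typical textbook value, my assumption here:

```python
import math

# Assumed anatomy: the entrance pupil sits roughly 10.5 mm in front of the
# eye's centre of rotation, so changing gaze translates the pupil sideways.
PUPIL_TO_ROTATION_CENTRE_MM = 10.5

def pupil_shift_mm(gaze_deg: float) -> float:
    """Lateral displacement of the entrance pupil for a given eye rotation."""
    return PUPIL_TO_ROTATION_CENTRE_MM * math.sin(math.radians(gaze_deg))

for deg in (5, 10, 15, 20):
    print(deg, "deg ->", round(pupil_shift_mm(deg), 2), "mm")
```

Already at 15° of gaze the pupil has moved ~2.7 mm, clean out of a ~2 mm eye-box, which is why an unsteered Maxwellian display has to be fitted precisely to the wearer.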
 
There is not just one "pinhole": there are lots of them, all pumping light from the display through the pupil, with overlapping images.

AFAICS it's essentially a refolded version of something like this.
 
The second design has a thickness of 35 mm and a 0.3 mm pinhole diameter.
Uh-uh, Maxwellian is 2-4 mm per pin.
LetinAR is ~4 mm per pin according to the patent. It's like the North Focals, except the beams are fixed in space, so you experience "hopping" between beams with nothing in between (instead of the Focals' single steered beam with its small eye-box). That's the issue.
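Why the pinhole diameter is a real trade-off and not a free knob; a sketch (green light and a 2 D worst-case defocus are my example inputs): make the aperture too small and diffraction smears the image, too large and the deep DoF is gone. Equating the two blur angles gives a sweet spot between the 0.3 mm and 2-4 mm figures above:

```python
import math

def optimal_pinhole_mm(wavelength_nm: float = 550.0,
                       max_defocus_diopters: float = 2.0) -> float:
    """Aperture at which diffraction blur (~2.44 * lambda / A) equals the
    worst-case geometric defocus blur (A * defocus, small-angle).

    Below this diameter diffraction dominates; above it, defocus does.
    """
    lam_m = wavelength_nm * 1e-9
    a_m = math.sqrt(2.44 * lam_m / max_defocus_diopters)
    return a_m * 1000.0

print(round(optimal_pinhole_mm(), 2))  # ~0.82 mm for green light, 2 D of defocus
```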
 
Scanning displays with multiple exit pupils can select a single exit, AFAIK.

These are all active at once; that's what makes it a refolded pinhole-array display rather than the equivalent of a scanned display.
 
There's no inherent need for an actively scanned display (again, that's the 2-4 mm thin beam) if you're willing to accept the eye-box shortcoming, so the scanning mirror is a subset of Maxwellian. Originally it's this:

Lately it has become this:


And they are trying to make the second variant work by steering the output (the coupler) according to eye rotation. (Whereas with a scanning mirror you get a 4 mm beam that also has a beam profile, therefore intensity variation near the edges, and that's it; not even a good example of Maxwellian.)
 
Nvidia's "pinhole array" has an 8 mm eye-box.

E.g. it's the one on the right, vs.:
[attached image]

Maxwellian has nothing to do with integral imaging, or with a >4 mm eye-box. (And everything to do with tight ray bundles, which you can hardly get out of an OLED, in the necessary quantities anyhow.)
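To put a number on the "necessary quantities" point: an OLED pixel emits roughly as a Lambertian source, and the fraction of its light inside a cone of half-angle θ is sin²θ (a standard result; the 1° cone is my example figure for a "tight ray bundle"):

```python
import math

def lambertian_fraction(half_angle_deg: float) -> float:
    """Fraction of a Lambertian emitter's (e.g. an OLED pixel's) total output
    that falls within a cone of the given half-angle: sin^2(theta)."""
    return math.sin(math.radians(half_angle_deg)) ** 2

print(lambertian_fraction(1.0))   # ~3e-4: only ~0.03% of the light is usable
print(lambertian_fraction(30.0))  # ~0.25: a quarter of the light within +/-30 deg
```

So filtering an OLED down to a pencil beam throws away essentially all of its output, which is why Maxwellian setups favour lasers or other low-étendue sources.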

[attached image: still from "Display Systems Research at Facebook Reality Labs" (2020)]
 
I'm not talking about Maxwellian, I'm talking about LetinAR. With LetinAR, rays diverge away from the virtual pinhole at the eyeglass; it cannot focus toward the pupil.
 
...not only did they patent it as Maxwellian to begin with, the videos show it as "focus-free" (maybe they went a bit overboard with the abutted display panel, when it's not just a panel but a lens, and you get a minuscule FoV out of that; recently they don't show that, but an obvious "concave mirror" inside and "pupil replication").
 
Oops, I misunderstood what they did. But then I don't understand how they can aim the eye-box; scanning systems can use tricks to move the focal point of all the rays along with the pupil, but here everything is static.
 
Yeah... except scanning doesn't inherently let you do that, and if they can steer this 4 mm pupil from a non-scanning display, that's just as valid.
 
The only dynamic parameters they have are the pixels; if you try to steer using those, you're going to lose pixels.
 
They want the lens to respond to incoming polarized light in more than four ways.

Anyway. I doubt you need retinal projection when 99.99% of content is monoscopic; why would you move a monoscopic virtual display up close? And see-through might not be worth the luminance loss that accompanies it (practically, that's a lot of extra heat). AR is too early; people should be excited about displays that look like ordinary polarized glasses, and that's that (an untracked HUD in practice).
 