Apple Vision Pro


Some people are returning their Vision Pro headsets for a variety of reasons.
  • Too cumbersome to use for any length of time (size and weight).
  • Resolution of pass through video not as good as it was hyped to be.
    • Basically the hype led people to believe they'd be able to see the outside world much better than what the headset can deliver.
  • Price ... that's obvious, eh?
  • Lack of compelling reasons to keep using the 3.5k USD device.
    • This is an interesting one. If a device costs 500 USD, a person can convince themselves to wait for more applications or experiences to release. For 3.5k USD, it's much harder to justify keeping a device in hopes that the experience will improve.
  • Low FOV feels limiting when trying to use it for what Apple is marketing.
    • For example, having to turn your head to see something to the side versus just moving your eyes to quickly reference something.
A lot of these are common complaints with current and previous gen VR devices (size, weight, resolution, FOV, price, limited experiences).
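For a rough sense of how the resolution complaint quantifies, average angular resolution is just per-eye horizontal pixels divided by horizontal FOV. A minimal sketch, using commonly cited ballpark figures (the panel and FOV numbers below are approximations, not official specs):

```python
def pixels_per_degree(h_pixels: int, h_fov_deg: float) -> float:
    """Average angular resolution across the horizontal field of view."""
    return h_pixels / h_fov_deg

# Ballpark, commonly cited per-eye figures (not official specs):
avp = pixels_per_degree(3660, 100)      # Apple Vision Pro
quest3 = pixels_per_degree(2064, 110)   # Meta Quest 3
print(f"AVP ~{avp:.0f} PPD, Quest 3 ~{quest3:.0f} PPD; ~60 PPD is often quoted as 'retinal'")
```

Even the AVP's large advantage over previous-gen headsets still falls well short of the ~60 PPD figure usually quoted for foveal vision, which is consistent with the pass-through disappointment above.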

The headset is without a doubt an improvement in many areas over current and previous gen VR devices, but it does nothing to solve many of the reasons most consumers won't ever use or consider using a current or previous gen VR/AR device: size, weight, cost, limited use cases, and FOV, to name a few.

Regards,
SB
 
Low FOV feels limiting when trying to use it for what Apple is marketing.
For example, having to turn your head to see something to the side versus just moving your eyes to quickly reference something

That's an interesting argument, to say the least. If the ~110° of the AVP is not enough in an isolation headset, I can see 95% of the companies concerned with low-weight designs quitting overnight when it comes to see-through with >21:9. That likely leaves the former stranded, and you won't get shrunken designs for a while.
 
That's an interesting argument, to say the least. If the ~110° of the AVP is not enough in an isolation headset, I can see 95% of the companies concerned with low-weight designs quitting overnight when it comes to see-through with >21:9. That likely leaves the former stranded, and you won't get shrunken designs for a while.

I'd say that's a big part of why Apple (and Meta) have gone the pass-through route. Cameras and reconstruction have a less uncertain path to improvement than see-through optics.
 
Pass-through is not a good option for AR, but the alternative is not generally a good option either. See-through can mostly only project at infinity; there is some tech for projection at depth, or always-in-focus projection, but it generally has to make huge sacrifices in resolution and FOV of the augmentation layer, on top of any potential occlusion layer being very blurry and high-attenuation.

Basically, it's all shit for general use. It's for niche applications, though I do think see-through has the larger niche.
 
Pass-through is not a good option for AR, but the alternative is not generally a good option either. See-through can mostly only project at infinity; there is some tech for projection at depth, or always-in-focus projection, but it generally has to make huge sacrifices in resolution and FOV of the augmentation layer, on top of any potential occlusion layer being very blurry and high-attenuation.

Basically, it's all shit for general use. It's for niche applications, though I do think see-through has the larger niche.

See-through can work at infinity if you look at a laptop- or TV-sized black surface. Of course, it ideally knows the extent of the "screen" so no image spills over; that's pretty general, and by then focus at infinity is a (huge) advantage.
 
An additional problem with pass-through AR: due to its reputational risks (MobileDiffusion is close to fast enough to do real-time nude deepfakes), Apple doesn't actually allow you to develop pass-through AR.

They don't seem to have figured out a way to give third-party developers the access needed for developing AR. It's a very expensive toy, not even a development tool ... just a toy, anti-professional.

Because see-through AR generally has more HUD-like functionality and is poorly suited to replacing content entirely, those headset manufacturers can actually let you develop professional applications.
 
(MobileDiffusion is close to fast enough to do real-time nude deepfakes)

There's only a single correct IPD on the sensor side per person, and how to capture that is largely unsolved, apart from Facebook demo-ware that's supposedly bad at distance and barely OK at close distance. Also, looking like an '80s B-movie prop doesn't help. Not only is it unsolved, they're "on vacation" with the event-sensor stuff.

This is the lightfield stuff that's taking a backseat because of Gaussian splats. I liked the ultra-expensive sensor stuff Lytro showed off before they went under (a dense lightfield covers all kinds of IPDs, and it's doubtful you can do that with a wearable, e.g. delivered as a streaming format with variable IPD). It should trickle down instead of wearables scaling up. I don't believe in AI doing lightfield stuff any time soon, because it's >4K-equivalent plus either high framerate or panoramic.
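To illustrate why a single fixed camera baseline is only correct for one IPD: the vergence angle a viewer's eyes expect depends on both IPD and fixation distance. A sketch with hypothetical numbers (a 63 mm viewer IPD against an assumed 58 mm fixed camera baseline):

```python
import math

def vergence_angle_deg(ipd_m: float, distance_m: float) -> float:
    """Full angle between the two eyes' lines of sight when fixating a point."""
    return math.degrees(2 * math.atan((ipd_m / 2) / distance_m))

VIEWER_IPD = 0.063       # hypothetical 63 mm viewer IPD
CAMERA_BASELINE = 0.058  # hypothetical 58 mm fixed sensor baseline

for d in (0.5, 2.0, 20.0):
    err = vergence_angle_deg(VIEWER_IPD, d) - vergence_angle_deg(CAMERA_BASELINE, d)
    print(f"{d:5.1f} m: vergence mismatch {err:.3f} deg")
```

The mismatch is largest close up and nearly vanishes at distance, which is the geometric reason a one-size-fits-all baseline can't be right for everyone.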
 
Also, looking like an '80s B-movie prop doesn't help.
It doesn't need to be good; it would just be developed for the lulz by some college kid ... but the reputational risk of AR abuse for sexual or racist real-time deepfaking is near Google Glass level.
 
Well, IMO it's more that AI is the new dotcom and there's no good performance target for wearables, so it's all somewhat arbitrary. You could even say this abysmal AI stuff is booming because of the bad AV quality on AR/VR devices, so it's easy to get away with cartoony stuff.

I'd prefer it if cinema-goers' Dolby glasses were replaced by an ultra-low-cost wearable display for stationary viewing, with the virtual image at ~20 meters. The oft-cited vergence-accommodation research paper shows you can get pop-out at 2 meters from you from a 20-meter-distant source, with eye strain similar to a 20-meter virtual distance on the current 2 m actual focus distance in VR.

So I don't quite prefer AR, because AR needs 50 cm without eye strain, and that won't mesh with far focus; or it needs to be Maxwellian / focus-free.
 
The oft-cited vergence-accommodation research paper shows you can get pop-out at 2 meters from you from a 20-meter-distant source
Misremembered: it's similar eye strain at 1 meter from a 20-meter-distant source as at >20 meters from a 2-meter-distant source. It shows how desperate they are in AR/VR for close-up, because >20 meters should be rather frequent anyhow.
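The comparison reads more easily in diopters (reciprocal meters), the unit VAC studies usually use. The sketch below is just my framing of the arithmetic, not the paper's data:

```python
def vac_mismatch_diopters(vergence_m: float, focal_m: float) -> float:
    """Vergence-accommodation conflict expressed as a dioptric difference."""
    return abs(1.0 / vergence_m - 1.0 / focal_m)

# Content popped out to 1 m on a display focused at 20 m:
near_on_far = vac_mismatch_diopters(1.0, 20.0)   # 0.95 D
# Content pushed past 20 m on a display focused at 2 m:
far_on_near = vac_mismatch_diopters(20.0, 2.0)   # 0.45 D
```

Note the asymmetry: a far-focused display pays a much larger dioptric penalty for near content than a near-focused display pays for far content, which is part of why 2 m became the de facto standard.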

 
Misremembered: it's similar eye strain at 1 meter from a 20-meter-distant source as at >20 meters from a 2-meter-distant source. It shows how desperate they are in AR/VR for close-up, because >20 meters should be rather frequent anyhow.


Does variable focus fix the vergence issue? There's a fairly good chance of that being in the Quest 4 Pro / Quest 5.
 
I'm sceptical; with current lens technology, variable focus means adding an extra lens and moving relatively big lenses around in real time to track foveation.

Even exotic light field approaches start sounding realistic then, such as https://creal.com/
 
That was my point: without varifocal you have to choose between "room scale" and far focus. And choosing room scale STILL doesn't get you close-up.

And even if you want close-up so badly that you move the virtual source close (2 m is standard now), it's not that much better than a 20 m source, which is better in other ways. Except those other ways are flawed in current products, because there's no retina resolution even on the AVP, and even the rendering pipeline has serious deficiencies.

So they have no choice. BTW, the Korean LetinAR is supposed to be Maxwellian.
BTW, Gordon Wetzstein of Stanford has said in a presentation that, from the perspective of VAC comfort, a Maxwellian display is almost, but not quite, as good as variable focus.
 
I'm sceptical; with current lens technology, variable focus means adding an extra lens and moving relatively big lenses around in real time to track foveation.

Even exotic light field approaches start sounding realistic then, such as https://creal.com/

Doesn't Meta's work on Half-Dome 3 already show a non-mechanical and compact way forward?
 
A big stack of switchable zone-plate lenses doesn't seem a good idea for image quality. Chromatic aberration is going to be woeful, among other things.
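On the chromatic aberration point: a zone plate is diffractive, so its first-order focal length scales inversely with wavelength (f = r₁²/λ), the opposite behaviour to a refractive lens and much stronger. A sketch with a hypothetical 0.5 mm innermost-zone radius:

```python
def zone_plate_focal_m(r1_m: float, wavelength_m: float) -> float:
    """First-order focal length of a Fresnel zone plate: f = r1**2 / wavelength."""
    return r1_m ** 2 / wavelength_m

R1 = 0.5e-3  # hypothetical 0.5 mm innermost-zone radius
f_red = zone_plate_focal_m(R1, 650e-9)   # red light
f_blue = zone_plate_focal_m(R1, 450e-9)  # blue light
# Focal length scales as 1/wavelength, so red focuses ~31% nearer than blue
# (ratio 450/650) -- an enormous chromatic spread for a display optic.
```

The numbers are illustrative only, but the 1/λ scaling holds for any zone plate, which is why stacking switchable ones is so punishing for image quality.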
 

"We've had a few in a few days, not outside the normal range for new stuff across the entire region," one senior Apple Retail employee that we've been talking to for over a decade who is not authorized to speak on behalf of the company told me on Friday morning. "Maybe like not-pro iPhone levels, proportionately, two weeks after release?"
Other sources inside retail told me that Apple appeared to expect a high return rate, given in-store support documentation on the matter. Still, though, the surge doesn't appear to have happened.

"We've got a checklist we got given to follow on returns, make sure all the pieces are there, the packaging is intact, and that kind of thing," another source at another store told me. "I think I've used it twice this week."
 
A big stack of switchable zone-plate lenses doesn't seem a good idea for image quality. Chromatic aberration is going to be woeful, among other things.
A Maxwellian display has no extra parts; it also used to be called a

Virtual retinal display

https://en.wikipedia.org/wiki/Virtual_retinal_display
A VRD is a form of Maxwellian display in the sense that it bypasses extra parts and everything is in focus, e.g. from 20 cm to whatever. Otherwise you make a tradeoff, and they don't want to give up close-up.
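A rough way to see the always-in-focus property: in the geometric small-angle approximation, retinal defocus blur scales with the effective aperture, and a Maxwellian beam narrows that aperture from the full pupil to a fraction of a millimetre. A sketch with hypothetical numbers:

```python
def retinal_blur_um(aperture_mm: float, defocus_d: float,
                    eye_focal_mm: float = 17.0) -> float:
    """Geometric blur-circle diameter on the retina, in micrometres.

    Small-angle approximation: blur angle ~= aperture (m) * defocus (diopters),
    projected over the eye's focal length; the units collapse so that
    mm * D * mm == um.
    """
    return aperture_mm * defocus_d * eye_focal_mm

full_pupil = retinal_blur_um(4.0, 2.0)   # ~136 um: 4 mm pupil, 2 D defocus
maxwellian = retinal_blur_um(0.5, 2.0)   # ~17 um: 0.5 mm Maxwellian beam
```

Shrinking the aperture 8x shrinks the blur 8x, so the same 2 D focus error that is obvious through a full pupil is nearly invisible through a Maxwellian beam; the cost is a tiny eyebox, which is why eye tracking or pupil replication comes up in these designs.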

AFAICT, the bleeding-edge VAC fix from Meta is a waveguide similar to LetinAR's.
 