Sony VR Headset/Project Morpheus/PlayStation VR

You can't voluntarily look at the blurred edge of a screen in a 2D movie?

That is a new one to me. I had no idea. I must have been watching movies wrong for many years.

No matter what happens on screen there's no stimuli that would cause the brain to try to focus any differently.
Nope.

If your eyes are open, there are always stimuli helping direct your attention. This is a function of your brainstem, an area called the superior colliculus, right next to the inferior colliculus involved in auditory processing :)
This is where headaches happen, because it's a reflex and it's a feedback loop between your eyes and your brain,
Is that an expert description? :p

"A reflex loop?" Similarly, a computer is a "collection of transistors."
which is severely imbalanced with close 3D objects (or Avatar-like crossed background which was extremely stupid).
That can be true, but again, is this an issue with the technology itself or how it was used? There seems to be a lot of conjecture in this thread about "why 3D is poor and failed so many times" attributed to technical aspects, and ignoring everything else.

There is something you are right about, and that is the *distance* to the screen being a very important factor, along with the change in convergence. 2D movies are easier on the eyes because, at distances of more than several centimeters, binocular vision does not really contribute much to any 3D effect. Beyond this very close distance, it is primarily parallax that contributes to our interpretation of depth. So when you watch your regular TV at home, it's not much of a bother.

With 3D, that distance is brought much closer towards the eyes.

But we were not talking about that; we were talking about the quality vs. the artistic aspect of a 3D movie. A "blurred tap in the foreground" can be blurred or not, and to whatever degree the director desires. Whether or not that effect is "realistic" isn't really a discussion about whether 3D technology is *good*, but rather whether the director's choice of focus and focus blur is to the viewer's taste, for any old reason. In another movie, the same technology could be used to make a more "realistic" foreground object. In that way it is just like a 2D movie: the degree of blurriness on a foreground object is not a measure of the quality of the technology, but more likely a comment on the director's choices. That is what I was saying.
 
Is that then a discussion about the quality of the 3D effect, or the quality of the director's use of the 3D effect?
Mostly the latter, but the constraints of the tech mean good 3D in a movie is extremely difficult, maybe impossible, to pull off without settling into a very narrow subset of cinematographic choices.

2D movies will do the same thing, it's just we're "culturally accustomed" to having 2D movies decide where our focus is.
No, 2D movies are a flat wall that we can focus wherever we want on, with the image on that flat wall operating exactly as it should according to the rules of our nature.
You can't voluntarily look at the blurred edge of a screen in a 2D movie?
When you look at it, your brain focuses at that distance and perceives a flat wall of blurred light. In a 3D movie, the focal length is baked into the image independent of the actual focal length of the stereoscopic image. If you focus on the object at the cinema focal length (15 m in front of you, say), it should be in focus as far as your brain's concerned. Instead it sees a fuzzy blob hovering 5 metres in front of the screen. There's nothing natural about that.

That is a new one to me. I had no idea. I must have been watching movies wrong for many years.
Sarcasm doesn't help with sensible discussions.
You can very much trick a brain into perceiving a 2D image as a 3D one, particularly while using parallax effects with an image very close to the eyes.
That's something else. That's the brain interpreting 3D information from a 2D source that operates exactly as a 2D source should. The moment you force 3D on a 2D screen, the brain has conflicting information which is where we get numerous reports of headaches and nausea from users. Unless one either concludes all these people are lying, or they also have headaches and nausea with 2D, it's quite obvious that forced stereoscopic 3D isn't working properly.

So you described why the blind spot phenomenon is not only about the eye. If it were only about the eye, there would be no "filling in of detail", which is really the brain guessing what is there, not, as you say... filling in detail ;)
The spot is blind because you cannot see. That's nothing to do with the brain. The fact it's not a black spot is where the brain is involved, but the brain itself isn't in any way involved in creating the blind spot. Take the nerve endings from an eye and pipe them into a computer, it'd have a blind spot. And a shit-load of blood and crap all over the upside-down, blurry image!

In part because the viewer does not choose the point of focus in the movie, 2D or 3D...
What has that got to do with my point about there being very little DOF in human vision? Can I read the rest of this paragraph without directly looking? No. Can I determine that there are actually individual letters in it though? Yes (or at least, words or glyphs). Photographic DOF blurs these completely out for artistic effect, which is something people never experience in RL. Even the most extreme DOF, focusing at infinity and placing an object in the line of sight a few cm away, is only a few degrees of blur. Photographic DOF exceeds this by an order of magnitude at times.

I've seen many movies in 2D which do not conform to real-life perspective either.
But the display does. Every point on the display has a focal distance exactly equal to the eye's natural focal range, because it's a real-life surface. The end result is a picture perceived as a picture. 3D presents a picture that the brain tries to perceive as a 3D space, but the 3D space presented doesn't conform to the eye+brain's rules for spatial and optical interpretation, which causes the strain that's commonly associated with 3D.

If you found said effect unpleasant or unrealistic in a particular scene, then your qualm may not entirely be laid against the technology, but also in equal parts against the director's cinematic choices maybe?
Yes, it's because the cinematography doesn't work, but the only cinematography that would work would be a fixed 50 mm focal length and f2 - f8 for every single shot. Or at least preserving the natural depth when using zoom lenses, which should reduce depth perception/stereoscopic separation (which is why binoculars don't cause headaches). The current problem is that 3D authoring treats 3D as having a wow factor, and it's being applied based on the wow factor instead of what works. And ironically, the moment you reduce the wow factor to make it more comfortable, it stops having a great impact on the experience and you're left with an AV story.
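As a crude illustration of "preserving the natural depth when using zoom lenses": one toy rule (my own guess at the intuition, not an industry formula) would be to shrink the stereo camera separation in proportion to how far the lens is zoomed past a "natural" reference focal length.

```python
def interaxial_mm(focal_mm, base_interaxial_mm=65.0, reference_focal_mm=50.0):
    """Toy rule: reduce the stereo camera separation as the lens gets longer,
    so the magnified image doesn't carry exaggerated disparity."""
    return base_interaxial_mm * min(1.0, reference_focal_mm / focal_mm)

for f in (35, 50, 100, 200):
    print(f"{f:3d} mm lens -> interaxial separation {interaxial_mm(f):5.1f} mm")
```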
 
Mostly the latter, but the constraints of the tech mean good 3D in a movie is extremely difficult, maybe impossible, to pull off without settling into a very narrow subset of cinematographic choices.
Then we are in agreement :)
The spot is blind because you cannot see. That's nothing to do with the brain. The fact it's not a black spot is where the brain is involved, but the brain itself isn't in any way involved in creating the blind spot. Take the nerve endings from an eye and pipe them into a computer, it'd have a blind spot. And a shit-load of blood and crap all over the upside-down, blurry image!
But the phenomenon of the blind spot is not complete without the brain, it wouldn't be so interesting if everyone knew about it ;)
That's something else. That's the brain interpreting 3D information from a 2D source that operates exactly as a 2D source should. The moment you force 3D on a 2D screen, the brain has conflicting information which is where we get numerous reports of headaches and nausea from users. Unless one either concludes all these people are lying, or they also have headaches and nausea with 2D, it's quite obvious that forced stereoscopic 3D isn't working properly.
Is it that it is not working properly?

Or is it working exactly as it should and this is the side effect of using it for some/most people? The 3D is not "natural" as you guys are saying.

Two very different things.

The problem with 3D today is that it is not 3D at all. It's two 2D images. That's the part that's not natural. Not the degree of blur, or other special effects, or anything else. It's just not actually 3D.

I'm just trying to get you guys to the correct conclusion of "why" there are these problems, and what is the actual culprit.
What has that got to do with my point about there being very little DOF in human vision? Can I read the rest of this paragraph without directly looking? No. Can I determine that there are actually individual letters in it though? Yes. Photographic DOF blurs these completely out for artistic effect, which is something people never experience in RL. Even the most extreme DOF, focusing at infinity and placing an object in the line of sight a few cm away, is only a few degrees of blur. Photographic DOF exceeds this by an order of magnitude at times.
That's your brain doing most of that actually, it's assuming almost all of it. The peripheral vision acuity is very, very poor. Especially if you are familiar with your surroundings (it makes a difference). You may feel it is sharper than it really is, but I can assure you it is not.

Again, put your finger close to your eye while focusing on the computer screen. You get a very nice, strong blur. The eye is not that much different from a camera; it still uses a lens and obeys all the laws a simple camera does.
But the display does. Every point on the display has a focal distance exactly equal to the eye's natural focal range, because it's a real-life surface. The end result is a picture perceived as a picture. 3D presents a picture that the brain tries to perceive as a 3D space, but the 3D space presented doesn't conform to the eye+brain's rules for spatial and optical interpretation, which causes the strain that's commonly associated with 3D.
That's true. MrFox has made the most accurate comment though. It is the convergence at a very short distance that causes problems. And that's because 3D today is not 3D.
I explained this. Yes, it's because the cinematography doesn't work, but the only cinematography that would work would be a fixed 50 mm focal length and f2 - f8 for every single shot. Or at least preserving the natural depth when using zoom lenses, which should reduce depth perception/stereoscopic separation (which is why binoculars don't cause headaches). The current problem is that 3D authoring treats 3D as having a wow factor, and it's being applied based on the wow factor instead of what works. And ironically, the moment you reduce the wow factor to make it more comfortable, it stops having a great impact on the experience and you're left with an AV story.
I agree, but I attribute that to the directors, not the technology itself.
 
http://www.hdhead.com/?p=280
Vergence/Accommodation Conflict

Can make head hurt.

Vergence/accommodation conflict is often quoted as one of the contributors to why some people experience discomfort when watching stereoscopic 3D (S3D) films or TV.

When we look at a nearby object in real life, we focus our eyes at it (accommodation) and we rotate our eyeballs inward to allow each eye’s gaze to cross at the object (vergence). Distinct groups of muscles are in charge of these two operations and typically work in unison.

However, when we watch an S3D film, we focus at the screen plane, but we may converge our eyes in front of or behind the screen. This decoupling of vergence and accommodation can contribute to eye muscle fatigue.
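To put rough numbers on that decoupling, here's a quick back-of-the-envelope sketch (my own, not from the article; it assumes a ~6.5 cm interpupillary distance and made-up example distances):

```python
import math

IPD_M = 0.065  # assumed interpupillary distance in metres

def vergence_angle_deg(distance_m):
    """Full angle between the two eyes' lines of sight when converged on a
    point at the given distance."""
    return math.degrees(2 * math.atan((IPD_M / 2) / distance_m))

screen_distance = 15.0    # accommodation: the eyes focus on the screen plane
virtual_distance = 5.0    # vergence: the stereo separation puts the object here

print(f"vergence demanded by the screen plane  : {vergence_angle_deg(screen_distance):.2f} deg")
print(f"vergence demanded by the virtual object: {vergence_angle_deg(virtual_distance):.2f} deg")
# The eyes have to converge for ~5 m while staying focused at ~15 m;
# that mismatch is the vergence/accommodation conflict.
```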
Here's a great chart that makes it easy to understand both the edge clipping and the close focusing area where problems arise. The small usable frustum in front is also clear.

http://srujanvfx.com/3d-cinematography.html



[Image: rnd1.png]


Orange is the vergence/accommodation conflict, red is the screen edge clipping error. (Obviously, neither happens in 2D)

Half the distance from the screen is a significant distance in a normal theater, and also in a stereoscopic visor, which essentially projects an image at a 20+ feet distance. But I'm thinking the great advantage of a wide 90-degree visor is that the front "green" frustum is extremely wide. The engine would just clip (or fade out) the entire plane in front of the orange area. Also, the screen edge issue falls comfortably in the peripheral vision unless the user really tries to move his eyes significantly. The normal use of VR seems to be to move the head instead of the eyes; most public demonstrations so far have shown people doing this intuitively.
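As a toy illustration of that "clip (or fade out) the entire plane in front of the orange area" idea, here's a hypothetical sketch; the distances and fade band are made up, and this isn't from any actual engine:

```python
SCREEN_DISTANCE = 6.0                    # assumed virtual screen distance in metres
COMFORT_NEAR = SCREEN_DISTANCE / 2.0     # "half the distance from the screen" rule of thumb
FADE_BAND = 0.5                          # metres over which geometry fades before clipping

def comfort_alpha(depth_m):
    """1.0 inside the comfort zone, 0.0 in front of it, a linear fade in between."""
    if depth_m >= COMFORT_NEAR:
        return 1.0
    if depth_m <= COMFORT_NEAR - FADE_BAND:
        return 0.0
    return (depth_m - (COMFORT_NEAR - FADE_BAND)) / FADE_BAND

for d in (6.0, 3.2, 3.0, 2.8, 2.4):
    print(f"object at {d:3.1f} m -> alpha {comfort_alpha(d):.2f}")
```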
 
The problem with 3D today is that it is not 3D at all. It's two 2D images. That's the part that's not natural.
That's the only part that is natural! We have two optical sensors that sample the world discretely. Unlike the ears that have cross-over of each other's signals, the eyes are completely discrete. 3D is perceived from two independent images that are compared, which can be recreated with 3D displays by presenting separate images to each eye. That's why VR is having such great responses from people, because it does recreate the source signals so well.
I'm just trying to get you guys to the correct conclusion of "why" there are these problems, and what is the actual culprit.
Well you're not doing very well. ;) I haven't seen a contrary argument. Go back to my original comment that you replied to. I'm talking about the faults on screen that aren't how the real world operates. I never said 3D hardware is broken, although there are notable limitations with trying to represent a 3D space spanning tens/hundreds of metres on a screen or two at a fixed distance. You also haven't actually specified your alternative opinion - you've only presented counterargument to views expressed.

That's your brain doing most of that actually, it's assuming almost all of it. The peripheral vision acuity is very, very poor. Especially if you are familiar with your surroundings (it makes a difference). You may feel it is sharper than it really is, but I can assure you it is not.
That doesn't matter. You don't perceive blur as it appears on screen. You gave an example that exhibited blur as we see it in films, but we don't see that blur at all.

The eye is not that much different from a camera; it still uses a lens and obeys all the laws a simple camera does.
Right, I already made this reference. It's approximately a 50mm lens on a 35mm camera with an aperture between f2 and f8.
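To show what that range means for depth of field in practice, here's a rough sketch using the standard thin-lens DOF approximations (the 0.03 mm circle of confusion and the 3 m subject distance are just conventional example values):

```python
def dof_limits_m(focal_mm, f_number, subject_m, coc_mm=0.03):
    """Approximate near/far limits of acceptable sharpness (thin-lens DOF formulas)."""
    s = subject_m * 1000.0                               # work in millimetres
    hyperfocal = focal_mm ** 2 / (f_number * coc_mm) + focal_mm
    near = hyperfocal * s / (hyperfocal + (s - focal_mm))
    if hyperfocal > (s - focal_mm):
        far = hyperfocal * s / (hyperfocal - (s - focal_mm))
    else:
        far = float("inf")
    return near / 1000.0, far / 1000.0                   # back to metres

for aperture in (2.0, 8.0):
    near, far = dof_limits_m(50, aperture, subject_m=3.0)
    print(f"50 mm at f/{aperture:g}, subject at 3 m: sharp from {near:.2f} m to {far:.2f} m")
```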

I agree, but I attribute that to the directors, not the technology itself.
I'm not sure what this particular thread is discussing at this point. The technology allows for all sorts of 3D perspectives to be presented to the viewer. Using those which aren't natural causes a break in the perception and visual stress. Choosing perspectives that do work is the solution, but that's a solution imposed by the technology. An alternative technology, say holodecks, would allow directors to change shots across the full spectrum without causing eye strain, although I've no doubt that stretched perspectives will cause vertigo and/or nausea in many people.

Relating this to Morpheus, for games it'll not be an issue by and large as the game cameras are very simple, although artefacts like CA and DOF will have an impact as already discussed. It will still be an issue watching 3D movies or cinematic cutscenes that apply 2D movie techniques, because you'll end up with an uncomfortable blend of 2D techniques with 3D stereoscopic presentation and the same issues we have watching 3D movies at the cinema. And if directors limit their techniques to those that work with the medium, they'll both create rather boring cinematography and content that doesn't work well in 2D due to being so bland (think home movie shot on a phone at one focal length), limiting its appeal greatly.
 
That's the only part that is natural! We have two optical sensors that sample the world discretely. Unlike the ears that have cross-over of each other's signals, the eyes are completely discrete. 3D is perceived from two independent images that are compared, which can be recreated with 3D displays by presenting separate images to each eye. That's why VR is having such great responses from people, because it does recreate the source signals so well.
As I said earlier, in regular vision 3D is, for the most part, not interpreted through binocular vision. This is the part you are missing.

Binocular vision for 3D only works at close-up distances, at most up to arm's length. It does not work in real life for distance viewing and it is not meant to. Stereopsis is not used for resolving depth at moderate or long distances.

This is the fundamental aspect that is incongruent between stereoscopy and real vision. The 3D we experience in stereoscopy uses binocular vision exclusively. And because of this, changes in focus constantly strain the eyes. @MrFox I misinterpreted your "focus" statement earlier; you were correct on that.

In real life, for the same distances that are trying to be simulated in stereoscopy in movies/games, we do *not* use binocular vision to interpret the 3D. That 3D is processed almost entirely through parallax and relative movement. This is in part why some scenes may appear to have an exaggerated 3D effect, especially in the distance. We do not see 3D through binocular vision at that distance; our heads are not that wide...
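To put numbers on the "our heads are not that wide" point, here's a quick sketch (assuming a 6.5 cm interpupillary distance) of how small the binocular disparity between two points one metre apart in depth becomes as they move away:

```python
import math

IPD_M = 0.065  # assumed interpupillary distance in metres

def relative_disparity_arcmin(distance_m, depth_step_m=1.0):
    """Binocular disparity, in arcminutes, between a point at distance_m and a
    point depth_step_m further away -- the signal stereopsis has to work with."""
    def convergence(d):
        return 2 * math.atan((IPD_M / 2) / d)
    return math.degrees(convergence(distance_m) - convergence(distance_m + depth_step_m)) * 60

for d in (0.5, 2.0, 10.0, 50.0):
    print(f"{d:5.1f} m vs {d + 1:5.1f} m: {relative_disparity_arcmin(d):7.2f} arcmin of disparity")
```

The disparity falls off roughly with the square of the distance, which fits the point that parallax and other cues have to take over beyond a few metres.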

The stress of convergence is the reason people get so many headaches while watching stereoscopic content. Simply merging two 2D images is absolutely not a true representation of 3D. Binocular vision is not the full interpretation of 3D by a person in real life; in fact it is the least important part of 3D interpretation in real life.

That is not at all how the eyes, together with the brain, view the real world. At least not most of it.

Well you're not doing very well. ;) I haven't seen a contrary argument.
That is simply because you are ignoring it. It is all well and good to understand a few basic mechanical aspects of the visual tract, but you are failing to appreciate the real-world application of vision, which is not so well illustrated on Wikipedia. Take it from someone who has studied this topic. A camera analogy alone does not explain the problems with 3D.
Go back to my original comment that you replied to. I'm talking about the faults on screen that aren't how the real world operates. I never said 3D hardware is broken, although there are notable limitations with trying to represent a 3D space spanning tens/hundreds of metres on a screen or two at a fixed distance. You also haven't actually specified your alternative opinion - you've only presented counterargument to views expressed.
The bolded part (trying to represent a 3D space spanning tens/hundreds of metres on a screen or two at a fixed distance) is the problem with stereoscopy, and it is something that can't be engineered out of stereoscopy as it is done today.

When using only two fixed images to represent 3D, especially images resolved so close to the eyes yet meant to represent such a large space, it is an intrinsic problem with the method of 3D as it's done today. It cannot be engineered out unless something fundamental is changed about the method of delivery.
That doesn't matter. You don't perceive blur as it appears on screen. You gave an example that exhibited blur as we see it in films,
Exactly, and it would be the same in a 2D movie or a 3D movie. If you focused on that part of the screen, the resolution would not change, because you as the viewer simply do not direct the focus in either case. The fact that vergence is required in the 3D version shouldn't make you expect the resolving to change. That is an artistic aspect, and a separate discussion. Your complaint about the "tap blur in X-Men" instance is misdirected because of it.

The actual intrinsic issues with stereoscopic 3D are not related to blur, director determined points of interest, or other extraneous special effects.
but we don't see that blur at all.
Incorrect. Your eyes and brain may resolve the image better or worse than a stylized photo or movie (key word being stylized), but that does not mean there is no blur on unfocused objects.

This is completely contrary to real life, though most people don't appreciate how bad peripheral vision is because they don't pay attention to it. The blur in peripheral vision is due to its incredibly poor acuity (the density of rods, and of cones, in the peripheral retina is very sparse, and that blur is directly related to it), while the blur due to focus in the central vision is a typical lens issue that you yourself have shown you can explain quite well.
 
http://www.hdhead.com/?p=280
Here's a great chart that makes it easy to understand both the edge clipping and the close focusing area where problems arise. The small usable frustum in front is also clear.

http://srujanvfx.com/3d-cinematography.html



[Image: rnd1.png]


Orange is the vergence/accommodation conflict, red is the screen edge clipping error. (Obviously, neither happens in 2D)

Half the distance from the screen is a significant distance in a normal theater, and also in a stereoscopic visor, which essentially projects an image at a 20+ feet distance. But I'm thinking the great advantage of a wide 90-degree visor is that the front "green" frustum is extremely wide. The engine would just clip (or fade out) the entire plane in front of the orange area. Also, the screen edge issue falls comfortably in the peripheral vision unless the user really tries to move his eyes significantly. The normal use of VR seems to be to move the head instead of the eyes; most public demonstrations so far have shown people doing this intuitively.
That is a very good representation. And given a chart like that, it's pretty clear that those actually making the 3D really do understand the true limitations of it.
 
Recently, Jim Ryan from Sony called Morpheus a technology exercise, and that being the case it might never get released.

http://www.gamereactor.eu/news/2031...echnology+exercise",+release+still+uncertain/
It's the same cautious statement as usual; the journalist is twisting it into a bit of FUD. Its release depends on many external factors, and it will be released if it works... blah blah blah... etc.

Oculus is coming out earlier, and that's going to make or break the entire industry. I think if Oculus fizzles out, the market is gone, and Morpheus probably won't come out.
 
Binocular vision for 3D only works at close-up distances, at most up to arm's length. It does not work in real life for distance viewing and it is not meant to.
Fair point.

In real life, for the same distances that are trying to be simulated in stereoscopy in movies/games, we do *not* use binocular vision to interpret the 3D.
I agree, in part. You said...
2D movies are easier on the eyes because, at distances of more than several centimeters, binocular vision does not really contribute much to any 3D effect. Beyond this very close distance, it is primarily parallax that contributes to our interpretation of depth.
Stereopsis is good for a few metres, as the high foveal resolution compensates for the low separation between the eyes.

That 3D is processed almost entirely through parallax and relative movement. This is in part why some scenes may appear to have an exaggerated 3D effect, especially in the distance. We do not see 3D through binocular vision at that distance; our heads are not that wide...
It depends on the distance. Visual acuity is high enough that small differences in position can be resolved as depth.

Simply merging two 2D images is absolutely not a true representation of 3D. Binocular vision is not the full interpretation of 3D by a person in real life...
I agree, but it's an accurate 'input' to the visual sense as long as what's displayed on the two screens accurately matches what should be seen by the eyes. This would require foveated rendering I guess, and very accurate head tracking.
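A minimal sketch of what "matching what should be seen by the eyes" amounts to on the rendering side (hypothetical code, not from any SDK): offset one virtual camera per eye from the tracked head pose by half the IPD, and re-render both every frame.

```python
import numpy as np

IPD = 0.064  # assumed interpupillary distance in metres

def eye_positions(head_pos, head_right):
    """Place one virtual camera per eye, offset from the tracked head position
    along the head's right vector by half the interpupillary distance."""
    head_pos = np.asarray(head_pos, dtype=float)
    right = np.asarray(head_right, dtype=float)
    right = right / np.linalg.norm(right)
    return head_pos - right * (IPD / 2), head_pos + right * (IPD / 2)

left_eye, right_eye = eye_positions(head_pos=[0.0, 1.7, 0.0], head_right=[1.0, 0.0, 0.0])
print(left_eye, right_eye)
# Each eye gets its own view/projection matrix; because head_pos and head_right
# come from tracking, parallax from head motion falls out of this automatically.
```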

That is simply because you are ignoring it.
No, it's because you didn't present it. ;) You've actually explained your point clearly in this post by describing the place of stereoscopic vision in the whole faculty of human depth perception. Although I'm still not following the application of the argument in the discussion. Are you saying VR headsets will make for good 3D viewing, or bad, or what?
 
Oculus is coming out earlier, and that's going to make or break the entire industry. I think if Oculus fizzles out, the market is gone, and Morpheus probably won't come out.

Well if Bethesda wins the lawsuit the Rift might never get that chance.
 
No, it's because you didn't present it. ;) You've actually explained your point clearly in this post by describing the place of stereoscopic vision in the whole faculty of human depth perception. Although I'm still not following the application of the argument in the discussion. Are you saying VR headsets will make for good 3D viewing, or bad, or what?

...Fine...

It's just to point out why it is so headache-inducing. We need to know that, because if we want to avoid it or correct it in another product, that's a great thing to know.

The 3D illusion may be very sophisticated with newer editions, and may get even more sophisticated into the future, even using the same system. But improvements in those aspects may not correct the crux of the problem around headaches or eye strain or some other irritation for some/many users.

The only thing that might correct that is a genuinely 3D production, like a hypothetical hologram or a system like the 3D light shows that use lasers against water.
It depends on the distance. Visual acuity is high enough that small differences in position can be resolved as depth.
Acuity is not the correct term for depth. The relative motion of edges is how we perceive depth at a distance, through parallax. There is actually a map in the brain, arranged as a stack of cells, to interpret moderately varying angles of flat edges.
 
Acuity is not the correct term for depth.
Sorry. I just meant the density of the sensors. Acuity probably refers specifically to some optical function.

So in your opinion, what's the immediate future for VR? Is Morpheus et al going to be a bust because it'll strain the brain? Response from OVR seems all round positive (not that I'm following it closely).
 
Sorry. I just meant the density of the sensors. Acuity probably refers specifically to some optical function.

So in your opinion, what's the immediate future for VR? Is Morpheus et al going to be a bust because it'll strain the brain? Response from OVR seems all round positive (not that I'm following it closely).

Actually, my main question is how long we will be sticking to the current versions, which are either two screens right up in your face or stereoscopy using glasses. Because whatever comes after that seems to be very far off.

I don't think the eye strain will be an incredibly big deal, honestly; not in that it won't bother people, but in that I don't see it being a big factor in sales. The excitement for VR with Oculus ever since its beginnings has been very strong, I think.

I think it's one of those things where if the experience is good enough, and I think both Oculus and Morpheus will be able to offer that when they hit the shelves, people will tolerate the problems. I don't think they are a dealbreaker, but it'd be great to see some solution for those problems.

It's kind of like how, for many people including myself, looking at a CRT monitor for a long period of time was much harder than with an LCD. But that's just what it was. Then LCD came along, and for myself at least I have much less of an issue looking at a screen for a while if I have to.

I personally have not had much of a problem with IMAX 3D movies, I found them to be pretty nice. For me it's just becoming a bit less exciting. Maybe 3D games will be more my thing, but I'm not sure at this point...
 
Fun with stereo vision. Experiments, data, analysis and references.

I think real adult scientists wrote this, it seems all scientisty and all. But it's a long read and I'm lazy. :(

http://www.journalofvision.org/content/10/6/19.full
Very interesting paper.

From skimming it, the chart seems to say that under dark conditions, the estimates of distance, using binocular or monocular vision, are much poorer than in light conditions. It is in fact suggesting that there is some binocular component to depth perception.

That might at least be consistent with, though not totally explained by, the fact that cone and foveal vision is poor in dark conditions, whereas the rods which dominate peripheral vision are more light-sensitive (very much related to how we spot objects in our peripheral vision and can dart our eyes towards them).

The more interesting part, though, is the gain from binocular vision at large distances. The experiment appears to test conditions where an LED is up to 250 meters away, and it shows that binocular vision is more accurate, in light OR dark conditions, than monocular vision is (without parallax, I assume) at perceiving the same depth.

The less exciting part, though, is that in any case, at the distances being reviewed, the binocular estimate of depth (also without parallax, I assume) is still very poor. In all cases it appears the subjects very significantly underestimate the distance of the LED, though it also shows significant differences between three of the four conditions.

With parallax, I assume, the depth perception would become far more accurate. So I think the paper is showing good evidence that there is some small binocular component to depth perception even at long distances.
 
Fun with stereo vision. Experiments, data, analysis and references.

I think real adult scientists wrote this, it seems all scientisty and all.
You know it's good science when it's full of words you haven't even got a beginning of interpreting!

Although it seems to set out to show that stereopsis is a more prominent/functional part of human vision than originally thought, and concludes as much, it doesn't weigh the importance of stereopsis against other cues (going by my skimming) and so won't help us determine if an absence of motion-based cues will be off-putting.

To summarise my latest thinking, the dual-screen setup is okay in supporting comfortable 3D viewing as long as the 3D is kept within the virtual comfort zone and motion tracking enables parallax and such. Keeping 3D within the comfort zone is likely very hard though, at least in games like Elder Scrolls. You'll need a 3D range covering real life, at least up to arm's length. I wonder if the 3D engine could be adaptive and expand/contract the depth based on environment?
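One way such an adaptive engine might work, purely as a hypothetical sketch: remap the scene's depth range into the stereoscopic comfort range before computing the per-eye separation, with the mapping recomputed per environment.

```python
def compress_depth_m(scene_depth, scene_near, scene_far,
                     comfort_near=0.75, comfort_far=10.0):
    """Linearly remap a scene depth (in metres) into an assumed comfort range."""
    t = (scene_depth - scene_near) / (scene_far - scene_near)
    t = max(0.0, min(1.0, t))
    return comfort_near + t * (comfort_far - comfort_near)

# An open-field scene spanning 1 m to 500 m squeezed into the comfort zone:
for d in (1, 10, 100, 500):
    print(f"scene depth {d:4d} m -> displayed stereo depth {compress_depth_m(d, 1, 500):5.2f} m")
```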
 
Typed a response and then frigging BSOD'd lmao... back to the drawing board.

These paragraphs explain the results quite well.
As expected, the gains of our observers' depth estimates were significantly greater during binocular viewing compared to monocular viewing (F(1,7) = 16.65, p < 0.01). In these binocular viewing conditions, the gains of the observers' depth estimates were significantly greater during lit foreground conditions compared to viewing these LEDs in darkness (F(1,7) = 38.38, p < 0.001). We had also predicted that depth from disparity would be scaled according to the observation distance in the lit foreground conditions, with greater depths being seen at the larger of the two observation distances. Consistent with this prediction, we found that in lit binocular conditions, the gain was significantly larger for the 40- compared to the 20-m observation distance (F(1,7) = 32.386, p < 0.001). In fact, in these lit binocular conditions, the mean gain was almost twice as large for the 40-m observation distance trials (0.84 compared to 0.44 for the 20-m observation distance trials). As expected, gain was not found to vary significantly with the observation distance in the dark binocular conditions (F(1,7) = 5.69, p = 0.05). Overall, the above findings are consistent with partial stereoscopic depth constancy, as they show that as the observation distance increases, so too does the magnitude of the binocular depth estimate for the same level of disparity. However, binocular estimates of depth were far from veridical in the current experiment, even when the lit foreground of the tunnel provided rich cues to the observation distance. Binocular depth estimates in the light were, on average, 19% and 12% of their physical depths at the 20- and 40-m observation distances, respectively. However, they were considerably better than binocular depth estimates in the dark, which were only 5% and 2% of their physical depths at the 20- and 40-m observation distances, respectively.

As can be seen from Figure 3, binocular depth estimates were found to increase with disparity in both the lit foreground and dark conditions. In addition to the above analyses, we also fitted our depth data for the binocular conditions using Equation 1, with the observation distance as a free parameter. In binocular-lit conditions, the effective scaling distances obtained from these non-linear fits were significantly larger for 40-m observation distance conditions (12.9 ± 0.9 m) than for 20-m observation distance conditions (9.4 ± 0.3 m; 95% confidence intervals reported). This provides further evidence of observation distance-based differences in disparity scaling in the lit foreground conditions. By contrast, in binocular-dark conditions, the effective scaling distances were not significantly different for 40- (5.8 ± 0.5 m) and 20-m (5.5 ± 0.3 m) observation distance conditions. Since no useful information was available about the observation distance in these binocular-dark conditions, it seems likely that the visual system assumed a particular observation distance as the scale factor (e.g., similar to Gogel's notion of a specific distance tendency). Consistent with this notion, the effective scaling distances found for both observation distances in these binocular-dark conditions were very similar and quite close to Gogel's (1965) estimated specific distance tendency (of around 2–4 m).

The first experiment is saying that binocular depth perception, in both light and dark conditions, is significantly better than monocular depth perception in either scenario. It shows the observers significantly underestimate the true depth in all conditions, however.

In the third and fourth experiments, they adjust the binocular horizontal angular disparity (represented in arcminutes, where an arcminute is 1/60 of a degree), which represents the difference between the two eyes' views of the object. They show that when this is increased, the estimate of depth also increases.

This is represented in Figure 8. It also shows that for the same adjustments in horizontal angular disparity, there is a much more profound increase in the perception of depth at greater distances (20 m versus 40 m observation distance).

The gain at the 40 m observation distance is much stronger than at the 20 m observation distance. The same figure shows that at both observation distances, there is still an overall underestimation of the true depth.

How they adjusted this disparity, I do not know.
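For anyone who wants to play with the geometry behind those numbers, the usual small-angle relation between disparity and depth (which I'd guess is roughly what their Equation 1 captures; the 6.5 cm interocular distance and 1 arcmin disparity are just example values) looks like this:

```python
import math

def depth_from_disparity_m(disparity_arcmin, observation_distance_m, interocular_m=0.065):
    """Small-angle approximation: perceived depth interval ~ disparity * D^2 / I."""
    disparity_rad = math.radians(disparity_arcmin / 60.0)
    return disparity_rad * observation_distance_m ** 2 / interocular_m

def gain(estimated_depth_m, physical_depth_m):
    """Gain as used in the paper: estimated depth divided by physical depth."""
    return estimated_depth_m / physical_depth_m

# The same 1 arcmin of disparity corresponds to much more depth at 40 m than at 20 m,
# which is the "disparity scaled by observation distance" idea.
for D in (20.0, 40.0):
    print(f"1 arcmin at {D:.0f} m -> ~{depth_from_disparity_m(1.0, D):.1f} m of depth")
```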

Also, veridical means truthful. Chalk one up for pretentious writing; I've never been an advocate for such writing in research papers :)
 