Physical limitations of the human vision system and the impact on display tech

What you're mixing up is wave frequency versus event distribution frequencies.
Yes I was wrong and you're correct about this.
The thing is, since our displays are limited to <200 Hz we can't really test this, but perhaps we could use sound as an analogous way of simulating it: with regard to sound hardware versus monitor hardware, we have much better temporal resolution on our PCs.
I wouldn't mind creating an aural version of the visual test.
Anyone have any ideas what would be a good aural replacement? And I'll have a go at creating it.
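If it helps, here's a minimal sketch of one possible aural analogue (not a claim that it's the right test): render two click trains at 96 kHz, one evenly spaced and one with jittered spacing, so they can be A/B'd on sound hardware whose temporal resolution far exceeds any monitor. The sample rate, click rate, and jitter amount are arbitrary choices.

```python
# Minimal sketch of a possible aural analogue: render two click trains at 96 kHz,
# one evenly spaced and one with a jittered ("uneven cadence") spacing, so you can
# A/B them on sound hardware whose temporal resolution far exceeds any monitor.
# The specific rates and jitter amounts are arbitrary choices for illustration.
import wave
import numpy as np

SAMPLE_RATE = 96_000   # assumed; most PC audio interfaces support this
DURATION_S = 2.0
CLICK_RATE_HZ = 500    # clicks per second, well above any display refresh rate
CLICK_LEN = 8          # click length in samples (~83 microseconds)

def click_train(jitter_fraction: float) -> np.ndarray:
    """Return a mono float array containing a train of short clicks.
    jitter_fraction=0 gives even spacing; >0 randomly perturbs each interval."""
    n_samples = int(SAMPLE_RATE * DURATION_S)
    signal = np.zeros(n_samples)
    period = SAMPLE_RATE / CLICK_RATE_HZ
    rng = np.random.default_rng(0)
    t = 0.0
    while t + CLICK_LEN < n_samples:
        start = int(t)
        signal[start:start + CLICK_LEN] = 1.0   # a rectangular click
        t += period * (1.0 + jitter_fraction * rng.uniform(-1, 1))
    return signal

def write_wav(path: str, signal: np.ndarray) -> None:
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)                       # 16-bit PCM
        w.setframerate(SAMPLE_RATE)
        w.writeframes((signal * 32767 * 0.5).astype(np.int16).tobytes())

write_wav("even_clicks.wav", click_train(0.0))
write_wav("jittered_clicks.wav", click_train(0.3))
```

Whether something like that actually probes the same mechanism as the visual test is exactly the question, of course.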
 
It's hard to visualize a practical analogous test, as the eyes and the stuff they're detecting have so much more dimensionality. An ear is a single detector, which directly relays frequency information. A "sound event" "analogous" to a photon would be some arbitrary, wacky construct, and given our 20 kHz limit it would have to have a long enough duration that, if such events were occurring at high rates, you'd have a bunch of them interacting simultaneously with a single receptor and potentially doing things like cancelling each other out. It's not valid to define an event as something shorter, because then its components would land outside the frequency range of the ear's receptors, unlike a light photon, which by definition has a frequency that trips receptors in the eye.

It's a very different situation than the eye, where the event frequency is in the hundreds of terahertz (if you aren't treating it like a discrete interaction), and the artifact in question is the events triggering different receptors.
 
It's not a brain interpolation thing. It's smudging due to eyeball exposure.
I concluded it's likely both. You get a blur in either case:

1) You expect an object to be at location A but the screen shows it still at location B. Your eye tracks between these two locations and you see the shutter-time/persistence-of-vision/rotational blurring you describe.
2) You look only at the location where the object is still shown on the screen, and you still see the blur.

You can verify #2 with eye tracking. Watch a test scene containing an object that is blurred; it will maintain a fixed amount of blur the whole time it is moving at a constant speed. Now use an eye tracker to film where you look while watching the scene. For the persistance of motion model #1 above to be the only thing going on it would require your eye to track linearly between the predicted location and the displayed location of the objects. If it moved to some other location in between, that wouldn't work, because your eye would blur along the direction of motion of the eye. When you look at the eye-tracking output you'll notice that your eye doesn't track linearly along the path of all the blurred moving objects in the scene, so the effect isn't simply like leaving the shutter open on a camera. If it were, it'd be more like the blur caused by swinging the camera around your head, as the eye tracks very erratically and quickly around the screen with a speed and motion not very well related to the motion of the objects in the scene.
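For what it's worth, here's a rough sketch of that kind of check, assuming the eye tracker's gaze samples and the test scene's object positions have been exported to CSV files (the file names and the t, x, y column layout are hypothetical stand-ins for whatever your tracker and scene export):

```python
# Rough sketch of the check described above: compare the eye tracker's gaze path
# against the on-screen object path and report how often the gaze is actually moving
# along the object's direction of motion. The CSV names and columns (t, x, y) are
# hypothetical stand-ins for whatever your eye tracker and test scene export.
import numpy as np

gaze = np.loadtxt("gaze.csv", delimiter=",", skiprows=1)        # columns: t, x, y
obj = np.loadtxt("object_path.csv", delimiter=",", skiprows=1)  # columns: t, x, y

# Resample the object path onto the gaze timestamps so the two can be compared.
obj_x = np.interp(gaze[:, 0], obj[:, 0], obj[:, 1])
obj_y = np.interp(gaze[:, 0], obj[:, 0], obj[:, 2])

gaze_vel = np.diff(gaze[:, 1:3], axis=0)
obj_vel = np.column_stack([np.diff(obj_x), np.diff(obj_y)])

# Cosine similarity between gaze velocity and object velocity per sample.
dot = (gaze_vel * obj_vel).sum(axis=1)
norms = np.linalg.norm(gaze_vel, axis=1) * np.linalg.norm(obj_vel, axis=1)
valid = norms > 1e-9
aligned = dot[valid] / norms[valid]

# If "open shutter" tracking were the whole story, the gaze would spend most of its
# time moving along the object's path (similarity near +1).
print("fraction of samples with gaze moving along the object path:",
      np.mean(aligned > 0.9))
```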

The best explanation seemed to be that your brain actually fills in the objects at their predicted locations regardless of whether you look at them or not, i.e. it builds up a full image but only observes tiny fractions of that image with the eye. The blurring happens only in the presence of information that disagrees with the predicted image, though. Blurring doesn't require the eye to actually visit both locations, and if it doesn't visit both locations and track across the intermediate points, then the open-shutter model is insufficient to describe what's happening when we see blur.

Your eyeball tracks around a screen far, far faster than the objects move around the picture. It doesn't have time to visit every part of the screen, and your brain is literally in-painting the image for you.

It's pretty much exactly the same thing as sample-and-hold, but with a flickered image, meaning the artifact resolves as separated ghosts rather than a continuous smudge.
Correct... seemed worth mentioning in a display thread as its relevance is that it's a sample and hold type effect that's present on screens that don't utilise sample and hold. Which is probably non obvious?

Uneven cadence doesn't in and of itself cause blurring.
It's actually a clearly visible effect through experimentation, but the theory's a bit shaky. The best I could come up with is that it's closely tied in with point 1 above. Given you don't seem to agree that the brain creates an image without needing your eye to actually look at a location, you probably won't agree with this one. If, however, you accept premise 1 for a moment (i.e. your brain creates an image of the object at its predicted location, which is then blurred against the displayed image) you'll see why. The result of an uneven cadence is that the brain doesn't know exactly where to place the object. It doesn't compromise on a 3:2 cadence by placing the object at 2.5 units along its path, but rather creates two versions of the object (one at 2 units along the path and one at 3 units along the path). It then blurs both of these predicted objects with the displayed object location, giving an increased perception of blurring.
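As a toy illustration of that disagreement (not the original experiment), here's a quick calculation of where a 3:2 pulldown of 24 fps content actually places a constantly moving object on a 60 Hz display versus where constant-velocity prediction would put it at each refresh; the speed is an arbitrary number:

```python
# Toy calculation (not the original experiment): where does a 3:2 pulldown of
# 24 fps content actually place a constantly moving object on a 60 Hz display,
# versus where constant-velocity prediction says it should be at each refresh?
SPEED = 10          # pixels of motion per source (24 fps) frame, arbitrary
CADENCE = [3, 2]    # 3:2 pulldown: source frames held for 3, 2, 3, 2... refreshes

refresh = 0
for src_frame in range(8):
    displayed = src_frame * SPEED
    for _ in range(CADENCE[src_frame % 2]):
        ideal = refresh * SPEED * 24 / 60   # where smooth motion would put it
        print(f"refresh {refresh:2d}: displayed {displayed:4.0f} px, "
              f"ideal {ideal:6.1f} px, error {displayed - ideal:+5.1f} px")
        refresh += 1
```

The error between displayed and predicted position cycles through several different values rather than staying constant, which is the kind of disagreement described above.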


PS: I think we largely agree on all fronts and what you're describing is exactly correct. The only difference is that I found you could see blurring even when you didn't actually look at both locations. The scientific literature was miles behind the engineering when I was looking at this 5-6 years ago, but perhaps it's moved on significantly since. The brain in-painting the object was the only explanation I could come up with that fits the real-world results, which don't require your eye to track along the path of the blur. In reality your eye doesn't track along the directions the blur occurs in, which would be necessary for the purely rotational/persistence-of-vision cause you're describing. Persistence blur is a great model of what's going on - it's just not complete. I'm not 100% sure my model is correct either, but it does cover a fairly fundamental flaw in the persistence blur model.
 
For the persistance of motion model #1 above to be the only thing going on it would require your eye to track linearly between the predicted location and the displayed location of the objects.
The persistence model predicts that the eye is working in such a way that moving objects that aren't being eye-tracked will experience "real" motion blur, thus no moving object will ever appear crystal clear on a SAH display.

Given you don't seem to agree that the brain creates an image in the absence of needing your eye to actually look at a location
I agree that there are phenomena that fit that description, I'm just not convinced that you've been exhaustive.

As far as judder causing blur, are there good discussions on this? I've never noticed it, but as far as consistent judders go, I haven't experienced much where I'm sure the source video quality was good enough that subtle blurrings would be noticeable, heh.
(And what I'm particularly interested in is if it's measurable with different judder patterns or lack of patterns, and if it's most prominent at certain framerates. For instance, if it's strongest with patterned judders and is still notable at quite high framerates, this would support your suggestion; if it's strongest at low framerates, it could be that what's going on is eye-tracking is being shaken up, and the pure persistence model is still plausible.)

Correct... seemed worth mentioning in a display thread as its relevance is that it's a sample and hold type effect that's present on screens that don't utilise sample and hold. Which is probably non obvious?
Maybe. I guess pondering low-framerate video has left a chunk of my mind thinking of SAH and doubling and even interlacing (in the case of 30fps->60Hz conversion) as slight variations of the same thing, all being different forms of smearing in the gap where flicker would otherwise exist. But that's tangential to the discussion, my wording was probably odd as my mind drifted in from a different discussion long ago.
 
I came at all of this from a slightly weird angle, building TVs before HD content was really available, at a time when it looked like plasma and laser would become the dominant high-end display technologies (LCDs were pretty rubbish around the start of the early 720p / "HD Ready" era). I haven't looked at it in many years.

As far as judder causing blur, are there good discussions on this? I've never noticed it, but as far as consistent judders go, I haven't experienced much where I'm sure the source video quality was good enough that subtle blurrings would be noticeable, heh.
I didn't find a lot of detailed papers on any of this stuff, I'm afraid... Most of the psychology/medical literature was simply wrong, as it was written long before it was possible to interactively work with >100fps images. From memory, I could reliably reproduce the increased blurring effect by taking a massive image and creating an artificial video panning across it, using a tool like avisynth to control the cadence. Play the video back uncompressed (a much easier task now that SSDs are available!) and you should see a blur-like effect that varies with the cadence used. It's hard to work out whether it's the blur or just the juddering that's the most distracting.
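Something like the following would be a rough modern stand-in for that avisynth setup (Python/Pillow rather than avisynth, with "source.png", the window size, speed, and cadence as placeholder choices); it writes numbered frames you can then assemble into an uncompressed clip at the target refresh rate with whatever tool you prefer:

```python
# Sketch of the panning test (in Python/Pillow rather than avisynth): crop a window
# that slides across a large still image, repeating each unique position according
# to a chosen cadence, and dump the result as numbered frames for uncompressed playback.
# "source.png", the window size, speed, and cadence are placeholder choices.
from PIL import Image

SRC = Image.open("source.png")          # a very large, detailed still image
WINDOW_W, WINDOW_H = 1280, 720
SPEED_PX = 8                            # horizontal pan per unique position
CADENCE = [3, 2]                        # repeats per unique position, e.g. 3:2

frame_no = 0
x = 0
pos_index = 0
while x + WINDOW_W <= SRC.width:
    frame = SRC.crop((x, 0, x + WINDOW_W, WINDOW_H))
    for _ in range(CADENCE[pos_index % len(CADENCE)]):
        frame.save(f"frame_{frame_no:05d}.png")
        frame_no += 1
    x += SPEED_PX
    pos_index += 1
```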

If you take a 24fps input stream and then create two 120fps variants with frame repeat (one with a 5:5 cadence, one with a 6:4 cadence) you should see the uneven cadence version will look MUCH worse. We struggled to get consensus on why it looked worse and the terminology for what was juddering, blur, etc gets a bit muddy as it seems different people are more sensitive to different artifacts. Even more confusingly, "expert" viewers are radically more capable of seeing problems than people who don't know what they're looking for even if they don't know how to describe what they're seeing.
 
In fact, ignore that cadence example. The blurring in the example I gave would be attributable to normal sample-and-hold blur. I think it'd be better illustrated by 3,2,3,2 vs 3,3,2,2 cadences, but I can't remember the experiment anymore.
 
Every time a photoreceptor cell is activated, a whole internal chemical cascade has to happen, and it takes time for it to occur and for the cell to be restored to its original state. All of this takes considerable time.

The signals from the eye are sent by action potentials at rates that, if I'm not mistaken, are usually significantly under 100 per second. An action potential lasts about 1 millisecond and the refractory period after it about 2 milliseconds; metabolically, the system also can't handle sustained rates of hundreds of signals per second.

There are estimated bit rates for the optic nerve, and they most certainly do not show infinite framerate.
From these recordings, the researchers calculated that a guinea pig retina transfers data at about 875 kilobits per second. Human retinas have about ten times as many ganglion cells, giving a “bandwidth” of 8.75 megabits per second.

But it could be faster, says physicist Vijay Balasubramanian, one of the study's authors. "Each neuron is capable of firing close to once a millisecond, but average activity is only four times a second," he says. "You have to ask: Why is this?"
https://www.newscientist.com/article/dn9633-calculating-the-speed-of-sight/
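The arithmetic behind those quoted figures is just a scaling by the ganglion-cell count:

```python
# The arithmetic behind the quoted figures: scale the measured guinea pig retina
# rate by the (roughly tenfold) difference in ganglion cell count.
guinea_pig_kbps = 875
ganglion_ratio = 10            # human retina has ~10x as many ganglion cells
human_mbps = guinea_pig_kbps * ganglion_ratio / 1000
print(f"estimated human retina bandwidth: {human_mbps:.2f} Mbit/s")  # ~8.75
```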

When stimuli are placed nearby, or right at the same location, in close temporal proximity to a short stimulus, the result can be masking: that stimulus is not perceived at all.

Also, if there is a short enough distance in time and space, the brain can and will hallucinate an interpolation, for example between two different-colored lights (it will hallucinate intervening moments where the light moves from one location to another and changes color along the way).

Further, I've heard that one of the thalamus's functions is to step down the already low rate of spikes to a more manageable level by retransmitting at an even lower rate.

The brain itself also tends to send signals between areas - for example between visual areas - at under 100 spikes per second, if I'm not mistaken, and there are known estimates for the bit rates of spike trains.

Even the wagon wheel illusion can occur under continuous illumination, something that many take to suggest that perception operates in discrete frames.

Heck, just take your hand and move it left and right with a bit of speed, and you'll start seeing multiple images of the hand, even at such low speeds. The same goes for watching rain hitting a puddle: pay attention and it looks like a series of still frames one after another, way different from what you see when water hits a body of water on one of those very-high-frame-rate cameras (which, by the way, catch a lot of details that are basically imperceptible, e.g. a balloon exploding).

This is not what you experience when you see a bubble burst.
What you experience is the bubble basically disappearing instantly when it bursts open.

I'm very very doubtful the eye can distinguish between video(s) at hundreds of frames per second let alone at infinite frames per second.
 
I'm very very doubtful the eye can distinguish between video(s) at hundreds of frames per second let alone at infinite frames per second.
Nobody's claimed that. If framerate is too high, several (or even many) frames will get baked together by the eye/brain. If there's a lot of movement in those frames - like if you're fast-forwarding a movie - the end result will look like a messy blur and be indecipherable.

Basically, however, at no point is there such a thing as a flash too fast to be seen, assuming it releases sufficient photons to trigger the eye's receptors. But such events would occur at a speed far beyond the human ability to judge their duration - you'd perceive a brief, momentary burst of light (and then maybe a negative afterimage), and it'd feel the same length regardless of whether it was short or extremely short...
 
Nobody's claimed that. If framerate is too high, several (or even many) frames will get baked together by the eye/brain. If there's a lot of movement in those frames - like if you're fast-forwarding a movie - the end result will look like a messy blur and be indecipherable.

Basically, however, at no point is there such a thing as a flash too fast to be seen, assuming it releases sufficient photons to trigger the eye's receptors. But such events would occur at a speed far beyond the human ability to judge their duration - you'd perceive a brief, momentary burst of light (and then maybe a negative afterimage), and it'd feel the same length regardless of whether it was short or extremely short...

Well, perhaps for simple flashes, and they'd have to involve quite a few photons to even be detectable if you're under bright illumination. Depending on the spacing between flashes it might not be possible to tell their order, or they might all seem simultaneous. For a series of consecutive fast flashes there will be a limit to the size of the apparent sequence within one second; you will never be able to, say, experience a sequence of a thousand discrete events within that time span.

Also, if such a flash is of short duration and is located within a region of the visual field where it is preceded and followed by other complex stimuli (masking stimuli), I believe it is likely it won't even be perceived (at least that is the case for faces, images, words, etc.). For display technologies where you have a constant stream of complex images, this type of phenomenon likely affects what can be distinguished within a few frames (e.g. the phenomenon of subliminal imagery, where even complex images embedded in video are not consciously perceived).
 
Yes, but even with the delay the receptors and brain have in receiving, processing, and resetting for each image: the higher the frames per second of a display, the closer it'll match the natural cadence of human vision (which will vary from person to person to some extent).

At lower refresh rates your eye will easily note the intervals between frames. The higher you go, the smoother the perceived motion. Once you go above the eye's threshold for tracking every single individual frame, the eye will be picking up frames as it receives, processes, and resets.

So, for example, let's make some extremely simple assumptions. Say one individual's eye has an artificial limit of being able to process 78 individual frames per second. A 60 Hz stream wouldn't be smooth, as each refresh isn't hitting on the same cycles on which the eye is processing everything else surrounding the display, but it'd certainly be smoother than a 30 Hz stream. A 120 Hz stream would be too high for the eye to process each and every frame; however, each frame it does pick up will be unique, leading to a relatively smoother perception of motion. Still not perfect, as it isn't catching an even multiple of 78 frames.

We're unlikely to ever see a display that does an even multiple of that arbitrary 78. However, the higher the frequency of images, and the closer to a smooth, even transition between the frame currently being processed and the next frame to be processed, the closer to smooth perception of motion the eye and brain will have. Hence, even if the frequency with which the display is updated exceeds the hypothetical limits of the human visual system, it will still appear perceptibly smoother, up until the point where the irregularity of the frames picked up is close enough to the ideal that it is indistinguishable from the ideal.
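Purely as an illustration of that back-of-the-envelope model (not a claim about how the eye actually samples), here is the same reasoning as a toy simulation: an "eye" sampling at the arbitrary 78 Hz watching sample-and-hold streams at 30, 60, and 120 Hz, counting how many distinct frames it picks up per second and how uneven the steps between them are:

```python
# Toy version of the back-of-the-envelope model above: an "eye" that samples at an
# arbitrary 78 Hz watching sample-and-hold streams at various refresh rates. Count
# how many distinct frames it actually picks up each second and how uneven the
# frame-to-frame steps look. This is purely illustrative, not a vision model.
def sampled_frames(display_hz: float, eye_hz: float = 78.0, seconds: float = 1.0):
    samples = int(eye_hz * seconds)
    # Frame on screen at each eye-sample instant (sample-and-hold display).
    return [int((k / eye_hz) * display_hz) for k in range(samples)]

for hz in (30, 60, 120):
    frames = sampled_frames(hz)
    steps = [b - a for a, b in zip(frames, frames[1:])]
    print(f"{hz:3d} Hz display: {len(set(frames))} distinct frames seen per second, "
          f"step sizes seen: {sorted(set(steps))}")
```

At 120 Hz every sample lands on a unique frame, but the step sizes alternate between 1 and 2 frames, which is the "not an even multiple of 78" irregularity described above.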

Of course, that's all extremely simplistic. The eye has the ability to capture images out of sequence (whatever you wish to call it). The examples of the eye being able to react to a camera flash that happens in 1/1000th of a second, or a single tracer round fired out of an automatic cannon spewing out 6000+ rounds per minute, show that the eye and brain can perceive things that happen extremely quickly, regardless of what the eye is processing at the moment those incidents occur. That kind of goes back to something mentioned earlier in the thread. Yes, it is possible for the eye to catch a bullet in flight. The bullet, of course, is traveling so fast that it'll trigger multiple receptors along its path of flight, thus appearing more as a streak of light than an individual bullet. But the eye will ALWAYS catch that single individual bullet, except in the case below.

Of course, it also can't distinguish between multiple instances of those happening in very quick succession. If enough of those 6000+ rounds fired in a minute are tracer rounds, it'll appear as a stream of light rather than a single streak of light. If there are enough of those 1/1000th-of-a-second flashes in a second, it'll eventually appear as a continuous light source.

And yes, as HTupolev mentioned, the faster the visual event (an isolated visual event, not a continuous stream), the brighter it has to be to register. Hence a single tracer round is easy for any human being to see, but a regular bullet isn't. A camera flash is easy for anyone to perceive, but a normal household light bulb flashing for 1/1000th of a second may or may not be perceptible depending on the ambient lighting conditions.
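That intensity/duration trade-off for brief flashes is often summarized by Bloch's law (intensity times duration is roughly constant at threshold, for flashes shorter than about 100 ms). As a toy calculation with made-up reference numbers:

```python
# Bloch's law (roughly: intensity x duration is constant at detection threshold for
# flashes shorter than ~100 ms) captures the "shorter means it must be brighter" point.
# Toy numbers only: scale a reference flash that is just detectable at 100 ms.
REFERENCE_MS = 100.0
for duration_ms in (100.0, 10.0, 1.0, 0.1):
    relative_intensity = REFERENCE_MS / duration_ms
    print(f"{duration_ms:6.1f} ms flash needs ~{relative_intensity:6.0f}x the "
          f"intensity of the 100 ms flash to stay at threshold")
```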

Regards,
SB
 
It would be really interesting to see the difference between 24, 30, 48, 60, 120, 144 fps videos. Maybe one could construct something like that using slow motion videos?
 
It would be really interesting to see the difference between 24, 30, 48, 60, 120, 144 fps videos. Maybe one could construct something like that using slow motion videos?

But one'd still need a proper way to show these videos. And some kind of a "temporal low pass filter" might be required.
 
But one'd still need a proper way to show these videos. And some kind of a "temporal low pass filter" might be required.

There are 144 Hz monitors right now. Also, media players such as Mplayer should be able to play such video, although I suspect anything over 60 fps is not tested.

I do not understand what the filter you mention should do.
 
I do not understand what the filter you mention should do.

We need something that can make a proper reconstruction of the signals in the time domain. For current display technologies, if you look at the time domain, what they do is basically sample-and-hold, which creates an aliased reconstruction. This is not unlike zooming in on a picture so that every pixel becomes a square and the whole thing looks like a mess.

So basically each pixel needs to be programmed to light up and go dark a little more slowly, according to the incoming signal. However, current LCD pixels probably do not react fast enough to do that (some TN panels are probably fast enough).
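On the software side, the same idea can be prototyped as a temporal low-pass filter over a high-framerate master, where each output frame is a weighted blend of neighbouring frames rather than a single held sample. A minimal sketch, with the frame array and the triangular kernel width as placeholder choices:

```python
# Sketch of a software "temporal low-pass filter": instead of sample-and-hold
# (each output frame = one input frame, held), each output frame is a weighted
# blend of neighbouring frames from a high-framerate master, here with a simple
# triangular kernel. Frame arrays and the kernel width are placeholder choices.
import numpy as np

def temporal_lowpass(frames: np.ndarray, kernel_radius: int = 2) -> np.ndarray:
    """frames: array of shape (n_frames, height, width[, channels]), float."""
    offsets = np.arange(-kernel_radius, kernel_radius + 1)
    weights = (kernel_radius + 1 - np.abs(offsets)).astype(float)  # triangular
    weights /= weights.sum()

    out = np.zeros_like(frames)
    n = len(frames)
    for i in range(n):
        for off, w in zip(offsets, weights):
            j = min(max(i + off, 0), n - 1)      # clamp at the clip boundaries
            out[i] += w * frames[j]
    return out

# e.g. smooth a synthetic 240-frame master before decimating it to a lower rate
master = np.random.rand(240, 72, 128)            # stand-in for real frames
smoothed = temporal_lowpass(master, kernel_radius=2)
```

Doing the equivalent in hardware would mean each pixel ramping its output according to the filtered signal instead of stepping, which is what the response-time concern above is about.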
 
But one'd still need a proper way to show these videos. And some kind of a "temporal low pass filter" might be required.

There are 144 Hz monitors right now. Also, media players such as Mplayer should be able to play such video, although I suspect anything over 60 fps is not tested.

I do not understand what the filter you mention should do.

I would like it if 480p 60fps video were available, for a start. YouTube has 720p 60fps, but it is so incredibly CPU-hungry in a web browser (due to severe inefficiencies of software-decoded web video) that it's just slower or jerkier than the regular 720p version. Playing it in VLC or another player would fix the problem instantly, but video downloaders and/or media players are confused by the "advanced" streaming format (maybe there is bleeding-edge software that can deal with that; it would be a pain to run under most Linux installations...).

With the right hardware and OS, I guess you can play the 60fps videos without giving it a second thought.
Perhaps some hardware can play (accelerated) 30 fps video but not 60 fps video? Having to upgrade hardware because of a browser performance deficiency is a hurdle.
With no hardware-accelerated video... it's a safer bet to run a separate browser instance to guarantee smooth playback of even 360p video.
 
"Regardless of this, simple maths shows that motion of the camera or of objects within the scene at speeds higher than three pixels per field/frame eliminates all of the additional detail gained by the use of high definition, in the direction of motion. This effect is illustrated in Fig. 2. These problems will be compounded by any future increases in the spatial resolution of television."

Interesting stuff!
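The "simple maths" referred to is easy to sanity-check: on a full-persistence sample-and-hold display with the eye tracking the motion, the retinal smear is roughly the distance moved per frame, so detail finer than that is lost along the direction of motion. A quick illustrative calculation (the frame rate and speeds below are example numbers; the 3 px/frame threshold is the quote's):

```python
# Quick check on the quoted "3 pixels per frame" point: on a sample-and-hold display
# with the eye tracking the motion, the retinal smear is roughly the distance the
# object moves during one frame. Any detail finer than that smear is lost along the
# direction of motion. The display/speed numbers below are just examples.
FRAME_RATE = 50                 # fields/frames per second, as in the quoted TV context
for speed_px_per_frame in (1, 3, 6, 12):
    speed_px_per_s = speed_px_per_frame * FRAME_RATE
    smear_px = speed_px_per_frame          # full-persistence hold = one frame of motion
    print(f"{speed_px_per_s:4d} px/s ({speed_px_per_frame:2d} px/frame): "
          f"detail finer than ~{smear_px} px is smeared away")
```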
 
Physical limitations of the human vision system and it's impact on display tech

Its impact. Its. Please. It's its, not it's. Please. Before I hurt myself.
 