Seeing the Grids (Score:5, Informative)
by Doc Ruby (173196) on Tuesday April 10, @12:44PM (#18677583)
(http://slashdot.org/~Doc Ruby/journal | Last Journal: Thursday March 31, @02:48PM)
* Keep in mind this article is written in general terms, so you scientists out there don't need to stand in line to file corrections!
I was in the Joint Photographic Experts Group (JPEG) when we invented the popular image format, while working for a digital camera company on an 8Kx8K-pixel (40-bit color) scanner, having studied both the physics of light and the neurology of the visual system in pre-med college. So I'll just jump that line of "scientists" to file this correction.
* It's safe to say, however, that increasing resolution and image refresh rate alone are not enough to provide a startlingly better viewing experience in a typical flat panel or rear projection residential installation.
It's safe to say that only once you've dismissed the scientists who would correct you.
The lockstep TV screen is a sitting duck for the real operation of the eyes & brain, which compensate for relatively low sampling rates with massively parallel async processing in 4D.
Joseph Cornwall's mistake in his article is to talk as if the viewer were a single stationary eye nailed precisely 8' from, and perpendicular to, a 50" flat TV, sampling the picture in perfect sync with the TV's framerate. But the visual system is an oculomotor system: two moving eyes with continuous, asynchronous sampling. Each retinal cell signals at a base rate of about 40Hz. Adjacent neurons drift across different TV pixels coming through the eyes' lenses, each modulating independently and asynchronously under the light. And those neurons are laid out in a stochastic pattern across the retina that will never coincide with a rectangular grid, or with any other regular, linear arrangement.

The visual cortex is composed of layered sheets of neurons that compare adjacent neurons for their own "difference" signal, and that also compare corresponding regions from each eye. The eyes dart, roll and twitch across the image; the head shakes and waves. So the brain winds up getting lots of subsamples of the image. The main artifact of the TV the eye sees is the grid itself, which used to be only a stack of lines (each of nicely continuous color, on analog raster TVs). When the compared retinal neurons are signaling at around 40Hz but at slightly different phase offsets, the cortex sheets can detect that heterodyne at extremely high "beat" frequencies, passing a "buzz" to the rest of the brain that signals a difference where there is none in the original object rendered onto the TV's grid. On top of that, all that neural apparatus is an excellent edge enhancer, both in space (the pixels) and in time (the regular screen refresh).
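Here's a toy numpy sketch of the simplest version of that effect: sample a regular flicker with a regular, slower sampler and you manufacture a "beat" frequency that isn't in the scene at all. The 72Hz flicker and 40Hz sampling rate are just illustrative numbers, and this is plain signal processing, not a retina model.

```python
# Toy demonstration (not a retina model): regular sampling of a regular
# display flicker folds the flicker down to a low "beat" frequency that
# the original scene never contained. All numbers are illustrative.
import numpy as np

refresh_hz = 72.0    # hypothetical display flicker rate
sample_hz  = 40.0    # rough ~40Hz sampling rate attributed to a retinal cell
duration_s = 10.0

t = np.arange(0, duration_s, 1.0 / sample_hz)                # sample instants
luminance = 0.5 + 0.5 * np.sin(2 * np.pi * refresh_hz * t)   # flickering field

spectrum = np.abs(np.fft.rfft(luminance - luminance.mean()))
freqs = np.fft.rfftfreq(len(luminance), d=1.0 / sample_hz)
print(f"strongest frequency seen by the sampler: {freqs[spectrum.argmax()]:.1f} Hz")
# prints ~8.0 Hz: the 72Hz refresh folds down to |72 - 2*40| = 8Hz,
# a slow "buzz" that exists nowhere in the displayed scene.
```

Roughly speaking, if you jitter the sample times instead of keeping them regular, that clean spike smears out into broadband noise, which is the intuition behind the off-grid ideas further down.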
Greater resolution gives the eyes more info to combine into the brain's image. The extra pixels turn the grid from edges into more of a texture, with retinal cells resampling more pixels. The faster refresh rate means each retinal neuron has more chances to catch light coordinated with its async neighbors, averaged by retinal persistence into a single flow of frequency and amplitude modulation along the optic and other nerves.
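To make the subsampling point concrete, here's a 1-D toy (all numbers invented): a smooth scene is quantized onto a coarse 16-pixel "display", then looked at once in rigid grid alignment versus 500 times at random sub-pixel offsets and averaged. The jittered average lands much closer to the underlying scene than the single grid-locked look.

```python
# Toy 1-D sketch of "many jittered subsamples beat one rigid sample".
# Purely illustrative; nothing here models real retinas or displays.
import numpy as np

rng = np.random.default_rng(0)
N_PIX = 16                                        # coarse display resolution
scene = lambda x: np.sin(2 * np.pi * x)           # the "real" continuous scene
pixels = scene((np.arange(N_PIX) + 0.5) / N_PIX)  # scene rendered onto the pixel grid

def look(offset, n=256):
    """Sample the pixelated display at n points, shifted by a fractional offset."""
    x = (np.arange(n) / n + offset) % 1.0
    return pixels[(x * N_PIX).astype(int)]        # value of whichever pixel each point lands in

truth    = scene(np.arange(256) / 256)
aligned  = look(0.0)                              # one rigid, grid-locked look
jittered = np.mean([look(rng.uniform(-0.5, 0.5) / N_PIX) for _ in range(500)], axis=0)

print("mean error, single aligned look:", round(float(np.abs(aligned  - truth).mean()), 4))
print("mean error, 500 jittered looks :", round(float(np.abs(jittered - truth).mean()), 4))
# the averaged jittered looks track the smooth scene several times more closely
```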
In fact, the faster refresh is the best part. That's why I got a 50" 1080p DLP: the micromirrors can flip thousands of times a second (LCD doesn't help here, and plasma has its own pros and cons). 1920x1080 is about 2.07Mpxl, which at 24bit is roughly 49.8Mb per frame. A 30Hz refresh would need about 1.5Gbps. But the HDMI cable delivering the image to the DLP carries 10.2Gbps, so that's headroom for over 200FPS. I'm sure we'll see better video across at least most of that range, if not all of it. What I'd really like to see is async DLP micromirrors that flip off the "frame grid": at first probably just some displacement from the frame boundary, especially if the displacement changes unpredictably with each flip; later maybe a fully stochastic shift - all to make the image flow more continuously, rather than offering a steady beat the brain/eyes can detect. And also a stochastic distribution of the mirrors (or their projected pixels). The more the projector goes off the time/space grid, the more readily the eyes will send the image to our imaginations without passing along the mesh packaging.
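The back-of-the-envelope math, for anyone checking my numbers. One caveat: 10.2Gbps is the raw HDMI 1.3-era link rate; 8b/10b coding and blanking intervals eat into it, so treat the result as an upper bound rather than a spec.

```python
# Frame-rate headroom implied by the raw link rate quoted above. This ignores
# TMDS 8b/10b coding overhead and blanking intervals, so the real ceiling is lower.
width, height  = 1920, 1080      # 1080p panel
bits_per_pixel = 24              # 8 bits per RGB channel
link_gbps      = 10.2            # raw HDMI 1.3-era TMDS rate

bits_per_frame = width * height * bits_per_pixel             # ~49.8 Mbit per frame
max_fps = link_gbps * 1e9 / bits_per_frame
print(f"{bits_per_frame / 1e6:.2f} Mbit/frame -> up to ~{max_fps:.0f} frames/s")
# -> 49.77 Mbit/frame, about 205 frames/s of raw headroom
```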
If only the TV content were improving as fast as the TVs themselves.