Bigus Dickus said:
Not that I disagree with you Chalnoth, but I'll point out that a noticeable drop in brightness != black. It could be, but that isn't a necessary conclusion.
It is. I essentially confirmed it myself with my previous experiment. As I said, if waving an object quickly in front of the screen (it could really be any object, though narrow ones work best) results in multiple discrete images, then that means the screen goes completely black between frames, and very quickly (completely black meaning it is no longer emitting any light).
If the screen merely got "mostly dark" between frames, then the above would still yield a blur.
In the end, I think I've shown pretty well that there's a big difference between the "flicker frequency" and what is required to not detect a large color difference over smaller time increments. This is similar to the reasoning behind film and TV having a higher screen refresh rate than frame rate. Remember that film is on a moving reel, so there must be a fast-moving shutter that only displays each frame for a split second in order for the picture on the movie screen to not be in motion. In both situations, for different reasons, the display rate has to stay above a base "flicker frequency", but we do not necessarily need frame updates that quickly to get a good idea of movement.
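To put rough numbers on that distinction (the 2- and 3-blade shutter figures below are the commonly cited ones for film projection, not something from this thread):

```python
# Film runs at 24 frames per second, but a multi-blade shutter flashes
# each frame more than once, pushing the flash rate above the flicker
# threshold without adding any new motion information.
frame_rate = 24            # distinct images per second (motion updates)
for blades in (2, 3):      # double- and triple-blade shutters
    flash_rate = frame_rate * blades
    print(f"{blades}-blade shutter: {flash_rate} flashes/s, "
          f"still only {frame_rate} distinct frames/s")
```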
This all harks back to the idea that there are two sorts of receptors in our eyes, rods and cones. I believe it's the rods that detect only brightness, and they tend to respond much more quickly and are more sensitive than cones. In other words, our eyes are more sensitive to flicker if there is very high contrast. If the contrast is low, the cones take over, and they are much slower to respond (in the range of 1/8th of a second), resulting in a much smoother image.
But for FSAA, a changing sample pattern should improve the effective number of samples, provided that the contrast of roughly 1/8th of a second's worth of averaged frames does not change appreciably.
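Here's a quick sanity check of that claim (the sample counts are just placeholders for illustration): if the eye effectively averages about 8 frames, then re-randomizing the sample pattern every frame should make k samples per frame behave roughly like 8k samples in the averaged result.

```python
import random

def pixel_estimate(samples_per_frame, coverage=0.5):
    """One frame's AA value for a pixel whose true coverage is `coverage`:
    each random sub-pixel sample lands on white (1) or black (0)."""
    hits = sum(1 for _ in range(samples_per_frame)
               if random.random() < coverage)
    return hits / samples_per_frame

def eye_average(samples_per_frame, frames=8, trials=20000):
    """Std deviation of the value the eye sees after averaging `frames`
    frames, each rendered with a fresh random sample pattern."""
    vals = [sum(pixel_estimate(samples_per_frame) for _ in range(frames)) / frames
            for _ in range(trials)]
    mean = sum(vals) / trials
    return (sum((v - mean) ** 2 for v in vals) / trials) ** 0.5

# With 4 samples/frame, 8 averaged frames should look close to a single
# frame rendered with 32 samples: std about 1/2 / sqrt(32).
print(eye_average(4))          # measured
print(0.5 / (4 * 8) ** 0.5)    # predicted
```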
As an example, with the idea I gave above, imagine a pixel that is half black and half white. Let's say we set a threshold of approximately 1/256 color difference as seen by the eye (and at 60 fps, the eye's roughly 1/8th-second averaging window is about 8 frames). If white is 1 and black is 0, then the standard deviation of each sample is 1/2. The standard deviation as seen by our eyes will be 1/2 / sqrt(n), where n is the total number of samples averaged in our eyes. If we want the standard deviation to be no larger than 1/256, then n = (256/2)^2 = 16384. With 8 frames averaged in the eye, this would require 2048 samples per pixel per frame. So perhaps the major flaw here is that it will take too much to get to the point where the changing of pixels over the 8-frame period is small enough not to be detected. Eventually it may happen, but most likely that will be with motion blur also taken into account.
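The arithmetic above, written out (nothing here beyond what the paragraph already states):

```python
sigma_sample = 0.5        # std dev of one black/white sample for a half-covered pixel
threshold = 1 / 256       # largest color difference we want the eye to notice
frames_averaged = 8       # roughly 1/8th of a second at 60 fps

# The std dev of the average of n samples is sigma_sample / sqrt(n);
# require that to be <= threshold and solve for n.
n_total = (sigma_sample / threshold) ** 2
per_frame = n_total / frames_averaged
print(n_total, per_frame)  # 16384.0 2048.0
```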
At the same time, the idea may still work much sooner if it isn't completely random, but is instead cyclic. I don't think this can really produce images as good as eventual fully-random sampling, but a sparse-sampled cyclic pattern may still be quite good.
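A cyclic version might look something like this sketch: a small fixed set of sparse sub-pixel offsets that the renderer steps through frame by frame, so that over the ~8 frames the eye averages, every pattern in the set gets used. The particular offsets below are made up purely for illustration.

```python
# A hypothetical 4-entry cycle of sparse sub-pixel sample offsets;
# each frame uses the next pattern, so 8 averaged frames cover the
# whole cycle twice instead of repeating one pattern 8 times.
PATTERN_CYCLE = [
    [(0.125, 0.375), (0.625, 0.875)],
    [(0.375, 0.875), (0.875, 0.125)],
    [(0.625, 0.125), (0.125, 0.625)],
    [(0.875, 0.625), (0.375, 0.375)],
]

def offsets_for_frame(frame_index):
    """Sub-pixel sample offsets to use on a given frame of the cycle."""
    return PATTERN_CYCLE[frame_index % len(PATTERN_CYCLE)]

for f in range(8):
    print(f, offsets_for_frame(f))
```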