LeGreg said:
The figure on the left is a triangle drawn on a grid of perfectly non-overlapping square pixels with a "perfect filter": an infinite number of samples and gamma correction. Beyond a certain point all this computing power is wasted, and the result doesn't come out any better than that.
For that particular case, a pixel-sized box filter needs no more than 2 samples. If the triangle is moving, however, more samples mean a more accurate representation. You also left out the explanation of why the blurred edge in the right image should generally be considered better.
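To make the "infinite samples plus gamma correction" idea concrete, here is a minimal sketch that estimates a pixel's coverage by a vertical edge with an n-sample box filter, then gamma-encodes the result for display. The edge position, sample count, and the 2.2 gamma approximation are illustrative assumptions, not anything from the discussion above.

```python
import random

def box_filter_coverage(edge_x, n_samples, seed=0):
    """Estimate how much of a unit pixel lies left of a vertical edge
    at x = edge_x, using n random samples as a box filter.
    (edge_x and n_samples are hypothetical, for illustration only.)"""
    rng = random.Random(seed)
    inside = sum(rng.random() < edge_x for _ in range(n_samples))
    return inside / n_samples

def to_display(linear):
    """Gamma-encode a linear coverage value (crude sRGB approximation)."""
    return linear ** (1 / 2.2)
```

As the sample count grows, the estimate converges to the true covered area; past that point, extra samples no longer change the pixel, which is the sense in which further computation is wasted.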
ERK said:
But I'm still hung up on Simon F's original thought experiment: a thin line (1/4 pixel wide) moving across the screen. Xmas(?) correctly pointed out that the motion would 'jump' from pixel to pixel. To me this means that if the line were not perfectly vertical, say at some small angle from vertical, the result would be massive jaggies.
Yes, if you have a very thin line and lots of samples, and that line goes through the pixel corners every Nth pixel, the line will have jaggies. Since the line is thin, it will only have a small influence on the final pixel color and therefore not be very visible – unless we do HDR rendering, where the contribution to the final pixel color could be huge.
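A quick arithmetic sketch of why the same coverage error is tolerable in LDR but glaring in HDR. The intensities here are made-up linear values, not from any renderer discussed above:

```python
def blend_line(background, line_intensity, coverage):
    """Linear-space blend of a line's contribution into one pixel.
    All values hypothetical; intensities are linear, not gamma-encoded."""
    return (1.0 - coverage) * background + coverage * line_intensity

# LDR: a 1/4-pixel-wide white line adds at most 0.25 to the pixel,
# so the stepping as coverage jumps between pixels is faint.
ldr = blend_line(0.0, 1.0, 0.25)   # 0.25

# HDR: the same line at intensity 50 contributes 12.5, which clamps
# to full white on display -- every coverage jump becomes a hard jaggy.
hdr = blend_line(0.0, 50.0, 0.25)  # 12.5
```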
I still don't see how this situation could be fixed with any filter that only samples from within the 'little square' – not even, and especially not, the (infinitely sampled) box filter.
In the case of a line, one possible way to alleviate this is to widen the line to one pixel and use the original width as the alpha value. But that comes with all the typical alpha-blending drawbacks, and it is far more difficult to apply to "real" polygons.
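The widening trick can be sketched in a few lines. This is a hypothetical helper, assuming widths are measured in pixels and alpha is a simple coverage fraction:

```python
def widen_thin_line(width):
    """Replace a sub-pixel-wide line with a 1-pixel-wide line whose
    alpha encodes the original width. Lines already at least one pixel
    wide are drawn as-is with full alpha. (Hypothetical helper.)"""
    draw_width = max(width, 1.0)
    alpha = min(width, 1.0)
    return draw_width, alpha
```

A 1/4-pixel line becomes a 1-pixel line at alpha 0.25, so its brightness varies smoothly as it moves instead of jumping whole pixels – at the cost of the usual sorting and compositing problems of alpha blending.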
For a general solution, you truly need a filter kernel that is larger than a pixel.
Sampling theory says that if we have a band-limited signal and sample it at >2x the cutoff frequency, we can reconstruct the signal from the samples.
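That reconstruction claim can be demonstrated directly with Whittaker–Shannon (sinc) interpolation. The signal, sample rate, and evaluation points below are arbitrary choices for illustration; with a finite sample list the reconstruction is only approximate near the ends:

```python
import math

def sinc(x):
    """Normalized sinc, the ideal reconstruction kernel."""
    return 1.0 if x == 0.0 else math.sin(math.pi * x) / (math.pi * x)

def reconstruct(samples, rate, t):
    """Whittaker-Shannon interpolation: rebuild a band-limited signal
    at time t from uniform samples taken at `rate` samples/second."""
    return sum(s * sinc(t * rate - n) for n, s in enumerate(samples))
```

Sampling a 1 Hz sine at 8 Hz (well above the 2 Hz Nyquist rate) lets `reconstruct` recover the signal between the samples, which is exactly what the theorem promises.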
But geometry is not band-limited, and we have a finite target resolution. So the best we can do is take as many samples as possible (there will still be some aliasing), then low-pass and resample the sampled signal to the target resolution. Reconstruction happens on the screen surface and in our eyes and brain, though not with a sinc.
So the goal is to find a resampling filter that is a reasonable low-pass filter, cheap to perform, and ultimately looks good.
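One common compromise along those lines is a tent (triangle) filter spanning two output pixels – wider than a pixel, cheap, and reasonably good-looking. Here is a 1-D downsampling sketch; the kernel width and border handling are my assumptions, not a recommendation from the thread:

```python
def tent_downsample(samples, factor):
    """Resample a supersampled 1-D signal down by `factor` using a
    tent filter two output pixels wide (a cheap low-pass; sketch only,
    with weights renormalized at the borders)."""
    radius = factor  # kernel reaches one output pixel to each side
    out = []
    for i in range(len(samples) // factor):
        center = i * factor + (factor - 1) / 2.0
        acc = wsum = 0.0
        for j, s in enumerate(samples):
            w = 1.0 - abs(j - center) / radius
            if w > 0.0:
                acc += w * s
                wsum += w
        out.append(acc / wsum)
    return out
```

Because each output pixel also weighs samples from its neighbors, a hard edge in the supersampled input lands partly in both adjacent output pixels, which is precisely the gentle blur being argued for above.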