If you use
that reasoning, moving all the original pixels to random positions would be just as correct.
I don't follow.
A physical pixel is a rectangle with extents. To determine the color of that pixel, a finite number of samples is taken to best represent the pixel's interior. The target is to approximate, over all surfaces inside the pixel's extents, the sum of covered area times surface color (compensated for gamma).
If you have one sample, the point at the center of the pixel rectangle is your best bet.
If you have multiple samples, rotated-grid patterns and, later, sparse (~n-queens) patterns are the most attractive because they can differentiate more edges running through the pixel at angles where the impact on the final color is large. It has been proven often enough in theory and practice that 4x RG >> 2x2 OG and 6x sparse >> 4x4 OG.
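To make that concrete, here's a minimal sketch (my own illustration, not from the post above): both patterns use four samples per pixel, but because no two rotated-grid samples share a row or column, a near-horizontal edge swept through the pixel produces more distinct coverage levels than a 2x2 ordered grid. The sample positions and edge slope are illustrative values, not any particular hardware's pattern.

```c
/* Hypothetical sketch: estimate a pixel's coverage of a half-plane edge
 * with a 2x2 ordered-grid (OG) pattern vs. a 4x rotated-grid (RG) pattern.
 * The pixel spans [0,1]x[0,1]; sample offsets are illustrative. */
#include <stdio.h>

typedef struct { double x, y; } Sample;

/* 2x2 ordered grid: samples share rows and columns */
static const Sample og[4] = {
    {0.25, 0.25}, {0.75, 0.25}, {0.25, 0.75}, {0.75, 0.75}
};
/* 4x rotated grid: every sample in its own row and column */
static const Sample rg[4] = {
    {0.375, 0.125}, {0.875, 0.375}, {0.125, 0.625}, {0.625, 0.875}
};

/* Fraction of samples below a nearly horizontal edge y = y0 + slope*x */
static double coverage(const Sample *s, int n, double y0, double slope)
{
    int hits = 0;
    for (int i = 0; i < n; ++i)
        if (s[i].y < y0 + slope * s[i].x) ++hits;
    return (double)hits / n;
}

int main(void)
{
    /* Sweep a shallow edge through the pixel: RG resolves more distinct
     * coverage steps than OG because its samples cover four y-rows. */
    for (double y0 = 0.0; y0 <= 1.0; y0 += 0.125)
        printf("y0=%.3f  OG=%.2f  RG=%.2f\n",
               y0, coverage(og, 4, y0, 0.05), coverage(rg, 4, y0, 0.05));
    return 0;
}
```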
I can see the value in having multiple
carefully selected sample patterns to choose from on a per-pixel basis, but I'm certainly not a proponent of fully randomized sample grids. I don't see where I implied that.
Simon F said:
Quincunx, because it uses a tent filter of 5 samples, should produce better results than a box filter with just 2 samples. Having said this, a tent filter will begin to attenuate some frequencies you would like to keep a bit more than a box filter but it does a much better job of removing illegally high frequencies that you need to eliminate. Of course, with only 2 MSAA to start with, neither filter is going to produce outstanding results.
All Quincunx can reduce is the contrast ratio between neighbours. There will still be signals at the frequency inherent to the pixel grid, just like before (I'd argue that you construct a grid of rectangles but sample a cloud of points, just like textures; I can throw more words at that if you want).
Try something like a dense chain-link fence implemented purely with alpha test. No matter what
post-process you apply, until your filter is so brutal that all you see is constant grey, it will alias like hell, and at the same frequency to boot, with wavering blobs of wrong colors and gaps in the same places, just with a reduced contrast ratio between neighbouring pixels.
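A minimal sketch of that effect, under my own assumptions (a pattern constant along y standing in for the alpha-tested fence, and the commonly described quincunx tent weights of 1/2 on the centre sample and 1/8 on each shared corner sample): the aliased blobs stay in the same places, only the contrast between neighbouring pixels drops.

```c
/* Hypothetical sketch: quincunx-style tent resolve applied to a
 * sub-pixel stripe pattern (stand-in for an alpha-tested fence).
 * Stripe period ~0.9 pixels is above Nyquist, so the samples alias;
 * the resolve cannot remove the aliases, only lower their contrast. */
#include <stdio.h>
#include <math.h>

#define W 32

/* Binary stripe signal: 1.0 or 0.0, period 0.9 pixels */
static double stripes(double x)
{
    return (fmod(x, 0.9) < 0.45) ? 1.0 : 0.0;
}

int main(void)
{
    double center[W], corner[W + 1], resolved[W];

    /* One centre sample per pixel, one corner sample shared per border */
    for (int i = 0; i <= W; ++i) corner[i] = stripes((double)i);
    for (int i = 0; i <  W; ++i) center[i] = stripes(i + 0.5);

    for (int i = 0; i < W; ++i) {
        /* The 2D quincunx weights (1/2 centre, 1/8 per corner) collapse
         * to 1/2 + 1/4 + 1/4 for a pattern constant along y. */
        resolved[i] = 0.5 * center[i] + 0.25 * corner[i] + 0.25 * corner[i + 1];
        printf("pixel %2d  1-sample=%.0f  quincunx=%.2f\n",
               i, center[i], resolved[i]);
    }
    return 0;
}
```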
Simon F said:
As for "destroying" information, I doubt a tent filter destroys any more information than the box filter. I don't have time to prove that, but it seems likely to me.
If you apply a filter across your entire screen, it is no longer possible to produce a sharp 100% edge between neighbours. But that's exactly what should happen (and does happen with MSAA, no matter the sample count) if, e.g., the left pixel is fully covered by a white surface and the right pixel by a black one.
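A small worked example of that last point, assuming the commonly described quincunx weights (1/2 centre, 1/8 per shared corner): whichever colour the corner samples on the shared border pick up, one of the two pixels gets pulled away from its pure value, so the 100% step cannot survive, while a per-pixel box resolve keeps it at any sample count.

```c
/* Hypothetical sketch: left pixel fully white, right pixel fully black,
 * edge exactly on the shared border. A per-pixel box resolve keeps the
 * full 1.00 -> 0.00 step; the quincunx tent cannot, regardless of what
 * the shared corner samples read. */
#include <stdio.h>

static double quincunx(double centre, double c0, double c1, double c2, double c3)
{
    return 0.5 * centre + 0.125 * (c0 + c1 + c2 + c3);
}

int main(void)
{
    for (int shared = 0; shared <= 1; ++shared) {  /* shared corners read black or white */
        double b = (double)shared;
        double left  = quincunx(1.0, 1.0, 1.0, b, b);   /* own corners white, shared = b */
        double right = quincunx(0.0, b, b, 0.0, 0.0);   /* shared = b, own corners black */
        printf("shared corners=%.0f  box: 1.00 / 0.00   quincunx: %.2f / %.2f\n",
               b, left, right);
    }
    return 0;  /* prints 0.75/0.00 and 1.00/0.25: never a full-contrast step */
}
```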