The Future of Anti-Aliasing? Will Quincunx-Like Blur Technologies Return?

ERK said:
Oh, and is this related to the reason why consumer cards are not generally useful for pro applications, since they don't AA lines very well, whereas the pro cards do? Do the pro cards have a different filter kernel for this, or is it something else?
Thanks again,
ERK
Well, in professional applications you don't want to draw lines that are thinner than a pixel. So the first step would be to have the hardware draw lines so that they are on the order of one pixel in width. That alone helps significantly, even with just standard multisampling.

But even better is the fact that it is relatively easy to do the antialiasing for lines analytically, with a much more robust filter pattern. From what I understand, this is what professional cards actually do.
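To make "analytically" concrete, here is a minimal Python sketch (numpy assumed; the names are mine) that estimates a pixel's coverage by a line from the distance between the pixel center and the line's centerline, with a linear one-pixel falloff standing in for the filter kernel. It only illustrates the idea; actual professional hardware uses its own, more carefully chosen kernels.

```python
import numpy as np

def line_coverage(pixel_center, a, b, half_width):
    """Approximate coverage of a 1x1 pixel by the line segment a-b.

    Coverage ramps linearly from 1 (centerline closer than
    half_width - 0.5 to the pixel center) down to 0 (farther than
    half_width + 0.5 away).
    """
    p, a, b = (np.asarray(v, dtype=float) for v in (pixel_center, a, b))
    ab = b - a
    # Closest point on the segment to the pixel center.
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    d = np.linalg.norm(p - (a + t * ab))
    return float(np.clip(half_width + 0.5 - d, 0.0, 1.0))

# Rasterize a quarter-pixel-wide line into an 8x8 coverage image:
img = np.array([[line_coverage((x + 0.5, y + 0.5), (0.0, 1.3), (8.0, 5.7), 0.125)
                 for x in range(8)] for y in range(8)])
```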
 
aths said:
Do you mean that a certain amount of blurring can be desirable?

[image: a triangle rendered with a pixel-sized box filter (left) and with a blur filter (right)]


The figure on the left is a triangle drawn on a grid of perfectly non-overlapping square pixels, using a "perfect filter" (an infinite number of samples per pixel) and gamma correction. Beyond a certain point, all this computational power is wasted: the result doesn't come out any better than that.
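As an aside on the gamma-correction part of that caption: coverage is a fraction of light, so the blend between foreground and background has to happen in linear intensity, not directly on sRGB values. A minimal Python sketch using the standard sRGB transfer functions (function names are mine, for illustration):

```python
def srgb_to_linear(c):
    # Standard sRGB decoding (scalar input in 0..1)
    return ((c + 0.055) / 1.055) ** 2.4 if c > 0.04045 else c / 12.92

def linear_to_srgb(c):
    # Standard sRGB encoding (scalar input in 0..1)
    return 1.055 * c ** (1 / 2.4) - 0.055 if c > 0.0031308 else 12.92 * c

def blend_edge(coverage, fg_srgb, bg_srgb):
    """Blend an edge pixel by geometric coverage, in linear light."""
    fg, bg = srgb_to_linear(fg_srgb), srgb_to_linear(bg_srgb)
    return linear_to_srgb(coverage * fg + (1.0 - coverage) * bg)

# A pixel half covered by a white triangle over black should emit half
# the light, which is sRGB ~0.735, not 0.5:
print(blend_edge(0.5, 1.0, 0.0))  # ~0.735
```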
 
ERK said:
Please correct me if I'm wrong here, but it seems that a simple application of the Nyquist theorem implies that, to eliminate aliasing, one should apply a filter with a cutoff at half the spatial sampling frequency of the pixel grid; in other words, a filter two pixels wide.
ERK
However, IIRC Nyquist's theory says the reconstruction needs to use sinc functions in order to achieve that. If you have something else, you have to have an even higher sampling rate => a wavelength > 2 pixels wide :(
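To see why the sinc kernel is special, here is a small Python sketch of Whittaker-Shannon reconstruction (numpy assumed; the 0.4 cycles/sample test signal and the window are arbitrary choices of mine). A band-limited signal sampled above its Nyquist rate is recovered almost exactly, with residual error only from the finite number of samples; a box or tent kernel in place of the sinc would not achieve this at the same rate.

```python
import numpy as np

def sinc_reconstruct(samples, t, T=1.0):
    """Whittaker-Shannon interpolation: a sum of shifted sinc kernels.

    Exact only for band-limited input sampled above the Nyquist rate
    and with infinitely many samples; here the sum is truncated.
    """
    n = np.arange(len(samples))
    # np.sinc is the normalized sinc: sin(pi*x) / (pi*x)
    return np.array([np.sum(samples * np.sinc((ti - n * T) / T)) for ti in t])

T = 1.0                                    # one sample per "pixel"
n = np.arange(64)
samples = np.sin(2 * np.pi * 0.4 * n * T)  # 0.4 cycles/sample < Nyquist (0.5)
t = np.linspace(8.0, 56.0, 500)            # evaluate away from the edges
rec = sinc_reconstruct(samples, t, T)
print(np.max(np.abs(rec - np.sin(2 * np.pi * 0.4 * t))))  # small (truncation only)
```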
 
ERK said:
But I'm still hung up on Simon F's original thought experiment: of the thin line (1/4 pixel wide) moving across the screen. Xmas(?) correctly pointed out that the motion would 'jump' from pixel to pixel. To me this means that if the line were not perfectly vertical, say at some small angle from vertical, then the result would be massive jaggies.

I still don't see how this situation could be fixed with any filter that only samples from within the 'little square.' Even and especially the (infinitely sampled) box filter.

Sorry for being dense.
No, you are not being dense. The whole point of the example was to show that a box filter is far from ideal!
 
LeGreg said:
[image: a triangle rendered with a pixel-sized box filter (left) and with a blur filter (right)]


The figure on the left is a triangle drawn on a grid of perfectly non-overlapping square pixels, using a "perfect filter" (an infinite number of samples per pixel) and gamma correction. Beyond a certain point, all this computational power is wasted: the result doesn't come out any better than that.
With blurring, we will not get the nice, clear output you show in the left figure. If I understand the fans of blurring correctly, the left figure is not what they consider the optimal result.


Simon F said:
However, IIRC Nyquist's theory says the reconstruction needs to use sinc functions in order to achieve that. If you have something else, you have to have an even higher sampling rate => a wavelength > 2 pixels wide :(
With a digital output, I don't see the point of reconstructing the signal the way one would an audio signal. A reconstruction filter gives us many additional in-between values in order to deliver an analogue signal. Let's say we have a digital photograph from a full-resolution RGB CCD chip: I see nothing to reconstruct, the photograph can simply be displayed 1:1. High AA levels attempt to simulate more than one photon (light color) per pixel, to better reflect that what is actually displayed is a little square. Correct me if I am wrong, though.
 
LeGreg said:
The figure on the left is a triangle drawn on a grid of perfectly non-overlapping square pixels, using a "perfect filter" (an infinite number of samples per pixel) and gamma correction. Beyond a certain point, all this computational power is wasted: the result doesn't come out any better than that.
For that particular case, more than two samples per pixel are unnecessary with a pixel-sized box filter. If the triangle is moving, however, more samples mean a more accurate representation. You left out the explanation of why the blurred edge in the right image should generally be considered better.

ERK said:
But I'm still hung up on Simon F's original thought experiment: of the thin line (1/4 pixel wide) moving across the screen. Xmas(?) correctly pointed out that the motion would 'jump' from pixel to pixel. To me this means that if the line were not perfectly vertical, say at some small angle from vertical, then the result would be massive jaggies.
Yes, if you have a very thin line and lots of samples, and that line passes through the pixel corners every Nth pixel, the line will have jaggies. Since the line is thin, it will only have a small influence on the final pixel color and therefore not be very visible, unless we do HDR rendering, where its contribution to the final pixel color could be huge.

I still don't see how this situation could be fixed with any filter that only samples from within the 'little square.' Even and especially the (infinitely sampled) box filter.
In the case of a line, one possible way to alleviate this is to widen the line to one pixel and use the original width as the alpha value (see the sketch below). But that comes with all the typical alpha-blend drawbacks, and it is way more difficult for "real" polygons.
For a general solution, you truly need a filter kernel that is larger than a pixel.
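A minimal sketch of the line-widening trick described above (names are mine; a real renderer would do this per fragment with the usual alpha-blend state):

```python
def widen_thin_line(width_px):
    """For a line thinner than a pixel, draw it one pixel wide and move
    the missing width into alpha: the per-pixel energy stays roughly the
    same, but the line's position now varies smoothly instead of jumping."""
    draw_width = max(width_px, 1.0)  # never rasterize thinner than 1 px
    alpha = min(width_px, 1.0)       # a 0.25 px line becomes alpha 0.25
    return draw_width, alpha

print(widen_thin_line(0.25))  # (1.0, 0.25)
```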


Sampling theory says that if we have a band-limited signal and sample it at >2x the cutoff frequency, we can reconstruct the signal from the samples.
But geometry is not band-limited, while we have a limited target resolution. So the best we can do is take as many samples as possible (and there will still be some aliasing), then low-pass and resample the sampled signal to the target resolution. Reconstruction happens on the screen surface and in our eyes and brain, though not with a sinc.

So the goal is to find a resampling filter that is a reasonable low-pass filter, cheap to perform, and ultimately looks good.
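As a toy example of that "low-pass, then resample" step, here is a 1-D downsampler whose tent kernel spans two output pixels, i.e. a footprint wider than the little square. This is a sketch under my own choice of kernel; real resamplers work in 2-D, often with windowed-sinc, Gaussian, or similar kernels.

```python
import numpy as np

def downsample_tent(signal, factor):
    """Low-pass a supersampled 1-D signal with a tent (triangle) filter
    two output pixels wide, then resample to the target resolution."""
    taps = 2 * factor                       # support: 2 output pixels
    x = (np.arange(2 * taps + 1) - taps) / float(taps)
    kernel = np.maximum(0.0, 1.0 - np.abs(x))
    kernel /= kernel.sum()                  # normalize to preserve energy
    lowpassed = np.convolve(signal, kernel, mode="same")
    return lowpassed[factor // 2::factor]   # one value per output pixel
```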
 
Xmas said:
Sampling theory says that if we have a band-limited signal and sample it at >2x the cutoff frequency, we can reconstruct the signal from the samples.
But geometry is not band-limited, while we have a limited target resolution. So the best we can do is take as many samples as possible (and there will still be some aliasing), then low-pass and resample the sampled signal to the target resolution. Reconstruction happens on the screen surface and in our eyes and brain, though not with a sinc.

So the goal is to find a resampling filter that is a reasonable low-pass filter, cheap to perform, and ultimately looks good.

Thanks, that was very clear.
 
Now, a few years later, I'm interested in hearing what the consensus is, if any, on "blur filter AA", and what has become of the opinion of those who thought Quincunx wasn't a dead end. Did they change their minds, or are they still clinging to "More samples will give Quincunx a boost in quality!"?

We are now in 2014 and "blur filter AA" is back. FXAA, MLAA, SMAA, etc. are all much smarter than Quincunx, but the idea is similar: do not render more samples, but use clever techniques to smooth out the jaggies.
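For readers meeting these for the first time: the shared idea is to detect edges in the finished image (usually from luma contrast) and blend only there, instead of taking more geometric samples. A deliberately crude Python/numpy sketch of that idea follows; it is not FXAA, MLAA, or SMAA themselves, which estimate edge direction and blend along the edge far more carefully.

```python
import numpy as np

def postprocess_aa(img, threshold=0.1):
    """Blur only pixels whose 3x3 luma contrast exceeds a threshold.

    img: float array of shape (H, W, 3), values in 0..1.
    """
    luma = img @ np.array([0.299, 0.587, 0.114])   # Rec. 601 luma weights
    out = img.copy()
    for y in range(1, img.shape[0] - 1):
        for x in range(1, img.shape[1] - 1):
            nb = luma[y - 1:y + 2, x - 1:x + 2]
            if nb.max() - nb.min() > threshold:     # looks like an edge?
                out[y, x] = img[y - 1:y + 2, x - 1:x + 2].mean(axis=(0, 1))
    return out
```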
 
Pro tip: don't necro ancient threads to add nothing but useless fluff (FXAA etc are nothing new at this stage), unless you want to advertise yourself as a probable spammer account... :)
 