The Future of Anti-Aliasing? Will Quincunx-Like Blur Technologies Return?

Well, in a worst-case scenario you might have a situation where you render the far-away objects first and, due to too many triangles within the pixel, have to throw out some pixel sub-samples. But since you are unsure of the z values of those sub-samples, you cannot write to them even if you fully occlude the pixel later. For this issue to become noticeable at all, you also need the foreground object to have triangle intersections in the same pixel.
 
Quincunx... ugh. If I wanted my graphics to be blurry and the jaggies less visible, I'd use the S-Video output on my video card, plug it into the S-Video input on my video card, and then play my games in DScaler. Or just output to a TV.
 
Ghost of D3D said:
I'd appreciate it if you could elaborate on the last parts regarding occluded objects. Thanks.

Entering late (I was busy in my other life)... I assume that arjan means that if an object would be fully occluded in the non-lossy version of the image, it should have zero effect on the lossy version of the image as well.

Imagine a black object that is completely occluded by a white object, with both objects composed of lots of little triangles. If a lossy algorithm allows any black samples to show through, then there will be funny grey artifacts crawling around on the white object. That'd be pretty ugly.

Standard MSAA and SSAA don't have this problem because each sample is a point, so it is entirely in one object or the other. Area-based AA algorithms can have this problem, if one isn't careful, e.g. if coincident triangle edges can ever fail to fully cover a pixel.

Enjoy, Aranfell
 
wishiknew said:
Where are you, SA? I'm beginning to think MSAA will be it.
Possibly, though I find it rather likely that current hybrid supersampling techniques will be extended.
 
I posted these links a little while ago in a thread in the 3D architecture and coding forum, but they are pretty relevant here:

http://www.hpl.hp.com/research/mmsl/...ng/index.html#

and the paper:

http://www.hpl.hp.com/techreports/19...1999-121R1.pdf

Basically a while back HP did a pretty extensive study of various AA sample pattern/filter combinations with interesting (though perhaps expected) results. There are filter comparison images in the first link, along with videos showing movement with each combination.

Edit: I wonder if any time in the near future an adaptive jittered-grid solution might be acceptable at low resolutions. Combined with a tent filter it might not be so bad in motion if you take enough samples where it counts.

Nite_Hawk
 
Chalnoth said:
Possibly, though I find it rather likely that current hybrid supersampling techniques will be extended.

Extended in what way? We got adaptive transparency AA this generation and we might get adaptive shader AA in a coming generation.
 
Add in hardware support for the supersampling of specific portions of shaders only. Somewhat similar to a generalized PCF setup.
 
The future, eh?

Well, if we're going to spend lots of silicon and bandwidth on it in any case, how about breaking everything down to triangles that are at most half a pixel in size, and adding the color to the framebuffer according to the area of coverage? Spend it on vertex units instead?

;)
 
Simon F said:
As an exercise to the reader, try, say, 10k samples per pixel but use a box filter....
Are you saying that a certain amount of blurring can be desirable?

I still find it hard to agree. If we define a pixel as the integral of the colors over the pixel's area, then absolutely no colors from outside that area should be taken into the final pixel color.

Other definitions of a pixel are possible. If we define a pixel as an actual point, a certain amount of blurring can be used to decrease possible aliasing. But aliasing should be avoided by correct texture sampling and by taking enough geometry subsamples arranged in a smart pattern, not by blurring. Blurring only distributes the aliasing of a given pixel over a wider area, making it less visible in the blurred pixel while spreading the remaining aliasing effect over a wider area.

An AA pattern should be "fully sparse" while still having no big uncovered "holes", and the downfiltering should be sRGB corrected. With this approach, 8x AA can deliver really nice edge smoothing with nearly no room for visible improvement. A TBDR architecture should be able to provide even 16x sparse AA in realtime.
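To illustrate what "sRGB corrected" downfiltering means here, a minimal sketch in Python (assuming the simple 2.2 gamma approximation rather than the exact piecewise sRGB curve):

Code:
# Minimal sketch: averaging AA subsamples in linear light instead of
# directly on gamma-encoded (sRGB-like) values. Uses the simple 2.2
# exponent as an approximation of the sRGB transfer curve.

def to_linear(c):      # gamma-encoded [0,1] -> linear light
    return c ** 2.2

def to_gamma(c):       # linear light -> gamma-encoded [0,1]
    return c ** (1.0 / 2.2)

def resolve(subsamples):
    """Downfilter one pixel's subsamples with a gamma-correct average."""
    linear_avg = sum(to_linear(s) for s in subsamples) / len(subsamples)
    return to_gamma(linear_avg)

# A half-covered black/white edge: the naive gamma-space average gives 0.5,
# the linear-space resolve gives ~0.73, which displays as 50% light intensity.
samples = [0.0, 0.0, 1.0, 1.0]
print(sum(samples) / len(samples))   # naive: 0.5
print(resolve(samples))              # corrected: ~0.729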

Sampling right on the border (or edge) of a pixel might be considered "useful blurring", but I don't like that either. It makes it impossible to have single-pixel details in geometry.

Aliasing occurs if the sampled signal contains frequencies equal to or greater than half the sampling frequency. The sampling frequency is determined by the spacing of the (sub)samples, not by the number of subsamples per sample. Using additional (sub)samples from outside the pixel results in a loss of high frequencies but cannot fight aliasing. Try to sample a pure sine wave while violating the Nyquist-Shannon criterion: even with blurring you will not be able to get a better (more accurate) result. The only thing that fights aliasing is to either apply a low-pass to the input signal or increase the sampling frequency.
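A quick way to convince yourself of this (a small hypothetical sketch: a 9 Hz sine sampled at only 10 Hz, i.e. below its Nyquist rate of 18 Hz):

Code:
import math

# Sample a pure sine wave below its Nyquist rate: the samples are
# indistinguishable from those of a much lower-frequency sine, and no
# amount of post-filtering (blurring) of the samples can undo that.

signal_freq = 9.0        # Hz
sample_rate = 10.0       # Hz -> Nyquist limit is 5 Hz, so we alias

samples = [math.sin(2 * math.pi * signal_freq * n / sample_rate)
           for n in range(10)]

# The same samples are produced by a 1 Hz sine (9 Hz aliases to |9 - 10| = 1 Hz),
# just with inverted phase:
alias = [math.sin(2 * math.pi * 1.0 * n / sample_rate) for n in range(10)]

for s, a in zip(samples, alias):
    print(round(s, 4), round(-a, 4))   # the two columns are identical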

While blurring acts as a lowpass filter, it also damages the "real" content of the signal. If we apply very, very little blurring, the effect is also very small; to have a noticeable effect, one has to blur quite strongly. Therefore I consider blurring undesirable.

Synthetic anti-aliasing tests with a checkerboard pattern do not reflect real 3D-world geometry. If the pixel-to-checkerboard-field area ratio gets below 1.0, one should see a pure grey pixel without noise (only reachable with a lowpass on the input signal). This can be achieved better with blurring than without; still, this is no "valid" test for real-world circumstances. Even with texturing, as long as we don't have exactly one (or a power-of-two number of) checkerboard fields per texel, we will not get pure grey without noise. While blurring can sometimes improve the visual quality of very small triangles, it still hurts the rendering of normal-sized polygons.

(Since this posting, like the others, was actually written in German with English words, I welcome any corrections or proposals for better wording.)
 
Btw, I wasn't directly thinking of REYES as such for the whole screen, because you would only have to do that at the boundaries. But (thinking of texture blurring), I'm not sure if that would be easier.

But I think it might turn out that the best anti-aliasing approach is a mixed one: use texture filtering for most of the coverage, and tessellate surfaces into smaller and smaller triangles the closer you get to the edges, so you end up with very small ones at the edge boundaries.

Like adaptive REYES anti-aliasing. And that might be possible with unified GPUs and DX10. Next year.

;)
 
aths said:
Are you saying that a certain amount of blurring can be desirable?

I still find it hard to agree. If we define a pixel as the integral of the colors over the pixel's area, then absolutely no colors from outside that area should be taken into the final pixel color.

At this point you need to repeat to yourself the words of Alvy Ray Smith.

A Pixel Is Not A Little Square, A Pixel Is Not A Little Square, A Pixel Is Not A Little Square!



Aliasing occurs if the sampled signal contains frequencies equal to or greater than half the sampling frequency. The sampling frequency is determined by the spacing of the (sub)samples, not by the number of subsamples per sample. Using additional (sub)samples from outside the pixel results in a loss of high frequencies but cannot fight aliasing. Try to sample a pure sine wave while violating the Nyquist-Shannon criterion: even with blurring you will not be able to get a better (more accurate) result. The only thing that fights aliasing is to either apply a low-pass to the input signal or increase the sampling frequency.
Ahh... but what about the reconstruction? You should also be considering that.

While blurring acts as a lowpass filter, it also damages the "real" content of the signal. If we apply very, very little blurring, the effect is also very small; to have a noticeable effect, one has to blur quite strongly. Therefore I consider blurring undesirable.
But, once again, you can't correctly reconstruct the signal.
 
aths said:
Are you saying that a certain amount of blurring can be desirable?

I still find it hard to agree. If we define a pixel as the integral of the colors over the pixel's area, then absolutely no colors from outside that area should be taken into the final pixel color.
Using a box filter is not necessarily the best one can do. Taking the integral over the pixel as a little square corresponds to first taking samples of the frame at a very high sample rate, then performing a box/averaging filter on the samples taken, and then resampling the filtered waveform. A box filter is generally considered a quite bad lowpass filter; a windowed-sinc filter (or, for that matter, even just a tent filter or a Gaussian filter) does a much better job of getting rid of unwanted excess frequencies while not modifying frequencies below the Nyquist limit. The box filter also damages phase information; take the example of a line that is <0.5 pixel thick: if you move the line slowly, this loss of phase information causes the movement of the line to appear jumpy.
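To make the phase/jumpiness point concrete, here is a minimal sketch with made-up numbers: a black line 0.25 pixel wide on a white background, resolved either with a per-pixel box filter or with a 2-pixel-wide tent filter:

Code:
import numpy as np

# 1D toy scene: a dark line 0.25 px wide on a white background, sampled
# at 16 subsamples per pixel, then resolved to 4 pixels with either a
# box filter (per-pixel average) or a 2-pixel-wide tent filter.

SUB = 16                         # subsamples per pixel
PIXELS = 4

def render(line_start):          # line_start in subsample units
    scene = np.ones(PIXELS * SUB)
    scene[int(line_start):int(line_start) + SUB // 4] = 0.0   # 0.25 px line
    return scene

def box_resolve(scene):
    return scene.reshape(PIXELS, SUB).mean(axis=1)

def tent_resolve(scene):
    centers = (np.arange(PIXELS) + 0.5) * SUB
    xs = np.arange(scene.size) + 0.5
    out = []
    for c in centers:
        w = np.clip(1.0 - np.abs(xs - c) / SUB, 0.0, None)   # 2-px-wide tent
        out.append(np.sum(w * scene) / np.sum(w))
    return np.array(out)

# Slide the line across pixel 1 in 0.25-pixel steps: the box result for
# pixel 1 stays flat and then jumps, the tent result changes gradually.
for start in range(SUB, 2 * SUB + 1, SUB // 4):
    scene = render(start)
    print(start / SUB,
          np.round(box_resolve(scene), 3),
          np.round(tent_resolve(scene), 3))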
 
Simon F said:
At this point you need to repeat to yourself the words of Alvy Ray Smith.

A Pixel Is Not A Little Square, A Pixel Is Not A Little Square, A Pixel Is Not A Little Square!
Previously, I already had problems accepting that a texel is not a square. But the texels of photo textures actually are squares (or can be considered as such, in certain circumstances). While texels are treated like sampling points, they can represent squares. For my printed article series about texture filtering I treat texels as points, but pixels still as squares.

I have to admit that I do not fully understand Alvy Ray Smith's work. As far as I understand, he separates what a pixel is (a point) from how a pixel is displayed (as a rectangle or square). He discusses reconstruction filters. Since a monitor can be considered a digital output device (at least an LCD), I don't think that "normal" reconstruction filters like those used for audio signals should be applied to images.

It looks like I have no choice but to print his papers and read them in a relaxing environment.

Regarding 4x sparse downfiltering: in a previous article I tried to reason with a "catchment area" per subpixel. http://www.3dcenter.de/artikel/anti-aliasing/pic5.php (I have to re-make that image for a real sparse grid, but the main idea is still visible.)

In *all* my articles, I treated a pixel as a square: http://www.3dcenter.de/artikel/anti-aliasing/index03.php, http://www.3dcenter.de/artikel/anti-aliasing/index04.php and so on.

Simon F said:
Ahh.. but what about thr reconstruction? You should also be considering that.
Since we have a fixed grid, we always have problems with Moiré patterns. To get rid of visible Moiré patterns, we have to use low-frequency content only. This applies to texture filtering, where it can be a real pain in the ass to justify why a given formula is "the best" one.

Simon F said:
But, once again, you can't correctly reconstruct the signal.
I can't anyway. While this sampled sine wave http://www.3dcenter.de/images/anti-aliasing/sinus4.png could, as an audio signal, be reconstructed into a perfect sine again, this is not possible with the limited resolution of an image output device.

With multisampling, any blurring destroys sampled texture information. Since we can only take information from near the border of the neighboring pixel, the color error for that subpixel is quite high.
 
arjan de lumens said:
Using a box filter is not necessarily the best one can do. Taking the integral over the pixel as a little square corresponds to first taking samples of the frame at a very high sample rate, then performing a box/averaging filter on the samples taken, and then resampling the filtered waveform. A box filter is generally considered a quite bad lowpass filter; a windowed-sinc filter (or, for that matter, even just a tent filter or a Gaussian filter) does a much better job of getting rid of unwanted excess frequencies while not modifying frequencies below the Nyquist limit. The box filter also damages phase information; take the example of a line that is <0.5 pixel thick: if you move the line slowly, this loss of phase information causes the movement of the line to appear jumpy.
Yes. The question is whether this behaviour is correct or not. As long as an edge only covers a given pixel, I don't see why the neighboring pixel should also be used to display the edge. While a certain amount of blurring may lower temporal aliasing, it still lowers the sharpness of any single picture.

With a similar argument I still resist the idea of using a bicubic filter for upsampling. While it may look "better" (sharper), not only do we then need to clamp colors, I also don't see a good reason to use more samples than the 4 surrounding any given coordinate. In a photograph, the neighbor's neighbor should not have any influence anyway; that color value can be fully independent.

It's another matter with downscaling, though. I consider it a disadvantage of trilinear filtering with box-filtered mipmaps that we lose a lot of spatial information: we get an offset with each new mip level. This could be countered with a wider filter kernel, but then we lose additional image sharpness.
 
aths said:
Previously, I already had problems accepting that a texel is not a square. But the texels of photo textures actually are squares (or can be considered as such, in certain circumstances).
Well, here's one way I think about it.

Imagine a program that renders a black line on a white background that is just wide enough that it never slips between the samples of a sparse grid that is being used for MSAA. This line is horizontal, and slowly creeping up the screen.

Now, consider what's going to happen with basic MSAA here: each pixel that includes the line will be the same shade of grey, at all times. As it moves up, the grey pixels will suddenly jump.

That's not what you want when viewing a line moving up the screen. What you want is a smooth transition between pixels. Here is where a gaussian filter would look better: the next pixel that the line is to move into starts turning grey before the line leaves the current pixel, creating a smooth transition.
 
Chalnoth said:
Well, here's one way I think about it.

Imagine a program that renders a black line on a white background that is just wide enough that it never slips between the samples of a sparse grid that is being used for MSAA. This line is horizontal, and slowly creeping up the screen.

Now, consider what's going to happen with basic MSAA here: each pixel that includes the line will be the same shade of grey, at all times. As it moves up, the grey pixels will suddenly jump.

That's not what you want when viewing a line moving up the screen. What you want is a smooth transition between pixels. Here is where a gaussian filter would look better: the next pixel that the line is to move into starts turning grey before the line leaves the current pixel, creating a smooth transition.
It leads to a smoother, though not fully smooth, transition, but yes, it will be smoother. Then again, blurring hurts the high-frequency content. If an advantage is bought with a disadvantage, we have to discuss which weighs more. Textures will suffer heavily with blurred multisampling, and supersampling is out of the question due to its cost.

If we consider a pixel a point, I agree that box filtering is only a rough approximation. I still think of a pixel as a square to be represented. And even if we consider a pixel a point, blurring is bad if we apply multisample AA only.
 
Please correct me if I'm wrong here, but it seems that a simple application of the Nyquist theorem implies that, to eliminate aliasing, one should implement a filter at half the spatial frequency of the pixel pitch, in other words, 2 pixels wide.
ERK
 
aths said:
Yes. The question is whether this behaviour is correct or not. As long as an edge only covers a given pixel, I don't see why the neighboring pixel should also be used to display the edge. While a certain amount of blurring may lower temporal aliasing, it still lowers the sharpness of any single picture.
Much of the apparent sharpness that you might retain by using the box filter instead of the windowed-sinc filter is due to above-Nyquist frequency content that is not eliminated by the box filter :!: To see just how badly a box filter can fail to eliminate high-frequency content, consider the following pattern:
Code:
.9.0.9.0.9.0.9.0.9.0.9.0.9.0.9
.9.0.9.0.9.0.9.0.9.0.9.0.9.0.9
.9.0.9.0.9.0.9.0.9.0.9.0.9.0.9
(the numbers correspond to intensity in each sample of a 15x3 grid).
Now, let us perform 3x3 downsampling. The .9.0.9.0.9.0.9.0.9... pattern corresponds to a waveform consisting of a DC term of 0.45 and a high-frequency sine wave. Since the sine wave has a frequency that, after downsampling, is FAR above the Nyquist limit, it should not appear in the downsampled result. Now, let's do the downsampling with a 3x3 box filter:
Code:
.6.3.6.3.6
As you can see, the sine wave is not remotely close to gone, it is just aliased down to another frequency.
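The same calculation as a small runnable sketch (one row only, same numbers as above):

Code:
# Reproduce the 1D part of the example above: a 0.9/0.0 alternating row
# box-filtered 3:1. The high-frequency component is not removed; it
# reappears as a lower-frequency .6/.3 oscillation.

row = [0.9, 0.0] * 8       # .9 .0 .9 .0 ... (16 samples; use the first 15)
row = row[:15]

downsampled = [sum(row[i:i + 3]) / 3 for i in range(0, 15, 3)]
print([round(v, 1) for v in downsampled])   # [0.6, 0.3, 0.6, 0.3, 0.6]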

With a similar argument I still resist the idea of using a bicubic filter for upsampling. While it may look "better" (sharper), not only do we then need to clamp colors, I also don't see a good reason to use more samples than the 4 surrounding any given coordinate. In a photograph, the neighbor's neighbor should not have any influence anyway; that color value can be fully independent.
If the color values in your pixmap are fully independent everywhere in the entire picture, then you have a picture that contains nothing but random noise. A human-recognizable feature in a picture generally relies on a fairly large region around any given pixel; upon upscaling an image, you would ideally wish to take an entire such "region" into consideration in order to reproduce the feature as faithfully as possible. As such, it should be rather clear that you can do much better than just plain bilinear. This is particularly true if the amount of upsampling done is very large (an "ideal" algorithm would in this case probably just look blurred, as if seriously out of focus; a bilinear-filtered upsampling will be chock full of highly unrealistic Mach bands everywhere).
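For what it's worth, a minimal 1D sketch of the difference between using only the two nearest samples (linear interpolation) and also using the neighbors' neighbors (a Catmull-Rom cubic, the 1D analogue of bicubic); the signal and sample positions below are made up for illustration:

Code:
import math

# 1D upsampling sketch: linear interpolation (2 taps) vs Catmull-Rom
# cubic (4 taps). The cubic uses the neighbors' neighbors, so the
# upscaled curve stays smooth where piecewise-linear interpolation
# leaves kinks at the sample positions (Mach bands in 2D).

def linear(p1, p2, t):
    return p1 + (p2 - p1) * t

def catmull_rom(p0, p1, p2, p3, t):
    return 0.5 * ((2 * p1)
                  + (-p0 + p2) * t
                  + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t * t
                  + (-p0 + 3 * p1 - 3 * p2 + p3) * t * t * t)

# A smooth source signal, sampled coarsely...
src = [math.sin(x * 0.8) for x in range(8)]

# ...then upsampled 4x between samples 2 and 3.
for i in range(5):
    t = i / 4
    lin = linear(src[2], src[3], t)
    cub = catmull_rom(src[1], src[2], src[3], src[4], t)
    ref = math.sin((2 + t) * 0.8)
    print(f"t={t:.2f}  linear={lin:.4f}  cubic={cub:.4f}  true={ref:.4f}")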
 
Nyquist is irrelevant, as are sinc filters... there are no ideal reconstruction filters. That's the wonderful thing about image processing for the purpose of display: it's more of an art than a science.
 