The Future of Anti-Aliasing? Will Quincunx-Like Blur Technologies Return?

Farid

Artist formerly known as Vysez
Veteran
Supporter
I stumbled across an old article about anti-aliasing methods; the article is extremely amusing when you think about it. But since the person who wrote it back then would probably break down and cry today, I'll not link to it. The "usuals" on the B3D IRC channel got to laugh at it, though...

Anyway, this article reminded me that some of the Nvidia supporters, back in the day, used to say that Quincunx-type AA, using blur filters, was bound to be used in the future. They were also eager to point out that Nvidia's Quincunx implementation, ever since NV2x, left a lot to be desired.

Now, a few years later, I'm interested in hearing what the consensus is, if any, on "blur filter AA", and what has become of the opinion of those who thought that Quincunx wasn't a dead end. Did they change their minds, or are they still clinging to "More samples will give Quincunx a boost in quality!"?

While I'm certain that the more samples, the better the quality, the question would be: would this Super Quincunx be worth its price in transistors compared to other MSAA methods?
 
As an exercise to the reader, try, say, 10k samples per pixel but use a box filter....
 
I thought Quincunx was an interesting option at the time. Remember, when it came out most stuff couldn't be rendered at 1024x768 with 4X AA, and with the GF3 and GF4 products using 4X ordered-grid AA, 'cunx had some good advantages. Most didn't notice the texture blurriness, just that it was antialiased at about the same level as 4X, but with the performance of 2X.

Now that everything is sparse-sampled and rotated-grid, Quincunx looks bad. I don't think either company will go back to a blur filter, as there is just too much quality given up.
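For reference, the Quincunx resolve is usually described as a five-tap filter: the pixel's own sample weighted 1/2, plus four corner samples weighted 1/8 each, with the corner samples shared between neighbouring pixels. Here is a minimal Python sketch of that reconstruction; the sample layout and weights are assumed from common descriptions of the technique, not taken from NVIDIA's actual hardware:

```python
import numpy as np

H, W = 4, 6
rng = np.random.default_rng(0)
centers = rng.random((H, W))          # one stored sample per pixel centre
corners = rng.random((H + 1, W + 1))  # corner samples, each shared by 4 pixels

def quincunx_resolve(centers, corners):
    # Weights commonly attributed to the quincunx kernel:
    # 1/2 for the centre tap, 1/8 for each of the four corner taps.
    tl = corners[:-1, :-1]
    tr = corners[:-1, 1:]
    bl = corners[1:, :-1]
    br = corners[1:, 1:]
    return 0.5 * centers + 0.125 * (tl + tr + bl + br)

resolved = quincunx_resolve(centers, corners)
print(resolved.shape)  # (4, 6): one output value per pixel

# Storage cost: centre samples plus shared corner samples, per pixel.
cost = (H * W + (H + 1) * (W + 1)) / (H * W)
print(round(cost, 2))  # 2.46 here; approaches 2 samples/pixel on large images
```

This is what gives Quincunx its "4X-ish quality at 2X cost" reputation: five taps per pixel, but only about two samples stored per pixel.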

Makes me curious though... cause this 7900 GTX still has Quincunx AA in the control panel. I might have to see what I think of this latest version...
 
Simon F said:
As an exercise to the reader, try, say, 10k samples per pixel but use a box filter....

I tried, but my head exploded. 10k you say? Per pixel?
 
The MMORPGs Anarchy Online and Rising Force Online have something like this in them; both have "blur", though AO only on ATI cards, while NVIDIA gives a "lego brick" effect when the option is enabled (no idea why). Both were started in 1999 and were "TNT2"-requirement games, etc.

But, omg, the RF Online blur is so extremely ugly that I hope nothing that ever looks like it ever returns. I've been using "blur AA" every day for years now in AO (there are no options for it, and it looks good there, with ATI cards).
 
geo said:
I tried, but my head exploded. 10k you say? Per pixel?
Yes... I've even simulated it as a demonstration, but it can be treated as a thought experiment. Imagine you have some animated (vertical) lines that are quite thin (e.g. ~1/4 of a pixel wide) and are slowly moving across the screen. If you used a box filter, what would you see?
 
FLIPQUAD (2 samples) and FLIPTRI (1.25 samples) offer increased apparent quality at a very low cost. They are obviously aimed at embedded/mobile devices, but I think it's impressive to see that this quality can be achieved using so few samples. It makes me wonder whether there's any scheme with, say, 3 samples that looks as good as the more brute-force approaches used on current hardware.
 
I made some videos years ago to demonstrate the effect of the Gaussian filter in anti-aliasing. They are still here (these are for gamma=2.2 monitors).

The .m1v files are MPEG-1 video files. If your media player refuses to play them, change the extension to .mpg.
 
Well, I fired up Quincunx last night with the 7900 GTX, and it was pretty amazing how poor the quality was. Edge AA was fine, but I forgot how much it really blurred texturing. When running at 1024x768, the blur wasn't as noticeable, but once higher resolutions were used (like 1920x1200), the scene just turned into a mess. I wonder why NVIDIA even includes that pattern anymore.

I can see why NV did it in the first place, with AA performance being in its early stages (basically poor). Higher-quality edge AA at the cost of texture crispness, all while causing the same performance hit as 2X. A pretty decent tradeoff at the time, all things considered, and I know I used that method quite a bit. Now that we have the horses to do AA at much higher resolutions, it makes sense just to drop that AA type.
 
JoshMST said:
Well, I fired up Quincunx last night with the 7900 GTX, and it was pretty amazing how poor the quality was. Edge AA was fine, but I forgot how much it really blurred texturing.

That's one drawback of multisampling, actually.

Because the resolve function is a weighted average of samples spread over several pixels, and because with multisampling the samples are all equal within the area of a single pixel (when using texturing to display text, for example), the function reduces to an average of neighboring pixels, which is a blur of an aliased version of that part of the screen (not good...). This is of course not true on triangle edges and intersections, which get the full benefit of the extra samples.

Programs that use postprocessing or render to an offscreen surface as part of their rendering unfortunately have it worse, because the resolve pass cannot be applied twice and still give the same result, for obvious reasons.

Those are two of the main reasons why a resolve filter with support wider than one pixel is harder to do in accelerated rendering. But in theory the benefits are great, and it's always used in offline rendering (think Pixar, etc.).
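A 1-D toy version of the point above (purely illustrative; the colours and kernel are made up): when every sample inside a pixel is equal, weighting the individual samples with a kernel that spans several pixels collapses into a plain blur of the already-aliased pixel colours.

```python
import numpy as np

N = 4                                     # samples per pixel
pixels = np.array([0., 0., 1., 1., 0.])   # aliased interior colours (e.g. text)
samples = np.repeat(pixels, N)            # with MSAA, all N samples are equal
kernel = np.array([0.25, 0.5, 0.25])      # resolve kernel spanning 3 pixels

# Resolve each pixel by weighting every individual sample...
wide = np.zeros_like(pixels)
for i in range(len(pixels)):
    for off, w in zip((-1, 0, 1), kernel):
        j = min(max(i + off, 0), len(pixels) - 1)   # clamp at the edges
        wide[i] += (w / N) * samples[j * N:(j + 1) * N].sum()

# ...which turns out identical to simply blurring the pixel colours:
padded = np.pad(pixels, 1, mode="edge")
blurred = np.convolve(padded, kernel, mode="valid")
print(np.allclose(wide, blurred))  # True
```

On triangle edges the samples within a pixel differ, so the collapse doesn't happen there, which matches the "not true on edges" caveat.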

LeGreg
 
Simon F said:
Yes... I've even simulated it as a demonstration, but it can be treated as a thought experiment. Imagine you have some animated (vertical) lines that are quite thin (e.g. ~1/4 of a pixel wide) and are slowly moving across the screen. If you used a box filter, what would you see?

To answer Simon's question, and to point out why you need more than a simple box filter even with very large numbers of samples: you'd see the screen pulse, because the number of covered subpixels per pixel would vary as the lines move. Obviously, what is expected is a constant shade of grey.
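Simon's experiment is easy to sketch numerically (illustrative only): with a box filter, the pixel value is simply the line's coverage, which stays frozen while the line moves inside a pixel and then jumps a whole pixel when it crosses a boundary.

```python
import numpy as np

def box_filtered_row(line_left, width=0.25, n_pixels=4):
    # analytic coverage: the limit of infinitely many point samples
    out = []
    for p in range(n_pixels):
        overlap = max(0.0, min(p + 1, line_left + width) - max(p, line_left))
        out.append(round(overlap, 2))
    return out

for pos in np.arange(1.0, 2.01, 0.25):   # line slowly sweeping rightwards
    print(box_filtered_row(pos))
# [0.0, 0.25, 0.0, 0.0]   <- no visible motion at all...
# [0.0, 0.25, 0.0, 0.0]
# [0.0, 0.25, 0.0, 0.0]
# [0.0, 0.25, 0.0, 0.0]
# [0.0, 0.0, 0.25, 0.0]   <- ...then the line jumps a whole pixel at once
```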
 
no-X said:
And what about AoE3 on G7x with AA+HDR? 1.5x1.5 supersampling causes nice blurring, similar to Quincunx. Just scroll down.
That's a poor scene for comparing antialiasing quality, but from the overall impression of this particular screenshot, I'd say the NV one looks better, even though the polygon edges are obviously worse.


It's a difficult topic, because it's not immediately obvious why "blur", i.e. a filter kernel that extends beyond the bounds of a pixel, could make sense. If you imagine the pixel grid as a lattice with light falling through, shouldn't a pixel represent the sum of all the photons falling through it?

But as Simon said, if you have lots of samples and a box filter, and you have a line that's half a pixel wide and slowly moving, you don't even see any motion half of the time because the line only moves inside the pixels.
That's because e.g. the effect of a sample in the top left corner of a pixel only extends in one direction: to the lower right. It is comparable to mipmap generation with a box filter, on a texture that has a high-contrast vertical line one texel to the right of the center. In the lower mipmaps, this line will "spread" to the right, but not to the left.

If we imagine incoming photons as having an area of effect, a filter kernel extending over the borders of a pixel makes much more sense.
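To illustrate what a wider kernel buys here (a rough sketch, using a two-pixel tent filter purely as an arbitrary example), the filtered value now changes continuously as the line moves inside a pixel, instead of staying frozen the way it does with a one-pixel box:

```python
import numpy as np

def tent_filtered_row(line_left, width=0.25, n_pixels=4, sub=512):
    # dense point sampling stands in for the area integral here
    xs = (np.arange(n_pixels * sub) + 0.5) / sub
    line = ((xs >= line_left) & (xs < line_left + width)).astype(float)
    out = []
    for p in range(n_pixels):
        # tent kernel centred on the pixel, two pixels wide in total
        w = np.maximum(0.0, 1.0 - np.abs(xs - (p + 0.5)))
        out.append((w * line).sum() / w.sum())
    return out

# Line at the left edge of pixel 1 vs centred on pixel 1: the filtered
# value differs, so sub-pixel motion is actually visible.
a = tent_filtered_row(1.0)[1]
b = tent_filtered_row(1.375)[1]
print(round(a, 3), round(b, 3))
```

The trade-off is exactly the one debated in this thread: the same overlap that makes sub-pixel motion visible also spreads each pixel's content into its neighbours, i.e. blur.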



ERP said:
To answer Simon's question, and to point out why you need more than a simple box filter even with very large numbers of samples: you'd see the screen pulse, because the number of covered subpixels per pixel would vary as the lines move. Obviously, what is expected is a constant shade of grey.
If we're talking hundreds or even thousands of samples, this effect is practically invisible, and the number of samples covered should be almost constant. And with gamma correct downsampling, overall brightness is constant as well.

However, what is noticeable is the "jerky" motion of lines. Look at pcchen's animations.
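Gamma-correct downsampling is what keeps the overall brightness constant here. A small illustration, assuming a simple display gamma of 2.2 (the real sRGB transfer function differs slightly):

```python
GAMMA = 2.2

def to_linear(v):
    return (v / 255.0) ** GAMMA

def to_encoded(lin):
    return 255.0 * lin ** (1.0 / GAMMA)

# A pixel half covered by white on black: two black and two white samples.
samples = [0, 255, 0, 255]

naive = sum(samples) / len(samples)  # averaging the gamma-encoded values
correct = to_encoded(sum(to_linear(s) for s in samples) / len(samples))

print(round(naive))    # 128 -> displays at only ~22% luminance (too dark)
print(round(correct))  # 186 -> displays at the intended 50% luminance
```

Averaging in the encoded domain makes half-covered edge pixels too dark, which is why non-gamma-correct AA shows brightness pumping as coverage changes.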
 
I know this thread was started with the premise of looking forward; however, I can't help but look backward and think of probably my favorite form of FSAA of yesteryear...

Supersampling. I know it's very VERY costly, but the quality was second to none (well, especially when the only other widely used form is MSAA). Given the huge amount of VRAM cards are shipping with (512MB... 1GB...?) and the powerful processing of today's GPUs, in combination with the increasing memory clocks (and thus increasing bandwidth)... why... WHY is there no option to use SSAA anymore, particularly in older (less taxing) titles? IIRC, ATI's last SSAA part was the 8500 series; with the introduction of the R300 (9700), ATI abandoned SSAA altogether. Today's mainstream cards ship with 4-8x the amount of RAM the 8500 started with, and even today's lowest value parts are probably at least equal to (if not greater than) the 8500/GF3 in performance.

I do recall that Adaptive AA was recently enabled by ATI for their X1x00 series, and that through a registry hack one could enable it for the older X800 series as well. IIRC, Adaptive AA uses MSAA for the scene except where the alpha channel is concerned; in that case, it is my understanding that the AA method used is akin to supersampling, to better reproduce effects such as chain-link fences, tree limbs and telephone wires. I think (please feel free to correct me if I am wrong in my understanding) that the SSAA sample rate is set to 2x when, for example, 4x MSAA is used. Obviously, using this SSAA-like method incurs a relatively large performance decrease due to the nature of SSAA.

Recently there have been a handful of games that truly tax even the most bleeding-edge hardware setups, e.g. Oblivion, FEAR, etc., and in the case of those games, sure, SSAA use would have to be limited just to keep them playable. However, there are hundreds of other games that I think could benefit greatly from the use of SSAA, in which not just edges are antialiased but also textures and alpha, thus giving the consumer the best possible quality option.


So to reiterate... (sung to the tune of "Money for Nothing"...)

"I want my SSAA.."
 
Xmas said:
If we're talking hundreds or even thousands of samples, this effect is practically invisible, and the number of samples covered should be almost constant. And with gamma correct downsampling, overall brightness is constant as well.

However, what is noticeable is the "jerky" motion of lines. Look at pcchen's animations.

Oh, darn. Xmas beat me to it.


<whew!>
 
FrameBuffer> Supersampling .. I know it's very VERY costly

Ghost of D3D> In the past, everyone also agreed that "full screen depth buffering is VERY costly". Look what has happened!

Depth buffering was costly primarily because memory was expensive. Nowadays, depth buffering is cheap because memory is cheap *and* because depth buffering still uses linear interpolation.

Pixel rendering uses increasingly complex shaders to implement increasingly complex non-linear functions. When the shader time to render each sample is the bottleneck, increasing the number of samples rendered has a huge effect on performance. This won't change until we start producing images in a very different way.

I don't believe that supersampling is any better than MSAA for edge AA. Supersampling is better for fixing color artifacts within a surface (MSAA does nothing for that problem), but is it worth the cost? I once watched a demonstration of tiny moving highlights that were rendered with 256 samples per pixel. There was still flickering.

Using multiple point samples to simulate area sampling works less and less well as the function being sampled gets more and more non-linear, so I suggest that a better solution for computing nonlinear functions for pixel colors is to write shaders that estimate the color over the pixel area, instead of shaders that compute the color at lots of point samples.
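The flickering-highlight observation is easy to reproduce in miniature (an illustrative sketch with a made-up highlight size): a tiny disc covering under 1% of a pixel is estimated with 256 point samples per frame, and the estimate jumps around between frames while the analytic area stays constant.

```python
import numpy as np

rng = np.random.default_rng(1)
R = 0.05  # highlight radius in pixel units: tiny compared to the pixel

def sampled_coverage(cx, cy, n=256):
    # estimate the disc's coverage of the unit pixel from n point samples
    xs, ys = rng.random(n), rng.random(n)
    hits = (xs - cx) ** 2 + (ys - cy) ** 2 < R ** 2
    return hits.mean()

analytic = np.pi * R * R  # constant, no matter how the spot moves

# Eight "frames" of the same stationary highlight: the point-sampled
# estimates fluctuate around the true value, i.e. flicker.
frames = [sampled_coverage(0.5, 0.5) for _ in range(8)]
print([round(f, 4) for f in frames])
print(round(analytic, 4))  # 0.0079
```

The estimate is always a multiple of 1/256, so the smaller the feature, the more the per-frame quantization noise dominates, which is aranfell's argument for estimating the colour over the pixel area analytically instead.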

Enjoy, Aranfell
 
Here's Simon's old experiment:

http://web.onetel.net.uk/~simonnihal/assorted3d/samppat.html

The last one is 10k samples, AFAIK. No idea, though, if those contained any filters or not.

That is supersampling, though, and it normally comes with an LOD offset. With 16x samples I would expect an offset of -2.0, in which case I wouldn't mind any filter.
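The offset quoted above follows from the effective resolution increase: an ordered grid with n samples per pixel raises the sampling rate by sqrt(n) per axis, giving a texture LOD bias of -0.5*log2(n). A quick check:

```python
import math

def ssaa_lod_bias(n_samples):
    # sqrt(n) times the per-axis sampling rate -> the mipmap level can go
    # one step sharper for every doubling of the per-axis rate
    return -0.5 * math.log2(n_samples)

for n in (4, 9, 16):
    print(n, round(ssaa_lod_bias(n), 2))
# 4 -1.0
# 9 -1.58
# 16 -2.0
```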

Multisampling doesn't come with any LOD offset, though, since it doesn't touch polygon interior data. If any of the filters discussed here result in something close to Quincunx or even 4x 9-tap (Lord help), then I'm not particularly excited about the idea.

If it were possible to apply such filters only on polygon edges/intersections while coupled with MSAA, then of course I wouldn't have any objections at all.
 