Alternative AA methods and their comparison with traditional MSAA

Sure, but MSAA is already not "brute force", and compression + coverage sampling have been around for ages and largely address the argument as well. I agree taking the reconstruction filter "beyond" the pixel level (anyone remember quincunx? Hehe) is useful, but an ordered grid is not enough. I'll say that more forcefully: filters that use purely ordered grids (such as any of the current screen-space AA methods) are not efficient. We've known that one for ages :)

It's optimized brute force. Give it sufficiently dense geometry and performance degrades to SSAA levels. For every extra bit of coverage precision you need, you double the memory consumption. That's pretty darn brute force to me, and hardly an elegant solution.
 
The FXAA demo looks good, but if you zoom way out so the fence is a single pixel wide... the AA disappears and you end up with crappy IQ. Are there any fast AA methods that work with sub-pixel features? It would be nice to prevent the awful shimmering moiré patterns on things like fences/grates at a distance.
 
Humus, you can merge MSAA fragments from the same surface.

Given sufficiently high-frequency scenery, the assumption that neighbouring pixels are a good approximation of the underlying surfaces goes down the shitter too.
 
Post-processes are no replacement for extra samples. As shader and geometry detail keeps increasing, everyone will see the difference. Post-AA will eventually make everything blurry, and break down on a lot of things.
MSAA isn't the final answer either, of course.
 
Hmm, it would actually be much better if somehow the AA algorithm had knowledge of high-frequency geometry... and would purposely blur it out at a distance. Blurring is much preferable to that horrible shimmering on distant objects that would be too small to see detail on anyhow.
 
What you are asking for is basically pre-filtered geometry (think mipmapping).

One example of this can be seen in the GigaVoxels papers.
http://artis.imag.fr/Publications/2009/CNLSE09/
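To make the mipmapping analogy concrete, here's a minimal sketch of the texture case, where the pre-filtering happens by picking a mip level that matches the pixel footprint; the function name and inputs are purely illustrative. Pre-filtered geometry (as in the GigaVoxels work) applies the same idea to the scene representation itself.

#include <algorithm>
#include <cmath>

// Pick a mip level whose texels roughly match the pixel's footprint, so
// frequencies above the pixel rate are already averaged away instead of
// aliasing. footprintTexelsX/Y = how many base-level texels the pixel covers.
float mipLevel(float footprintTexelsX, float footprintTexelsY)
{
    float footprint = std::max(footprintTexelsX, footprintTexelsY);
    return std::max(0.0f, std::log2(footprint)); // level 0 = full resolution
}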
 
Of course this is supersampling, but you have a pseudo-random distribution and you trade aliasing for noise, which humans are less sensitive to.
An alternative would be to vary the pattern randomly for each pixel to get stochastic sampling. (And store that info.)

I wonder why we don't have that in hardware; it's likely the IHVs have studied that solution, so if anyone knows why it was rejected, I'm all ears...
You do have to be careful with different patterns per pixel: in some cases the additional noise on the edge can actually make it look worse.
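To make the "vary the pattern per pixel" idea concrete, here's a minimal CPU-side sketch of per-pixel jittered (stratified) sample positions. The hash and names are purely illustrative, not any actual hardware sampling scheme, and as noted above you'd want enough samples for the added noise to pay off.

#include <cstdint>
#include <vector>

struct Sample { float x, y; }; // offsets within the pixel, in [0,1)

// Small integer hash so every (pixel, sample) pair gets a different jitter.
static uint32_t hashPixel(uint32_t px, uint32_t py, uint32_t i)
{
    uint32_t h = px * 73856093u ^ py * 19349663u ^ i * 83492791u;
    h ^= h >> 16; h *= 0x7feb352du;
    h ^= h >> 15; h *= 0x846ca68bu;
    h ^= h >> 16;
    return h;
}

// n*n stratified samples, jittered differently for every pixel: the strata
// keep coverage reasonably uniform, the jitter trades aliasing for noise.
std::vector<Sample> pixelSamples(uint32_t px, uint32_t py, uint32_t n)
{
    std::vector<Sample> s;
    s.reserve(n * n);
    for (uint32_t j = 0; j < n; ++j)
        for (uint32_t i = 0; i < n; ++i)
        {
            uint32_t idx = j * n + i;
            float jx = (hashPixel(px, py, 2u * idx)      & 0xffffu) / 65536.0f;
            float jy = (hashPixel(px, py, 2u * idx + 1u) & 0xffffu) / 65536.0f;
            s.push_back({ (i + jx) / n, (j + jy) / n }); // stratum corner + jitter
        }
    return s;
}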
 
Of course this is supersampling, but you have a pseudo-random distribution and you trade aliasing for noise, which humans are less sensitive to.
The issue is that you need a bunch of samples before that starts becoming true... like 16+ typically, and people are barely willing to spring for the cost of 4 these days, let alone more.

You definitely can vary the sample locations per pixel at little cost, but as Simon notes, unless you have some minimum number of samples you can easily do more harm than good.

It's optimized brute force. Give it sufficiently dense geometry and performance degrades to SSAA levels. For every extra bit of coverage precision you need, you double the memory consumption. That's pretty darn brute force to me, and hardly an elegant solution.
Given "sufficiently dense geometry" rasterization itself is inelegant and expensive. The point is you need non-grid sampling or you simply have to do way too much work to get a sufficiently antialiased result. Given a target quality, stochastic/jittered/irregular samples require less work, are "more efficient" and thus I would argue more elegant than uniform sampling.

And yeah, tiny triangles are sucky. But they're sucky throughout the entire current rendering pipeline, not just AA. I'd argue it's still not totally clear that rasterizing tiny triangles is an efficient way to increase quality either. It certainly isn't on current GPUs.

I'll further note that "deferred MSAA" does not suffer from having to shade at sample frequency at every triangle edge. Indeed you use similar planarity tests (or whatever) to decide where you want to do the extra shading. But the latter point is key... you do have to adaptively do some extra shading or else your quality is just going to be poor. You just don't have to dumbly do it at every internal triangle edge.
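As a rough illustration of the adaptive part described above (shade at sample rate only where the MSAA samples actually differ), here's a sketch of a per-pixel classification test. The G-buffer layout, thresholds, and the plain depth compare are illustrative; a real planarity test would fit a depth plane from derivatives, and you'd likely also look at material IDs.

#include <cmath>

struct GBufferSample { float depth; float nx, ny, nz; };

// Returns true if the pixel's samples differ enough that it should be shaded
// at sample frequency; otherwise one per-pixel shading result can be reused.
bool needsPerSampleShading(const GBufferSample* s, int sampleCount,
                           float depthEps = 1e-3f, float normalEps = 0.95f)
{
    for (int i = 1; i < sampleCount; ++i)
    {
        if (std::fabs(s[i].depth - s[0].depth) > depthEps)
            return true; // depth discontinuity
        float d = s[i].nx * s[0].nx + s[i].ny * s[0].ny + s[i].nz * s[0].nz;
        if (d < normalEps)
            return true; // normals diverge
    }
    return false; // planar-ish pixel: shade once, reuse for all samples
}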
 
The issue is that you need a bunch of samples before that starts becoming true... like 16+ typically, and people are barely willing to spring for the cost of 4 these days, let alone more.

You definitely can vary the sample locations per pixel at little cost, but as Simon notes, unless you have some minimum number of samples you can easily do more harm than good.
Agreed. I don't remember the exact experiment, but years ago I varied samples per pixel in a C model, ran some game scenes, and the noise seemed worse than normal aliasing to me. I only looked at single frames though, so maybe motion would have changed my opinion. I didn't test higher than 16x AA.
 
What was the name of that method which stored 3 samples + coverage masks per pixel again? Maybe with DX10.1 and DX11 hardware it's time to dust that shit off again and take a new look.
 
This morning at work I took a look at the FXAA shader, and spent an hour or two cleaning it up and integrating it into our engine. It has most of your normal post-AA issues (edges still crawl, and since it's based on luminosity it just totally misses edges that don't have a strong gradient in the green channel), but I'd say it compares favorably to MLAA or DLAA. Definitely not bad for a drop-in solution!
 

An hour or two? You must have done something interesting to it, care to share?
 

Oh nothing interesting...mostly just fitting it into our shader permutation system and changing the way it handles gamma correction. :p
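For context on the green-channel remark above: post-AA filters like this drive their edge detection from a luma estimate, which is dominated by the green weight. A tiny sketch, using the common Rec. 601-style weights (the exact formula FXAA uses varies by version, so treat these numbers as an assumption):

struct RGB { float r, g, b; };

// Green-heavy luma estimate; edges that differ mostly in blue (or red)
// produce only a weak luma gradient and can slip past the edge detector.
float luma(const RGB& c)
{
    return 0.299f * c.r + 0.587f * c.g + 0.114f * c.b;
}

// Example: luma({0,0,1}) = 0.114 vs luma({0,0,0}) = 0.0 -> weak gradient,
// even though the colours are very different.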
 
It's optimized brute force. Give it sufficiently dense geometry and performance degrades to SSAA levels. For every extra bit of coverage precision you need, you double the memory consumption. That's pretty darn brute force to me, and hardly an elegant solution.
While storage grows linearly with the number of visibility samples, memory BW doesn't, thanks to caching and compression. MSAA is also a robust/predictable algorithm, with well-understood failure modes. Other algorithms... not so much :)

No doubt it's possible to do better, but I wouldn't characterize MSAA in such negative terms. The first HW implementations were definitely brute force; current implementations are way smarter. Moreover, one can do clever things on top of MSAA; see Andrew's work with deferred shading, IIRC also adopted by DICE for BF3.

That said, I am also looking with interest at what this new wave of alternative AA methods will bring to the table.
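To put rough numbers on the storage-vs-bandwidth point, here's a back-of-envelope calculation for an uncompressed 1080p target with RGBA8 colour and D24S8 depth/stencil; real hardware compresses both surfaces, which is exactly why BW scales better than storage.

#include <cstdio>
#include <initializer_list>

int main()
{
    const long long w = 1920, h = 1080;
    const long long bytesPerSample = 4 /*RGBA8*/ + 4 /*D24S8*/;
    for (int samples : {1, 2, 4, 8})
    {
        long long bytes = w * h * bytesPerSample * samples;
        std::printf("%dx MSAA: %.1f MB\n", samples, bytes / (1024.0 * 1024.0));
    }
    return 0;
}
// Prints roughly: 1x 15.8 MB, 2x 31.6 MB, 4x 63.3 MB, 8x 126.6 MB.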
 
http://www.iryoku.com/aacourse/#top

Seems like there will be a lot of new AA solutions at SIGGRAPH.
Tentative Schedule:
0:00 Introduction - Diego Gutierrez / Jorge Jimenez
0:05 A Directionally Adaptive Edge Anti-Aliasing Filter - Jason Yang
0:20 Morphological Anti-Aliasing (MLAA) - Alexander Reshetov
0:35 MLAAQ: GPU MLAA with Quality Improvements - Jorge Jimenez
0:50 Hybrid CPU/GPU MLAA on the Xbox 360 - Pete Demoreuille
1:05 God of War 3 Integration - Cedric Perthuis
1:20 PlayStation Edge MLAA - Tobias Berghoff
1:35 The Saboteur Anti-Aliasing (SPUAA) - Henry Yu
1:50 Break

2:05 Subpixel Reconstruction Antialiasing (SRAA) - Morgan McGuire
2:20 Fast approXimate Anti-Aliasing (FXAA) - Timothy Lottes
2:35 Distance-to-edge Anti-Aliasing (DEAA) - Hugh Malan
2:50 Directionally Localized Anti-Aliasing (DLAA) - Dmitry Andreev
3:05 Crysis 2 Anti-Aliasing - Tiago Sousa
3:20 Wrap-up and Discussion / Q&A
3:30 Close
 
Wow, that's a lot of different methods, it'll be interesting to see if we can find a clear winner.
 