Alternative AA methods and their comparison with traditional MSAA

I think it's completely free to take the same concept. I'm not aware of any patents on the general algorithm, and game devs are usually against patenting this sort of thing anyway. The actual implementation and source code are probably confidential and the property of LucasArts, but they haven't been released or included in the presentation anyway.

Also keep in mind that it's one thing to get the general algorithm right and another to have it run fast enough to be practical. The first straight implementation took something like 8 ms, and the final one is around 1.6 ±0.3 ms. The exact optimization details are left to anyone trying to use it, and that is significant work as well, even though the really clever thing was coming up with the directional blur idea itself.
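
For anyone who hasn't seen the presentation, the gist of a directional blur filter is roughly the following. This is a minimal C++ sketch of the general concept only, not the LucasArts implementation; the threshold and kernel radius are made-up values.

#include <algorithm>
#include <cmath>
#include <vector>

struct Image {
    int w, h;
    std::vector<float> luma;                        // single channel for brevity
    float at(int x, int y) const {
        x = std::clamp(x, 0, w - 1);
        y = std::clamp(y, 0, h - 1);
        return luma[y * w + x];
    }
};

// For each pixel: estimate the dominant edge orientation from central
// differences and blur along the edge (perpendicular to the gradient) only.
float directionalBlurPixel(const Image& img, int x, int y) {
    float gx = img.at(x + 1, y) - img.at(x - 1, y);
    float gy = img.at(x, y + 1) - img.at(x, y - 1);
    const float edgeThreshold = 0.05f;              // invented threshold
    if (std::abs(gx) < edgeThreshold && std::abs(gy) < edgeThreshold)
        return img.at(x, y);                        // flat area: leave untouched

    int dx = (std::abs(gx) < std::abs(gy)) ? 1 : 0; // blur along the edge
    int dy = 1 - dx;
    const int radius = 2;                           // the "short" blur; a longer
    float sum = 0.0f;                               // one would handle long edges
    for (int i = -radius; i <= radius; ++i)
        sum += img.at(x + i * dx, y + i * dy);
    return sum / (2 * radius + 1);
}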

So it'll probably become another option from now on for every developer looking for a post-process AA solution. Some may still stick with MLAA, or temporal AA, or anything else, because it all depends on what else their engine is doing with the hardware resources and how far along they are with their development.
Maybe we'll see it as soon as BF3, or maybe only in Codename Kingdoms because there's more time to experiment with it before the release date; or maybe we won't see it in any other game at all because it won't be practical enough for anyone else.
 
Indeed, the results are really good and the stability is quite impressive in comparison with MLAA. Just wondering... can other developers copy what LA has already done, or what? I mean, the code isn't available to everyone, right?
There's a download link for the demo and source code. I don't see any restrictions, and I have the option to download it myself. SW:FU2 had some very novel tech that they ought to be commended on, with frame-rate upscaling and this AA technique.

This is a nice approach, and goes to show how different disciplines bring different insights. A lot of this new AA investigation since MLAA was obvious to me when I was toying with a Photoshop antialiasing plugin for masks, taking a monochrome image and smoothing the jaggies. The 2D space gives a whole different way of looking at things.

The Holy Grail of this approach is to evaluate each step and individually interpolate along its length, whereas DLAA has to pick between two constant-length blurs. Perhaps a smarter person than I can see how to evaluate edges into pieces of irregular length, with nodes at either end, and effectively draw an AA'd line from point to point? It should even be possible to use curves and get more accurate edge AA than straight lines, although that'd make very little difference alongside the natural polygonisation of objects, so it probably wouldn't be worth the cost.
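
To make the per-step idea a bit more concrete, here's a rough sketch for horizontal edges only. The edge predicate is assumed to come from whatever edge-detection pass is already being run, and the returned position is only the raw ingredient; a proper MLAA-style resolver would turn it into a trapezoid-coverage blend weight.

#include <functional>

float positionAlongStep(int x, int y,
                        const std::function<bool(int, int)>& isHorizontalEdge,
                        int maxSearch = 16)
{
    // Walk left and right to find where this edge step begins and ends.
    int left = 0, right = 0;
    while (left  < maxSearch && isHorizontalEdge(x - left  - 1, y)) ++left;
    while (right < maxSearch && isHorizontalEdge(x + right + 1, y)) ++right;

    float length = float(left + right + 1);
    // 0..1 position of this pixel along the step; the interpolation (and any
    // curve fitting) would be driven by this instead of a fixed blur length.
    return (float(left) + 0.5f) / length;
}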
 
Still makes the really shallow angled edges look like blurred steps ... there is no substitute for coverage information computed during rasterization.

Shame NV_coverage_sample is NVIDIA only.
 
SRAA eliminates some MLAA artifacts thanks to the presence of subsamples, but unfortunately introduces new ones due to the absence of color information. FXAA II looks quite good, as does DLAA, but is slightly cheaper; can't wait to see FXAA II implemented in the driver (in Crysis 2, FXAA is doing a good job).
 
FXAA is half the cost and looks somewhat similar to DLAA, but it seems that the artifacts that get in the way cause light sources to look somewhat muted and curved edges to look a bit blurred. Less so with FXAA, but the muted lights are still there.

Still, 1.3 ms is less than 4% of a 30 fps frame budget and the results are quite nice. Yeah, I think post-process AA is the way to go from now on, especially since DR engines are taking over.
 
Yeah, I think post-process AA is the way to go from now on, especially since DR engines are taking over.
It's a useful tool but it's still not enough by itself. The "way to go" in the long run will almost certainly involve adaptive sub-sampling. DR and adaptive schemes like MSAA work fine on suitably modern hardware - there's no reason to restrict oneself to pixel-frequency data.
 
Indeed. Seems kinda backward that people are looking to post AA when power has been increasing such that subsampling is more of a possibility than it's ever been. To lose subsample resolution would be a significant step backwards!
 
Is creating a list of all the samples and subsamples for fragmented pixels and then doing deferred shading in a compute shader really a big deal on DX11 hardware?

Hell ... even with DirectX 10.1 there have to be tricks to only do deferred shading for subsamples of fragmented pixels, without branching.
 
It's a useful tool but it's still not enough by itself. The "way to go" in the long run will almost certainly involve adaptive sub-sampling. DR and adaptive schemes like MSAA work fine on suitably modern hardware - there's no reason to restrict oneself to pixel-frequency data.
I was speaking strictly about consoles, my bad, should have mentioned it.

Sure, nothing beats hardware AA on PC, and any sort of post-process AA (not sure about SRAA?) shouldn't be seen as the "way to go" there. But on consoles I think that will become the case, since DR engines and MSAA are a no-no, and the MSAA cost on PS3 is quite high, as it is on 360 now. It wasn't the case 3-4 years ago, but now it is, and I think that good post-process AA will give better results than hardware 2xMSAA for what seems to be a lower cost.
 
Indeed. Seems kinda backward that people are looking to post AA when power has been increasing such that subsampling is more of a possibility than it's ever been. To lose subsample resolution would be a significant step backwards!

A part of the problem is that the overhead of MSAA keeps increasing as geometric density is increasing. As more and more engines switch to deferred shading, the overhead and complexity added by MSAA are further multiplied. Not to mention the memory overhead. Not a big problem on PC, but for consoles it's increasingly hard to motivate doubling the size of render targets, especially if you are doing deferred shading, since you have many more targets to double the size of. If you have four buffers at 1280x720, then you have about 14 MB of overhead for 2x MSAA, which still provides a little too little antialiasing IMHO.
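
For what it's worth, that figure checks out if you assume 32 bits per sample for each target; a quick sanity check:

#include <cstdio>

int main() {
    const int width = 1280, height = 720;
    const int targets = 4;                  // a typical deferred G-buffer layout
    const int bytesPerSample = 4;           // e.g. RGBA8 or D24S8
    const int extraSamples = 1;             // 2x MSAA stores one extra sample/pixel
    double extraMB = double(width) * height * targets * bytesPerSample
                   * extraSamples / (1024.0 * 1024.0);
    std::printf("extra memory for 2x MSAA: %.1f MB\n", extraMB);  // ~14.1 MB
    return 0;
}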

Personally I think subpixel samples are a bit overrated. What most people are looking for in an antialiasing algorithm is to get rid of jaggies. For the overhead MSAA does a rather poor job at it and introduces a lot of complexities to the game engine, especially if you do deferred. So it doesn't surprise me that post-AA seems to be gaining ground.
 
Is creating a list of all the samples and subsamples for fragmented pixels and then doing deferred shading in a compute shader really a big deal on DX11 hardware?
It's a bit expensive right now due to some overheads, but there's no fundamental reason that it has to be. I doubt that it will continue to be in the future.

Hell ... even with DirectX 10.1 there have to be tricks to only do deferred shading for subsamples of fragmented pixels, without branching.
Yeah see here. Even a simple rescheduling of sub-sample shading produces performance similar to forward MSAA.
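
A CPU-side toy of that rescheduling idea, just to make the shape of it clear (the tolerances and the stand-in shade() function are invented; a real implementation would do this on the GPU): shade every sample only where the G-buffer samples within a pixel actually disagree, and shade once otherwise. The "fragmented" pixels are exactly the list you would hand to a per-sample pass.

#include <array>
#include <cmath>

constexpr int kSamples = 4;

struct GBufferSample { float depth, nx, ny, nz; };
using PixelSamples = std::array<GBufferSample, kSamples>;

// Treat a pixel as "fragmented" (an edge) if its samples disagree in depth or
// normal beyond some tolerance.
bool isFragmented(const PixelSamples& p) {
    for (int i = 1; i < kSamples; ++i) {
        if (std::abs(p[i].depth - p[0].depth) > 0.001f) return true;
        float n = p[i].nx * p[0].nx + p[i].ny * p[0].ny + p[i].nz * p[0].nz;
        if (n < 0.99f) return true;
    }
    return false;
}

float shade(const GBufferSample& s) {               // stand-in lighting: N.L
    const float lx = 0.0f, ly = 0.7071f, lz = 0.7071f;
    float ndotl = s.nx * lx + s.ny * ly + s.nz * lz;
    return ndotl > 0.0f ? ndotl : 0.0f;
}

float resolvePixel(const PixelSamples& p) {
    if (!isFragmented(p))
        return shade(p[0]);                         // one shading invocation, reused
    float sum = 0.0f;
    for (const auto& s : p) sum += shade(s);        // per-sample path
    return sum / kSamples;
}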

A part of the problem is that the overhead of MSAA keeps increasing as geometric density is increasing.
It doesn't have to though - see above. Particularly with deferred you don't just do the naive thing and AA at every triangle boundary - you find continuous surface regions, which is close to the ideal.

Not to mention the memory overhead.
Yeah this is an issue, but in the long run it can probably be solved acceptably by virtual memory and MSAA "compression". Even now if you're doing tile-based DS it's disturbingly practical at high resolutions on PC, despite the memory footprint. Integrated parts don't have as big an issue with the memory footprint either (although bandwidth is an issue, but easier to address). Certainly a problem for current consoles though of course.

Personally I think subpixel samples are a bit overrated. What most people are looking for in an antialiasing algorithm is to get rid of jaggies.
I'm not so sure... the "simple" jaggies are not the most obvious ones IMHO. It's the big seas of noise (a la Crysis 1 style) that are really distracting, and those are actually made worse by many screen-space AA methods.

I agree, though, that for simple edges screen-space AA is an efficient way to handle them, so I imagine what we'll settle on in the future is a reconstruction filter of some description that adaptively accesses sub-pixel data in "hard" regions.
 
Personally I think subpixel samples are a bit overrated. What most people are looking for in an antialiasing algorithm is to get rid of jaggies. For the overhead MSAA does a rather poor job at it and introduces a lot of complexities to the game engine, especially if you do deferred. So it doesn't surprise me that post-AA seems to be gaining ground.
Temporal aliasing is a big issue where sub-pixel-sized polygons alternate between visible and invisible given the sampling points. Shimmering can't be dealt with by straight image-reconstruction techniques, and temporal techniques produce different artefacts. So ideally, in any given image, where a triangle is less than a pixel in width it'll be sampled at lots of points. But for the straightforward jaggies on the edges of larger triangles, reconstruction will be visually no different, so it's worth doing. We can't abandon all flavours of supersampling for reconstruction methods, though.

At the moment the best place seems to be the happy middle between selective multisampling and image 'reconstruction'. For the latter I'd like to see someone try a better line-detection and drawing system that provides individual per-step interpolation.
 
Games still haven't really touched the most brutal forms of aliasing, some of which can't be dealt with in post-processing at all. As detail levels continue to increase, supersampling (albeit adaptive) will eventually become inevitable for AA.

The problem is that with offline rendering, you always have the option to add more hardware to make your deadlines. You can't do that with games and as the hardware gets older you have no other option but to compromise in order to get better overall visuals out of the system.
 
It's not so much getting rid of subpixel info that is making people happy as the mass exploration of new approaches that has got people excited. We know we can do it through supersampling, but that is the dumb brute-force approach. It's about working smarter, not harder.
 
It's not so much getting rid of subpixel info that is making people happy as the mass exploration of new approaches that has got people excited. We know we can do it through supersampling, but that is the dumb brute-force approach. It's about working smarter, not harder.
Sure, but MSAA is already not "brute force", and compression + coverage sampling have been around for ages and largely address the argument as well. I agree taking the reconstruction filter "beyond" the pixel level (anyone remember quincunx? Hehe) is useful, but an ordered grid is not enough. I'll say that more forcefully: filters that use purely ordered grids (such as any of the current screen-space AA methods) are not efficient. We've known that one for ages :)
 
It's all so fucking hacky and substandard too, all aimed at antiquated hardware which is irrelevant to me ... you can do better on DX10.1/11.
 
Take the MSAA patterns, make them different across a grid of pixels (a 32*16 tile?), and use them to supersample colour as well as depth (which is already supersampled), and voila!
Of course this is supersampling, but you get a pseudo-random distribution and you trade aliasing for noise, which humans are less sensitive to.
An alternative would be to vary the pattern randomly for each pixel to get stochastic sampling. (And store that info.)
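
A software toy of what that per-pixel variation could look like (nothing here reflects an actual hardware feature; the hash and the base pattern are invented): each pixel hashes its coordinates to rotate its four sample positions, so the regular aliasing of an ordered grid turns into noise.

#include <cmath>
#include <cstdint>

struct Sample { float dx, dy; };                    // offsets within the pixel

uint32_t hashPixel(uint32_t x, uint32_t y) {        // small integer hash
    uint32_t h = x * 0x9E3779B1u ^ y * 0x85EBCA77u;
    h ^= h >> 16; h *= 0x7FEB352Du; h ^= h >> 15;
    return h;
}

// Rotate a standard 4-sample pattern by a per-pixel pseudo-random angle.
void pixelSamplePattern(uint32_t x, uint32_t y, Sample out[4]) {
    static const Sample base[4] = {                 // rotated-grid-style base
        {-0.25f, -0.125f}, {0.125f, -0.25f}, {-0.125f, 0.25f}, {0.25f, 0.125f}};
    float angle = float(hashPixel(x, y) & 0xFFFFu) * (6.2831853f / 65536.0f);
    float c = std::cos(angle), s = std::sin(angle);
    for (int i = 0; i < 4; ++i) {
        out[i].dx = base[i].dx * c - base[i].dy * s;
        out[i].dy = base[i].dx * s + base[i].dy * c;
    }
}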

I wonder why we don't have that in hardware; it's likely the IHVs have studied that solution, so if anyone knows why it was rejected, I'm all ears...
 
Should be just as easy to vary them per pixel.

(I'm assuming subpixel U/V/Z are calculated by individual point/triangle intersection tests, and for those it doesn't really matter if the locations are fixed per pixel, per block or per frame.)
 