Skyrim has MSAA + FXAA (to solve transparency antialiasing), but applying a post-process AA filter on top of a standard MSAA resolve is just wrong. The AA filter should use the subsamples to improve the edge gradient calculation, not just do two blends on top of each other. If I understand properly, SMAA should address this problem?
Combining temporal "subsample" data with a post-process AA filter is a good idea. But just doing both on top of each other (the brute-force way) of course doesn't do much good. The algorithm needs to be more intelligent than that. SMAA seems to be the first solution heading down that path.
We tried to leverage the subsample information in various ways.
The first thing we tried is applying MLAA on top of a resolved image. Unfortunately, this didn't work. Edge detection fails because the edges have much less contrast after the resolve; furthermore, even if you run the edge detection pass on the original unresolved data, the coverage areas used for blending will not produce the correct values (as the pixels are already blended by the 2x resolve). We tried to bias this in various ways, but we came to the conclusion that it's impossible to do in a general way. Here is what happens if you do MLAA after resolving:
http://www.flickr.com/photos/70160915@N03/6373855775/
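The contrast loss after the resolve is easy to see with a toy example (a sketch I made up, not the authors' code): take a hard luma edge where one pixel's two subsamples straddle the boundary.

```python
# Illustrative sketch: why edge detection weakens after a 2x MSAA resolve.
# A hard edge between luma 0 and 1; the middle pixel's two subsamples
# straddle the edge.
subsamples = [
    [0.0, 0.0],  # pixel fully on the dark side
    [0.0, 1.0],  # boundary pixel: one subsample on each side
    [1.0, 1.0],  # pixel fully on the bright side
]

# 2x resolve: average each pixel's subsamples.
resolved = [sum(s) / len(s) for s in subsamples]  # [0.0, 0.5, 1.0]

# Contrast a post-resolve edge detector sees between adjacent pixels:
post_resolve_contrast = [abs(b - a) for a, b in zip(resolved, resolved[1:])]
# [0.5, 0.5] -- the original 1.0 step is halved on both sides, so a
# detection threshold tuned for unresolved edges can easily miss it.
```

The same halving is what breaks the blend weights: the boundary pixel has already been mixed by the resolve before MLAA ever touches it.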
Then we tried applying MLAA before resolving. It worked better, but it still performed worse than MSAA at the same sample count with regard to subpixel features (see MLAA + 4x):
http://www.flickr.com/photos/70160915@N03/6373890101/
This is because of two main reasons. The first is that MLAA tends to round everything at the subpixel level, which means MSAA or SSAA won't be able to resolve to the proper values. The second is that MLAA (like any other post-process AA) assumes the revectorization is centered in the pixel. However, when supersampling or multisampling MLAA, this revectorization must be moved to the center of the sampling pattern:
http://www.flickr.com/photos/70160915@N03/6373856415/
The problem is that if we don't do this, areas will be over/underestimated, causing objects to glow. Furthermore, applying MLAA with MSAA will never converge to plain MSAA, even at really high sample counts.
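To make the over/underestimation concrete, here is a toy version of MLAA-style blend weights (a sketch under my own simplified assumptions, not the actual algorithm): the revectorized edge falls linearly from height 0.5 at the crossing to 0 at distance L, and each pixel's weight is the trapezoid area under that line. Shifting the evaluation points, standing in for moving the revectorization off the pixel center, changes every weight.

```python
def blend_areas(L, offset=0.0):
    """Per-pixel trapezoid areas under a revectorized edge that falls
    linearly from 0.5 at the crossing to 0 at distance L.
    `offset` shifts the evaluation points, a stand-in (my assumption,
    for illustration) for a revectorization not centered in the pixel."""
    def h(x):
        return max(0.0, 0.5 * (1.0 - x / L))
    return [0.5 * (h(i + offset) + h(i + 1 + offset)) for i in range(L)]

centered = blend_areas(4)               # pixel-centered revectorization
shifted = blend_areas(4, offset=0.25)   # hypothetical sample-pattern offset
# centered -> [0.4375, 0.3125, 0.1875, 0.0625]
# Every shifted weight differs from its centered counterpart, so using
# pixel-centered areas under MSAA systematically mis-weights coverage --
# the over/underestimation ("glow") described above.
```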
Another option we tried was running MLAA at the subpixel level; however, post-process antialiasing does not work well with rotated-grid patterns, which rules out any MSAA approach.
On a side note, I'd always wondered if anyone's tried some form of MLAA on shadow maps. Maybe not too useful, as MSAA'd shadow maps are a lot cheaper than the full color samples etc. that you need for your frame, but it sounds like an interesting experiment. I bring it up mainly because of Skyrim's horribly aliased shadows; not even bumping them to 4096 myself has truly solved the problem.
It could be used for rendering antialiased hard shadows if used in tandem with VSM or ESM (as regular shadow mapping cannot be prefiltered). For soft shadow mapping I think it would be less useful because the blurring gets rid of aliasing anyway.
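The reason VSM can be prefiltered (and plain shadow maps can't) is that it stores depth moments, which average correctly, and reconstructs visibility afterwards with the standard Chebyshev upper bound. A minimal sketch of that bound (the function name and epsilon are my own; the formula is the published VSM one):

```python
def vsm_visibility(mean_depth, mean_depth_sq, receiver_depth):
    """Chebyshev upper bound on the lit fraction of the filtered region,
    given the filtered moments E[z] and E[z^2].
    Filtering the moments first is well-defined; averaging binary
    depth-comparison results (plain shadow mapping) is not."""
    if receiver_depth <= mean_depth:
        return 1.0  # receiver in front of the average occluder: fully lit
    variance = max(mean_depth_sq - mean_depth * mean_depth, 1e-6)
    d = receiver_depth - mean_depth
    return variance / (variance + d * d)

# Two occluder depths 0.4 and 0.6 filtered together:
# moments E[z] = 0.5, E[z^2] = 0.26; a receiver at 0.7 gets ~0.2 visibility.
```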
It's true that motion blur helps in these cases, but it's not a perfect solution. Furthermore, there are still valid small-motion scenarios that break under reprojection caching, like high-frequency foreground geometry (blinds, foliage, etc.) that occludes very different fragments each frame even when the camera/objects are moving relatively slowly.
Agreed; actually, it's easy to find these scenarios by playing with our demo. We think that mixing MLAA, MSAA and temporal SSAA into a single technique is a good approach because it offers fallbacks in these kinds of failure scenarios.
Yes, but on a certain console you cannot directly use a render target as a texture; you have to resolve its contents to main memory first. 720p full-screen resolves are not cheap: a few of them and you've already spent 1 ms.
This is actually a problem with our MLAA approach on that console: half of the time is spent on resolves. We have various ideas to improve things on consoles, and we hope to get a devkit soon to try them!
With FXAA you of course have to do a separate edge-detection pass if you want to use stencil culling. But many engines have multiple existing full-screen passes (tiled deferred lighting, for example) that can easily be modified to do the edge detection in addition to their usual work. For example, if you integrate edge detection into the full-screen (tiled) lighting pass, you already have the depth sampled (for position calculation) and only need to sample the neighbors' depths (which are, fortunately, in the texture cache). So the edge detect becomes really inexpensive.
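The neighbor-depth test being described is tiny; here is a hedged CPU-side sketch of it (pure Python, function name and threshold are my assumptions, not any particular engine's code):

```python
def depth_edges(depth, threshold=0.01):
    """2D list of depths -> boolean mask marking pixels whose depth
    differs from the left or top neighbor by more than `threshold`.
    These are exactly the extra neighbor taps a lighting pass would
    add, and they tend to already sit in the texture cache."""
    h, w = len(depth), len(depth[0])
    mask = [[False] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            left = abs(depth[y][x] - depth[y][x - 1]) if x > 0 else 0.0
            top = abs(depth[y][x] - depth[y - 1][x]) if y > 0 else 0.0
            mask[y][x] = max(left, top) > threshold
    return mask
```

In a real engine this mask would be written to stencil so the more expensive blending-weight pass only touches edge pixels.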
We like doing the edge-detection step in the motion-blur pass, because this way you can skip pixels that will receive a lot of motion blur. Also, because the neighbors will probably be in the texture cache (as you mentioned), it adds little overhead.
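The skip criterion amounts to a single extra test per pixel; a minimal sketch (both thresholds are invented values for illustration):

```python
def needs_aa(edge_strength, velocity_len,
             edge_threshold=0.1, velocity_threshold=2.0):
    """Flag a pixel for MLAA only if it lies on an edge AND won't be
    smeared by motion blur anyway (velocity in pixels per frame).
    Thresholds here are made-up illustrative values."""
    return edge_strength > edge_threshold and velocity_len < velocity_threshold

# An edge pixel moving fast is skipped: the blur hides the aliasing,
# so spending MLAA work on it buys nothing.
```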
Regarding Assassin's Creed Revelations' AA, it looks really nice indeed!