Alternative AA methods and their comparison with traditional MSAA

Unlike MSAA, a screenspace solution can help not only with edge aliasing but also with shader/tone-mapping and texture aliasing.
 
Very nice shots, Nebula. I don't mind the 'breakdown' in high-contrast areas. The images also show scenarios where MLAA may/will have problems.

In a game where these conditions don't exist (e.g. GoW3 has no power lines; Kratos and Sackboys are bald), or when the devs treat the image with other effects, I think the MLAA images (e.g. colorful objects in focus) are still cleaner.
 
Unlike MSAA, a screenspace solution can help not only with edge aliasing but also with shader/tone-mapping and texture aliasing.
At least in GoW3 it seemed to do some wonders with sharp shadow edges. :)

What I would love to see is MLAA with subpixel information.
This would help with near vertical/horizontal edges and would make gradual movement possible.
This is going to sound like a noob question, but is 2x temporal AA comparable to 2xMSAA when it is on?
In a still image it's pretty much 2xSSAA, since the sample location is shifted as well.
On a moving image it is effectively 1xAA.
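To put some numbers on that (a toy Python/numpy sketch; the 1-D "scene" and values are just mine for illustration): with a static camera, averaging two frames whose sample positions alternate by half a pixel uses exactly the same set of samples as 2xSSAA, but as soon as the image moves between the two frames the second frame's samples describe a different picture and you're back to roughly one useful sample per pixel, plus ghosting.

```python
# Toy sketch of 2x temporal AA (assumed setup, not any actual console implementation):
# two frames rendered with the sample position shifted half a pixel, then averaged.
import numpy as np

def scene(x, shift=0.0):
    """1-D 'scene': an object edge at x = 10.3, optionally shifted by camera motion."""
    return (x < 10.3 + shift).astype(float)

width = 20
centres = np.arange(width) + 0.5                 # pixel centres

# Two consecutive frames with alternating half-pixel jitter (static camera).
frame_even = scene(centres - 0.25)
frame_odd  = scene(centres + 0.25)
temporal_static = 0.5 * (frame_even + frame_odd)

# Reference: both sample positions evaluated in a single frame (2x SSAA).
ssaa_2x = 0.5 * (scene(centres - 0.25) + scene(centres + 0.25))
print(np.allclose(temporal_static, ssaa_2x))     # True - same samples, same result

# Moving camera: the scene shifted between the two frames, so the second frame's
# samples describe a different image and the blend is just ghosting, ~1x AA.
frame_odd_moving = scene(centres + 0.25, shift=3.0)
temporal_moving = 0.5 * (frame_even + frame_odd_moving)
print(np.allclose(temporal_moving, ssaa_2x))     # False
```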
 
What I would love to see is MLAA with subpixel information.
I don't think current MLAA implementations could handle the rotated sample grid of 2xMSAA. Going straight to 2x2OGMSAA as a base is probably too much of a step up right now. And it's not just the expense of the MSAA rendering, but processing 4 times the amount of data in the MLAA block as well.
 
There's a lot of opinion being stated here as fact about what MLAA can't do, but I want to remind everyone that we're only at the beginning... I have seen some bad artifacts in GoW3 in MLAA's worst-case scenarios, and only in motion, but I'll wait to see more future applications before talking about MLAA's weaknesses... for now it's an excellent AA and the best alternative to MSAA. I really appreciate an alternative to the classic (and too expensive) approach, given the compromises that come even with MSAA.
 
I don't think current MLAA implementations could handle the rotated sample grid of 2xMSAA. Going straight to 2x2OGMSAA as a base is probably too much of a step up right now. And it's not just the expense of the MSAA rendering, but processing 4 times the amount of data in the MLAA block as well.
I'm not implying smaller-than-pixel edges, but edges with information about their location, like in silhouette shadow maps.
 
I was actually going through the Alan Wake forums and found this thread, which had been dead since the whole "What's Alan Wake's real resolution" fiasco. It has since been brought back up with two new posts.

http://forum.alanwake.com/showpost.php?p=108301&postcount=18

http://www.iryokufx.com/mlaa/ (Don't know if this is old to you guys)

And here are some posts from one of the devs at Remedy:

http://forum.alanwake.com/showpost.php?p=73828&postcount=15

http://forum.alanwake.com/showpost.php?p=108388&postcount=19

And the original thread itself:

http://forum.alanwake.com/showthread.php?t=3389
 
Really interesting. Does anyone have info on how much 2xMSAA and 4xMSAA cost on the Xbox 360?
5 ms is quite expensive, but 4xMSAA should be similar and the benefits could be huge.
 
A co-author posted a comment on the Remedy forum. They tested the algorithm in XNA.

MLAA is applied over screenshots from those games, since we only need the final image when working based on luminance. Maybe for the MW2 example the numbers are a little high, but the Xbox version is not built with a proper Dev Kit, just the public XNA environment, so we think performance improvements could be achieved with better tools.

PS: I'm co-author of this work :)
 
A co-author posted a comment on the Remedy forum. They tested the algorithm in XNA.
Great stuff! :D
I wonder if 3-4 ms is really that expensive (even though they said they can achieve better times), since 2xMSAA on the 360 is not free either. By that I mean the hit on vertices.
 
A screenshot tells you nothing. Nebula, check out a movie, it's really impressive. There is shimmering, but not much stronger than with 8xMSAA; it's sometimes better and sometimes worse in terms of AA, but really comparable - really impressive tech. Now we need ATI/Nvidia to implement it in their drivers; it shouldn't be hard, because it's only a post-process effect.

BTW, when those guys release a demo we'll be able to easily test hybrid setups like 4xMSAA + MLAA [with a driver override], MLAA + TSAA and so on :)
 
Heh, I just want them to provide a way I can use it with anything, to play around with games that lack proper AA or where forcing AA is known to break things or have no effect.
 
Interesting, and the screenshot with MLAA looks very clean. Performance seems reasonable on the 360 GPU and child's play for the 9800GTX, which is essentially 2006-era tech (8800GTX/Ultra <-> 9800GTX+).

Yep, they also answered my earlier question: why they need pre-computed textures. :cool:

The technique is an evolution of the work "Morphological Antialiasing", which is designed for the CPU and unable to run in real time. The method presented here departs from the same underlying idea, but was developed to run in a GPU, resulting in a completely different, extremely optimized, implementation. We shift the paradigm to use texture structures instead of lists, which in turn allows to handle all pattern types in a symmetric way, thus avoiding the need to decompose them into simpler ones, as done in previous approaches. In addition, pre-computation of certain values into textures allows for an even faster implementation.

The algorithm detects borders (either using color or depth information) and then finds specific patterns in these. Anti-aliasing is achieved by blending pixels in the borders intelligently, according to the type of pattern they belong to and their position within the pattern. Pre-computed textures and extensive use of hardware bilinear interpolation to smartly fetch multiple values in a single query are some of the key factors for keeping processing times at a minimum.

I wonder if someone has ported this version to Cell.
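To make the quoted description a bit more concrete, here's a deliberately dumbed-down CPU sketch of the same idea in Python/numpy (my own toy version, not the authors' GPU shader): find borders from luminance discontinuities, then blend each border pixel with its neighbour across the edge. The real algorithm classifies the edge patterns and pulls per-pixel coverage weights from the pre-computed textures via bilinear fetches, rather than using a fixed weight like below.

```python
# Very simplified, CPU-side sketch of the MLAA idea described above (my own toy
# version, not the paper's GPU implementation): find borders from luminance
# discontinuities, then blend each border pixel with its neighbour across the
# edge. The real algorithm classifies edge patterns and derives per-pixel
# coverage weights from pre-computed textures instead of the fixed 0.5 here.
import numpy as np

def luminance(rgb):
    return rgb @ np.array([0.299, 0.587, 0.114])

def toy_mlaa(rgb, threshold=0.1):
    lum = luminance(rgb)
    out = rgb.copy()

    # Border detection: compare each pixel's luminance with its left / top neighbour.
    edge_left = np.zeros(lum.shape, bool)
    edge_top  = np.zeros(lum.shape, bool)
    edge_left[:, 1:] = np.abs(lum[:, 1:] - lum[:, :-1]) > threshold
    edge_top[1:, :]  = np.abs(lum[1:, :] - lum[:-1, :]) > threshold

    # Blending: where a border was found, mix the pixel with the neighbour across it.
    out[:, 1:][edge_left[:, 1:]] = 0.5 * (rgb[:, 1:][edge_left[:, 1:]] +
                                          rgb[:, :-1][edge_left[:, 1:]])
    out[1:, :][edge_top[1:, :]]  = 0.5 * (out[1:, :][edge_top[1:, :]] +
                                          rgb[:-1, :][edge_top[1:, :]])
    return out

# Usage: a hard staircase edge gets its border pixels softened.
img = np.zeros((8, 8, 3))
for y in range(8):
    img[y, : y + 1] = 1.0          # white staircase on black
smoothed = toy_mlaa(img)
```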
 
What is the penalty for context switching in a GPU, since you're going from traditional rendering to GPU compute to apply this process in real time?
I'm also stuck on "pre-computed" textures, as I don't understand what that means. Does it mean just extra info stored inside the texture during art creation, or is it something that will have to be done in real time as well?
 
I imagine it means there is a set of textures that comes with the AA system and describes the patterns. Basically any data used on a GPU has to be structured as textures, which is the basis of GPGPU. Databases end up being turned into 'textures'. This is just the AA sample pattern data described as a texture, predetermined and used as a look-up table.
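Something like this, I'd guess (the table contents below are my own plausible-looking stand-in, not the actual area texture from the paper): bake blend weights into a small 2-D table indexed by the distances from a pixel to the two ends of the edge run it sits on, so that at runtime a single (bilinearly filtered) texture fetch replaces the per-pixel coverage math.

```python
# Rough sketch of the "pre-computed texture as a look-up table" idea (the
# coverage formula is my own crude stand-in, not the paper's actual data):
# blend weights are tabulated offline, indexed by the distances from a pixel
# to the two ends of the edge run it belongs to.
import numpy as np

MAX_DIST = 16   # longest edge run the table covers, in pixels

def coverage(d_left, d_right):
    """Blend weight for a pixel d_left pixels from one end of the run and
    d_right from the other: a linear ramp from ~0 to 0.5 across the run."""
    length = d_left + d_right + 1
    return 0.5 * (d_left + 0.5) / length

# Offline: bake the weights into a small 2-D "texture".
area_tex = np.zeros((MAX_DIST, MAX_DIST), np.float32)
for dl in range(MAX_DIST):
    for dr in range(MAX_DIST):
        area_tex[dl, dr] = coverage(dl, dr)

# Runtime stand-in for a texture fetch: one read per edge pixel instead of
# re-deriving the coverage from the pattern every frame.
def fetch_blend_weight(d_left, d_right):
    return area_tex[min(d_left, MAX_DIST - 1), min(d_right, MAX_DIST - 1)]

print(fetch_blend_weight(0, 7))   # pixel at the start of an 8-pixel edge run
print(fetch_blend_weight(7, 0))   # pixel at the end of the same run
```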
 
Oh so it's just a data structure. I watched the video, and wasn't that impressed. While it filtered the obvious edges well, it also missed a lot. I believe their "texture DB" needs more refinement and more patterns.
 