Alternative AA methods and their comparison with traditional MSAA

Sorry, I was on my cell phone before and didn't get to examine any of the pictures in detail. After doing so I'm now almost certain this is some kind of morphological AA.

Let me illustrate with a screenshot:
[image: morphAA.jpg]


The green areas are some spots where you can see the AA working perfectly. As someone said earlier in the thread, for these solid colors it looks almost like a line drawing algorithm -- and that's because that's really what it is. Some paper I read even showed that it's closer to perfect coverage than 16-sample (sparse) AA. When it works it's just beautiful.

And the red part shows where I'd expect a morphological post-process to break down completely -- and it does!

Anyway, I really like this (if it is MLAA and I haven't just misidentified it). It's a nice technique in general (that's why I've been dabbling in it on and off for a long time), and the SPUs are almost perfect for it -- you have a lot of calculations on comparatively little data.


homerdog said:
MLAA looks cool, but wouldn't it miss out on small or high-frequency stuff?
If by "high-frequency" you mean "< 1 pixel", then yes, it utterly fails. It's imaginable to reconstruct edges algorithmically up to some point, particularly in video, but that's a decidedly off-line idea in terms of processing effort.
 
MLAA looks cool, but wouldn't it miss out on small or high-frequency stuff?

Could it be used in conjunction with MSAA?
If you do it before MSAA resolve, why not.
Easiest way would be by considering each sample position as its own image and combining them at the end.
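
Something like this minimal Python/NumPy sketch, where `mlaa_pass` stands in for whatever morphological AA implementation you have (all names here are illustrative, not from any shipping code):

Code:
import numpy as np

def resolve_with_mlaa(sample_planes, mlaa_pass):
    # sample_planes: one (H, W, 3) array per MSAA sample position,
    # each treated as a full-resolution image in its own right.
    filtered = [mlaa_pass(plane) for plane in sample_planes]
    # Ordinary box resolve at the end: average the filtered planes.
    return np.mean(filtered, axis=0)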
 
Perhaps the most effective solution would be to render <1 pixel wide objects through a line drawing algorithm instead? Combined with MLAA, it should give a nigh perfect AA method. Should be in hardware in the future!
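
For reference, the classic form of that line-drawing approach is Xiaolin Wu's algorithm: each plotted pixel's intensity is split between the two pixels straddling the ideal line, in proportion to coverage. A simplified Python/NumPy sketch (endpoint coverage omitted, non-negative coordinates assumed):

Code:
import numpy as np

def draw_wu_line(img, x0, y0, x1, y1, value=1.0):
    # Walk the major axis one pixel at a time and split the intensity
    # between the two pixels straddling the ideal line.
    steep = abs(y1 - y0) > abs(x1 - x0)
    if steep:
        x0, y0, x1, y1 = y0, x0, y1, x1  # iterate along the longer axis
    if x0 > x1:
        x0, x1, y0, y1 = x1, x0, y1, y0
    gradient = (y1 - y0) / (x1 - x0) if x1 != x0 else 1.0

    def plot(x, y, c):
        px, py = (y, x) if steep else (x, y)  # undo the axis swap
        if 0 <= py < img.shape[0] and 0 <= px < img.shape[1]:
            img[py, px] += value * c

    y = float(y0)
    for x in range(int(round(x0)), int(round(x1)) + 1):
        frac = y - int(y)             # how far the line sits into the upper pixel
        plot(x, int(y), 1.0 - frac)   # lower pixel gets the remainder
        plot(x, int(y) + 1, frac)
        y += gradient

img = np.zeros((64, 64))
draw_wu_line(img, 5.0, 5.0, 60.0, 20.0)  # intensities sum to ~1 per column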

Incidentally, I'm a bit surprised that it's taken this long for this sort of approach to make an appearance. Back in the Amiga days, working on 2D bitmap graphics, I'd antialias edges with rough approximations of sample blends. I guess this sort of field is where pure software renderers could be highly effective. Even if the throughput of quad-based renderers can't be reached, the efficiency of only processing the necessary pixels/samples could well make up for it, and perhaps with better results.
 
I've also been interested in MLAA (but didn't know it had a name) for some time, but never took it any further than thinking about algorithms and applying it by hand to see how it would look. I never thought I would actually see it in a game! Looks awesome!


If you do it before MSAA resolve, why not.
Easiest way would be by considering each sample position as its own image and combining them at the end.

That would be the easiest, but if you considered the samples to be of the same image you could get AA on those sub-pixel lines too -- that would be even more awesome :)
 
First you find edges, then do pixel counting on those areas and basically fit Wu lines to each pixel row and column, and you get the needed blending information.
If I understood the idea of MLAA correctly, you do not get correct sub-pixel accuracy for the blending, but you still have full gradients to play with.
It would be more accurate to say that it approximates the subpixel location. It's just one approach though ... as I said, I would try to do it with moments :)
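
As a toy illustration of that pixel-counting step (my own simplification, not anyone's production code): treat one n-pixel half of a stair step as an edge line falling linearly from half a pixel at the step corner to zero at the far end, and integrate that ramp per pixel to get the blend weights:

Code:
def stair_step_weights(n):
    # Pixel i spans [i/n, (i+1)/n] of the run; the implied edge falls
    # linearly from 0.5 pixels to 0 across the run, so the blend
    # weight is the average height of that ramp over the pixel.
    return [0.5 * (1.0 - (2 * i + 1) / (2.0 * n)) for i in range(n)]

print(stair_step_weights(4))  # [0.4375, 0.3125, 0.1875, 0.0625]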
 
And the red part shows where I'd expect a morphological post-process to break down completely -- and it does!
Once stuff goes too far subpixel everything breaks down though; MSAA might have some breathing room because of the extra subpixel samples ... but that too runs out.

I imagine a bigger problem is when an edge crosses a high-frequency texture (it will only be apparent in motion).
 
Hi,

Is this technique the same one Naughty Dog used for Uncharted 2?
It would be great for PS3 users if it is a widely applicable technique.

Oninotsume
 
Once stuff goes too far subpixel everything breaks down though; MSAA might have some breathing room because of the extra subpixel samples ... but that too runs out.

I imagine a bigger problem is when an edge crosses a high-frequency texture (it will only be apparent in motion).
One might use information from other buffers to find edges as well; I would guess the normal, Z, material, or object ID buffers would give you nice and clean edges to count. ;)
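
A rough sketch of what counting edges from such buffers could look like (Python/NumPy; buffer layout and threshold are purely illustrative, and the normal buffer is omitted for brevity):

Code:
import numpy as np

def detect_edges(depth, obj_id, z_threshold=0.01):
    # Flag a pixel as an edge where depth jumps or the object ID
    # changes relative to its right or bottom neighbour.
    edges = np.zeros(depth.shape, dtype=bool)
    edges[:, :-1] |= np.abs(np.diff(depth, axis=1)) > z_threshold
    edges[:-1, :] |= np.abs(np.diff(depth, axis=0)) > z_threshold
    edges[:, :-1] |= obj_id[:, 1:] != obj_id[:, :-1]
    edges[:-1, :] |= obj_id[1:, :] != obj_id[:-1, :]
    return edges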
 
Hmm? I don't mean finding the edge. The problem is that you are using the wrong pixel to blend with, under the assumption that the top/bottom surfaces are smoothly textured.

Let's say an edge is moving over a black-and-white checkerboard with pixel-sized squares. When the top surface occupies more than 50% of a pixel, it will be blended with the neighbour of the pixel under the edge -- let's say it's black. Now the surface moves slightly and the top surface occupies slightly less than 50% ... all of a sudden it's going to blend with the actual underlying pixel, which is white. So a very small movement, which should only change the colour of the blended pixel very slightly, all of a sudden changes the colour completely.
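
A quick back-of-the-envelope version of that jump, with made-up numbers:

Code:
def blend(edge_pixel, partner, w):
    # Lerp towards whichever pixel the edge reconstruction picked.
    return (1 - w) * edge_pixel + w * partner

top, black, white = 0.8, 0.0, 1.0
w = 0.3  # assumed blend weight from the reconstructed edge
print(blend(top, black, w))  # coverage just over 50%: blends with black -> 0.56
print(blend(top, white, w))  # coverage just under 50%: blends with white -> 0.86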
 
I guess Joker's suggested solution, where you sample from a slightly "blurred" image buffer, would help in the case MfA describes.
 
Once stuff goes too far subpixel everything breaks down though; MSAA might have some breathing room because of the extra subpixel samples ... but that too runs out.
Well, yes, but with MSAA you basically get similar quality regardless of the size of the edge. With MLAA you get something like 32xMSAA on most edges, and suddenly drop to the equivalent of 0xMSAA when you cross the threshold. I think that's more deserving of the term "break down".

I imagine a bigger problem is when an edge crosses a high-frequency texture (it will only be apparent in motion).
You can design some textures and situations that produce horrible artifacts in motion, but I don't think it would be a huge problem in practice. If it is, you could try ideas like sampling from a filtered buffer.

I wonder if a GPU vendor could implement some form of MLAA as a post-process in the driver. Would be interesting to see, and on something like Fermi it should perform well.
 
Hypothetically, couldn't you use MLAA *AND* MSAA in a next-gen, Cell-equipped system for the benefits of both to IQ? Depending on what kind of SPE hit you're taking and what the actual costs are, this would seem to be an interesting way to really tamp down on jaggies.
 
ATI already did with edge detect.

Is it? I've never looked at that in detail, but IIRC it's more of an MSAA sample selection method, and has a correspondingly high performance hit. With "some form of MLAA" I really mean algorithms that (at least implicitly) reconstruct the edge lines.
 
Well, from here:
Zeenbor said:
It's like any other image-space AA filter out there, except it works off the luminance of the color buffer instead of the depth buffer to generate edges. The SPU version has more passes to generate better edge masks. Don't know if I can go in more detail than that.
I was actually curious if they were working from the color buffer since the UI elements are clearly filtered as well (as compared to the 360 shots, just look at the 1 for example). I didn't think that it would make sense to work exclusively from the image if you have Z available, but apparently it does. (I always work from luminance since in the target application I have in mind Z is not available, but I often thought how much more exact it could be if I had the depth buffer as well... guess I was wrong)
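
For what it's worth, the simplest form of luminance-based edge finding looks something like this (Python/NumPy sketch; Rec. 601 luma weights, and the threshold is a guess):

Code:
import numpy as np

def luma_edges(rgb, threshold=0.1):
    # Compare each pixel's luma against its right and bottom
    # neighbours and flag discontinuities above the threshold.
    luma = rgb @ np.array([0.299, 0.587, 0.114])  # Rec. 601 weights
    edges = np.zeros(luma.shape, dtype=bool)
    edges[:, :-1] |= np.abs(np.diff(luma, axis=1)) > threshold
    edges[:-1, :] |= np.abs(np.diff(luma, axis=0)) > threshold
    return edges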
 
Well, from here:

I was actually curious if they were working from the color buffer since the UI elements are clearly filtered as well (as compared to the 360 shots, just look at the 1 for example). I didn't think that it would make sense to work exclusively from the image if you have Z available, but apparently it does. (I always work from luminance since in the target application I have in mind Z is not available, but I often thought how much more exact it could be if I had the depth buffer as well... guess I was wrong)

It finds edges purely from luminance? Hmmm. Well, they definitely have downsized luminance buffers available already; I think most games have them lying around in some form or another now for various uses. I'm just not sure how that wouldn't miss some edges. You'd think there would be cases where adjacent stair-stepped pixels at the same angle to the light would have different color but similar luminance, and their method would miss those. Unless they are using both luminance and Z together to find edges? That sounds more doable, since both of those buffers would be really small anyway, so sending them back to the SPU for processing wouldn't be a big bandwidth deal.
 
What do you mean by "downsized" buffers exactly? I'd usually understand that as "downscaled", but that wouldn't work in this context.
 
What do you mean by "downsized" buffers exactly? I'd usually understand that as "downscaled", but that wouldn't work in this context.

Yeah, downscaled -- like a 1/4-sized luminance buffer. I think they could get away with using one in this case, for the purposes of an approximated edge blur.
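
If "1/4 sized" means a quarter of the pixel count (half resolution on each axis), the downscale is just a 2x2 box filter over the full-resolution luma -- something like:

Code:
def quarter_luma(luma):
    # 2x2 box average of a 2-D NumPy luma array; odd trailing
    # rows/columns are trimmed for brevity.
    h, w = luma.shape[0] // 2 * 2, luma.shape[1] // 2 * 2
    l = luma[:h, :w]
    return 0.25 * (l[0::2, 0::2] + l[1::2, 0::2]
                   + l[0::2, 1::2] + l[1::2, 1::2])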
 