Alternative AA methods and their comparison with traditional MSAA

Wow, this looks pretty good actually. Even though it's actually killing detail (it's a blur rather than taking more samples), the effect is still better than all the aliasing.
 
Is the blur a necessary evil with this technique? It seems like it will be more apparent on some titles than others (though not nearly as bad as quincunx).
 
I did an experimental implementation of MLAA in Frostbite 2 a few weeks ago, just because I wanted to see how it looks on moving pictures.

On still pictures it looks amazing, but on moving pictures it is more difficult, as it is still just a post-process. So you get things like pixel popping when an anti-aliased line moves one pixel to the side instead of moving smoothly on a sub-pixel basis. Another artifact, one of the most annoying, is that aliasing on small-scale objects like alpha-tested fences can't (of course) be solved by this algorithm, and quite often ends up looking worse: instead of small pixel-sized aliasing you get the same aliasing, but blurry and larger, which is often even more visible.

Also had some issues with the filter picking up horizontal and vertical lines in quite smoothly varying textures and anti-aliasing them. Again, when the picture is moving this change is amplified and looks a lot worse than relying on just the texture filtering. Though I think for this case one can tweak the threshold to skip most of those areas.

I still think the technique has promise, especially in games that don't have as many small-scale aliasing sources. But the importance of MSAA remains, as you really want sub-pixel rendering to stably improve on small-scale aliasing in moving pictures.

Some kind of temporal coherence in the MLAA filter could also be an interesting topic for future research. In games we aren't primarily interested in perfect anti-aliasing of still pictures, but in flicker-free rendering of pictures in motion.
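The texture false-positive issue described above comes down to the edge-detection threshold. Here is a minimal sketch of the luminance-threshold edge test that MLAA-style filters start from (not Frostbite's actual code; the Rec. 601 luma weights are standard, but the 0.1 threshold and the `detect_edges` helper are illustrative assumptions — real implementations go on to classify edge shapes and compute coverage-based blend weights):

```python
def luma(rgb):
    """Rec. 601 luminance of an (r, g, b) tuple with components in [0, 1]."""
    r, g, b = rgb
    return 0.299 * r + 0.587 * g + 0.114 * b

def detect_edges(image, threshold=0.1):
    """Return a same-sized grid of (left_edge, top_edge) booleans.

    image is a list of rows of (r, g, b) tuples. A pixel has a 'left
    edge' if its luminance differs from its left neighbour by more than
    the threshold, and likewise a 'top edge' against the pixel above.
    Raising the threshold skips smoothly varying textures -- the tweak
    discussed above -- at the cost of missing low-contrast edges.
    """
    h, w = len(image), len(image[0])
    edges = [[(False, False)] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            l = luma(image[y][x])
            left = x > 0 and abs(l - luma(image[y][x - 1])) > threshold
            top = y > 0 and abs(l - luma(image[y - 1][x])) > threshold
            edges[y][x] = (left, top)
    return edges
```

On a 2x2 image with a black left column and white right column, only the left-edge flags of the right column fire; a smooth texture with luma steps below the threshold produces no edges at all.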
 
Is the blur a necessary evil with this technique? It seems like it will be more apparent on some titles than others (though not nearly as bad as quincunx).

Doing the Right Thing would require more samples, and some way to determine where and how to take them, necessitating another round of rendering parts of the image, with all the lighting, shadow and other work. In short, an adaptive anti-aliasing technique.

Maybe there's a way to enhance the current GPU pipeline to support this, but I suspect it'd require hardware changes. Right now your options are to render either one sample per fragment or a fixed number of samples per pixel. Adaptive AA would require changing this on a per-fragment basis, and also a method to find the edges in an incomplete image.

Offline rendering has supported this for a long while now, but it has complete programmability...
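The adaptive scheme described above might be sketched on the CPU like this (a toy greyscale model, not a real GPU pipeline; the `shade` callback, the 4-sample grid, and the 0.25 contrast threshold are all hypothetical):

```python
def adaptive_aa(shade, width, height, threshold=0.25):
    """Two-pass adaptive AA sketch.

    shade(x, y) stands in for a full lighting/shadow evaluation at a
    sub-pixel position, returning a greyscale value.
    """
    # First pass: one centred sample per pixel (the 'incomplete image').
    img = [[shade(x + 0.5, y + 0.5) for x in range(width)]
           for y in range(height)]
    # Second pass: re-evaluate only pixels whose neighbours differ
    # strongly, averaging a 2x2 grid of extra samples there.
    out = [row[:] for row in img]
    offsets = [(0.25, 0.25), (0.75, 0.25), (0.25, 0.75), (0.75, 0.75)]
    for y in range(height):
        for x in range(width):
            nbrs = [img[j][i]
                    for i, j in ((x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1))
                    if 0 <= i < width and 0 <= j < height]
            if any(abs(img[y][x] - n) > threshold for n in nbrs):
                samples = [shade(x + dx, y + dy) for dx, dy in offsets]
                out[y][x] = sum(samples) / len(samples)
    return out
```

For a hard vertical edge crossing the middle of a pixel, the edge pixel resolves to the averaged value while flat regions keep their single sample, which is exactly the "more samples only where needed" behaviour the post argues current hardware can't express.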
 
I did an experimental implementation of MLAA in Frostbite 2 a few weeks ago, just because I wanted to see how it looks on moving pictures. [...] In games we aren't primarily interested in perfect anti-aliasing of still pictures, but in flicker-free rendering of pictures in motion.

This was exactly what I was afraid of with regard to MLAA when it started being discussed: that the still image (which to my eyes was still more aliased than I'd like) would be much worse in motion.

BTW - this isn't in regards to Saboteur specifically, just in general.

It'll be interesting to see other approaches using a hybrid/blended/altered MLAA approach.

Regards,
SB
 
I've just done v-sync clean-up on my Bad Company 2 video (so I can decimate it down from 60Hz to 30 without it looking rubbish) and ran it through the Intel filter (is it really accurate to call it MLAA?). I'll upload the filtered version along with an A/B comparison version later. I'm both impressed and disappointed in equal measure!
 
I'm not noticing so much a loss of texture detail as a color shift to the darker end of the spectrum on some processed parts. However, for some reason the slightly darker color seems to look more natural in most scenes.
 
I've just done v-sync clean-up on my Bad Company 2 video (so I can decimate it down from 60Hz to 30 without it looking rubbish) and ran it through the Intel filter (is it really accurate to call it MLAA?). I'll upload the filtered version along with an A/B comparison version later. I'm both impressed and disappointed in equal measure!

Since MLAA isn't what's being used in Saboteur, why is this line still being followed? Is it possible to emulate the AA in Saboteur in other existing games, as was done with the Intel method?
 
I did an experimental implementation of MLAA in Frostbite 2 a few weeks ago, just because I wanted to see how it looks on moving pictures. [...] In games we aren't primarily interested in perfect anti-aliasing of still pictures, but in flicker-free rendering of pictures in motion.

I'm sorry to hear that, because seriously, your game on PS3 is a jag-fest. It really is. Pictures don't show how bad it is; playing it is the real killer.

For me, borderline playable.

I'm not throwing guilt; I realize the hardware just can't handle more. But I would seriously work the engine to make it work. Some blur is always better than a flickering, jaggy image.
 
I'm sorry to hear that, because seriously, your game on PS3 is a jag-fest. [...] Some blur is always better than a flickering, jaggy image.

I would have made a comment along that line, but it wasn't so bad in action. I had fun with the beta without AA, but it is true that the effort should be made to smooth those edges. When you get up close to the screen it's pretty bad. Will the 360 version include AA? I know the previous game, 1943, had no AA on either system, so has this been addressed?

My question is: is MLAA with its flaws better than no AA at all? Just asking, as I can't see the videos.

I am hoping for some form of highly selective AA. A little AA is better than nothing when it's on the most noticeable aspects of the game. It would be great if devs experimented with differing levels of these SPU methods. If it messes up the image, tone it down, kind of thing.
 
Saboteur 360 compared with Intel-processed image and PS3 equivalent shot:

Shots 1-7: 360 / Intel-Processed / PS3 [comparison images]

I'd say the luminance-based approach works pretty well bearing in mind it's runtime versus what I assume is a proof of concept bit of source. Interesting artefacts on Shot 7 though on the red edge of the silver car. Also note the artefacting on the HUD text in shot 4 on both processed 360 and the PS3 shot.
For what it's worth, Pandemic's implementation of MLAA on the PS3 seems much sharper than the Intel-based screenshots. Not nearly as much impact on the textures and overall clarity. This is most evident on comparison shots 2 and 5.
 
I did an experimental implementation of MLAA in Frostbite 2 a few weeks ago, just because I wanted to see how it looks on moving pictures. [...]
Thanks a lot for providing your developer input, repi.

Would a possible or feasible solution to some of these artifacts be to turn off the MLAA process altogether when there's movement or rotation of the camera going on (maybe at or above a specified speed), thereby avoiding the appearance of such artifacting with motion on the screen? After all, one might argue that aliasing is a much bigger problem when the screen is sitting still than when there's movement or camera motion, in which case jagged edges are not nearly as much of a problem to the naked eye (well, at least beyond a certain speed of movement).

Could another solution to the second problem you mentioned of exacerbating aliasing on alpha surfaces be to turn off the effect on those surfaces altogether as well?

I really would appreciate some more specifics on the differences in implementation between The Saboteur's post-process AA method and the experimental MLAA method Intel describes.
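The camera-motion gating suggested above could be sketched as a simple blend weight (the helper names and the speed limits are hypothetical; a real implementation would run per pixel on the GPU and might use per-pixel velocity instead of a single camera speed):

```python
def mlaa_blend_factor(camera_speed, fade_start=0.5, fade_end=2.0):
    """Weight of the MLAA-filtered image in [0, 1].

    camera_speed is the camera's angular speed; fade_start/fade_end
    (arbitrary assumed limits, in radians per second) bracket a linear
    fade from full MLAA to none.
    """
    if camera_speed <= fade_start:
        return 1.0            # camera nearly still: full MLAA
    if camera_speed >= fade_end:
        return 0.0            # fast pan: MLAA disabled
    # Linear fade between the two limits.
    return 1.0 - (camera_speed - fade_start) / (fade_end - fade_start)

def resolve_pixel(raw, filtered, camera_speed):
    """Blend a raw and an MLAA-filtered pixel value by camera speed."""
    w = mlaa_blend_factor(camera_speed)
    return raw * (1.0 - w) + filtered * w
```

The trade-off is the one raised in the reply below the quote: whether jaggies in motion are actually less objectionable than when still, since crawling edges can be more distracting than static ones.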
 
Would a possible or feasible solution to some of these artifacts be to turn off the MLAA process altogether when there's movement or rotation of the camera going on? [...]

Unless stuff is moving really fast, aliasing is even more distracting in motion. Crawling jaggies ftl.


After watching grandmaster's comparison, I'd say this technique has a long way to go. It really breaks down in motion, and does next to nothing on distant/small (<1 pixel) stuff. Guess it could be used to make nicer bullshots though :cry:

Perhaps the algorithm they use in Saboteur is better?
 