Post-processing AA *without* depth information

PeterT

I recently switched from playing my last-gen consoles (i.e. ones that mostly output 640*480 without AA, so the Wii as well) on an old CRT to DScaler'ing them to an LCD. As aliasing in 640*480 images upscaled to 1080p really hurts my eyes, I searched for filtering algorithms that would use the awesome processing power of modern PCs to improve the picture. Surprisingly, it seems like no one has ever tried something like this (or I missed it ;)).

Now, I have enough of a signal-processing background to know that "enhancing" an image when we have no information beyond what's in it is quite an audacious claim. However, I did have a plan: to use known outside information (about the structure of aliasing artifacts) to improve the image while retaining nearly all detail.

So, these were my goals:
- the input is RGBA images (and nothing else)
- reduce edge aliasing
- while retaining as much detail as possible
- with an algorithm that can process 640*480 in real time on modern GPUs (e.g. a G80)

I thought it should be possible to at least do better than the only current filter I know of: blurring. My idea was to combine something like an inverse Perona-Malik filter with morphological recognition of aliasing structures (restricted to 3x3 neighborhoods to meet the real-time requirement). Before implementing a potential real-time version, I first experimented a bit in Matlab; the results are shown below.
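To give a rough idea of the diffusion half of the approach, here's a toy sketch (Python/NumPy rather than my actual Matlab code; k and dt are made-up constants). The point is that the diffusivity *grows* with the local gradient, so smoothing concentrates on hard edges instead of flat areas -- the opposite of classic Perona-Malik:

```python
import numpy as np

def inverse_pm_step(u, k=0.1, dt=0.2):
    """One explicit diffusion step that smooths MORE where gradients are
    large. Classic Perona-Malik damps diffusion at edges with
    g = 1 / (1 + (|grad u| / k)^2); here the diffusivity is inverted so
    that hard (aliased) edges get blended. k and dt are toy values."""
    # Forward differences, replicating the border pixels
    ux = np.diff(u, axis=1, append=u[:, -1:])
    uy = np.diff(u, axis=0, append=u[-1:, :])
    grad2 = ux**2 + uy**2

    # "Inverse" diffusivity: ~0 in flat regions, -> 1 on strong edges
    g = grad2 / (grad2 + k**2)

    # div(g * grad u) via backward differences, then one Euler step
    div = (np.diff(g * ux, axis=1, prepend=0.0)
           + np.diff(g * uy, axis=0, prepend=0.0))
    return u + dt * div
```

In the real filter this step would additionally be gated by the morphological recognition, so that only pixels matching aliasing patterns get diffused.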

[Image: gurumincomp.jpg]


From left to right: original image, filtered, XOR, XOR marked in the original -- all at 2x magnification. I have a few more test images, none of which are as impressive as this one. What I'm happy with, however, is that the algorithm never seems to lose significant detail from the original picture, and only modifies the pixels it determines to be aliased edges. Even the dithering patterns in this shot (from a PSP game) remain untouched. Obviously, such conservative filtering also misses some aliasing, but to me that is preferable to overblurring textures or details.

Do any of you know of other efforts to implement post-processing AA without additional information? What do you think of my attempt? Any other ideas? (Please remember that this is only intended for cases where you have no way to change the rendering or access the original data. Otherwise, doing "real" AA is an order of magnitude more efficient.)
 
A lot of effort has been put into things like that for emulators, so maybe looking at the SNES9X and ZSNES source code might help, although these techniques might only work for paletted images in some cases.
 
maven said:
A lot of effort has been put into things like that for emulators, so maybe looking at the SNES9X and ZSNES source code might help, although these techniques might only work for paletted images in some cases.
Hmm, if you're referring to upscaling filters like 2xSaI, Eagle and HQ3x, then they're quite different: they were made to upscale low-res 2D games, while my filter tries to post-process 3D renderings. Because of this difference in purpose, 2xSaI and the other filters usually look horrible when applied to rendered 3D images -- to them, every edge is assumed to carry meaning and is interpreted as such, so 3D artifacts like aliasing and line crawling can have devastating effects on the upscaled image, especially in motion.

However, you're right in that there is a similarity -- not in purpose, but in execution: both try to estimate a local vectorization of the image to aid the processing. The difference lies in how that information is used and in the patterns the filters look for. And also in where exactly those patterns are searched for: the emulator scalers do it directly on the image, while my filter works on something similar to, but not quite, the gradient magnitude.
 
Well, you can always perform supersampling, which is edge-independent.
Though the performance hit will be severe...
 
Well, you can always perform supersampling, which is edge-independent.
Though the performance hit will be severe...
You misunderstood the intention of this -- I probably didn't explain it very well. This algorithm is for the case where you only get a 2D image (that you know was rendered without AA) and want to perform AA on it without any further information. So it would be impossible to use any conventional method, including SSAA.

(Also, SSAA, if it were possible, would most likely incur a smaller performance hit than this method. This needs to calculate, for each pixel, the forward and backward finite differences, then apply a rather complex morphological operation to that (requiring ~9 reads and quite some math per pixel), and finally smooth the image according to the calculated diffusivity.)

Still, thanks for the reply -- this part of the forum seems pretty slow these days.
 
Well, I've done a demo of this before. It looks good (much like your screenshot: it misses some low-contrast lines but is perfect at some angles) and runs fast enough (~3 ms at 1280x720 on a 7950GT), and it's mainly the same idea as yours: it works just like a Photoshop filter.

The idea is just to check the nearby vertical and horizontal pixels and test whether each pixel needs smoothing.
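In toy form, something like this (Python for readability, not the actual shader; the threshold is made up):

```python
import numpy as np

def naive_edge_blend(img, threshold=0.1):
    """If the horizontal or vertical contrast around a pixel exceeds a
    threshold, blend it with its 4 neighbours (illustrative only)."""
    luma = img.mean(axis=2)                     # cheap edge measure
    p = np.pad(luma, 1, mode='edge')
    dh = np.abs(p[1:-1, 2:] - p[1:-1, :-2])     # horizontal contrast
    dv = np.abs(p[2:, 1:-1] - p[:-2, 1:-1])     # vertical contrast
    needs_blend = (np.maximum(dh, dv) > threshold)[..., None]

    # 4-neighbour blend where the test fires, untouched elsewhere
    pi = np.pad(img, ((1, 1), (1, 1), (0, 0)), mode='edge')
    blended = (pi[1:-1, 2:] + pi[1:-1, :-2] +
               pi[2:, 1:-1] + pi[:-2, 1:-1] + 4.0 * img) / 8.0
    return np.where(needs_blend, blended, img)
```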

As I'm working on the Xbox 360, the quality of the filter is certainly not as good as hardware FSAA, especially with small objects. I thought of it as a last resort in case tiling really sucks. :D
 
The idea is just to check the nearby vertical and horizontal pixels and test whether each pixel needs smoothing.

As I'm working on the Xbox 360, the quality of the filter is certainly not as good as hardware FSAA, especially with small objects. I thought of it as a last resort in case tiling really sucks. :D
The older method you posted about before, with the extra geometry pass that checks against the offset from the center of the pixel, generally worked pretty well. I did a few modifications to that and eliminated the need for a pre-process on the geometry (to find edge directions and all) -- it relied only on the offset from the center of the pixel, so the AA pass would have been a position-only geometry pass... and in the end, it still produces better (visual) results than hardware almost all the time.

Numerically, of course, I can't say it's really better... There are some over-adjusted pixels here and there, and it still misses some pixels where it has little to no effect, but because they're kind of sparse within an edge and there isn't an obvious pattern to them on immediate visual inspection, the impression it gives is that the edge is just plain sharper. The only major deal-breaker for it is the question of whether or not you can afford to send geometry down (no matter how slim) one more time. That's a big one, unfortunately :???: .

I think a purely image-space method that simply checks neighbor differences against a threshold to see if a blend is needed should be expected not to work perfectly, as checking immediate neighbors to find an edge may not be reliable in a small window (e.g. one pixel). Perhaps you could use multiple intervals and decide to smooth based on multiple threshold checks... Maybe keep those results as multiple values and do some weighted average between them, so that you still preserve some sharpness if the wider intervals tell you to smooth but the narrower ones tell you otherwise.
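Roughly like this, perhaps (a sketch only; the intervals and thresholds are made up, and a real filter would test the vertical direction too):

```python
import numpy as np

def multi_interval_weight(luma, t_narrow=0.15, t_wide=0.05):
    """Per-pixel blend weight from edge tests at two horizontal intervals.

    If only the wide test fires, we smooth at half strength, preserving
    some sharpness; if both fire, we smooth fully. Constants are made up.
    """
    p = np.pad(luma, 2, mode='edge')
    d1 = np.abs(p[2:-2, 3:-1] - p[2:-2, 1:-3])  # 1-pixel interval
    d2 = np.abs(p[2:-2, 4:] - p[2:-2, :-4])     # 2-pixel interval
    w_narrow = (d1 > t_narrow).astype(float)
    w_wide = (d2 > t_wide).astype(float)
    return 0.5 * (w_narrow + w_wide)            # weighted average of verdicts
```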
 
The older method you posted about before, with the extra geometry pass that checks against the offset from the center of the pixel, generally worked pretty well. I did a few modifications to that and eliminated the need for a pre-process on the geometry (to find edge directions and all) -- it relied only on the offset from the center of the pixel, so the AA pass would have been a position-only geometry pass... and in the end, it still produces better (visual) results than hardware almost all the time.

Glad to see someone improved the method :p
Yes, the original post wasn't optimized well; I remember that the direction check is completely useless and the shader could be much simpler. But you know, it's not really a new idea and I don't want to dig the thread up :)

Numerically, of course, I can't say it's really better... There are some over-adjusted pixels here and there, and it still misses some pixels where it has little to no effect, but because they're kind of sparse within an edge and there isn't an obvious pattern to them on immediate visual inspection, the impression it gives is that the edge is just plain sharper. The only major deal-breaker for it is the question of whether or not you can afford to send geometry down (no matter how slim) one more time. That's a big one, unfortunately :???: .

When a title uses millions of vertices per frame and vertex fetch is a bottleneck, I'd say I can't afford it. That's why I like a pure post-process.

I think a purely image-space method that simply checks neighbor differences against a threshold to see if a blend is needed should be expected not to work perfectly, as checking immediate neighbors to find an edge may not be reliable in a small window (e.g. one pixel). Perhaps you could use multiple intervals and decide to smooth based on multiple threshold checks... Maybe keep those results as multiple values and do some weighted average between them, so that you still preserve some sharpness if the wider intervals tell you to smooth but the narrower ones tell you otherwise.

True, it's not perfect, because you don't know the actual geometry and can only guess that there may be an edge somewhere. I hate thresholds; if I can use a smooth function, I will.
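For instance, a smoothstep ramp instead of a hard cutoff (toy sketch; the lo/hi values are made up):

```python
import numpy as np

def smooth_response(contrast, lo=0.05, hi=0.25):
    """Hermite smoothstep in place of a hard threshold: the blend amount
    ramps smoothly from 0 below lo to 1 above hi, so there is no sudden
    pop when an edge's contrast crosses a single cutoff value."""
    t = np.clip((contrast - lo) / (hi - lo), 0.0, 1.0)
    return t * t * (3.0 - 2.0 * t)
```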
 
I think a purely image-space method that simply checks neighbor differences against a threshold to see if a blend is needed should be expected not to work perfectly, as checking immediate neighbors to find an edge may not be reliable in a small window (e.g. one pixel).
Absolutely. What my method pictured in the first post does is first calculate an image containing abs(du/dx * du/dy) for each pixel and component, and then look at 3x3 windows in that to determine whether, and by how much, a pixel should be smoothed. I originally planned on using larger windows and even more patterns, which would of course have been more exact, but I no longer believe that's feasible for real-time processing right now.
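In sketch form (Python/NumPy; the 3x3 decision rule below is just a placeholder, not the actual morphological patterns):

```python
import numpy as np

def aliasing_measure(u):
    """Per-pixel, per-component abs(du/dx * du/dy) -- high only where the
    image changes in both directions at once, as on staircase steps."""
    dx = np.diff(u, axis=1, append=u[:, -1:])   # forward differences,
    dy = np.diff(u, axis=0, append=u[-1:, :])   # replicated border
    return np.abs(dx * dy)

def smoothing_amount(m):
    """Inspect 3x3 windows of the measure to decide how much to smooth.

    Placeholder rule (NOT the real patterns): respond where the centre
    is strong relative to its 3x3 mean, i.e. an isolated peak such as a
    staircase corner rather than a uniformly textured region."""
    pad = ((1, 1), (1, 1)) + ((0, 0),) * (m.ndim - 2)
    p = np.pad(m, pad, mode='edge')
    acc = np.zeros_like(m)
    for i in range(3):                  # 3x3 box sum via shifted adds
        for j in range(3):
            acc += p[i:i + m.shape[0], j:j + m.shape[1]]
    return np.clip(m - acc / 9.0, 0.0, 1.0)
```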

Having any knowledge of geometry at all would obviously make things quite a bit easier ;)
 
Okay, I just stared at your example for a few minutes:

1) Damn impressive. Damn. Impressive. "AA" is applied to almost every edge in the shot.

2) What does this look like in motion? From your explanation, it sounds like it could make (the "AA"ed) edges of objects stand out temporally. :(
 
2) What does this look like in motion? From your explanation, it sounds like it could make (the "AA"ed) edges of objects stand out temporally. :(
Good question, and one I've asked myself ;). The answer, for now, is that I have no idea. I hope it will work relatively well in motion. If it doesn't, or if it hardly affects edge crawling (which, sadly, is what I expect), I have some vague plans to use the diffusivity information of several frames with some adjustable temporal dampening -- that should work very well against edge crawling on the relatively slow-moving edges where it's most apparent, but cause some detail loss in fast-moving scenes (where you hopefully wouldn't see it anyway).
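Something along these lines (a sketch only; alpha is a made-up damping constant):

```python
import numpy as np

class TemporalDiffusivity:
    """Exponentially damped per-pixel diffusivity across frames.

    alpha trades edge-crawl suppression against detail loss in fast
    motion -- exactly the trade-off described above. 0.6 is arbitrary.
    """
    def __init__(self, alpha=0.6):
        self.alpha = alpha
        self.history = None

    def update(self, diffusivity):
        if self.history is None:
            self.history = diffusivity.copy()
        else:
            # Blend this frame's diffusivity with the damped history
            self.history = (self.alpha * diffusivity
                            + (1.0 - self.alpha) * self.history)
        return self.history
```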

If you only have plain image data to work with, everything is a trade-off, unfortunately.
 
Looks really nice. Yeah, I run a lot of PS1 games on my computer using Pete's OpenGL2 plugin and ePSXe. The aliasing annoys me a bit, especially since I have the grunt to run some form of AA, but all the software AA algorithms out there seem to blur a lot. The problem with doing it in the engine seems to be that they're only allowed one shading pass.
 