GPAA demo (Geometric Post-processing Anti-Aliasing)

Humus

I just uploaded a demo of GPAA, a post-processing antialiasing technique that uses the actual geometric information to blend pixels with their neighbors along geometric edges. The best case is actually near horizontal or near vertical edges, with the worst case being diagonal lines, although those are handled pretty well too. :)
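In a nutshell: after the scene is rendered, the geometric edges are drawn as lines over the frame, and for each pixel the line pass works out how far the edge is from the pixel center, picks the neighbor on the other side of the edge, and blends with it accordingly. A rough CPU-side sketch of that blend, with made-up names (the demo does this in a pixel shader, so take it as illustration only):

Code:
// Rough sketch of the per-pixel GPAA blend. All names are illustrative.
#include <algorithm>
#include <cmath>

struct Color { float r, g, b; };

struct Framebuffer {
    int width, height;
    Color* pixels;
    Color load(int x, int y) const { return pixels[y * width + x]; }
    void  store(int x, int y, Color c) { pixels[y * width + x] = c; }
};

static Color lerp(Color a, Color b, float t) {
    return { a.r + (b.r - a.r) * t,
             a.g + (b.g - a.g) * t,
             a.b + (b.b - a.b) * t };
}

// Blend one pixel that the screen-space edge (x0,y0)-(x1,y1) passes through.
void blendEdgePixel(Framebuffer& fb, float x0, float y0, float x1, float y1,
                    int px, int py)
{
    float cx = px + 0.5f, cy = py + 0.5f;   // pixel center
    float dx = x1 - x0,   dy = y1 - y0;

    int nx = px, ny = py;   // neighbor on the other side of the edge
    float dist;             // offset from pixel center to the edge, in pixels

    if (std::fabs(dx) > std::fabs(dy)) {
        // Mostly horizontal edge: it crosses this pixel column at edgeY,
        // so step to the vertical neighbor.
        float edgeY = y0 + (cx - x0) * dy / dx;
        dist = edgeY - cy;
        ny  += (dist > 0.0f) ? 1 : -1;
    } else {
        // Mostly vertical edge: it crosses this pixel row at edgeX,
        // so step to the horizontal neighbor.
        float edgeX = x0 + (cy - y0) * dx / dy;
        dist = edgeX - cx;
        nx  += (dist > 0.0f) ? 1 : -1;
    }

    // Fraction of this pixel lying on the far side of the edge:
    // 0.5 when the edge passes through the center, 0 when it misses the pixel.
    float coverage = std::max(0.0f, 0.5f - std::fabs(dist));

    fb.store(px, py, lerp(fb.load(px, py), fb.load(nx, ny), coverage));
}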

Comparison with and without GPAA:
GPAA_compare.jpg


Downloadable from my website.
Toggle GPAA with F5.
 
Interesting, Humus. How much more costly would a 10-bit implementation be? Obviously using 7 bits for the coverage is a bit overkill, but I was thinking 3 bits for the neighbor and at least 6 bits for coverage, while not expecting something along the lines of a 9-bit implementation.
 
Looks great. Seems to have the worst edge quality (still very good) somewhere around the 10-20 degree angles. Looks perfectly smooth once you get very far away from there!!! So I expect this will be in the next Just Cause? :D
 
Very nice. I suspect the buffer copy isn't going to be an issue for the vast majority of games.

Naturally I'd wonder how fast it is with a more complex scene. GPUs typically aren't too hot at drawing lines.

You should also look into applying this technique to shadow post processing. This is similar to an experiment I did a long time ago - however, I didn't preprocess the model the way you do (CPU silhouette detection instead), and I found I was limited by DX9's capabilities.

My concern is it'll start to fall over once you get very small triangles. Certainly subpixel triangles would be an interesting case :mrgreen:
 
Very nice, how much performance cost?

What I measured on my HD 5870 was 0.08ms for copying the backbuffer and 0.01ms for the edge smoothing at 1280x720, so it's indeed very cheap. But it is also dependent on the scene geometry complexity, so more advanced scenes would be more costly.

Interesting, Humus. How much more costly would a 10-bit implementation be?

Well, it would be 2 bits more costly. ;) To be honest, I don't even know how well suited this technique would be for a hardware implementation, although I find the idea attractive. I'm sure there are corner cases I haven't thought about (thin triangles, corner pixels, etc.).
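Just to make the bit budget concrete, a hypothetical per-pixel packing could look something like the sketch below: 2 bits select the blend neighbor and 6 bits store coverage, for 8 bits total (the 10-bit variant discussed above would be 3 neighbor bits plus 7 coverage bits). Nothing like this exists in the demo; it's purely illustrative.

Code:
// Hypothetical per-pixel edge-info packing; not part of the actual demo.
#include <cstdint>

enum Neighbor : uint8_t { LEFT = 0, RIGHT = 1, UP = 2, DOWN = 3 };

// coverage is expected in [0, 0.5]: at most half the pixel can belong
// to the surface on the other side of the edge.
uint8_t packEdgeInfo(Neighbor n, float coverage)
{
    uint8_t c = static_cast<uint8_t>(coverage * 2.0f * 63.0f + 0.5f); // 0..63
    return static_cast<uint8_t>((n << 6) | c);
}

void unpackEdgeInfo(uint8_t packed, Neighbor& n, float& coverage)
{
    n        = static_cast<Neighbor>(packed >> 6);
    coverage = (packed & 63) * 0.5f / 63.0f;   // back to [0, 0.5]
}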
 
Do near horizontal or near vertical edges need AA?
I hope this isn't meant as an ironic rhetorical question... In case it is, ignore my reply :) Yes, near vertical/horizontal edges are the ones that need AA the most:

edgesennk.png


That is exactly why the current rotated-grid / sparse sampling methods, which reach their maximum anti-aliasing potential at near vertical/horizontal angles, are so popular.
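A back-of-the-envelope way to see it: the stair steps on a near-horizontal edge are roughly 1/tan(angle) pixels long, so the shallower the edge, the longer and more visible each step. A quick check (numbers are approximate):

Code:
// Step length of the staircase on a near-horizontal edge vs. edge angle.
#include <cmath>
#include <cstdio>

int main()
{
    const float angles[] = { 1.0f, 2.0f, 5.0f, 10.0f, 20.0f, 45.0f };
    for (float deg : angles) {
        float run = 1.0f / std::tan(deg * 3.14159265f / 180.0f);
        std::printf("%4.0f deg edge -> steps ~%5.1f pixels long\n", deg, run);
    }
    return 0;
}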
 
GTX 580, 1920x1200. GPAA On/Off: 513/552 fps.

Interesting approach but like Humus said, competitors like SRAA and MLAA are much less sensitive to scene complexity.
 
I tried this demo a few times, and I cannot spot the differences with GPAA on or off. MSAA is not being forced on by my video card, and it's not enabled in the options.
 
I was thinking, is it possible to detect which pixels are on an edge without the second line pass?
One idea I can think of is to make the positions of the vertices available to the pixel shader, then compare the pixel's position against the edges. If it's within a certain margin (maybe less than one pixel or something), consider it to be an "edge pixel". I'm not sure about the cost of this (compared to a second edge pass) though, but at least there would be no need to send the geometry twice. Something like the sketch below, conceptually.
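Just a rough sketch with made-up names; the vertex positions would have to be passed down to the pixel shader somehow, and this is written CPU-side for clarity:

Code:
// Flag a pixel as an "edge pixel" when its center lies within half a pixel
// of any of the triangle's screen-space edges. Illustrative only.
#include <cmath>

struct float2 { float x, y; };

// Distance from point p to the segment a-b, in pixels.
static float distToEdge(float2 p, float2 a, float2 b)
{
    float2 ab{ b.x - a.x, b.y - a.y };
    float2 ap{ p.x - a.x, p.y - a.y };
    float t = (ap.x * ab.x + ap.y * ab.y) / (ab.x * ab.x + ab.y * ab.y);
    t = t < 0.0f ? 0.0f : (t > 1.0f ? 1.0f : t);
    float dx = ap.x - t * ab.x, dy = ap.y - t * ab.y;
    return std::sqrt(dx * dx + dy * dy);
}

// True if the pixel centered at p should be treated as an edge pixel.
bool isEdgePixel(float2 p, float2 v0, float2 v1, float2 v2, float margin = 0.5f)
{
    return distToEdge(p, v0, v1) < margin ||
           distToEdge(p, v1, v2) < margin ||
           distToEdge(p, v2, v0) < margin;
}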
 
I tried this demo a few times, and I cannot spot the differences with GPAA on or off. MSAA is not being forced on by my video card, and it's not enabled in the options.

Do the edges look smooth or jagged?

I was thinking, is it possible to detect which pixels are on an edge without the second line pass?
One idea I can think of is to make the positions of the vertices available to the pixel shader, then compare the pixel's position against the edges. If it's within a certain margin (maybe less than one pixel or something), consider it to be an "edge pixel". I'm not sure about the cost of this (compared to a second edge pass) though, but at least there would be no need to send the geometry twice.

Absolutely, that's next on my list of things to try. If the final geometry pass can be eliminated then that's certainly a good thing. :)
 
Everything looked really smooth and ran fast on my friend's laptop which has some pathetic nvidia dx10 chip.

The scene was extremely simple; how would this technique fare in something like Crysis or JC2? Often it seems those games that need AA the most are those that least benefit from post-process AA, or MSAA for that matter.
 
I was thinking, is it possible to detect which pixels are on an edge without the second line pass?
One idea I can think of is to make the positions of the vertices available to the pixel shader, then compare the pixel's position against the edges. If it's within a certain margin (maybe less than one pixel or something), consider it to be an "edge pixel". I'm not sure about the cost of this (compared to a second edge pass) though, but at least there would be no need to send the geometry twice.

I've tried this in the past in DX9... It didn't work because the interpolators always give you the value at the centre of the pixel unless you are drawing lines - hence it works so easily here. I suspect DX10/11 could manage it though

[edit] Actually I think you mean something else... never mind
 
Do the edges look smooth or jagged?



Absolutely, that's next on my list of things to try. If the final geometry pass can be eliminated then that's certainly a good thing. :)

Edges look smooth no matter the setting. I've checked and double-checked my control panel settings. Running a GeForce GTS 250 with driver 266.35, Vista x64 (though I doubt that matters...).
 
It turns out there is a patent on pretty much exactly this idea from 1996, which its author pointed out to me in the comments on my website:

http://www.freepatentsonline.com/6005580.pdf

Does anyone know what I should make out of this? How would this affect my demo? Would anyone using this technique need to obtain a license or something?

I have other ideas based on similar principles which I'd like to explore; how different would a technique have to be to not clash with any patent? I notice for instance that there's another patent which approaches the problem in a similar fashion, but just does things a little bit differently (shading both sides instead of filtering):

http://www.freepatentsonline.com/7286138.html

And what about this:
http://www.faqs.org/patents/app/20080252659
 
I think patents are valid for 20 years and it doesn't matter if the inventor doesn't sound hostile since he doesn't own the patent. It's assigned to Micron. I don't know why they were interested in AA back then. Did they ever make graphics chips?
 
I think patents are valid for 20 years and it doesn't matter if the inventor doesn't sound hostile since he doesn't own the patent. It's assigned to Micron. I don't know why they were interested in AA back then. Did they ever make graphics chips?

Rendition perhaps? They were acquired by Micron IIRC, and were a bunch of clever chappies.
 