Anti-Aliasing for deferred shading

zhugel_007

Newcomer
Hi,

What is the best AA method for deferred shading on DX9-level hardware?

The edge blur method doesn't look good because we lose the high-frequency information in the image, and MLAA is not very GPU-friendly. :(

I am curious whether there is a better method (in both quality and performance) on DX9-level hardware.

Thanks in advance!
 
Depending on your usage of deferred rendering, light-indexed deferred rendering may do what you want with AA:
http://code.google.com/p/lightindexed-deferredrender/

(the demo seems to work OK with AA - it uses DX9-level OpenGL)

If you read the paper, the CPU sorting of lights is probably more practical than the GPU sorting (as used in the demo).

The big downsides are the number of lights you can support per pixel (more than 4 is problematic) and handling different kinds of light types.
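For reference, the per-fragment lookup boils down to something like the sketch below. This is not the project's actual code - the names, the texture layout, and the EvalPointLight helper are my guesses - but it shows where the 4-light limit comes from: up to four 8-bit light indices are packed into a single RGBA8 target.

sampler2D LightIndexBuffer;   // RGBA8: up to four 8-bit light indices per pixel
sampler1D LightPosRadiusTex;  // xyz = light position, w = radius (one texel per light)
sampler1D LightColorTex;      // rgb = light colour

float3 ShadeLightIndexed(float2 uv, float3 pos, float3 nrm)
{
    // The packed channels are already in 0..1, usable as 1D texture coordinates
    // (assuming a 256-entry light texture; a half-texel bias may be needed).
    float4 indices = tex2D(LightIndexBuffer, uv);
    float3 result = 0;
    [unroll]
    for (int i = 0; i < 4; ++i)
    {
        float4 posRadius = tex1D(LightPosRadiusTex, indices[i]);
        float4 color     = tex1D(LightColorTex,     indices[i]);
        result += EvalPointLight(pos, nrm, posRadius, color); // hypothetical helper
    }
    return result;
}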
 
Perhaps you need to tune the parameters of your AA shader? I've implemented deferred anti-aliasing on a console using something similar to this (although not their actual shader code):

http://http.developer.nvidia.com/GPUGems2/gpugems2_chapter09.html

It only blurs edges, leaving the interiors of objects untouched. The parameters of the algorithm need to be tuned carefully -- if set wrong, you'll see screen-wide blurring. We use 4 adjacent depth samples (the diagonals) rather than 8 for performance reasons, and it works fine.
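For concreteness, here's a minimal sketch of that kind of edge-detect pass. It is not the GPU Gems shader, just the same idea; the sampler names and the two tuning constants are made up.

sampler2D ColorTex;       // scene colour
sampler2D DepthTex;       // linear depth
float2    PixelSize;      // 1 / render-target resolution
float     EdgeThreshold;  // below this, no blur at all
float     EdgeSharpness;  // set too low => screen-wide blurring

float4 EdgeBlurPS(float2 uv : TEXCOORD0) : COLOR
{
    float2 o  = PixelSize;
    float  dC = tex2D(DepthTex, uv).r;
    // Four diagonal neighbours instead of all 8 (cheaper, works fine for us).
    float d0 = tex2D(DepthTex, uv + float2(-o.x, -o.y)).r;
    float d1 = tex2D(DepthTex, uv + float2( o.x, -o.y)).r;
    float d2 = tex2D(DepthTex, uv + float2(-o.x,  o.y)).r;
    float d3 = tex2D(DepthTex, uv + float2( o.x,  o.y)).r;
    // Opposing pairs cancel on a planar surface, so only discontinuities remain.
    float edge = abs(d0 + d3 - 2.0 * dC) + abs(d1 + d2 - 2.0 * dC);
    float w    = saturate((edge - EdgeThreshold) * EdgeSharpness);
    // Blend the centre colour toward the diagonal average on edges only.
    float3 c    = tex2D(ColorTex, uv).rgb;
    float3 blur = 0.25 * (tex2D(ColorTex, uv + float2(-o.x, -o.y)).rgb
                        + tex2D(ColorTex, uv + float2( o.x, -o.y)).rgb
                        + tex2D(ColorTex, uv + float2(-o.x,  o.y)).rgb
                        + tex2D(ColorTex, uv + float2( o.x,  o.y)).rgb);
    return float4(lerp(c, blur, w), 1.0);
}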

I have a couple of ideas on how to do higher-quality anti-aliasing with deferred shading. I'm not sure how practical they are in terms of performance or quality, and I haven't thought through all the details, but maybe someone on this forum will decide to try them out. I don't think the console is powerful enough to do anything more complex, which is why I'm not going to try it myself.

The problem with edge blur anti-aliasing is that it can't eliminate stair-step artifacts on nearly vertical or horizontal edges. It turns them into softer stair steps, but they're still very obviously there. Proper MSAA takes the actual edge into account when computing coverage, so the sampling blends evenly between the "steps", but we don't have that information. The 8 adjacent pixels aren't enough to reconstruct the edge. We could scan more pixels to try to find stair-step transitions, but that's going to be expensive (lots more texture samples, and the shader is already bandwidth-bound).

So is there any way we can compute an accurate sub-pixel edge?

The first idea I had was to do something like Shadow Silhouette Maps: write silhouette edge points to the frame buffer, and use them to reconstruct an edge. I think this would require DX10 hardware to be efficient (so you could generate silhouette edge quads on the GPU rather than the CPU). This approach will not anti-alias cases where geometry passes through other geometry.
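If anyone tries this, the reconstruction step might look roughly like the sketch below (all names hypothetical, and I haven't verified this against the silhouette maps paper): given the sub-pixel edge point stored for this pixel and one from a neighbour, build the edge line and turn the pixel centre's signed distance into a blend weight.

// Pixel-local coordinates: stored edge points are in [0,1] within the pixel.
float EdgeCoverage(float2 edgePtThis, float2 edgePtNeighbor)
{
    float2 dir = normalize(edgePtNeighbor - edgePtThis); // edge direction
    float2 n   = float2(-dir.y, dir.x);                  // edge normal
    // Signed distance from the pixel centre to the reconstructed edge.
    float dist = dot(float2(0.5, 0.5) - edgePtThis, n);
    // Map distance to a coverage/blend weight between the two sides.
    return saturate(0.5 + dist);
}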

The second idea I had was to compute a geometric intersection between adjacent planes. You can compute a plane equation for each pixel's primitive from its position and normal, then find the 3D line where two adjacent planes intersect and transform it to screen space. This works well for forward-facing creases or pass-through geometry, but it can't anti-alias depth discontinuities, since there is no information about the adjacent face. For those, my idea was to render a second normal + depth buffer with back-facing polygons. That gives you a source for the adjacent face of the mesh to compute the edge intersection. This approach doesn't require any DX10 features.
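A sketch of the plane-intersection math, in view space. The inputs are the positions and normals of two adjacent pixels reconstructed from the G-buffer; the function name and calling convention are mine.

// Each pixel's tangent plane: dot(n, x) = d, with d = dot(n, p).
float3 IntersectTangentPlanes(float3 p0, float3 n0,
                              float3 p1, float3 n1,
                              out float3 lineDir)
{
    float d0 = dot(n0, p0);
    float d1 = dot(n1, p1);
    lineDir  = cross(n0, n1);            // direction of the crease line
    float uu = dot(lineDir, lineDir);    // near 0 => planes nearly parallel (no crease)
    // Point on the intersection line (the one closest to the origin):
    return (d0 * cross(n1, lineDir) + d1 * cross(lineDir, n0)) / uu;
}

Transform two points on that line by the projection matrix to get the screen-space edge, then use the pixel centre's signed distance to it as a sub-pixel coverage estimate.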

Precision may be an issue, but if you reconstruct the positions using 24-bit or 32-bit depth buffers and 16-bit normals, it should be accurate enough to find the edges. The other issue is corner cases where more than 2 edges meet. I'm sure you could think of something to handle that.
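The reconstruction itself can be the usual frustum-ray approach, sketched below under the assumption that the depth target stores normalized linear view-space depth (again, the names are mine):

sampler2D DepthTex;   // linear depth in [0,1]
float     FarPlane;   // view-space distance to the far plane

// frustumRay: eye-to-pixel ray, interpolated from the frustum's far corners,
// scaled so that its z component is 1 in view space.
float3 ReconstructViewPos(float2 uv, float3 frustumRay)
{
    float viewZ = tex2D(DepthTex, uv).r * FarPlane;
    return frustumRay * viewZ;
}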

Both of these approaches could produce results comparable to fairly high levels of MSAA without sampling a much larger range of texels. Both would be much slower than the simple 8-neighbor approach, as each requires an additional full scene pass (one to generate silhouette edges, the other to generate back-face depth+normal), although those extra passes would be much cheaper than other shading or post-processing passes (in our console engine, the pass that generates depth+normal is around 3-5% of the frame time, and the AA pass is 3%). The shader itself would require more texture samples and more ALU, although it's likely to remain bandwidth-bound. An entry-level PC GPU these days is much faster than a console, so you could afford more complex AA if it's important to you.

Anyone want to try to demo one of these ideas?
 