256x Edge Anti-Aliasing demo

RacingPHT

Following on from the last deferred MSAA thread, I've been thinking about a method with a lower memory requirement, faster rendering, and orthogonal to MRT. It was quite difficult for me, but fortunately there has been some progress. This time the edge AA doesn't need any extra framebuffer, nor does it use the hardware MSAA unit, and it runs faster than hardware 4x MSAA on my system, with acceptable quality (at least not worse.. :p). It still needs a pre-processing step, though. If you have a better way, please let me know.

[attached image]


This is what I did to get edge AA working (a rough shader sketch follows after the list):
1: Pre-process the anti-aliasing geometry, similar to shadow volumes.
2: Render the scene with AA off.
3: StretchRect the back buffer to a texture and use it with bilinear filtering.
4: Render the edges with the following shader:
VS:
1: Check whether the edge is on the silhouette; if not, throw it away.
2: Calculate the direction perpendicular to the edge, pointing outwards from the triangle.
PS:
1: Don't use the SM3.0 vPos, as it's integer-valued. I need frac(), so I calculate the position myself.
2: The pixel coverage comes from frac() of that calculated position.
3: Sample the scene using the offset and bilinear filtering.
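
A simplified sketch of what the two shaders boil down to (this is not the exact code from the demo; names like SceneTex, WorldViewProj, EyePos and PixelSize are just placeholders, and the coverage math is condensed):

```hlsl
// Simplified sketch of the edge pass (SM3.0-style HLSL). NOT the demo's
// actual shader; all names and the coverage formula are approximations.

float4x4  WorldViewProj;
float3    EyePos;       // viewer position, in the same space as the edge vertices
float2    PixelSize;    // 1.0 / render-target resolution

sampler2D SceneTex;     // back buffer copied with StretchRect, bilinear filtered

struct VS_IN
{
    float3 Pos      : POSITION0;   // this end of the edge
    float3 OtherPos : POSITION1;   // the other end of the edge
    float3 Normal0  : NORMAL0;     // normal of the first adjacent face
    float3 Normal1  : NORMAL1;     // normal of the second adjacent face
};

struct VS_OUT
{
    float4 Pos       : POSITION;
    float4 ScreenPos : TEXCOORD0;  // clip-space position, re-divided in the PS
    float2 OffsetDir : TEXCOORD1;  // screen-space direction across the edge
};

struct PS_IN
{
    float4 ScreenPos : TEXCOORD0;
    float2 OffsetDir : TEXCOORD1;
};

VS_OUT EdgeVS(VS_IN i)
{
    VS_OUT o;

    // Silhouette test: keep the edge only if its two faces point to
    // opposite sides of the viewer.
    float3 toEye = EyePos - i.Pos;
    bool silhouette = dot(i.Normal0, toEye) * dot(i.Normal1, toEye) < 0.0;

    float4 p0 = mul(float4(i.Pos,      1.0), WorldViewProj);
    float4 p1 = mul(float4(i.OtherPos, 1.0), WorldViewProj);

    // Direction perpendicular to the edge in screen space. The real code
    // also has to flip it so it points away from the triangle.
    float2 edgeDir = normalize(p1.xy / p1.w - p0.xy / p0.w);
    float2 perp    = float2(-edgeDir.y, edgeDir.x);

    // Non-silhouette edges are pushed behind the near plane and clipped away.
    o.Pos       = silhouette ? p0 : float4(0.0, 0.0, -1.0, 1.0);
    o.ScreenPos = o.Pos;
    o.OffsetDir = perp;
    return o;
}

float4 EdgePS(PS_IN i) : COLOR0
{
    // Recompute a fractional screen position instead of using vPos, which
    // is integer-valued (the D3D9 half-texel offset is ignored here).
    float2 uv       = i.ScreenPos.xy / i.ScreenPos.w * float2(0.5, -0.5) + 0.5;
    float2 subPixel = frac(uv / PixelSize);   // position inside the pixel

    // How far the edge sits across this pixel along the outward direction
    // gives an approximate coverage for the far side of the edge.
    float coverage = saturate(dot(subPixel - 0.5, i.OffsetDir) + 0.5);

    // Sample the scene on both sides of the edge and let bilinear
    // filtering plus the blend smooth the transition.
    float4 inside  = tex2D(SceneTex, uv);
    float4 outside = tex2D(SceneTex, uv + i.OffsetDir * PixelSize);
    return lerp(inside, outside, coverage);
}
```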
[attached image]


The demo is here, have a play :)
http://gamesir.enorth.com.cn/AttachFile-336674
 
Looks good but please increase the front clip plane! (as mentioned in a previous post!)
This will allow better viewing of edges at close range, which is useful to assess the quality of the implementation.
 
It doesn't work on clipped edges. But anti-aliasing isn't important there anyway. I find it just as easy to assess the quality the way it is...
 
It certainly has an issue with geometry such as the tip of the lid handle, where several tris meet at a common vertex. And that doesn't just seem to be a rejection problem (i.e. when I comment out your rejection code so that every edge is subject to AA, it doesn't change much at all for that particular case).

I have to think the data coming in at the vertex level has some problems when you've got a case like that. What exactly are you moving through that vertex stream? It's ill-commented, so the presence of a Position0/1, Normal0/1, Tangent, and Blendindices (the last of which seems to just be padding) isn't very telling.
 
There is still slight but noticeable shimmering in the edges in AA mode while moving around.
That's probably gamma correction (or lack thereof).
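If so, the fix would be to do the edge blend in linear space, something along these lines (a generic illustration assuming a plain 2.2 gamma, not anything from the demo):

```hlsl
// Blend two display-space (gamma) colours in linear space. A simple 2.2
// power curve is assumed; a proper sRGB conversion differs slightly.
float3 BlendGammaCorrect(float3 a, float3 b, float t)
{
    float3 aLin = pow(a, 2.2);
    float3 bLin = pow(b, 2.2);
    return pow(lerp(aLin, bLin, t), 1.0 / 2.2);
}
```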

I still think it looks very nice, and performance is excellent. I'm wondering if we might ever expect IHVs to use edge anti-aliasing again. I know it requires connectivity information to determine edges, but that might be easier with DirectX 10.
 
Well, with the D3D10 spec describing almost the entire pipeline as programmable (incl. the ROP functions), that would be a piece of cake. Increased general computation power vs. fixed function should take us away from the long-standing hardware AA nightmare. :D
 
I did a similar thing a while ago, but concluded that the CPU overhead wasn't worthwhile and that I'd rather wait for geometry shaders. However, I did it under the assumption that vPos was the centre of the rendered pixel. Is that not the case? (It's hard to find documentation on these things.)
 
SuperCow: maybe in the next version :)

ShootMyMonkey: the pre-process doesn't handle that spot well. It can probably be fixed.

Nick, fellix: I haven't used gamma correction in this version. It's essential for smooth AA. I'm also expecting the DX10 GS to make this easier :)

Graham: the CPU overhead is almost nothing. How did you do it? I think vPos is the sample point, which is at (0.5, 0.5).
 
ShootMyMonkey:
The vertex stream is a D3DPT_LINELIST, which contains:
the vertex's own position, the other end point's position, and two face normals (if the edge is connected to only one face, Normal1 is -Normal0). The faceSide is the outward direction of the first face, coplanar with it; it's useful for single-face edges. The Dummy is 0 if the point is the edge's start point and 1 otherwise (mainly for debugging).

It's not well commented, and it still needs improvement.
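
In declaration form it looks roughly like this (simplified; faceSide is the Tangent and Dummy is the Blendindices you saw):

```hlsl
// Per-vertex data of the edge line list, roughly as described above.
struct EdgeVertex
{
    float3 Pos      : POSITION0;     // this end of the edge
    float3 OtherPos : POSITION1;     // the other end of the edge
    float3 Normal0  : NORMAL0;       // normal of the first adjacent face
    float3 Normal1  : NORMAL1;       // second face's normal, or -Normal0 for single-face edges
    float3 FaceSide : TANGENT0;      // outward direction lying in the plane of the first face
    float  Dummy    : BLENDINDICES0; // 0 at the edge's start point, 1 at the other end (debug)
};
```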
 
I was working on an implementation that did not require pre-processed geometry or checking whether the edge was on a silhouette. However, the trouble comes in determining how much of a given pixel is covered; the rest is pretty straightforward.
 
Well, in theory, the "simple" approach is to not care whether something is on the silhouette or not, and just redraw all the geometry, performing your interpolated sampling from the render target and using the same method you already have in place to determine how "off" the projected position is from the centre of the pixel. I also noticed little to no difference in the results of RacingPHT's AA shader when I disabled the silhouette check (except, of course, in wireframe mode, where disabling the check means every line is affected).

Of course, the universal problem is having to move all the polygons through twice. Then again, it's not all that uncommon in a practical game engine to have already moved the geometry through once for a Z-prepass anyway (though the subsequent passes move less geometry). And I guess if you're already doing multipass lighting, it doesn't hurt much.
 
Can someone post a low res shot of what it looks like when the geometry is very far away (close to pixel sized triangles)?

Cheers :)
 
Well... We can't really do that. His far clip plane isn't far enough away for that -- the geometry gets clipped away long before it gets that small. Maybe if someone ran it at an 80x50 screen res?:p

Only PHT, who has the code, could change that.

The faceSide is the outward direction of the first face, coplanar with it; it's useful for single-face edges.
Curious. Is there a major purpose behind using something special to get a direction vector like that in general? The most common case I can think of for geometry with single-face edges would have those edges alpha'ed away (or so far away and obscured by other stuff that you would hardly notice them). For convex geometry, it seems the normal alone would be good enough. And even otherwise, I would figure the error between the projected position and the centroid of the pixel should be enough anyway to give you a feasible direction to offset from the centroid when you actually sample from the render target.
 
I was working on an implementation that did not require pre-processed geometry or checking whether the edge was on a silhouette. However, the trouble comes in determining how much of a given pixel is covered; the rest is pretty straightforward.

If you don't check for silhouettes, the AA is not likely to work, because pixels where many edges join together will end up as a "hole" in your shader's output. I built a program that tried to do that, and it failed.
The idea for checking a pixel's coverage in that program was to assign the vertices of each triangle (1, 0, 0), (0, 1, 0) and (0, 0, 1) respectively, and check whether the interpolated value at the pixel has any component close to 0. If you pre-multiply each vertex's value by its distance to the opposite edge, the coverage is simply any component below 1.
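
In shader terms the check was roughly this (a simplified sketch from memory rather than the original code):

```hlsl
// Each triangle vertex carries (1,0,0), (0,1,0) or (0,0,1), pre-multiplied
// on the CPU by that vertex's screen-space distance to the opposite edge,
// so the interpolated value gives this pixel's distance to each edge.
sampler2D SceneTex;

float4 CoveragePS(float3 edgeDist : TEXCOORD0,
                  float2 uv       : TEXCOORD1) : COLOR0
{
    // Distance (in pixels) to the nearest of the triangle's three edges.
    float d = min(edgeDist.x, min(edgeDist.y, edgeDist.z));

    // Within one pixel of an edge, the distance itself approximates the
    // coverage; further inside, the pixel counts as fully covered.
    float coverage = saturate(d);

    float4 colour = tex2D(SceneTex, uv);
    return float4(colour.rgb, coverage);   // blend with alpha = coverage
}
```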
 
psurge:
Unfortunately, such a method doesn't work well on sub-pixel-sized polygons. I hope LOD or impostors will help.

ShootMyMonkey:
I'm on vacation and have been thinking about the same problem. I made a mistake there: the faceSide is actually useless. I may make a new version :)
 