The Future of Anti-Aliasing? Will Quincunx-Like Blur Technologies Return?

FrameBuffer said:
"I want my SSAA.."


At least there are options now with SSAA, and you can enjoy it at super high resolutions. What's amazing to me is the mixed Super-AA mode and the way it performs at resolutions from 1600x1200 to 2048x1536. The obvious downside is the cost of another card.
 
Alright, we all know Quincunx looks like a blurry mess for the textures inside polygons, but what about for edges? Instead of normal Quincunx, make it an advanced MSAA mode that blurs only the edges. I've found this works pretty well for various things when I go back and manually blur edges, or use an edge filter as a weight for Gaussian blurring or something.
 
Basically, if you use supersampling, Quincunx-style overlapped filters are fine. They won't make textures more blurry. However, with MSAA the textures inside a triangle are already filtered, and using Quincunx-style filters will result in over-filtering, thus making them blurry.
MSAA with Quincunx applied to edges only could work, but it's probably too complex for current hardware. And 4X FSAA doesn't really benefit that much from an overlapped Gaussian filter.
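As a software post-filter the idea is at least easy to describe, even if doing it in hardware is another matter. Here is a toy CPU sketch of an edge-only Quincunx-style resolve: a 1/2 centre tap plus four 1/8 corner taps (the commonly quoted Quincunx weights), applied only where an edge mask says so. The tiny test image and the edge mask are invented purely for illustration.

```cpp
#include <cstdio>

const int W = 6, H = 6;

// Clamp-to-border fetch so the corner taps never read outside the image.
float fetch(float img[H][W], int x, int y) {
    if (x < 0) x = 0; if (x >= W) x = W - 1;
    if (y < 0) y = 0; if (y >= H) y = H - 1;
    return img[y][x];
}

int main() {
    float img[H][W];
    bool  edge[H][W];
    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x) {
            img[y][x]  = (x < 3) ? 0.0f : 1.0f;  // hard vertical edge down the middle
            edge[y][x] = (x == 2 || x == 3);     // pretend edge detection flagged these pixels
        }

    for (int y = 0; y < H; ++y) {
        for (int x = 0; x < W; ++x) {
            float out = img[y][x];               // interior: keep the already-filtered colour
            if (edge[y][x]) {                    // edge: overlapped 5-tap quincunx blend
                out = 0.5f   *  img[y][x]
                    + 0.125f * (fetch(img, x - 1, y - 1) + fetch(img, x + 1, y - 1)
                              + fetch(img, x - 1, y + 1) + fetch(img, x + 1, y + 1));
            }
            std::printf("%.2f ", out);
        }
        std::printf("\n");
    }
    return 0;
}
```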
 
Wouldn't it be trivial to find the edges by working with the z-buffer? Either through a simple shader program, or by just making a copy, shifting it in x and y, and subtracting it from the original z-buffer.
Or is that too many operations to be feasible?
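Roughly what I have in mind, as a toy CPU version of the shift-and-subtract idea; the depth values and the 0.1 threshold are made-up illustration numbers, not anything a real driver would use:

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

int main() {
    const int W = 8, H = 8;
    std::vector<float> z(W * H, 1.0f);            // background depth
    for (int y = 2; y < 6; ++y)                   // a closer quad in the middle
        for (int x = 2; x < 6; ++x)
            z[y * W + x] = 0.5f;

    const float threshold = 0.1f;                 // assumed edge threshold
    for (int y = 0; y < H; ++y) {
        for (int x = 0; x < W; ++x) {
            float c  = z[y * W + x];
            float dx = (x + 1 < W) ? z[y * W + x + 1] : c;   // copy shifted in x
            float dy = (y + 1 < H) ? z[(y + 1) * W + x] : c; // copy shifted in y
            bool edge = std::fabs(c - dx) > threshold ||
                        std::fabs(c - dy) > threshold;
            std::putchar(edge ? '#' : '.');
        }
        std::putchar('\n');
    }
    return 0;
}
```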
 
Using the Z-buffer to find edges is not very robust. I don't know how well it applies to anti-aliasing, though.
And yes, if you need to do these computations for each pixel, it's probably better to just use super-sampling.
 
Simon F said:
IIRC, those were just box filters.

I tended to believe so, but wasn't sure. A layman's eye can't possibly detect things like that anyway on such small pictures. The only other important detail I've kept in mind from that one is that you don't necessarily need a ludicrous sample density to get satisfactory results.
 
pcchen said:
Basically, if you use supersampling, Quincunx-style overlapped filters are fine. They won't make textures more blurry. However, with MSAA the textures inside a triangle are already filtered, and using Quincunx-style filters will result in over-filtering, thus making them blurry.
MSAA with Quincunx applied to edges only could work, but it's probably too complex for current hardware. And 4X FSAA doesn't really benefit that much from an overlapped Gaussian filter.

How about a fragment AA based algorithm?
 
Well, if all you want to do is blur the edges, you need to store the depth *and* normal per pixel. For those here who have GPU Gems 2, I suggest reading the article on STALKER's deferred rendering scheme. While I'm personally not a big fan of that approach in general, some of the advantages they get out of it are pretty damn cool. Worth the read, definitely!
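Something along these lines, as a rough sketch of the general idea rather than what STALKER actually does; the thresholds and the tiny G-buffer struct are invented for illustration:

```cpp
#include <cmath>
#include <cstdio>

struct GBufferTexel { float depth; float nx, ny, nz; };

// Flag an "edge" between two texels when either the depth or the normal
// differs too much. Both thresholds are assumed values for illustration.
bool isEdge(const GBufferTexel& a, const GBufferTexel& b) {
    const float depthThreshold  = 0.05f;
    const float normalThreshold = 0.95f;   // cosine of the allowed normal deviation
    float normalDot = a.nx * b.nx + a.ny * b.ny + a.nz * b.nz;
    return std::fabs(a.depth - b.depth) > depthThreshold ||
           normalDot < normalThreshold;
}

int main() {
    GBufferTexel flat  = {0.50f, 0.0f, 0.0f, 1.0f};   // same surface,
    GBufferTexel same  = {0.51f, 0.0f, 0.0f, 1.0f};   // slightly different depth
    GBufferTexel other = {0.90f, 0.0f, 1.0f, 0.0f};   // different surface entirely
    std::printf("flat vs same : %s\n", isEdge(flat, same)  ? "edge" : "no edge");
    std::printf("flat vs other: %s\n", isEdge(flat, other) ? "edge" : "no edge");
    return 0;
}
```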

Uttar
 
Ailuros said:
How about a fragment AA based algorithm?
IIRC, the Fragment AA algorithm is just MSAA + lossy compression. As such, blur filters should have pretty much the same effect with Fragment AA as with plain MSAA.
 
Fragment AA was "invented" by Matrox on the Parhelia series; it antialiases just edges and nothing else. I don't know what makes it different from MSAA, but I guess MSAA does some sort of texture filtering too, otherwise it would be the same thing under a different name.
 
RejZoR said:
Fragment AA was "invented" by Matrox on the Parhelia series; it antialiases just edges and nothing else. I don't know what makes it different from MSAA, but I guess MSAA does some sort of texture filtering too, otherwise it would be the same thing under a different name.
MSAA does not affect texture filtering.

Fragment AA only calculates multiple samples at polygon edges, and these are stored separately in an area of memory with limited size. Because it does not store multiple samples for interior pixels, it cannot handle polygon intersection edges, nor does it allow multisample masking effects. And if the buffer for additional samples overflows, no more edges get antialiased.
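A rough sketch of how that kind of storage could be organised, assuming one colour per interior pixel plus a fixed-size side buffer for edge samples; the sizes, struct layout and values here are invented for illustration, not Parhelia's actual format:

```cpp
#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <vector>

struct EdgeFragment {
    std::uint32_t pixelIndex;   // which pixel the extra samples belong to
    std::uint32_t coverage;     // coverage mask for the covering triangle
    std::uint32_t color;        // colour of that triangle
};

int main() {
    const std::size_t kMaxEdgeFragments = 4;           // deliberately tiny to show overflow
    std::vector<std::uint32_t> frameBuffer(16, 0);     // one colour per pixel (interior case)
    std::vector<EdgeFragment> edgeBuffer;              // limited-size buffer for edge samples

    // Pretend rasterisation found edge pixels 3, 5, 7, 9 and 11:
    for (std::uint32_t p : {3u, 5u, 7u, 9u, 11u}) {
        if (edgeBuffer.size() < kMaxEdgeFragments) {
            edgeBuffer.push_back({p, 0x5u, 0xFF0000u});
        } else {
            // Overflow: this edge simply stays aliased.
            std::printf("edge buffer full, pixel %u stays aliased\n", (unsigned)p);
        }
    }
    std::printf("antialiased edge pixels stored: %zu\n", edgeBuffer.size());
    return 0;
}
```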
 
arjan de lumens said:
IIRC, the Fragment AA algorithm is just MSAA + lossy compression. As such, blur filters should have pretty much the same effect with Fragment AA as with plain MSAA.

I'm aware of that, but I was thinking more along the lines that it might be easier there to apply any added filters to just polygon edges/intersections, since pcchen mentioned that it would be too complex for current HW.
 
Ailuros said:
I'm aware of that, but I was thinking more along the lines that it might be easier there to apply any added filters to just polygon edges/intersections, since pcchen mentioned that it would be too complex for current HW.
Not really. Fragment AA does store more explicit polygon edge information than plain MSAA, but it should be quite straightforward to recover the exact same edge information from plain MSAA by just checking whether all samples of a pixel have the same color or not (if they don't, then there is a polygon edge in the pixel; this gives false hits with transparency-AA, though.)
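In code, the check amounts to something like this minimal sketch; the sample count and the colours are made up, and a real resolve pass would of course read the actual MSAA surface:

```cpp
#include <array>
#include <cstddef>
#include <cstdint>
#include <cstdio>

// A pixel contains a polygon edge if its MSAA samples don't all share one
// colour (transparency-AA can trigger this too, as noted above).
template <std::size_t N>
bool hasEdge(const std::array<std::uint32_t, N>& samples) {
    for (std::size_t i = 1; i < N; ++i)
        if (samples[i] != samples[0])
            return true;
    return false;
}

int main() {
    std::array<std::uint32_t, 4> interior = {0xFF0000, 0xFF0000, 0xFF0000, 0xFF0000};
    std::array<std::uint32_t, 4> edge     = {0xFF0000, 0xFF0000, 0x0000FF, 0x0000FF};
    std::printf("interior pixel: %s\n", hasEdge(interior) ? "edge" : "no edge");
    std::printf("edge pixel    : %s\n", hasEdge(edge)     ? "edge" : "no edge");
    return 0;
}
```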
 
arjan de lumens said:
Not really. Fragment AA does store more explicit polygon edge information than plain MSAA, but it should be quite straightforward to recover the exact same edge information from plain MSAA by just checking whether all samples of a pixel have the same color or not (if they don't, then there is a polygon edge in the pixel; this gives false hits with transparency-AA, though.)
And it doesn't give you all edges.


I did some experiments with storing "connectivity" information along with the pixel color some time ago. I.e., take a sample on each of the four pixel edges, and if it belongs to the same triangle, store a 1. In reality you could get that information from the rasterization process already.
For the "downsampling" I took the current pixel color and, for each connectivity flag, added that color again if the flag is 1, or added the color of the respective neighboring pixel if it is 0, then divided by 5.

It worked surprisingly well for simple random triangle situations. But the quality of near-horizontal and near-vertical edges is low.
And overall, I don't think an edge-selective filter is that good an idea.
 
Ailuros said:
How about a fragment AA based algorithm?
Many wet dreams have revolved around a "fixed FAA", but FAA just isn't fixable. The whole concept revolves around not doing anything to handle intersections, and if you want to "extend" it to handle intersections, you basically have to start by throwing it all away and thinking of something entirely different.

I'm not saying that there couldn't possibly be a general-purpose edge AA scheme that's more efficient than MSAA, but FAA is the wrong starting point for finding it IMO.
 
I think my favorite possibly better AA algorithm was mentioned on these forums not too long ago. I don't remember the name, but I can describe it here:

The starting point is MSAA. You take an MSAA algorithm, and tack on lossy framebuffer compression that assumes that you don't need the colors from more than about 3 triangles per pixel.

So, the basic idea is that instead of storing color and z-data for each sample per pixel, you store one color for each triangle, as well as a 2-component normal vector, depth, and coverage mask for each triangle. If we are talking about an FP16 framebuffer along with 24-bit precision on the depth/normal information, the number of bits required per pixel is: (# of triangles) * (64 + (24*3) + # of samples). For 16 samples and 3 triangles per pixel, this is 456 bits per pixel. You might want to promote this to 512 bits per pixel for alignment reasons, which means you could potentially store 32 bits per pixel within the same 512-bit footprint. All this for less than the memory footprint of 6 MSAA samples.

In this way, you can have very high sample density with relatively small memory footprint and bandwidth. Your primary performance limitation becomes how quickly you can perform depth tests.

This whole algorithm rests on the assumption that you need colors from no more than 3 triangles per pixel to represent the final pixel color with reasonable accuracy. Pathological cases may break this assumption, but most of the time it should give excellent edge AA quality, albeit at a significant cost in required computing power.
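For anyone who wants to check the numbers, here is the arithmetic spelled out. The plain-MSAA comparison assumes an FP16 colour plus a 24-bit depth value per sample, which is my assumption rather than anything stated above:

```cpp
#include <cstdio>

int main() {
    const int triangles = 3;
    const int samples   = 16;
    const int colorBits = 64;       // FP16 RGBA colour per triangle
    const int dznBits   = 24 * 3;   // depth + 2-component normal, 24 bits each

    // (# of triangles) * (64 + 24*3 + # of samples) = 3 * 152 = 456 bits/pixel
    int compressed = triangles * (colorBits + dznBits + samples);
    std::printf("compressed scheme: %d bits per pixel\n", compressed);

    // Plain MSAA for comparison: colour + depth stored per sample
    // (FP16 colour and 24-bit depth assumed here).
    int msaa6 = 6 * (colorBits + 24);
    std::printf("6x MSAA          : %d bits per pixel\n", msaa6);   // 528 bits/pixel
    return 0;
}
```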
 
Chalnoth said:
I think my favorite possibly better AA algorithm was mentioned on these forums not too long ago. I don't remember the name, but I can describe it here:

The starting point is MSAA. You take an MSAA algorithm, and tack on lossy framebuffer compression that assumes that you don't need the colors from more than about 3 triangles per pixel.

So, the basic idea is that instead of storing color and z-data for each sample per pixel, you store one color for each triangle, as well as a 2-component normal vector, depth, and coverage mask for each triangle. If we are talking about an FP16 framebuffer along with 24-bit precision on the depth/normal information, the number of bits required per pixel is: (# of triangles) * (64 + (24*3) + # of samples). For 16 samples and 3 triangles per pixel, this is 456 bits per pixel. You might want to promote this to 512 bits per pixel for alignment reasons, which means you could potentially store 32 bits per pixel within the same 512-bit footprint. All this for less than the memory footprint of 6 MSAA samples.

In this way, you can have very high sample density with relatively small memory footprint and bandwidth. Your primary performance limitation becomes how quickly you can perform depth tests.

This whole algorithm rests on the assumption that you need colors from no more than 3 triangles per pixel to represent the final pixel color with reasonable accuracy. Pathological cases may break this assumption, but most of the time it should give excellent edge AA quality, albeit at a significant cost in required computing power.

Guaranteed-memory-footprint methods with lossy compression are something I am still deeply sceptical of, as they tend to result in either invariance problems or a rendering result where occluded objects have a nonzero effect on the final image. Lossy compression is in general an area where you need to be very careful about which data you choose to throw away, and to do some rather elaborate analysis of the actual consequences of throwing away those data.

If you are going to stick with such a lossy method, however, I'd suggest working on larger blocks, like 2x2 or 4x4 pixels; if one pixel then needs very many polygons (e.g. a high-valence vertex in the middle of the pixel, which isn't at all uncommon), it can overflow into the unused polygon slots of some of the other pixels in the block. This doesn't solve the inherent problems of the method, but it should make them much less likely to actually arise in practice (the high-valence-vertex-in-pixel case is common enough to worry about, but two such vertices in adjacent pixels should be fairly rare). Also, keep in mind that in a modern GPU you will never work on less than a 2x2 pixel block anyway, so you may as well orient your method around 2x2 blocks rather than individual pixels.
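As a sketch of what the shared-slot bookkeeping for a 2x2 block could look like; the slot count, the Tri struct and everything else here are invented for illustration, not a real compression format:

```cpp
#include <cstddef>
#include <cstdio>
#include <vector>

const std::size_t kSlotsPerBlock = 12;   // e.g. 3 slots per pixel * 4 pixels, shared

struct Tri { unsigned color; unsigned coverage; float depth; };

struct PixelBlock2x2 {
    std::vector<Tri> slots[4];           // triangles currently owned by each pixel

    std::size_t used() const {
        std::size_t n = 0;
        for (int i = 0; i < 4; ++i) n += slots[i].size();
        return n;
    }
    // Accept a new triangle for pixel p as long as the *block* still has room,
    // even if that pixel alone already exceeds its "fair share" of 3 slots.
    bool addTriangle(int p, const Tri& t) {
        if (used() >= kSlotsPerBlock) return false;  // block full: a real scheme would merge lossily here
        slots[p].push_back(t);
        return true;
    }
};

int main() {
    PixelBlock2x2 block;
    // Pixel 0 sits on a high-valence vertex and needs 6 triangles; the other
    // three pixels only need one each, so everything still fits in the block.
    for (int i = 0; i < 6; ++i) block.addTriangle(0, Tri{0xFF0000u, 0xFu, 0.5f});
    for (int p = 1; p < 4; ++p) block.addTriangle(p, Tri{0x00FF00u, 0xFu, 0.5f});
    std::printf("slots used: %zu of %zu\n", block.used(), kSlotsPerBlock);
    return 0;
}
```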
 
arjan de lumens said:
Guaranteed-memory-footprint methods with lossy compression is something that I am still deeply sceptical to, as they tend to result in either invariance problems or a rendering result where occluded objects have a nonzero effect on the final image.
I'd appreciate it if you could elaborate on the last parts regarding occluded objects. Thanks.
 