(Multisampling) antialiasing explained?

Zvekan

Newcomer
I have to write an article about current antialiasing methods, and as my knowledge is limited, I'm looking for articles that describe antialiasing methods (theory and implementation) in detail.

I managed to find the great Super-sampling Anti-aliasing Analyzed article here on Beyond3D, but it obviously only covers supersampling methods. Also, since it was written a couple of years ago, is it still representative of today's usage (in the GeForce mixed modes and in S3 and XGI products)?

I heard about 3DCenter's article about current implementations, but I'm having difficulty finding it.

Zvekan
 
Here's a basic explanation of the algorithm:

1. Divide the pixel up into N sub-pixels.
2. Do a Z-test on each sub-pixel to see if it is visible.
3. Run all per-pixel operations once on the pixel, in order to generate one output color (e.g. texture filtering, pixel shaders, whatever).
4. Output that color to all sub-pixels that passed the z-test.

That's how it's done. How is it actually implemented in hardware? The primary difference is that the hardware is capable of multiple Z tests per pixel pipeline (usually 2-4).
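As a minimal sketch of those four steps (the names here - Sample, shade_msaa_pixel, the fixed 4-sample count, and the callback standing in for the per-pixel work - are purely illustrative, not any particular chip's pipeline):

```cpp
#include <array>
#include <cstdint>

constexpr int NUM_SAMPLES = 4;                 // sub-pixels per pixel

struct Sample {
    float    depth = 1.0f;                     // per-sample depth
    uint32_t color = 0;                        // per-sample color
};

using Pixel = std::array<Sample, NUM_SAMPLES>;

// Process one triangle fragment for one MSAA pixel.
// 'coverage' marks which sample positions the triangle covers,
// 'frag_depth' is the interpolated depth at each sample position,
// 'run_pixel_shader' stands in for texture filtering, pixel shading, etc.
void shade_msaa_pixel(Pixel& px,
                      const std::array<bool, NUM_SAMPLES>& coverage,
                      const std::array<float, NUM_SAMPLES>& frag_depth,
                      uint32_t (*run_pixel_shader)())
{
    // Steps 1-2: coverage and Z-test per sub-pixel.
    std::array<bool, NUM_SAMPLES> passed{};
    bool any_passed = false;
    for (int s = 0; s < NUM_SAMPLES; ++s) {
        passed[s]  = coverage[s] && frag_depth[s] < px[s].depth;
        any_passed = any_passed || passed[s];
    }
    if (!any_passed)
        return;

    // Step 3: the expensive per-pixel work runs exactly once.
    const uint32_t color = run_pixel_shader();

    // Step 4: write that one color (and the depths) to every surviving sample.
    for (int s = 0; s < NUM_SAMPLES; ++s) {
        if (passed[s]) {
            px[s].color = color;
            px[s].depth = frag_depth[s];
        }
    }
}
```

A final resolve pass would then average each pixel's samples to produce the color that actually gets displayed.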

Now, some optimizations can be made to improve the quality. For example, the hardware could use centroid sampling to ensure that all texture samples are actually taken within the triangle that is to be output (i.e. without centroid sampling, you'd always use the center of the pixel for taking texture samples, but this may be outside the triangle to be rendered).
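To make the centroid idea concrete, here is a toy version (the sample offsets are made up; real hardware uses its own patterns): instead of always evaluating at the pixel center, average the positions of the covered samples so the texture lookup stays inside the triangle.

```cpp
#include <array>

struct Vec2 { float x, y; };

// Hypothetical 4-sample positions as offsets from the pixel center.
constexpr std::array<Vec2, 4> kSampleOffsets = {{
    {-0.25f, -0.25f}, { 0.25f, -0.25f},
    {-0.25f,  0.25f}, { 0.25f,  0.25f}
}};

// Without centroid sampling, attributes are evaluated at (0, 0), the pixel
// center, which can lie outside a triangle that covers only some samples.
// With centroid sampling, evaluate at the average of the covered positions.
Vec2 centroid_offset(const std::array<bool, 4>& coverage)
{
    Vec2 sum{0.0f, 0.0f};
    int  covered = 0;
    for (int s = 0; s < 4; ++s) {
        if (coverage[s]) {
            sum.x += kSampleOffsets[s].x;
            sum.y += kSampleOffsets[s].y;
            ++covered;
        }
    }
    if (covered == 0)
        return {0.0f, 0.0f};      // nothing covered: fall back to the center
    return {sum.x / covered, sum.y / covered};
}
```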

So, the most obvious improvement is that much less fillrate is actually used, at very little loss in image quality compared to supersampling (video cards that use MSAA rely upon better texture filtering than previous video cards to handle textures, and edge quality is just as good as with SSAA).

There are other benefits as well. One of the biggest is that a framebuffer used with MSAA turns out to be much better suited to compression techniques than a non-FSAA buffer. The fact that many sub-pixels output through MSAA will have identical colors means that color compression can be used to great effect. The fact that the z-buffer has increased in size, so that each triangle covers more samples, makes the z-buffer more amenable to compression as well. Both of these factors have combined to reduce the performance hit as much as possible on the current generation of video cards (NV3x and R3xx).
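As a toy illustration of the color-compression point (not any vendor's actual scheme): for pixels in a triangle's interior, all samples come out identical, so a pixel can often be stored as a single color plus a flag, and only edge pixels need the full per-sample storage.

```cpp
#include <array>
#include <cstdint>
#include <optional>

constexpr int NUM_SAMPLES = 4;
using MsaaPixel = std::array<uint32_t, NUM_SAMPLES>;   // per-sample colors

// Returns the single shared color if all samples match (the common case for
// interior pixels under MSAA), or nothing if the pixel must stay uncompressed.
std::optional<uint32_t> compress_pixel(const MsaaPixel& px)
{
    for (int s = 1; s < NUM_SAMPLES; ++s)
        if (px[s] != px[0])
            return std::nullopt;   // an edge pixel: samples differ
    return px[0];                  // one color stands in for all four samples
}
```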
 
Dave Barron wrote an article on MSAA for Firingsquad back in '01. The 3dfx white paper here at B3D might also be useful.
 
Thanks for taking the time to write that, Chalnoth.

John Reynolds - I couldn't find the MSAA article on Firingsquad. There is one on fragment AA and something about antialiasing in a 3D basics article, but thanks anyway. What 3dfx white paper are you referring to?

I think the info I could find should prove enough for a short article on current AA techniques.

Zvekan
 
Zvekan said:
What 3dfx white paper are you referring to?

Zvekan

3dfx paid Dave Barron and Kristof Beets to write a white paper on anti-aliasing back in either late '99 or early '00. It used to be available here just a few months ago, but I'll have to ask what happened to it.
 
Errm...

Zvekan said:
I managed to find the great Super-sampling Anti-aliasing Analyzed article here on Beyond3D, but it obviously only covers supersampling methods. Also, since it was written a couple of years ago, is it still representative of today's usage (in the GeForce mixed modes and in S3 and XGI products)?
 
Xmas said:
Errm...

Well, the article will be dealing with theory but also with implementation. Now, in the mentioned article there are two approaches: OGSS (via an off-screen buffer and resolution scaling) and RGSS (via multiple instances of almost the same frame).
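For reference, the "off-screen buffer and resolution scaling" part of OGSS boils down to rendering into a larger buffer and box-filtering it down; a minimal 2x2 downsample sketch (the buffer layout and function names are just my illustration) looks like this:

```cpp
#include <cstdint>
#include <vector>

// Average four packed 8-bit-per-channel colors, channel by channel:
// a plain 2x2 box filter.
static uint32_t average4(uint32_t a, uint32_t b, uint32_t c, uint32_t d)
{
    uint32_t out = 0;
    for (int shift = 0; shift < 32; shift += 8) {
        const uint32_t sum = ((a >> shift) & 0xFF) + ((b >> shift) & 0xFF) +
                             ((c >> shift) & 0xFF) + ((d >> shift) & 0xFF);
        out |= (sum / 4) << shift;
    }
    return out;
}

// 2x2 OGSS resolve: the scene was rendered at twice the width and height
// into 'big'; every output pixel averages the corresponding 2x2 block.
std::vector<uint32_t> downsample_2x2(const std::vector<uint32_t>& big,
                                     int out_width, int out_height)
{
    const int big_width = out_width * 2;
    std::vector<uint32_t> out(out_width * out_height);
    for (int y = 0; y < out_height; ++y) {
        for (int x = 0; x < out_width; ++x) {
            const int bx = x * 2, by = y * 2;
            out[y * out_width + x] = average4(
                big[by * big_width + bx],       big[by * big_width + bx + 1],
                big[(by + 1) * big_width + bx], big[(by + 1) * big_width + bx + 1]);
        }
    }
    return out;
}
```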

And I was asking whether the GeForce mixed modes are really multisampling with OGSS added on top, whether the S3 DeltaChrome uses off-screen buffers - in other words, for some info about the actual implementation in current hardware.

Thnx,

Zvekan
 
Zvekan said:
Well, the article will be dealing with theory but also with implementation. Now, in the mentioned article there are two approaches: OGSS (via an off-screen buffer and resolution scaling) and RGSS (via multiple instances of almost the same frame).
Just as an fyi, these are just theoretical constructs to visualize what the hardware is attempting to do. The actual framebuffer organization and whatnot doesn't need to have the exact characteristics of the above.

For example, it would be pretty inefficient to perform supersampling in a way such that spatially-local samples are far apart in memory (it makes some sense in the Voodoo5 architecture, but only for two samples, since the only V5 that was actually released had two chips).
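To illustrate that point about memory layout (both layouts and all names here are hypothetical, not how any real chip stores its buffer): with an interleaved layout a pixel's samples sit next to each other, while a plane-per-sample layout - roughly the "render almost the same frame several times" picture - puts them a whole frame apart.

```cpp
#include <cstddef>

// Index of sample s of pixel (x, y) in a WIDTH x HEIGHT buffer with
// SAMPLES samples per pixel, under two different (hypothetical) layouts.
constexpr std::size_t WIDTH = 1024, HEIGHT = 768, SAMPLES = 4;

// Interleaved: all samples of a pixel are adjacent, so a resolve or
// compression pass reads one small contiguous block per pixel.
constexpr std::size_t index_interleaved(std::size_t x, std::size_t y, std::size_t s)
{
    return (y * WIDTH + x) * SAMPLES + s;
}

// Plane per sample: each sample index gets its own full-size buffer, so the
// samples belonging to one pixel end up WIDTH * HEIGHT entries apart.
constexpr std::size_t index_planar(std::size_t x, std::size_t y, std::size_t s)
{
    return s * (WIDTH * HEIGHT) + y * WIDTH + x;
}
```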
 
Chalnoth said:
Just as an fyi, these are just theoretical constructs to visualize what the hardware is attempting to do. The actual framebuffer organization and whatnot doesn't need to have the exact characteristics of the above.

For example, it would be pretty inefficient to perform supersampling in a way such that spatially-local samples are far apart in memory (it makes some sense in the Voodoo5 architecture, but only for two samples, since the only V5 that was actually released had two chips).

If it was inefficient but made sense for 2x on a voodoo5, how come 4x only had twice the performance hit of 2x?
 
Fox5 said:
If it was inefficient but made sense for 2x on a voodoo5, how come 4x only had twice the performance hit of 2x?
I meant to imply that the Voodoo5 didn't need to organize the framebuffer in that way.
 