SSAA vs. MSAA debate

poly-gone said:
Same resolution, otherwise it would become no different than super-sampling. I sample 4 pixels at a time, in 4 sets, thus giving me a 16 tap kernel and average them.
poly-gone, have we convinced you now that you aren't doing MSAA at all?

The whole point of MSAA is to approximate SSAA by still using a higher resolution Z-buffer, a higher resolution colour buffer, and doing all the extra rasterization of supersampling, but doing all the hard stuff (i.e. pixel shading, texture fetches, etc.) only once per output pixel. Downsampling still occurs in the same way as SSAA, except most of the time MSAA is downsampling identical data.
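To make the contrast concrete, here is a minimal Python sketch of the two paths; shade() and covered() are placeholders standing in for the pixel shader and the per-sample coverage/Z test, and none of this is meant to mirror any particular piece of hardware:

```python
SAMPLES_PER_PIXEL = 4

def resolve(sample_colours):
    # Both SSAA and MSAA downsample the same way: average the stored samples.
    return sum(sample_colours) / len(sample_colours)

def ssaa_pixel(prim, samples, shade, covered):
    # Supersampling: run the full shader once per covered sample.
    for s in range(SAMPLES_PER_PIXEL):
        if covered(prim, s):
            samples[s] = shade(prim, s)   # expensive work, up to 4x per pixel

def msaa_pixel(prim, samples, shade, covered):
    # Multisampling: run the shader once per pixel, but keep per-sample
    # coverage and Z, so samples along an edge still resolve to blended colours.
    colour = shade(prim, 0)               # expensive work, 1x per pixel
    for s in range(SAMPLES_PER_PIXEL):
        if covered(prim, s):
            samples[s] = colour
```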

Your method doesn't have a higher resolution Z-buffer, and isn't getting higher resolution edges from a higher sample count during polygon rasterization. So basically your method doesn't resemble MSAA in any way. Uttar was completely justified in berating your method.

By the way, regarding bandwidth consumption at higher resolutions:
Scanout bandwidth at 1600x1200 is about 340 MB/s more than at 1024x768 at 75Hz, i.e. less than 1% of the bandwidth of high-end cards. According to forum members at ATI, ROP:texture bandwidth is about 1:1, and the increased magnification can easily reduce texture bandwidth per pixel by 10-30% (except for post-processing). The saving will be well into the GB/s range.
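For reference, the 340 MB/s figure is just arithmetic, assuming 32-bit scanout at the stated 75Hz refresh:

```python
BYTES_PER_PIXEL = 4   # assuming 32-bit scanout
REFRESH_HZ = 75

def scanout_bytes_per_second(width, height):
    return width * height * BYTES_PER_PIXEL * REFRESH_HZ

extra = scanout_bytes_per_second(1600, 1200) - scanout_bytes_per_second(1024, 768)
print(extra / 1e6)    # ~340 MB/s of additional scanout traffic
```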
 
ShootMyMonkey said:
The general idea with supersampling is that you're trying to raise the Nyquist limit of your sampling grid so that it is at least as high as the resolution of your final output. That's what sort of makes MSAA not really *true* anti-aliasing in my book. SSAA is more correct.
A lot of people seem to misuse the Nyquist criterion when they're talking about anti-aliasing.

The spatial frequency of a polygon edge is infinite. You will never reach the Nyquist rate. For the interior of a polygon, it depends on the variance of the shader's output colour with position. Usually the variance is low enough (e.g. with normal mip-mapped texturing and good anisotropic filtering) that it pretty much matches your screen resolution, in which case MSAA is pretty close to "correct". Supersampling barely helps at all, since mip-mapping has already made a very good approximation of the texture data integrated over the pixel's bounds. In other shaders, like a bumpy surface with a specular reflection, the variance is much higher.

You have to remember that the graphics card can't sample the output image that we should see; it can only take point samples of the infinite-resolution perfect image. Fortunately, because our image is going onto a finite-resolution display, we don't need a perfect reconstruction of the image's data. For each pixel, we are approximating the integral of the perfect image over that pixel, and our ability to distinguish different colours limits the accuracy required for that integral.
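In other words, each output pixel is a numerical estimate of an integral. A toy sketch of that idea, using a made-up analytic function as the "perfect image" and jittered point samples inside the pixel (the function and sample count are arbitrary):

```python
import random

def perfect_image(x, y):
    # Stand-in for the infinite-resolution scene; any analytic function will do.
    return 0.5 + 0.5 * ((int(x * 8) + int(y * 8)) % 2)   # a fine checkerboard

def pixel_colour(px, py, samples_per_pixel=16):
    # Approximate the integral of perfect_image over the unit-square pixel
    # at (px, py) by averaging jittered point samples inside it.
    total = 0.0
    for _ in range(samples_per_pixel):
        total += perfect_image(px + random.random(), py + random.random())
    return total / samples_per_pixel
```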

In the end, all this talk of the Nyquist rate is pretty senseless unless you have the specifics of a particular shader. Then you can decide how much shader supersampling you need, if that's the route you choose to take to reduce aliasing.
 
full-screen AA and anti-aliasing within primitives

Chalnoth said:
Well, MSAA is only anti-aliasing for edges. So it's not true full-screen anti-aliasing (in that while it is applied to the full screen, it doesn't anti-alias the full screen). But it is definitely anti-aliasing, as it does increase the resolution of your sampling grid by the number of samples at triangle edges.

Originally, I believe that the term "full screen anti-aliasing" was coined to distinguish MSAA and SSAA, which anti-alias all primitives, from techniques where you draw some of the primitives in an anti-aliased mode that blends the edges into the existing pixels, e.g. OpenGL's anti-aliased primitive modes. AA lines work great, but I never heard of anyone drawing AA polygon primitives, because when two polygons meet at an edge they need to totally obscure everything behind them, and blending-style AA techniques can't do that.

Of course, back then anti-aliasing was pretty much all about edges -- unless you had a lot of money to spend (or a lot of time for offline rendering), Gouraud shading was as good as it got. As I see it, the harder problem of avoiding artifacts for the colors computed within a primitive has two broad classes of solutions:

1) Supersampling: it's a good solution because it's easy to use, but the rendering time goes up as the square of the resolution. For example, if you sharpen a specular highlight to half the width, you need 4 times as many samples to render it at the same quality.

2) Algorithmic filtering: this can give much better results, e.g. adjusting the exponent for a specular highlight based on the pixel spacing, so that it gets broader as the pixels get farther apart (and thus doesn't fall between the cracks); a rough sketch of this idea follows below. The problem with this approach, of course, is that someone has to work out the algorithms...

Then there's a third approach, which is to use both. I believe that most shadow mapping algorithms combine super-sampling with algorithmic techniques to reduce the number of samples required.
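A rough sketch of the exponent-broadening idea from (2), in Python; the 1/sqrt(n) lobe-width rule of thumb and the way the per-pixel angular step would be obtained are assumptions on my part, not a standard recipe:

```python
def clamped_specular_exponent(exponent, half_angle_step):
    # Broaden a Phong-style highlight so it spans at least roughly one pixel.
    # A cos^n lobe has an angular half-width of very roughly 1/sqrt(n) radians,
    # so clamp n such that the lobe is no narrower than the angle the
    # half-vector sweeps across one pixel (half_angle_step, in radians).
    if half_angle_step <= 0.0:
        return exponent
    return min(exponent, 1.0 / (half_angle_step ** 2))

print(clamped_specular_exponent(256.0, 0.1))   # -> 100.0: widened for sparse pixels
print(clamped_specular_exponent(256.0, 0.01))  # -> 256.0: left alone for dense pixels
```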

Enjoy,
Aranfell

PS: I'm bemused at someone calling a filtering algorithm "software MSAA" -- it doesn't seem to have many points of comparison with hardware MSAA. One could write a true software MSAA algorithm in order to compare MSAA image quality to SSAA image quality, but one couldn't use it to draw any conclusions about hardware performance.
 
aranfell said:
Originally, I believe that the term "full screen anti-aliasing" was coined to distinguish MSAA and SSAA,
Well, "Full scene anti-aliasing" (full-screen is a misnomer) is much, much older than the multisampling we know and love today. From what I remember, it was coined to distinguish supersampling from edge anti-aliasing, which required software support.

Multisampling came later, and can still be distinguished from those early edge anti-aliasing algorithms because it is both transparent to software and applied to the entire scene.

So really, the term FSAA became obsolete a number of years before multisampling hit the PC graphics scene.

P.S. Bear in mind that the algorithm which we call multisampling today is not the only definition of multisampling. For example, many have called 3dfx's technique used in the V4/5 multisampling (the discrepancy being related to how the pixel samples are generated: the V4/5's supersampling isn't simply a higher-resolution image scaled down).
 
i believe the term multisampling (multisample antialiasing) dates from SGI's RealityEngine (1992). the term has been associated with the vendor-specific opengl extension since at least 1994.
 
Mathematically, which situation will outperform the other depends completely on where the bottleneck is (whether the app is bandwidth or fillrate limited). I seem to recall banging my head against the wall several years ago with 3dfx zealots who insisted on benchmarking CPU-limited titles.

Either way, the benefits that MSAA has over SSAA are clear, especially if you include alpha checks. I think it's a vastly preferable hardware solution until such time as fillrate/bandwidth is no longer an issue (say, the bottleneck moves completely to the vertex and pixel lighting stage).
 
Fred said:
Either way, the benefits that MSAA has over SSAA are clear, especially if you include alpha checks. I think it's a vastly preferable hardware solution until such time as fillrate/bandwidth is no longer an issue (say, the bottleneck moves completely to the vertex and pixel lighting stage).
Unlikely to ever happen.
 
aranfell said:
AA lines work great, but I never heard of anyone drawing AA polygon primitives, because when two polygons meet at an edge they need to totally obscure everything behind them, and blending-style AA techniques can't do that.
I think the original Tomb Raider did this kind of AA. Glide, like OpenGL, has the concept of an edge flag that indicates whether an edge is a boundary or non-boundary edge. grAADrawTriangle() took 3 vertices and 3 edge flags. Calculating the edge flags added some work for the CPU, but it was far less than software rasterizing, so the hardware was still much faster with AA.
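As an aside, here is a rough Python sketch of the kind of edge-flag bookkeeping that work implies; it flags mesh-boundary edges (those used by only one triangle), and a real path would also need to treat silhouette edges as boundaries. None of this is actual Glide code.

```python
from collections import Counter

def boundary_edge_flags(triangles):
    # For each triangle (a, b, c) of vertex indices, return booleans saying
    # whether edges ab, bc and ca are used by only one triangle in the mesh.
    use_count = Counter()
    for a, b, c in triangles:
        for e in ((a, b), (b, c), (c, a)):
            use_count[frozenset(e)] += 1
    return [tuple(use_count[frozenset(e)] == 1 for e in ((a, b), (b, c), (c, a)))
            for a, b, c in triangles]

# Two triangles sharing the edge between vertices 1 and 2:
print(boundary_edge_flags([(0, 1, 2), (1, 3, 2)]))
# [(True, False, True), (True, True, False)] -- the shared edge is not a boundary
```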

1) Supersampling: it's a good solution because it's easy to use, but the rendering time goes up as the square of the resolution. For example, if you sharpen a specular highlight to half the width, you need 4 times as many samples to render it at the same quality.
I still wouldn't call that the square of the resolution, because resolution for a 2D image is samples per area. ;)

Fred said:
Either way, the benefits that MSAA has over SSAA are clear, especially if you include alpha checks. I think it's a vastly preferable hardware solution until such time as fillrate/bandwidth is no longer an issue (say, the bottleneck moves completely to the vertex and pixel lighting stage).
Err, did you swap MSAA and SSAA here?
 
The spatial frequency of a polygon edge is infinite. You will never reach the Nyquist rate.
That's true, but the sampling of it is not infinite in frequency. If you read my post again, I said the Nyquist limit of the sampling grid, not the information that's being sampled.

Usually the variance is low enough (e.g. with normal mip-mapped texturing and good anisotropic filtering)
I think the "good" part is the wishful thinking aspect.
 
ShootMyMonkey said:
That's true, but the sampling of it is not infinite in frequency. If you read my post again, I said the Nyquist limit of the sampling grid, not the information that's being sampled.
Huh? You're not sampling the sampling grid, so how does the Nyquist limit apply?
 
Huh? You're not sampling the sampling grid, so how does the Nyquist limit apply?
Who said anything about sampling of samples? I'm talking about the superresolution sampling grid which samples the spatial representations (i.e. that you're effectively rendering at a higher resolution). For instance, with 4xSSAA, the sampling grid is double the resolution (in both axes) of the final image. The Nyquist limit of that superresolution sampling grid would be the resolution of the final image, which in turn means that you've sampled at a high enough frequency to represent detail on the same scale as a single pixel in the final image.
 
ShootMyMonkey said:
Who said anything about sampling of samples? I'm talking about the superresolution sampling grid which samples the spatial representations (i.e. that you're effectively rendering at a higher resolution). For instance, with 4xSSAA, the sampling grid is double the resolution (in both axes) of the final image. The Nyquist limit of that superresolution sampling grid would be the resolution of the final image, which in turn means that you've sampled at a high enough frequency to represent detail on the same scale as a single pixel in the final image.
The Nyquist limit doesn't apply in this instance because the samples are being averaged, and the frequency of the samples is a set multiple of the frequency of the final display.

In other words, it is not possible to add aliasing to the image when downsampling to the final frame.
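To make the downsampling step concrete, a minimal numpy sketch of a 4x resolve, assuming a supersample buffer at twice the resolution in each axis with a trailing channel dimension:

```python
import numpy as np

def resolve_4x(super_buffer):
    # Average each aligned 2x2 block of samples into one output pixel.
    # super_buffer is assumed to have shape (2*H, 2*W, channels).
    h2, w2, c = super_buffer.shape
    return super_buffer.reshape(h2 // 2, 2, w2 // 2, 2, c).mean(axis=(1, 3))

frame = resolve_4x(np.random.rand(2 * 1200, 2 * 1600, 3))
print(frame.shape)   # (1200, 1600, 3)
```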
 
Chalnoth said:
Bear in mind that the algorithm which we call multisampling today is not the only definition of multisampling. For example, many have called 3dfx's technique used in the V4/5 multisampling (the discrepancy being related to how the pixel samples are generated: the V4/5's supersampling isn't simply a higher-resolution image scaled down).

i believe people perceiving vsa100's fsaa as multisampling was a misunderstanding at the time.. anyway, it is a clear example of accumulation-buffer ssaa, as long as the sampling offsets are within the pixel unit (as it was technically possible for them to be 4 pixels away, IIRC).
 
darkblu said:
i believe people perceiving vsa100's fsaa as multisampling was a misunderstanding at the time..
Well, no, I don't think so. It's just a different definition of multisampling than that which is common today.
 
I guess that according to that meaning, "multisampling" was for rendering at target resolution with multiple samples (hence the name), in contrast to NV1x supersampling, which rendered "supersize" then downsampled.

Nowadays we'd still say the latter is supersampling, and may further call it oversampling, while supersampling also applies to the 3dfx method and other kinds of RGSS.

Did I get it right? :)
 
ShootMyMonkey said:
Who said anything about sampling of samples? I'm talking about the superresolution sampling grid which samples the spatial representations (i.e. that you're effectively rendering at a higher resolution). For instance, with 4xSSAA, the sampling grid is double the resolution (in both axes) of the final image. The Nyquist limit of that superresolution sampling grid would be the resolution of the final image, which in turn means that you've sampled at a high enough frequency to represent detail on the same scale as a single pixel in the final image.
You're just proving my point about not understanding the applicability (or lack thereof) of the Nyquist limit.

The maximum spatial frequency of a 1600x1200 resolution is 800 cycles across and 600 vertically. You double that (a la Nyquist) and you're back to your original resolution. Basically you've gone in a circle, while saying nothing useful about AA.

Full screen supersampling is next to useless. You get a very marginal increase in quality over MSAA, unless a specific shader needs it. Then you supersample that part, or change your shader. You could analyse the shader and figure out the Nyquist limit if you want, but there are no blanket statements that you can make.
 
The maximum spatial frequency of a 1600x1200 resolution is 800 cycles across and 600 vertically. You double that (a la Nyquist) and you're back to your original resolution.
You've just pointed out on your own exactly why it does apply. There's more information gathered when sampling at the higher resolution. The averaging of the samples is equivalent to a downsampling of the superresolution image (e.g. sampling a texture at a point where 4 texels meet). If you had not gathered any extra information relative to the original image resolution, then there would be no difference from not having AA at all. 4x means you've at least gathered a sufficient (or barely sufficient) set of sample data on the objects. You can think of it again with 16x... the sampling can cover a frequency range that much greater, and thus the average which brings it down to the target resolution is based on information that covers a significantly higher frequency range.

Full screen supersampling is next to useless. You get a very marginal increase in quality over MSAA, unless a specific shader needs it.
I seriously doubt that there are very many MSAA implementations that are as sophisticated as you're making them out to be, though that may just be my cynical nature and my inherent distrust of what companies will tell you is "good enough". And full screen supersampling does have its place.
 
ShootMyMonkey:
You probably missed that Mintmaster talked about a final resolution of 1600x1200.
Read his post again with that in mind.

Note: everybody here agrees that sampling with a higher frequency than the final output will raise the Nyquist frequency (Nf) for that sampling. The subsequent averaging will work as a low-pass filter that (partly) filters out the frequencies that lie between the Nf of the final image and the Nf of the supersampled image. (You could actually make that (partly) the frequencies from Nf{final} to 2*Nf{super} - Nf{final}.)
We agree on that part.

But there's nothing "barely sufficient" wrt Nyquist's theorem with doing supersampling (or MSAA) with a factor two in each direction.
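To put a rough number on that "partly", here is a 1D numpy sketch with a test frequency that lies above the output Nyquist frequency but below the supersampled one (the signal length and frequency are arbitrary choices):

```python
import numpy as np

n_out = 64    # output "pixels"
f = 0.75      # cycles per output pixel: above the output Nyquist frequency of 0.5

# Sampling only at the output rate: the component folds down to 0.25 cycles/pixel
# at full amplitude.
direct = np.sin(2 * np.pi * f * np.arange(n_out))

# 2x supersampling followed by a 2-tap average: the component still folds to
# 0.25 cycles/pixel, but attenuated by the box filter rather than passed whole.
supersampled = np.sin(2 * np.pi * f * np.arange(2 * n_out) / 2)
resolved = supersampled.reshape(n_out, 2).mean(axis=1)

peak = lambda x: np.abs(np.fft.rfft(x)).max()
print(peak(resolved) / peak(direct))   # ~0.38: removed partly, not completely
```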
 
To return to an earlier point about the computational costs of SSAA vs. MSAA...

The basic problem of ordered-grid supersampling is that if you want to render a feature 1/Nth the size at the same quality, you need N^2 times as many samples. Consider a thin polygon that rotates on the screen -- if it were just horizontal or just vertical, you could get by with N times as many samples, but when it can be at any angle, you need N^2 times as many samples (on an ordered grid).

One solution is to use a non-ordered grid. This allows you to support features 1/Nth the size with just N times as many samples, at a quality approaching using N^2 samples on an ordered grid. But there are some serious computational issues involved with non-ordered-grid supersampling, because a lot of algorithms (e.g. texture filtering and mip level calculations) depend on the ordered grid.
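A tiny sketch of what the non-ordered grid buys you; the offsets below are made up, chosen only so that no two samples in the sparse pattern share a row or a column:

```python
# 4 samples per pixel, offsets in [0, 1) pixel space; positions are illustrative.
ordered_grid = [(0.25, 0.25), (0.75, 0.25), (0.25, 0.75), (0.75, 0.75)]
sparse_grid = [(0.125, 0.375), (0.375, 0.875), (0.625, 0.125), (0.875, 0.625)]

def distinct_positions(samples):
    xs = {x for x, _ in samples}
    ys = {y for _, y in samples}
    return len(xs), len(ys)

# A nearly vertical (or horizontal) edge sweeping across the pixel can only
# produce as many distinct coverage values as there are distinct sample
# columns (or rows).
print(distinct_positions(ordered_grid))  # (2, 2)
print(distinct_positions(sparse_grid))   # (4, 4)
```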

Multi-sampling can use a non-ordered grid because only one color is computed per pixel, rather than one per sample, so the colors are still computed on an ordered grid (provided one can solve the center vs. centroid issue). This lets you get full scene edge AA at a quality approximating ordered-grid N^2 samples per pixel with only N samples per pixel, with minimal extra color computation time. Given a good compression algorithm, the increase in memory bandwidth required is a lot less than N as well.

Note that I'm assuming here that both SSAA and MSAA involve filtering from the sample resolution to the pixel resolution after rendering. This post-filtering is what distinguishes "super-sampling" from ordinary rendering.

Also note that I'm mostly talking about edge AA here. I don't think a non-ordered grid helps much when anti-aliasing a specular highlight that is 1/Nth as wide. For that problem, I believe that one either needs more samples (at an N^2 cost) or an algorithm that spreads the specular highlight based on the sample or pixel spacing.

Aranfell
 