"ATi & nVidia Anti-Aliasing Performance"

My understanding is that MSAA and AF are designed to work hand in hand, so could you really run a fair AA comparison without it? After all, nVidia's SS would surely make its AA look far closer to the quality of ATi's, right up until the point where you realise that switching AF on widens the gap once more.
 
Quitch said:
My understanding is that MSAA and AF are designed to work hand in hand, so could you really run a fair AA comparison without it? After all, nVidia's SS would surely make its AA look far closer to the quality of ATi's, right up until the point where you realise that switching AF on widens the gap once more.
They weren't "designed to work hand in hand". They are independent techniques and were invented at different times. In fact, forms of anisotropic texture filtering (e.g. see Ed Catmull's original paper) were used long before forms of isotropic filtering, but the latter became popular because of Lance Williams's work (see "Pyramidal Parametrics").

MSAA relies on the fact that while the edges of polygons have infinite frequency content, the texturing and lighting generally contain less "aliasing-unfriendly" information, which can be handled with cheaper forms of filtering.
 
Quitch said:
Yes, but don't MSAA & AF do the job of SS, but in a faster way... without touching alpha textures?
Super-sampling does something to textures that AF cannot do. The look is smoother, and it removes any shimmering artifacts that can come from insufficient mipmapping. Back when the Serious Sam 2 demo was released, there was a -heck- of a lot of this shimmering apparent. Turn on 4x FSAA on a Radeon 8500 and it was all gone. The image looked great -- though performance was crap...
 
Quitch said:
Yes, but don't MSAA & AF do the job of SS, but in a faster way... without touching alpha textures?
No - the overall visual results may be similar or not, depending on how the pixel is being shaded.

In the simplest cases, where you have simple texture inputs and lighting, the results will generally be pretty close to indistinguishable, but in cases where you introduce some highly uncorrelated dependence between the texture data and the shader output (such as in a dependent texture read) the results can diverge sharply.

As a simple example of how you can generate different results, say I sample a coordinate from texture map A that I then use to sample from texture map B, so my shading equation in pseudocode is simply:

Code:
new_coordinate = SAMPLE map A AT original_coordinate
out = SAMPLE map B AT new_coordinate

Now, let's say that texture map B contains a sin() function, that repeats over the texture coordinate range 0->1.

So, first the case of MSAA with texture filtering:

For convenience's sake, let's say that my sample footprint covers exactly 4 texels in texture A, and the values of those texels when I read them turn out to be 0, 0, 0, and 1.

I then filter these texels and get (0 + 0 + 0 + 1) / 4 = 0.25.

I look up the result once in texture map B and I get sin(0.25 * 2 * PI) = 1

So my output pixel is white.

Now let's do it with supersampling. I now have 4 times as many texture samples, so the texture footprint for each sub-pixel sample covers exactly one of the original 4 texels from the MSAA example above. I now sample the whole shading equation 4 times and get the following results:

Subpixel 1 = sin(0 * 2 * PI) = 0
Subpixel 2 = sin(0 * 2 * PI) = 0
Subpixel 3 = sin(0 * 2 * PI) = 0
Subpixel 4 = sin(1 * 2 * PI) = 0
Output = (0 + 0 + 0 + 0) / 4 = 0

So my output pixel from the supersampled version is black.

This is an extreme example, but it shows how pre-filtering the textures and post-filtering the result of the whole shading equation can in fact give completely different results.
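The two filtering orders above can be reproduced numerically. This is a toy Python sketch of the arithmetic only (the texel values, the sin() lookup, and the 4-sample footprint are taken from the example; it does not model real hardware filtering):

```python
import math

# Texture A footprint: four texels with values 0, 0, 0, 1.
texels = [0.0, 0.0, 0.0, 1.0]

# Texture B is a sin() that repeats over the coordinate range 0..1.
def texture_b(coord):
    return math.sin(coord * 2 * math.pi)

# MSAA + texture filtering: filter texture A first, then one lookup into B.
filtered_coord = sum(texels) / len(texels)   # (0 + 0 + 0 + 1) / 4 = 0.25
msaa_result = texture_b(filtered_coord)      # sin(0.25 * 2 * pi) = 1.0

# Supersampling: look up B once per sub-pixel sample, then average.
ss_result = sum(texture_b(t) for t in texels) / len(texels)

print(msaa_result)  # 1.0  -> white pixel
print(ss_result)    # ~0.0 -> black pixel
```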
 
andypski said:
Theoretically speaking the result from supersampling is more correct.

Really! I thought exactly the opposite.

The SS version has no reconstruction filter on the original texture (assuming it was meant to be a continuous function). The fact that you're then doing a transform on that function is neither here nor there. You've already introduced the aliases by point-sampling your original texture.

I'll admit I was baiting you to see what you'd say. :p Sorry 'bout that, but it's definitely a good example of how a non-simple transform function can mean that filtering in screen space doesn't give filtering in texture space.
 
PSarge said:
Really! I thought exactly the opposite.

The SS version has no reconstruction filter on the original texture (assuming it was meant to be a continuous function). The fact that you're then doing a transform on that function is neither here nor there. You've already introduced the aliases by point-sampling your original texture.

I'll admit I was baiting you to see what you'd say. :p Sorry 'bout that, but it's definitely a good example of how a non-simple transform function can mean that filtering in screen space doesn't give filtering in texture space.

I figured you were baiting me :)

However...

If I were to take an infinite number of samples from the original texture in the region that I specified encompassing those 4 texels, and summed the results of looking up those values in texture B, then the answer from supersampling would be correct - it is the act of limiting it to 4 samples that introduces the errors from a lack of filtering on the input.

Of course in real cases the original function could just as easily be a step function, rather than a continuous one, in which case it is the supersampling interpretation that is correct, and pre-filtering the input is completely wrong.

In real terms with current hardware you would get pre-filtering on the texture samples for the supersampling case as well, of course, but this makes the example much more complicated to explain.

[EDIT - fingers are all over the place today]
 
He didn't say what degree of supersampling was being applied :)

At an infinite degree of supersampling, I would have thought the result would be perfectly correct, while even infinite MSAA and anisotropic filtering cannot give the correct result :)

(edit: actually that might only be true if the pixel shader function that uses the result of the lookup is linear?)
 
Dio said:
(edit: actually that might only be true if the pixel shader function that uses the result of the lookup is linear?)

I think that's the key point. Looking at it from a signal-processing point of view, the alias frequencies introduced when sampling the texture can be removed by the low-pass "supersampling" filter if the pixel shader function is linear (defined for now as "won't introduce any frequency shifts"). If it's not, those frequencies can be transformed into the range below the low-pass filter, and hence get into the visible image. It doesn't matter how many samples you take.

The reverse situation is also true. Take a texture, use an uberfilter to give a perfectly filtered result, and then input it to a non-linear transform function. Now the frequencies that were all in range have moved around and are going to give you aliases. You need to filter the result.
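That frequency-shift argument can be made concrete with a toy Python sketch, using squaring as a stand-in for a non-linear shader (the sample rate, the frequency, and the naive DFT are all illustrative choices, not anything from the thread):

```python
import math
import cmath

N = 8   # screen-space samples per unit length
f = 3   # texture frequency, below the Nyquist limit N/2 = 4

# Perfectly band-limited input: sin(2*pi*f*x) sampled at rate N.
samples = [math.sin(2 * math.pi * f * n / N) for n in range(N)]

# A non-linear "shader": squaring. sin^2 contains a component at
# frequency 2f = 6, above Nyquist, which aliases down to N - 2f = 2.
shaded = [s * s for s in samples]

# Naive DFT magnitude, to inspect which bins each signal occupies.
def dft_mag(x, k):
    return abs(sum(v * cmath.exp(-2j * math.pi * k * n / len(x))
                   for n, v in enumerate(x)))

print(dft_mag(samples, 2))  # ~0: no energy at frequency 2 before shading
print(dft_mag(shaded, 2))   # ~2: alias energy at frequency 2 after shading
```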

I don't think you can make do with just screen space filtering or just texture space filtering, no matter how many samples you take. Ideally, as Andy said, you want both texture filtering and screen space filtering.

SS + AF - The only way to fly! (Now that we've got pixel shaders)
 
None of the methods are "correct", because they use the wrong mipmap in the second step.
If the right mipmap had been used, supersampling would have been the correct one. Note that you should still use at least a bilinear filter for each sample, and then select the mipmap for the second sample from the derivative of the first sample.
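As a rough illustration of that last point, here is one way the mipmap level for the second lookup could be picked from the derivative of the first sample (the coordinate values, the texture size, and the finite-difference stand-in for the derivative are all hypothetical):

```python
import math

# Hypothetical: the dependent coordinate sampled from map A at two
# neighbouring screen pixels (a finite-difference stand-in for ddx).
coord_here = 0.20
coord_right = 0.35
MAP_B_SIZE = 256  # texels across map B

# Footprint in map B texels = |d(coord)/d(pixel)| * texture size.
footprint = abs(coord_right - coord_here) * MAP_B_SIZE

# Mipmap level: log2 of the footprint, clamped to the base level.
lod = max(0.0, math.log2(footprint))
print(lod)
```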

Another, more commonly occurring example would be bumpy reflections.
Let's say you're looking along a shiny water surface with a normal map.
With prefiltering of the normal map, the resulting normal would be an average of the normals you filter over. This normal will vary less than the real normal, and the result is that the water far away will (artificially) look calm and mirror-like.

However, if you run the pixel shader (reflection) for each sample and then average the results, the reflection will (correctly) become fuzzier when more waves contribute to one pixel.
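That divergence is easy to demonstrate with a toy Python sketch contrasting "average the normals, then shade" with "shade each normal, then average" (the specular function, half-vector, and normal values are all made up for illustration):

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

# Hypothetical specular term: alignment of the normal with a fixed
# half-vector, raised to a shininess exponent.
HALF = (0.0, 0.0, 1.0)
def specular(normal):
    d = max(0.0, sum(a * b for a, b in zip(normal, HALF)))
    return d ** 64

# Four normal-map texels under one pixel: waves tilting left and right.
normals = [normalize((0.3, 0.0, 1.0)), normalize((-0.3, 0.0, 1.0)),
           normalize((0.3, 0.0, 1.0)), normalize((-0.3, 0.0, 1.0))]

# Prefiltering: average the normals first, then shade once.
# The tilts cancel, so the averaged normal points straight up.
avg = normalize(tuple(sum(c) / len(normals) for c in zip(*normals)))
prefiltered = specular(avg)          # full, mirror-like highlight

# Supersampling: shade each sample, then average the shaded results.
supersampled = sum(specular(n) for n in normals) / len(normals)

print(prefiltered, supersampled)  # prefiltered is far brighter
```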

SSAA isn't the ultimate solution to that, but it does enhance the image in a way that MSAA+AF can't.
 
Ailuros said:
In the majority of cases a higher resolution with an MSAA/AF combination cures most problems apart from alphas. Any form of supersampling will restrict you to specific resolutions, due to the fillrate impact. Try F1 2002 with 4xS in, let's say, 1024*768*32 vs. 4x sparse/16xAF in 1280*960*32 or higher. Supersampling is NOT the ultimate cure for everything. It's still just ONE axis that you'll get supersampling on with 4xS. If we were talking about THE ultimate solution present in today's hardware, it would be 8xS (2x sparse + 4xOGSS) combined with 8xAF. But since resolutions and performance matter to me too, it's not that usable after all, apart from some rare, extremely CPU-bound corner cases. High resolutions are out of the question too.

This article doesn't compare higher-resolution MS/AF against lower-res SS/AF; it seems to compare MS/AF against MS+SS/AF at the same resolution.

I just said SS+MS+AF looks better than just MS + AF. I don't argue against AF at all.
 
PSarge said:
I don't think you can make do with just screen space filtering or just texture space filtering, no matter how many samples you take. Ideally, as Andy said, you want both texture filtering and screen space filtering.
Ah, but as soon as nonlinear functions are applied, AF may be making incorrect assumptions as well (e.g. when filtering a bump map). So that is equally broken ;)
 
Mephisto said:
Ailuros said:
In the majority of cases a higher resolution with an MSAA/AF combination cures most problems apart from alphas. Any form of supersampling will restrict you to specific resolutions, due to the fillrate impact. Try F1 2002 with 4xS in, let's say, 1024*768*32 vs. 4x sparse/16xAF in 1280*960*32 or higher. Supersampling is NOT the ultimate cure for everything. It's still just ONE axis that you'll get supersampling on with 4xS. If we were talking about THE ultimate solution present in today's hardware, it would be 8xS (2x sparse + 4xOGSS) combined with 8xAF. But since resolutions and performance matter to me too, it's not that usable after all, apart from some rare, extremely CPU-bound corner cases. High resolutions are out of the question too.

This article doesn't compare higher-resolution MS/AF against lower-res SS/AF; it seems to compare MS/AF against MS+SS/AF at the same resolution.

I just said SS+MS+AF looks better than just MS + AF. I don't argue against AF at all.

That's true. I obviously got carried away; my apologies.
 