Does P.S. run multiple times under super sampling AA?

DaveBaumann said:
After something Richard Huddy said to me yesterday, it clicked what the point of centroid sampling was in R300 - they actually mentioned it in their marketing at the start, but it never panned out. Centroid sampling can be used to sample alphas multiple times such that alphas can receive some kind of texture sampling and work better with MSAA.

Detailed explanation please?
I don't understand how that would work out. Centroid sampling samples pixels from the center; how would that affect alpha textures?
 
OpenGL guy said:
Simon F said:
To answer the OP's question - Yes, but it also has to run multiple times in multisampling implementations whenever a boundary crosses through a pixel.
I don't think so. For a single polygon with MSAA, each pixel with at least one sample covered by the polygon only gets a single color from the pixel shader. The samples covered by the polygon all get this color then depth checking is done to see what gets written. If two polygons intersect, it's the same deal: The samples covered by one polygon only get a single color from that polygon.

Doing supersampling on the edges only would give rise to strange filtering artifacts, such as when rendering a quad subdivided as two triangles.

You've misunderstood me. If you have N sub-pixels/pixel in a multisampling system then you can have up to N different polys in a pixel and hence up to N different PS operations.
 
Simon F said:
You've misunderstood me. If you have N sub-pixels/pixel in a multisampling system then you can have up to N different polys in a pixel and hence up to N different PS operations.
That's true, but still only one PS op per polygon.
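The distinction the two posts above are drawing can be sketched with a toy Python model (this is purely illustrative, not a description of any real pipeline; the function and its arguments are invented):

```python
# Toy model of pixel-shader invocation counts per pixel. With MSAA the shader
# runs once per polygon that covers at least one sample of the pixel; with
# SSAA it runs once per covered sample of every polygon.

def ps_invocations(samples_per_pixel, covered_samples_per_poly, mode):
    """covered_samples_per_poly: one entry per polygon touching the pixel,
    giving how many of the pixel's samples that polygon covers."""
    if mode == "msaa":
        # One pixel-shader run per polygon covering >= 1 sample.
        return sum(1 for c in covered_samples_per_poly if c > 0)
    elif mode == "ssaa":
        # One pixel-shader run per covered sample, per polygon.
        return sum(covered_samples_per_poly)
    raise ValueError(mode)

# One triangle fully covering a 4x-sampled pixel: MSAA shades once, SSAA four times.
print(ps_invocations(4, [4], "msaa"))  # 1
print(ps_invocations(4, [4], "ssaa"))  # 4
# Two triangles meeting inside the pixel: up to N polygons -> up to N MSAA runs.
print(ps_invocations(4, [2, 2], "msaa"))  # 2
```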
 
991060 said:
thanks Humus and Simon, I think I've got the idea now.

and BTW, why does nvidia still implement SSAA within their AA algorithm (when it's above 6xAA IIRC), given that NV3X is already in an inferior position to R3XX in terms of pixel (fragment) shading power?

Because their hardware related to AA is not as flexible as ATI's,
and it cannot
a) use more than 4 Z samples per color sample
b) align the samples as freely as ATI -> worse sampling patterns
 
DaveBaumann said:
Centroid sampling can be used to sample alphas multiple times such that alphas can receive some kind of texture sampling and work better with MSAA.

Thanks, this is neat information to know -- if accurate.
 
SirPauly said:
DaveBaumann said:
Centroid sampling can be used to sample alphas multiple times such that alphas can receive some kind of texture sampling and work better with MSAA.

Thanks, this is neat information to know -- if accurate.
I don't think this could possibly be the case.
 
hkultala said:
991060 said:
thanks Humus and Simon, I think I've got the idea now.

and BTW, why does nvidia still implement SSAA within their AA algorithm (when it's above 6xAA IIRC), given that NV3X is already in an inferior position to R3XX in terms of pixel (fragment) shading power?

Because their hardware related to AA is not as flexible as ATI's,
and it cannot
a) use more than 4 Z samples per color sample
b) align the samples as freely as ATI -> worse sampling patterns

c) And they resolve in gamma space, not in linear space, which leads to the wrong final color value.
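Point (c) is easy to demonstrate numerically. Below is a small Python sketch (assuming gamma 2.2 and a simple two-sample box resolve; the function names are invented for illustration):

```python
# Averaging AA samples directly in gamma-encoded space gives a different
# (darker) result than decoding to linear light, averaging, and re-encoding.
# Gamma 2.2 is assumed here purely for illustration.

GAMMA = 2.2

def resolve_gamma_space(samples):
    # Naive resolve: average the stored (gamma-encoded) values directly.
    return sum(samples) / len(samples)

def resolve_linear_space(samples):
    # Correct resolve: decode to linear light, average, re-encode.
    linear = [s ** GAMMA for s in samples]
    avg = sum(linear) / len(linear)
    return avg ** (1.0 / GAMMA)

# A black/white edge pixel with two samples:
edge = [0.0, 1.0]
print(round(resolve_gamma_space(edge), 3))   # 0.5
print(round(resolve_linear_space(edge), 3))  # ~0.73, visibly lighter
```

The gap between the two results is what makes gamma-space resolves produce the wrong final color on high-contrast edges.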
 
I actually don't see how centroid sampling could affect alpha sampling. I might be missing something (If Richard said so, then I need to think about it some more).

On the other hand, R3xx does support alpha-to-mask, which allows the alpha value to be used as a coverage mask for the fragment. This could be used to AA alpha edges within textures. However, this cannot be done automatically -- The application must enable it (through GL calls; I assume there's an equivalent in DX) and use it.

Edit: Well, I can see that centroid sampling will adjust the sample position on the texture, and if that lines up with an alpha edge, it might reduce some aliasing artifacts due to center sampling. Overall, that should improve alpha edges, but it won't fix them completely. Alpha to mask is probably still a better solution. Now I'm curious to try it out...
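The sample-position adjustment being described can be sketched as a toy Python model: instead of evaluating attributes at the pixel center, centroid sampling evaluates them at the centroid of the covered samples, which lies inside the polygon. The 4x sample pattern below is invented for illustration:

```python
# Toy model of centroid sampling. When a polygon only partially covers a
# pixel, interpolating at the pixel center can land outside the polygon
# (extrapolating texture coordinates); the centroid of the covered samples
# cannot. Offsets are relative to the pixel center; the pattern is made up.

SAMPLES = [(-0.3, -0.1), (0.1, -0.3), (-0.1, 0.3), (0.3, 0.1)]

def sample_position(covered, centroid=False):
    """covered: booleans, one per sample. Returns the point (relative to
    the pixel center) at which attributes are evaluated."""
    if not centroid or all(covered):
        return (0.0, 0.0)  # plain center sampling
    pts = [p for p, c in zip(SAMPLES, covered) if c]
    n = len(pts)
    return (sum(x for x, _ in pts) / n, sum(y for _, y in pts) / n)

# Polygon covers only the left two samples: the evaluation point shifts
# toward the covered side, keeping the texture lookup inside the polygon.
print(sample_position([True, False, True, False], centroid=True))
```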
 
Can you please keep us informed of the results Mr. Demers? :)

I'm looking forward to what you guys at Ati can do.
 
sireric said:
On the other hand, R3xx does support alpha-to-mask, which allows the alpha value to be used as a coverage mask for the fragment. This could be used to AA alpha edges within textures. However, this cannot be done automatically -- The application must enable it (through GL calls; I assume there's an equivalent in DX) and use it.
Unfortunately, alpha-to-coverage is not supported in DX (otoh, OpenGL doesn't support multisample bit masks - ah, I guess you can't have everything). But a2c shares some properties of both alpha blending and alpha test. When used just for punch-through textures, it can be considered order-independent (with some minor flaws), but just as with blending you need high-res textures if you want sharp-but-smooth edges instead of a blurry mess when you get close.

When you really want smooth alpha test edges, there's no way around taking several samples from the texture(s) that contribute to the final alpha value and doing the test several times per pixel, like supersampling does.

Edit: Well, I can see that centroid sampling will adjust the sample position on the texture, and if that lines up with an alpha edge, it might reduce some aliasing artifacts due to center sampling. Overall, that should improve alpha edges, but it won't fix them completely. Alpha to mask is probably still a better solution. Now I'm curious to try it out...
Centroid sampling only makes a difference if an alpha edge lines up with an "outer" polygon edge, and that should be quite a rare case.
 
Certainly. The centroid sampling only makes a difference on edges intersecting alpha. For a finely tessellated area, it might help. It does not replace alpha to mask. Still curious to see if there's a visual improvement.

As for alpha to mask, it cannot be used to get a generalized solution on alpha textures. The application must be aware and use it correctly. In fact, it only gives you the option of (n) alpha values, since it gets quantized to the number of samples you have within your pixels. Consequently, the alpha values within the transparencies must be constant, and variations are only usable for coverage.
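The quantization being described can be sketched like this (a toy model with invented names; real hardware typically also dithers the mask to hide the banding):

```python
# Toy alpha-to-mask: with N samples per pixel, the alpha value collapses to
# one of N+1 coverage levels, so smooth alpha gradients cannot survive.

def alpha_to_mask(alpha, num_samples=4):
    """Quantize an alpha in [0, 1] to a per-sample coverage mask using a
    simple nearest-level rule."""
    covered = round(alpha * num_samples)
    return [i < covered for i in range(num_samples)]

for a in (0.0, 0.2, 0.5, 0.9, 1.0):
    mask = alpha_to_mask(a)
    print(a, mask, sum(mask) / len(mask))  # effective alpha is quantized
```

With 4 samples, every alpha in [0, 1] lands on one of only 5 distinct masks, which is why varying alpha is only usable for coverage, not for smooth transparency.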

Yes, a supersampling solution is a generalized solution for this problem, though I don't think SS is a good solution for high-frequency problems in textures generally, just for alpha texels. There are other partial solutions that could work (detecting alpha conditions and applying SS just to those pixels would be more reasonable).
 
sireric said:
Yes, a supersampling solution is a generalized solution for this problem, though I don't think SS is a good solution for high-frequency problems in textures generally, just for alpha texels. There are other partial solutions that could work (detecting alpha conditions and applying SS just to those pixels would be more reasonable).
It just occurred to me that you could probably use the gradients of the texture data to precisely calculate where the alpha crosses the alpha test threshold value. But then again, gradient instructions are only an approximation that may not be precise enough.

Applying SS "just to those pixels" requires either dedicated hardware or real, fast branching in the PS.
 
Xmas said:
sireric said:
Yes, a supersampling solution is a generalized solution for this problem, though I don't think SS is a good solution for high-frequency problems in textures generally, just for alpha texels. There are other partial solutions that could work (detecting alpha conditions and applying SS just to those pixels would be more reasonable).
It just occurred to me that you could probably use the gradients of the texture data to precisely calculate where the alpha crosses the alpha test threshold value. But then again, gradient instructions are only an approximation that may not be precise enough.

Applying SS "just to those pixels" requires either dedicated hardware or real, fast branching in the PS.
Assuming the app already uses at least bilinear filtering in conjunction with alpha testing (e.g. Max Payne 2 apparently does), you don't need gradients. You could evaluate the alpha test for some, or all, samples pre-filter.

Doesn't make it much less complicated, of course :?
 
zeckensack said:
Assuming the app already uses at least bilinear filtering in conjunction with alpha testing (e.g. Max Payne 2 apparently does), you don't need gradients. You could evaluate the alpha test for some, or all, samples pre-filter.

Doesn't make it much less complicated, of course :?
I don't think I understand what you mean.

If you wanted perfectly smooth alpha-test edges, you'd have to project the area of a pixel into texture space and calculate how much of the area is above and how much is below the given alpha test threshold, taking the selected filtering method into account.

If you mean doing the test on any texel involved, resulting in an alpha value that corresponds to the ratio of texels that pass the test, I don't see how this is any better than just ordinary filtering. It has the same problems with magnification, and I think it would look worse.

The main problem is that antialiasing alpha test edges means you create partially covered fragments. And this inevitably leads to either back-to-front rendering or somehow turning the calculated coverage value into the number of samples written when multisampling, as does alpha-to-coverage.
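The "project the pixel into texture space" idea can be approximated numerically by supersampling the filtered alpha over the pixel footprint. A toy Python sketch (the texture data, footprint, and function names are all invented for illustration):

```python
# Estimate what fraction of a pixel's footprint passes the alpha test by
# densely sampling the bilinearly filtered alpha over that footprint.

def bilinear(tex, u, v):
    """Bilinear lookup into a 2D list of alpha values, clamped at the edges.
    u and v are normalized [0, 1] coordinates."""
    w, h = len(tex[0]), len(tex)
    x, y = u * (w - 1), v * (h - 1)
    x0, y0 = int(x), int(y)
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    top = tex[y0][x0] * (1 - fx) + tex[y0][x1] * fx
    bot = tex[y1][x0] * (1 - fx) + tex[y1][x1] * fx
    return top * (1 - fy) + bot * fy

def coverage(tex, u0, v0, du, dv, threshold=0.5, n=8):
    """Fraction of an n*n grid over the footprint [u0,u0+du] x [v0,v0+dv]
    whose filtered alpha passes the test -- the partial-coverage value
    that would then have to be turned into written samples."""
    passed = 0
    for j in range(n):
        for i in range(n):
            a = bilinear(tex, u0 + du * (i + 0.5) / n, v0 + dv * (j + 0.5) / n)
            if a > threshold:
                passed += 1
    return passed / (n * n)

# A hard vertical alpha edge in a tiny texture:
tex = [[0.0, 0.0, 1.0, 1.0]] * 4
print(coverage(tex, 0.3, 0.0, 0.3, 0.3))  # partially covered pixel
```

The resulting fraction is exactly the partially-covered-fragment value the post above describes, which then forces either back-to-front blending or a coverage-to-sample-mask step.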
 
So should a pixel pipeline have multiple pixel shaders to improve AA performance, or is this such a rare case that it is not worth it?
 
A nice solution to the alpha test problem would be a multisampling write mask in the pixel shader. If the write mask allows different outputs to different pixel samples, it would also solve aliasing problems with branching pixel shaders.

Of course, multisampling AA would have to be enabled for any benefit to be seen, and the problems would only be solved in proportion to the amount of antialiasing performed.

One would also need some way of calculating what samples within the pixel would need the mask. It may also be nontrivial to calculate which samples would need to be written to (or not written to).
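A software model of the proposed write mask might look like the following (purely illustrative; the functions and data are invented, not any real API):

```python
# Toy output-merger model: the pixel shader emits one color plus a per-sample
# mask, and the color is written only to the masked samples of the pixel.

def write_with_mask(framebuffer_samples, color, mask):
    """framebuffer_samples: per-sample colors for one pixel.
    mask: booleans, one per sample, produced by the pixel shader."""
    return [color if m else old for old, m in zip(framebuffer_samples, mask)]

def resolve(samples):
    # Simple box resolve over the pixel's samples.
    return sum(samples) / len(samples)

pixel = [0.0, 0.0, 0.0, 0.0]  # background
pixel = write_with_mask(pixel, 1.0, [True, True, False, False])
print(resolve(pixel))  # 0.5: a smoothed alpha-test edge, in proportion
                       # to the number of samples, as noted above
```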
 
rwolf said:
So should a pixel pipeline have multiple pixel shaders to improve AA performance, or is this such a rare case that it is not worth it?
Why not just increase pixel shading power in the general case? Then you get benefits with and without AA enabled.
 
Xmas said:
zeckensack said:
Assuming the app already uses at least bilinear filtering in conjunction with alpha testing (e.g. Max Payne 2 apparently does), you don't need gradients. You could evaluate the alpha test for some, or all, samples pre-filter.

Doesn't make it much less complicated, of course :?
I don't think I understand what you mean.
Okay, let's take a simple texture lookup and no further fragment operations.
If the output alpha crosses the alpha test threshold somewhere inside a pixel (!=sample), it's likely that some of the samples (... that would have been generated by supersampling; that's the ultimate reference here) are below the threshold (fail), and some are above the threshold (pass).

If we were doing supersampling with a proper texture lod, these samples strongly correlate to single texels in the multisampled scenario (texel(lod n) ~=bilinear sample (lod n-1)). You'd have to apply some minor magic to make up for the filter weights. I don't yet know how, it was just a rough idea. I still think it can be done somehow and would work, even though I can't tell whether it would make any sense wrt the effort involved. Most certainly it wouldn't :D

Xmas said:
If you wanted perfectly smooth alpha-test edges, you'd have to project the area of a pixel into texture space and calculate how much of the area is above and how much is below the given alpha test threshold, taking the selected filtering method into account.
Mine is a more cheapish approach ;)
Xmas said:
If you mean doing the test on any texel involved, resulting in an alpha value that corresponds to the ratio of texels that pass the test, I don't see how this is any better than just ordinary filtering. It has the same problems with magnification, and I think it would look worse.
Not quite. Instead pass the 'good' texels on to the filter, and emit zero (or whatever fails the alpha test) for the 'bad' ones.
Xmas said:
The main problem is that antialiasing alpha test edges means you create partially covered fragments. And this inevitably leads to either back-to-front rendering or somehow turning the calculated coverage value into the number of samples written when multisampling, as does alpha-to-coverage.
I'm not too fond of alpha testing either :p
 
OpenGL guy said:
rwolf said:
So should a pixel pipeline have multiple pixel shaders to improve AA performance, or is this such a rare case that it is not worth it?
Why not just increase pixel shading power in the general case? Then you get benefits with and without AA enabled.

Good point.
 
zeckensack said:
Okay, let's take a simple texture lookup and no further fragment operations.
If the output alpha crosses the alpha test threshold somewhere inside a pixel (!=sample), it's likely that some of the samples (... that would have been generated by supersampling; that's the ultimate reference here) are below the threshold (fail), and some are above the threshold (pass).

If we were doing supersampling with a proper texture lod, these samples strongly correlate to single texels in the multisampled scenario (texel(lod n) ~=bilinear sample (lod n-1)). You'd have to apply some minor magic to make up for the filter weights. I don't yet know how, it was just a rough idea. I still think it can be done somehow and would work, even though I can't tell whether it would make any sense wrt the effort involved. Most certainly it wouldn't :D
Frankly, I don't see the problem this approach is trying to solve.
It doesn't take care of the order dependency issue (alpha-to-coverage does that), and it doesn't take care of the magnification issue. When doing minification, texture filtering with either alpha blending or a2c is already producing "perfect" edges. But when you get close, the edge is blurred. To solve this, you need to determine where (i.e. in which pixels, not between which texels) the threshold boundary lies. And that's where the gradients could help.
 