Anti-Aliasing... so far

OpenGL guy said:
You have to depth sort all alpha blended rendering. So if you have explosions, glass, etc. and then have to sort fences as well, then you've added complexity.

I'll probably shoot myself in the foot with another dumb question... but wouldn't it be possible to create some sort of two-pass mechanism for alpha blending in general?
 
Ailuros said:
I'll probably shoot myself in the foot with another dumb question... but wouldn't it be possible to create some sort of two-pass mechanism for alpha blending in general?
Not sure I understand. When alpha blending, the nearby objects need to be blended on top of the distant objects, thus you have to render from back to front.
 
Depth peeling is typically done under the control of an app rather than as an algorithm to implement in hardware. The difficulty with depth peeling is that the complexity is directly related to the number of potentially overlapping transparent surfaces, which is why you don't see it used a lot.

Oh, and I forgot to mention that it does not work with multisampling.

-Evan
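For anyone following along, here is a minimal CPU-side sketch of what depth peeling does for a single pixel, using made-up fragment data. It is only meant to show why the cost Evan mentions grows with the number of overlapping transparent layers, not how a real GPU implementation looks:

```cpp
// Hedged sketch: a CPU model of depth peeling for ONE pixel. The fragment
// values and the front-to-back "under" compositing are illustrative assumptions.
#include <cstdio>
#include <limits>
#include <vector>

struct Fragment { float depth, r, g, b, a; };

int main() {
    // Unsorted transparent fragments covering the same pixel (hypothetical data).
    std::vector<Fragment> frags = {
        {0.7f, 0.0f, 0.0f, 1.0f, 0.5f},   // blue, far
        {0.3f, 1.0f, 0.0f, 0.0f, 0.5f},   // red, near
        {0.5f, 0.0f, 1.0f, 0.0f, 0.5f},   // green, middle
    };

    float accR = 0, accG = 0, accB = 0, accA = 0;   // front-to-back accumulator
    float lastPeel = -std::numeric_limits<float>::infinity();
    int passes = 0;

    while (true) {
        // Each "pass" peels the nearest fragment strictly behind the previous
        // peel depth -- the job the second depth test does on the GPU.
        const Fragment* next = nullptr;
        for (const Fragment& f : frags)
            if (f.depth > lastPeel && (!next || f.depth < next->depth))
                next = &f;
        if (!next) break;

        // Composite the peeled layer *under* what has accumulated so far.
        float w = (1.0f - accA) * next->a;
        accR += w * next->r;
        accG += w * next->g;
        accB += w * next->b;
        accA += w;
        lastPeel = next->depth;
        ++passes;
    }

    // The pass count equals the number of overlapping transparent layers.
    std::printf("layers peeled: %d, final colour = (%.2f, %.2f, %.2f, alpha %.2f)\n",
                passes, accR, accG, accB, accA);
}
```

Each iteration of the while loop corresponds to one extra rendering pass over the geometry on real hardware, which is exactly where the cost goes.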
 
Ailuros said:
While transparency antialiasing is definitely a great thing to have, it's not an absolute cure-all at all times either. As Wavey said, if the amount of alpha-test data is extremely high, you'll lose almost as much performance as with full-scene supersampling.

Still, given that the IHVs have limited input into what the developers choose to do, you'd rather have it in the toolbox than not, right? Particularly if you get to choose when to use it?

I just look at all the naysaying on this technique or that (and, I should be clear, I am sooo not technically capable of gainsaying the objections --and in fact I usually assume, based on the poster, that they know what the heck they are talking about), and it seems relatively obvious to me that a group of techniques --certainly in the short term-- are more likely to make a significant advancement in a large number of individual games while the search for "the holy grail" continues.
 
I think a fixed sampling pattern is just fine and that the focus should be on more samples. Here's what Nvidia's Larry Gritz had to say concerning the method they use in Gelato.

At low sampling densities, the regular sampling of the hardware rasterization shows egregious aliasing, but the superiority of stochastic sampling becomes negligible surprisingly quickly. In real-world examples, noticeable artifacts are even less visible than in pathological examples such as this test pattern.

Gelato uses the GPU to supersample with a fixed pattern, and the frame buffer is then used as a texture so that various downsample filters can be supported.

The quote is from a Siggraph course.
http://www.csee.umbc.edu/~olano/s2005c37/

Gelato actually uses depth peeling, but they're not worried about realtime performance and don't use multisampling.
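As a loose illustration of the supersample-then-filter resolve described above, here is a small sketch that collapses a 4x4 ordered-grid block of samples into one display pixel with a selectable filter kernel. The sample values and the box/tent weights are assumptions for the example, not Gelato's actual filters:

```cpp
// Hedged sketch: downsampling a 4x4 supersampled block into one display pixel
// with a pluggable filter kernel. Layout and weights are illustrative only.
#include <cmath>
#include <cstdio>
#include <vector>

// Weighted average of an n x n block of samples centred on the pixel.
float downsample(const std::vector<float>& samples, int n,
                 float (*weight)(float dx, float dy)) {
    float sum = 0.0f, wsum = 0.0f;
    for (int y = 0; y < n; ++y)
        for (int x = 0; x < n; ++x) {
            // Sample offsets in pixel units, centred on (0, 0).
            float dx = (x + 0.5f) / n - 0.5f;
            float dy = (y + 0.5f) / n - 0.5f;
            float w = weight(dx, dy);
            sum  += w * samples[y * n + x];
            wsum += w;
        }
    return sum / wsum;
}

float boxWeight(float, float)        { return 1.0f; }
float tentWeight(float dx, float dy) { return (1.0f - std::fabs(dx)) * (1.0f - std::fabs(dy)); }

int main() {
    // 16x ordered-grid supersamples for one pixel straddling an edge (made up).
    std::vector<float> samples = {
        1, 1, 1, 0,
        1, 1, 0, 0,
        1, 0, 0, 0,
        0, 0, 0, 0,
    };
    std::printf("box filter : %.3f\n", downsample(samples, 4, boxWeight));
    std::printf("tent filter: %.3f\n", downsample(samples, 4, tentWeight));
}
```

The point is simply that once the supersampled buffer is available as a texture, the choice of downsample filter becomes a shader detail rather than something baked into the hardware resolve.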
 
geo said:
Still, given that the IHVs have limited input into what the developers choose to do, you'd rather have it in the toolbox than not, right? Particularly if you get to choose when to use it?

Developers are aware that even the lowest-end cards today (and the majority of cards since at least 2002) support multisampling; would it not be more reasonable to avoid alpha tests as much as possible?

geo said:
I just look at all the naysaying on this technique or that (and, I should be clear, I am sooo not technically capable of gainsaying the objections --and in fact I usually assume, based on the poster, that they know what the heck they are talking about), and it seems relatively obvious to me that a group of techniques --certainly in the short term-- are more likely to make a significant advancement in a large number of individual games while the search for "the holy grail" continues.

I love transparency AA; but that's still not what I'm aiming at. If we had fewer alpha-test textures in games I could invest that spare fillrate/performance elsewhere. In a game like HL2 it's silly; but how about Fear or CoD2?

Not sure I understand. When alpha blending, the nearby objects need to be blended on top of the distant objects, thus you have to render from back to front.

As I said dumb idea :(
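Since transparency antialiasing keeps coming up in this thread: its multisampling flavour is commonly built on an alpha-to-coverage style trick, roughly sketched below. The 4-sample mask and the dither table are illustrative assumptions, not any particular IHV's implementation:

```cpp
// Hedged sketch of alpha-to-coverage for a 4-sample MSAA pixel: the fragment's
// alpha picks how many sample bits it writes, instead of a hard alpha-test kill.
#include <cstdint>
#include <cstdio>

// Map alpha in [0,1] to a 4-bit sample mask; a tiny per-pixel dither breaks up
// the banding you would otherwise get from only five possible coverage levels.
uint8_t alphaToCoverage(float alpha, int px, int py) {
    static const float dither[2][2] = { {0.00f, 0.50f}, {0.75f, 0.25f} };
    int covered = (int)(alpha * 4.0f + dither[py & 1][px & 1]);
    if (covered > 4) covered = 4;
    static const uint8_t masks[5] = {0x0, 0x1, 0x3, 0x7, 0xF};
    return masks[covered];
}

int main() {
    // A soft edge of a foliage texture: alpha ramps from 0 to 1 across four pixels.
    const float alphas[4] = {0.1f, 0.4f, 0.6f, 0.9f};
    for (int x = 0; x < 4; ++x)
        std::printf("pixel %d: alpha %.1f -> sample mask 0x%X\n",
                    x, alphas[x], alphaToCoverage(alphas[x], x, 0));
}
```

Instead of the hard pass/fail of an alpha test, the fragment's alpha decides how many of the pixel's MSAA samples it writes, so the normal MSAA resolve smooths the edge of the cut-out texture.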
 
OpenGL guy said:
Sure, but I don't see anyone supporting it yet.

Some workstation apps do. Usually, when you need to support correct translucency and don't worry too much about speed, that's as good a solution as you can get.
 
arjan de lumens said:
It's a bit psychological: while the issue of aliasing is well understood and a known problem, an anti-aliasing method that fails in an obscure corner case represents an unknown risk. And people ABHOR unknown risks, no matter how small they turn out to actually be.
Thanks arjan. Not only did you explain why Z3 is seen as such a big risk to implement commercially despite having only a small risk of noticeable failure, you also explained why so many great software ideas have failed to materialize commercially. I couldn't have said it any better. As for Z3 being commercially available, well, it was quite close.
 
Ailuros said:
I'm not sure anymore if it was Simon or someone else who said that he got quite satisfying results with some sort of semi-stochastic 16x sample method (albeit I think it was supersampling). Memory is vague so I might be entirely wrong.
I tried jittered sampling quite a while ago, if that's what you mean.
 
Ailuros said:
I'll probably shoot myself in the foot with another dumb question... but wouldn't it be possible to create some sort of two-pass mechanism for alpha blending in general?
And besides, when there was hardware in the PC space that did automatic, per-pixel translucency sorting, the ISVs didn't make use of it. (i.e. the applications still spent time sorting the polygons :rolleyes: )
 
There's nothing wrong with feeling dumb more often than not when it comes to 3D, especially as a layman. At least I know and acknowledge my shortcomings ;)
 
Ailuros said:
I'll probably shoot myself in the foot with another dumb question... but wouldn't it be possible to create some sort of two-pass mechanism for alpha blending in general?
Not really. For correct alpha blending, you NEED the sorting - trying to track e.g. per-pixel opacity doesn't work for more than 1 layer unless you KNOW that all layers have the same color. If I have two transparent layers, then try to slip a third one between them, an ordinary RGBA/Z/Stencil framebuffer doesn't track nearly enough information for me to correctly blend the third layer against both of the other ones.

The sorting would also seem to me to require Ω(N) space per pixel if you have N layers of transparency, which makes me wonder: did the old PowerVR chips with auto-sorted alpha blending have any restrictions on the number of layers they could handle?

As for Mali: the 16x AA mode in Mali110/55 currently only uses an 8x8 grid; Mali200 is 16-queen. All Mali cores support "transparency antialiasing" (SS and MS variants) too and have been doing so for a long time.
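To make arjan's sorting point concrete, here is a tiny numeric sketch (colours and alphas made up) showing that the standard "over" blend is order dependent; this is why back-to-front sorting, depth peeling, or sorting hardware is needed in the first place:

```cpp
#include <cstdio>

struct RGBA { float r, g, b, a; };

// Classic "over" operator: for the colour channels this is what
// glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) computes.
RGBA over(RGBA src, RGBA dst) {
    return { src.r * src.a + dst.r * (1.0f - src.a),
             src.g * src.a + dst.g * (1.0f - src.a),
             src.b * src.a + dst.b * (1.0f - src.a),
             src.a + dst.a * (1.0f - src.a) };     // accumulated opacity
}

int main() {
    RGBA background = {0, 0, 0, 1};                // opaque black
    RGBA nearLayer  = {1, 0, 0, 0.5f};             // half-transparent red, closest to the camera
    RGBA farLayer   = {0, 1, 0, 0.5f};             // half-transparent green, behind it

    // Back-to-front (correct): far layer first, near layer last.
    RGBA sorted   = over(nearLayer, over(farLayer, background));
    // Same operator, wrong order: near layer first.
    RGBA unsorted = over(farLayer, over(nearLayer, background));

    std::printf("sorted   : (%.2f, %.2f, %.2f)\n", sorted.r, sorted.g, sorted.b);
    std::printf("unsorted : (%.2f, %.2f, %.2f)\n", unsorted.r, unsorted.g, unsorted.b);
}
```

Running it gives (0.50, 0.25, 0.00) for the sorted order and (0.25, 0.50, 0.00) for the unsorted one; the frontmost layer only dominates if it really is blended last.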
 
Simon F said:
And besides, when there was hardware in the PC space that did automatic, per-pixel translucency sorting in hardware, the ISVs didn't make use of it.

I tried to explain that three years ago to one of my coworkers (rather tech-savvy otherwise) at my previous company, and he didn't believe me; he thought I was bullshitting him.
 
AFAICS it's perfectly possible to use coverage masks and still guarantee that the algorithm will degrade no further than MSAA with the same number of samples: simply guarantee that if a surface covers an MSAA subpixel, it gets a storage slot.
 
In my view it would be a shame to limit the "least relevant triangle" algorithm to just 8x. 16x would be nice (2 bytes per triangle - still with four triangles, say).

The vast majority of AA'd pixels only contain fragments of two triangles - don't they?

Jawed
 
MfA said:
AFAICS it's perfectly possible to use coverage masks and still guarantee that the algorithm will degrade no further than MSAA with the same number of samples: simply guarantee that if a surface covers an MSAA subpixel, it gets a storage slot.
If you provide an absolute guarantee that there will be enough storage slots, then what you describe is nothing other than plain MSAA + a simple lossless compression scheme, which, as far as correctness is concerned, is perfectly OK and probably similar to what IHVs are already doing.

If you don't provide that guarantee, but merely assume that a given number of slots would be enough, then you will get something that works most of the time but fails in subtle ways once you run out of slots. IMO, if you run out of slots, you should allocate additional slots, not throw out the stuff that didn't fit into your initial set of slots.

Two slots are usually enough for pixels that just have a polygon edge going through them, but for pixels with vertices in them you should probably have at least 6-8 slots if you want to be anywhere close to safe.
 
arjan de lumens said:
If you don't provide that guarantee, but merely assume that a given number of slots would be enough, then you will get something that works most of the time but fails in subtle ways once you run out of slots.
I disagree.

If a fragment which covers the footprint of an MSAA sample gets bumped off the list, you can simply add its area to the owner of that MSAA sample (i.e. the surface which actually covers the centroid). At worst you will just get back to the situation where the owners of the MSAA samples have complete coverage within their own sample and no coverage outside it ... which effectively will give you the same result as MSAA. At best not every MSAA sample is covered and/or there are not enough visible surfaces within the pixel to have reduced the accuracy of the fragment coverage information ... and your edge anti-aliasing accuracy is determined by the accuracy of the coverage masks.
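A rough sketch of the fallback MfA describes, for a single pixel. The 16-bit fine coverage mask, the placement of the four MSAA sample positions, and the two storage slots are all illustrative assumptions, not a description of any shipped hardware:

```cpp
// Hedged sketch: limited per-pixel storage slots plus fine coverage masks.
// When a surface doesn't fit, its coverage is donated to the stored surface
// that owns the MSAA sample whose footprint it falls in, so the result can
// never be worse than what plain MSAA would have kept.
#include <cstdint>
#include <cstdio>
#include <vector>

constexpr int kSlots = 2;                                             // storage slots per pixel
constexpr uint16_t kFootprint[4] = {0x000F, 0x00F0, 0x0F00, 0xF000};  // fine bits under each MSAA sample
constexpr uint16_t kSampleBit[4] = {1u << 1, 1u << 6, 1u << 9, 1u << 14}; // the MSAA sample positions themselves

struct Surface { int id; uint16_t coverage; };                        // 16-bit fine coverage mask

int main() {
    std::vector<Surface> slots;                                       // what the pixel can actually store

    // Incoming surfaces, front to back (hypothetical coverage masks).
    Surface incoming[] = {
        {0, 0x00CF},    // left half minus a sliver; owns MSAA samples 0 and 1
        {1, 0xFF00},    // right half; owns MSAA samples 2 and 3
        {2, 0x0030},    // thin sliver: covers no MSAA sample of its own
    };

    for (const Surface& s : incoming) {
        if ((int)slots.size() < kSlots) {           // room left: keep the full coverage mask
            slots.push_back(s);
            continue;
        }
        // Out of slots: for each MSAA footprint the surface touches, donate that
        // part of its area to whichever stored surface covers the sample itself.
        for (int i = 0; i < 4; ++i) {
            uint16_t part = s.coverage & kFootprint[i];
            if (!part) continue;
            for (Surface& owner : slots)
                if (owner.coverage & kSampleBit[i]) { owner.coverage |= part; break; }
        }
    }

    for (const Surface& s : slots)
        std::printf("slot keeps surface %d with coverage mask 0x%04X\n", s.id, s.coverage);
}
```

The sliver loses its own identity, but its area isn't thrown away: it is folded into the surface that owns the MSAA sample whose footprint it falls in, so the worst case degrades to exactly the information plain MSAA would have kept anyway.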
 