Anti-Aliasing... so far

Chalnoth said:
No, not really. Here's the way I envision the process:
  • Triangle 1 hits the pixel we're looking at. Since there will be a trivial z-pass, a coverage mask will be applied that indicates what portion of this pixel is covered by this triangle.
  • Repeat step 1 as above until the maximum number of triangles is reached.
  • The (N+1)th triangle is rendered to the current pixel. The z-comparison is done. The new coverage mask is examined, and the triangle that contributes the smallest number of samples is discarded (see the sketch below).
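For concreteness, a minimal sketch of that insert/evict loop (the names, the 4-slot list and the 8-bit coverage mask are illustrative assumptions; the post doesn't specify an implementation):

Code:
#include <cstdint>

constexpr int N = 4;        // max triangles tracked per pixel (example)

struct Fragment {
    uint8_t coverage;       // one bit per sample this triangle owns
    float   color[3];
    float   z;              // depth, simplified to one value per fragment
};

struct PixelList {
    Fragment frag[N];
    int      count = 0;
};

static int popcount8(uint8_t m) { int n = 0; while (m) { n += m & 1; m >>= 1; } return n; }

// 'incoming' has already passed the z-test for the samples set in its
// coverage mask; those samples are taken away from the older fragments.
void insert(PixelList& p, const Fragment& incoming)
{
    for (int i = 0; i < p.count; ++i)
        p.frag[i].coverage &= uint8_t(~incoming.coverage);

    if (p.count < N) {              // room left: just append
        p.frag[p.count++] = incoming;
        return;
    }

    // List full: discard whichever triangle (the incoming one included)
    // now contributes the fewest samples.
    int victim = -1;
    int fewest = popcount8(incoming.coverage);
    for (int i = 0; i < N; ++i) {
        int c = popcount8(p.frag[i].coverage);
        if (c < fewest) { fewest = c; victim = i; }
    }
    if (victim >= 0)
        p.frag[victim] = incoming;  // the victim's samples become the "hole"
    // else: the incoming triangle itself is the smallest contributor and
    // is dropped, so the samples it had already won become the hole.
}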

Note that this hole will now be smaller than any other triangle in the list, so it can never be filled unless other triangles are occluded.
Umm, now that you have produced a hole, you have produced a situation where you know NOTHING about the samples that used to be in the hole. As such, filling the hole again with anything at all at any time is not safe, even if you later free up a slot in the list.

And leaving the hole unfilled means that if you later cover the pixel with 2 polygons sharing an edge (covering the pixel completely), at least one of the polygons will have lost some samples, disturbing the final pixel's color.
 
arjan de lumens said:
Umm, now that you have produced a hole, you have produced a situation where you know NOTHING about the samples that used to be in the hole. As such, filling the hole again with anything at all at any time is not safe, even if you later free up a slot in the list.

And leaving the hole unfilled means that if you later cover the pixel with 2 polygons sharing an edge (covering the pixel completely), at least one of the polygons will have lost some samples, disturbing the final pixel's color.
If the hole is occluded by an opaque surface, sure it can be filled. The problem arises when other triangles within the pixel are occluded, freeing up part of the triangle list, and something is written behind the hole.

But that wouldn't be hard to fix: once the triangle list has filled up and created a hole, never write to that hole unless the triangle passes the z-test for at least one other sample within the pixel.
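Continuing the hypothetical sketch above, that guard might look like this (the "hole" being simply the samples that no listed triangle owns; a real implementation would also track whether the list has ever overflowed):

Code:
// Samples owned by no fragment in the list form the hole.
uint8_t holeMask(const PixelList& p)
{
    uint8_t owned = 0;
    for (int i = 0; i < p.count; ++i)
        owned |= p.frag[i].coverage;
    return uint8_t(~owned);
}

// Chalnoth's proposed guard: a fragment may only write into the hole if
// it also passes the z-test on at least one sample outside the hole.
bool mayWriteHole(const PixelList& p, uint8_t zPassedMask)
{
    return (zPassedMask & ~holeMask(p)) != 0;
}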
 
Chalnoth said:
If the hole is occluded by an opaque surface, sure it can be filled.
If the fragment that was removed had a much lower Z value than the other fragments (say, because it belonged to a different mesh), then you can get into the following situation: you write in a new polygon that ends up in front of all the retained fragments, but would have ended up behind the removed fragment. At this point it is wrong to fill the hole with the new polygon, but you have no information that can help you in making that decision. If you do fill the hole and it THEN turns out that the deleted fragment belonged to an object that is completed later, you have a see-through error. It's not a common case, but it's not something that you can guarantee won't happen either. (Splitting up the rendering of an object like this could happen if you have an object with multiple parts, with each part requiring a different render-state, and the application decides to sort its rendering by render-states.)
 
arjan de lumens said:
If the fragment that was removed had a much lower Z value than the other fragments (say, because it belonged to a different mesh), then you can get into the following situation: you write in a new polygon that ends up in front of all the retained fragments, but would have ended up behind the removed fragment. At this point it is wrong to fill the hole with the new polygon, but you have no information that can help you in making that decision.
Ah, except this is a minor issue: You're talking about the very tip of a foreground polygon not contributing to the pixel. This isn't so much a see-through error as it is non-optimal anti-aliasing.
 
Chalnoth said:
Ah, except this is a minor issue: You're talking about the very tip of a foreground polygon not contributing to the pixel. This isn't so much a see-through error as it is non-optimal anti-aliasing.
If you try to recycle the samples that used to belong to that tip, you get see-through errors in the case where the tip belonged to a mesh that was completed later AND we rendered a polygon that Z-wise was placed between the tip and the background mesh.

It's admittedly not a very common case, but coverage-based AA methods are cursed with this kind of subtle case from here to infinity. Each time you patch one problem, another obscure problem pops up somewhere else in an endless game of whack-a-mole. It's a bit like trying to make a perpetual motion machine work or to make a complete set of axioms describing arithmetic - you keep getting stuff that ALMOST works perfectly and looks like it SHOULD work perfectly with only a couple more fixes....
 
arjan de lumens said:
If you try to recycle the samples that used to belong to that tip, you get see-through errors in the case where the tip belonged to a mesh that was completed later AND we rendered a polygon that Z-wise was placed between the tip and the background mesh.
I really don't see how. If it's part of a mesh that was completed later, how is this mesh not going to occlude the rest of the pixel?
 
Chalnoth said:
I really don't see how. If it's part of a mesh that was completed later, how is this mesh not going to occlude the rest of the pixel?
The mesh that was completed later ends up occluding the entire pixel EXCEPT the hole left by the tip. Now, the problem is what happens within the hole. The tip is gone, its Z data long since obliterated, and if you tried to recycle its samples in the meantime, you risk having the hole filled with a polygon that you weren't supposed to see in the first place.
 
Nope, the hole is not counted. It is AA noise to the tune of 1/8th of the colour value of the pixel.

If the tip had amounted to 2 samples in 8 right up until rendering of the remainder of the object started in this second pass that you're hypothesizing, then it could not have been deleted from the triangle list, and therefore wouldn't be the subject of this debate.

So, you're making a mountain out of a mole-hill. The maximum error here is less than the maximum error in 4xMSAA.

Jawed
 
Jawed said:
It is AA noise to the tune of 1/8th of the colour value of the pixel.
That's not too bad if most of your colors are close to each other. But imagine you're doing HDR+AA: you're inside a dark tunnel and it's very bright outside the tunnel. The fragments on the outside will have color values 1000+ times brighter than the colors on the inside of the tunnel.

1/8 of 1000 is a lot.
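Concretely, with those numbers: if the tunnel interior has radiance around 1 and the outside around 1000, then one wrongly counted sample out of 8 shifts the resolved pixel by roughly (1000 - 1)/8 ≈ 125, i.e. the error term alone is two orders of magnitude brighter than everything legitimately inside the tunnel.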
 
arjan de lumens said:
The mesh that was completed later ends up occluding the entire pixel EXCEPT the hole left by the tip. Now, the problem is what happens within the hole. The tip is gone, its Z data long since obliterated, and if you tried to recycle its samples in the meantime, you risk having the hole filled with a polygon that you weren't supposed to see in the first place.
Yes, I suppose that's correct. But we're still talking about a very small contribution to the final color.

Granted, with HDR, even a small contribution could lead to a dramatic artifact, so in that case you'd just adjust the algorithm a bit so that once a triangle is thrown out, those samples are never rendered to, as you stated.
 
Jawed said:
The maximum error here is less than the maximum error in 4xMSAA.

Jawed
The magnitude of the error is dependent on geometry that has been occluded. An occluded moving object can thus cause disturbances in a non-moving occluder. I am not primarily worried about the numerical magnitude of the error (for a still image I don't expect it to be a problem unless you do HDR) but rather how easy/hard it is to exploit.
 
Bob said:
That's not too bad if most of your colors are close to each other. But imagine you're doing HDR+AA: you're inside a dark tunnel and it's very bright outside the tunnel. The fragments on the outside will have color values 1000+ times brighter than the colors on the inside of the tunnel.

1/8 of 1000 is a lot.
No, because as I keep saying, the "hole" sample will be discarded. It doesn't count. It isn't used in the AA resolve.

Jawed
 
arjan de lumens said:
The magnitude of the error is dependent on geometry that has been occluded. An occluded moving object can thus cause disturbances in a non-moving occluder. I am not primarily worried about the numerical magnitude of the error (for a still image I don't expect it to be a problem unless you do HDR) but rather how easy/hard it is to exploit.
The error only arises when the hole corresponds to exactly one sample. If it corresponds to 0 or 2+ samples then the hole cannot be created.

And as I have repeatedly explained, the hole doesn't exist because the sample is excluded from the AA resolve.

There is no hole. It's that simple.

Jawed
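In resolve terms, Jawed's rule amounts to something like the following sketch (the 8-sample count and the 'disabled' mask are assumptions):

Code:
#include <cstdint>

struct Color { float r, g, b; };

// Average only the samples that are still enabled; a disabled ("hole")
// sample reduces the effective sample count instead of contributing.
Color resolve(const Color sample[8], uint8_t disabled)
{
    Color c{0, 0, 0};
    int n = 0;
    for (int i = 0; i < 8; ++i) {
        if (disabled & (1u << i)) continue;   // "there is no hole"
        c.r += sample[i].r; c.g += sample[i].g; c.b += sample[i].b;
        ++n;
    }
    if (n > 0) { c.r /= n; c.g /= n; c.b /= n; }
    return c;
}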
 
Chalnoth said:
Yes, I suppose that's correct. But we're still talking about a very small contribution to the final color.
That's probably true for this particular scenario. There are other scenarios (albeit contrived) where it is possible to produce an almost arbitrarily large error, but with this kind of AA scheme, you generally have to trade off algorithmic complexity versus memory footprint versus likelihood of disturbing errors, all while taking into account what scenes are likely, plausible or just ridiculous.

AFAICS, if you have N slots and feed N+1 triangles into them, then the upper bound on the error term introduced due to compression or sample removal is about 1/(N+1) * maximum difference in color value (somewhat dependent on actual sample count). If you try to force a larger number of polygons into the pixel, the error bound increases accordingly.
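The 1/(N+1) factor follows from a pigeonhole argument: the N+1 triangles share the pixel's samples between them, so the smallest contributor, i.e. the one that gets evicted, can own at most 1/(N+1) of the samples, and losing it shifts the resolved color by at most that fraction of the maximum color difference. Rounding against the finite sample count is what makes the bound only approximate.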
 
Jawed said:
And as I have repeatedly explained, the hole doesn't exist because the sample is excluded from the AA resolve.
Then the pattern of samples excluded from AA resolve, and thus the pixel's final color, is still ultimately dependent on occluded geometry in your scheme.
 
arjan de lumens said:
That's probably true for this particular scenario. There are other scenarios (albeit contrived) where it is possible to produce an almost arbitrarily large error, but with this kind of AA scheme, you generally have to trade off algorithmic complexity versus memory footprint versus likelihood of disturbing errors, all while taking into account what scenes are likely, plausible or just ridiculous.
Well, my main counter to this is that a robust implementation of an N tris/pixel technique would, in a worst-case scenario, be no worse than N-sample AA at reproducing the appropriate pixel color.

Another way of looking at it is that by the time you're dealing with scenes where a small portion of the samples in a pixel has a large contribution to the final color, you're going to have aliasing no matter what.
 
Chalnoth said:
Well, my main counter to this is that a robust implementation of an N tris/pixel technique would, in a worst-case scenario, be no worse than N-sample AA at reproducing the appropriate pixel color.
Yup. There are schemes that can provide such a guarantee. The one proposed by MfA does this, the one by Jawed doesn't. To get that guarantee under MfA's scheme, you need to track whether or not any of the non-"centroid" samples have been damaged (the "centroid" samples never are) and revert back to MSAA if they have.

Whether or not you need to revert back to MSAA for a pixel is, however, still dependent on occluded geometry, which potentially causes flickering.
 
arjan de lumens said:
Then the pattern of samples excluded from AA resolve, and thus the pixel's final color, is still ultimately dependent on occluded geometry in your scheme.
No, because instead of taking 4 samples as in 4xMSAA, you're taking 7 in this case. The rogue triangle that causes the "hole" would have a 50% chance of not being sampled in 4xMSAA.

If the rogue triangle covered 2 samples, then it couldn't be excluded from the AA resolve - according to my theory of triangle-list management and sample re-enabling. No matter what time the rogue triangle is rendered, the fact it covers 2 samples means that any existing 2-sample triangles that it overlaps will become 1-sample or 0-sample (in both cases being discarded to make way for the rogue triangle).

This scheme is about being able to take more samples, not about being able to sample more triangles.

Because it can sample 4 triangles, its worst case will always be no worse than 4xMSAA, which is also limited to 4 triangles. The noise created by discarded triangles that only cover 1 sample is the same as that found in 4xMSAA where those same triangles cover zero samples.

It's just a shame that the patent doesn't describe the entire chain of AA - focussing as it does solely on the concept of limiting the list to N triangles (four in the examples).

Jawed
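A sketch of the list management Jawed describes, as I read it (the 2+-sample precedence and 1-sample disabling rules are taken from the posts; everything else, including the names, is assumed, and the sample re-enabling he mentions is omitted):

Code:
#include <cstdint>

constexpr int N = 4;        // slots per pixel, as in the patent's examples

struct Tri {
    uint8_t cov;            // live samples this triangle owns
    float   color[3];
};

struct Pixel {
    Tri     tri[N];
    int     count = 0;
    uint8_t disabled = 0;   // samples excluded from the AA resolve
};

static int pop(uint8_t m) { int n = 0; while (m) { n += m & 1; m >>= 1; } return n; }

// 'zPassed' is the mask of samples the incoming triangle won in the z-test.
void submit(Pixel& p, Tri t, uint8_t zPassed)
{
    t.cov = uint8_t(zPassed & ~p.disabled);   // disabled samples stay dead
    if (t.cov == 0) return;

    // The incoming triangle takes precedence on every sample it won...
    for (int i = 0; i < p.count; ++i)
        p.tri[i].cov &= uint8_t(~t.cov);

    // ...and triangles reduced to zero samples leave the list, freeing slots.
    int w = 0;
    for (int i = 0; i < p.count; ++i)
        if (p.tri[i].cov != 0) p.tri[w++] = p.tri[i];
    p.count = w;

    if (p.count < N) { p.tri[p.count++] = t; return; }

    if (pop(t.cov) >= 2) {
        // A 2+-sample incomer always gets a slot: evict the smallest
        // remaining contributor and disable its surviving samples.
        int victim = 0;
        for (int i = 1; i < N; ++i)
            if (pop(p.tri[i].cov) < pop(p.tri[victim].cov)) victim = i;
        p.disabled |= p.tri[victim].cov;
        p.tri[victim] = t;
    } else {
        // A 1-sample incomer is dropped when the list is full, but the
        // sample it covered is disabled, so it never skews the resolve.
        p.disabled |= t.cov;
    }
}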
 
Jawed said:
No, because instead of taking 4 samples as in 4xMSAA, you're taking 7 in this case. The rogue triangle that causes the "hole" would have a 50% chance of not being sampled in 4xMSAA.
You are missing the point by a wide margin. If I render a mesh that causes some hole, then render another mesh on top of that, completely occluding the first mesh, then:
  • If I keep the second mesh still, MSAA produces the same result every time, regardless of the nature of the contents of the first mesh.
  • Your scheme doesn't.
 
It does, because my scheme will remove the first mesh's triangles from the triangle list.

As each occluding triangle is written, as long as it covers at least 2 samples, it will always take precedence over the first mesh's triangles.

Mesh-2 triangles that only cover 1 sample get dropped, but the corresponding sample on the mesh-1 triangle is also dropped.

If mesh 1 consisted of 4 triangles each with 2 samples and all of mesh-2's 8 triangles (worst case) were 1-sample in size, then the first four triangles in mesh-2 would be excluded from the AA (but would each delete one sample from mesh-1's triangles). But the second four to be drawn would each occlude one of the 4 remaining active samples - and since my scheme always deletes 0-sample triangles from the list, the last four mesh-2 triangles would entirely occlude the mesh-1 triangles.

The result is that 4 mesh-2 triangles are sampled - which is exactly what standard 4xMSAA would do in this case. The mesh-1 triangles cannot survive.

The 4 mesh-2 triangles that are sampled are random, based on rendering order and the location of the samples. Which is no different than if standard 4xMSAA had been used.

Jawed
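Tracing that worst case step by step (the pairing of incoming mesh-2 triangles with mesh-1 samples is assumed for illustration):
  • m2-1 to m2-4: the list is full with four 2-sample mesh-1 triangles, so each 1-sample incomer is dropped and the mesh-1 sample it covered is disabled. The list now holds four mesh-1 triangles with 1 live sample each.
  • m2-5 to m2-8: each incomer covers the last live sample of a mesh-1 triangle; that triangle drops to 0 samples and is deleted, freeing the slot for the incomer.
  • Final state: four mesh-2 triangles with 1 live sample each, so the resolve averages 4 mesh-2 samples, exactly as 4xMSAA would.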
 