
How could barycentric coordinates be useful for analytic anti-aliasing?

I'm pretty sure analytic anti-aliasing is about determining the correct area coverage of each primitive lying inside the pixel boundaries, so how are barycentric coordinates going to help us find the intersection area between primitives and the pixel?
You know the pixel size, and from the barycentric coordinates you get the distance to an edge at the sample position, so you can use Wu-line-like distance-to-edge AA.
http://www.diva-portal.se/smash/get/diva2:843104/FULLTEXT02.pdf
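Something like this, as a minimal C++ sketch of the idea (the names and the half-pixel ramp are my own assumptions, not taken from the linked paper): the barycentric coordinate for vertex i is zero on the opposite edge, so dividing it by the length of its screen-space gradient approximates the distance to that edge in pixels.

```cpp
// Toy sketch of distance-to-edge AA from barycentrics (illustrative only).
#include <algorithm>
#include <cmath>

struct Vec2 { float x, y; };

// Distance (in pixels) from the sample to the edge opposite vertex i,
// given barycentric value b and its screen-space gradient (db/dx, db/dy).
float DistanceToEdgePixels(float b, Vec2 grad)
{
    float gradLen = std::sqrt(grad.x * grad.x + grad.y * grad.y);
    return b / std::max(gradLen, 1e-8f);
}

// Approximate edge coverage: fade over roughly one pixel around the nearest
// edge (the 0.5 offset and the linear ramp are arbitrary choices).
float EdgeCoverage(const float b[3], const Vec2 grad[3])
{
    float d = DistanceToEdgePixels(b[0], grad[0]);
    d = std::min(d, DistanceToEdgePixels(b[1], grad[1]));
    d = std::min(d, DistanceToEdgePixels(b[2], grad[2]));
    return std::clamp(d + 0.5f, 0.0f, 1.0f);  // 0 = at/outside an edge, 1 = well inside
}
```

The coverage value only fades the triangle's contribution near its own edges, which is why this helps mostly for triangles that are large relative to a pixel.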
 
But distance to the closest edge does not help much once triangles become smaller and smaller. We would need to clip the whole triangle to the pixel to get its area, and even that is still incomplete because it ignores the other triangles touching the pixel.

To me, TAA is the best solution, and I do not notice artifacts when playing.
But I've learned that's not true for everyone. Some even hate TAA to death. It would be interesting to know how many people here are sensitive and affected. And whether ML could help them more than traditional TAA does, in the long run...
 
You know the pixel size, and from the barycentric coordinates you get the distance to an edge at the sample position, so you can use Wu-line-like distance-to-edge AA.
http://www.diva-portal.se/smash/get/diva2:843104/FULLTEXT02.pdf

Distance to edge AA is completely different to analytic AA ...

In our standard rasterization pipeline, we have sample points that all contribute with equal weight to a pixel's final colour output. Consider this pathological example: suppose we have 4 samples per pixel (4x MSAA) enabled. A pixel lies fully within a red triangle at a greater depth, but within that pixel there are also 4 blue subpixel triangles which altogether cover no more than a couple of percent of the pixel's total area, yet they cover all of the sample points in the pixel at a lower depth. Regardless of which primitive gets tested against the samples, our 4 blue subpixel triangles are always going to win because they cover those very same sample points at a lower depth. The main problem is that when we add these samples together, the final pixel colour output is going to be completely blue despite the pixel itself being mostly red in area! Can this be the exact solution?

https://publik.tuwien.ac.at/files/PubDat_223424.pdf

In the analytic rasterization pipeline described above, the concept of sample points does not exist as it does in our standard pipeline. The pixel's final colour output is computed by assigning each primitive a weight proportional to the area of the primitive that intersects the pixel's area. Once we've figured out exactly how much of the pixel's area is covered by each primitive lying within its boundaries, we can revisit our prior example. Since we no longer have sample points to test against our primitives, we can now consider the contribution of our red triangle, which would normally be rejected by the standard pipeline. As we add up each primitive's colour contribution based on its area-proportional weight, the final pixel colour comes out mostly red with a slight purple tint, which is consistent with our original input signal!

In conclusion, with standard rasterization we have the problem that our primitives can over- or under-represent their contribution to the pixel's final colour, since the area of a primitive need not be proportional to the weight of the samples it covers. An analytic rasterization pipeline solves this problem by computing the exact visibility, i.e. the area where each primitive intersects the pixel boundaries ...
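For illustration, here is a rough C++ sketch of the coverage computation along those lines. The clipping routine, the names, and the unit-pixel convention are my own assumptions, not the paper's, and resolving depth between overlapping primitives is left out:

```cpp
// Clip a triangle to a pixel square with Sutherland-Hodgman, and use the
// clipped polygon's area as that primitive's weight for the pixel.
#include <cmath>
#include <vector>

struct Vec2 { float x, y; };

// Clip a convex polygon against the half-plane a*x + b*y <= c.
static std::vector<Vec2> ClipHalfPlane(const std::vector<Vec2>& poly,
                                       float a, float b, float c)
{
    std::vector<Vec2> out;
    for (size_t i = 0; i < poly.size(); ++i) {
        Vec2 p = poly[i];
        Vec2 q = poly[(i + 1) % poly.size()];
        float dp = a * p.x + b * p.y - c;
        float dq = a * q.x + b * q.y - c;
        if (dp <= 0.0f) out.push_back(p);
        if ((dp < 0.0f) != (dq < 0.0f)) {          // edge crosses the boundary
            float t = dp / (dp - dq);
            out.push_back({p.x + t * (q.x - p.x), p.y + t * (q.y - p.y)});
        }
    }
    return out;
}

// Shoelace formula for the area of a simple polygon.
static float PolygonArea(const std::vector<Vec2>& poly)
{
    float a = 0.0f;
    for (size_t i = 0; i < poly.size(); ++i) {
        Vec2 p = poly[i], q = poly[(i + 1) % poly.size()];
        a += p.x * q.y - q.x * p.y;
    }
    return 0.5f * std::fabs(a);
}

// Fraction of the unit pixel [px,px+1]x[py,py+1] covered by the triangle.
float PixelCoverage(Vec2 v0, Vec2 v1, Vec2 v2, float px, float py)
{
    std::vector<Vec2> poly = {v0, v1, v2};
    poly = ClipHalfPlane(poly,  1.0f,  0.0f,  px + 1.0f);  // x <= px+1
    poly = ClipHalfPlane(poly, -1.0f,  0.0f, -px);         // x >= px
    poly = ClipHalfPlane(poly,  0.0f,  1.0f,  py + 1.0f);  // y <= py+1
    poly = ClipHalfPlane(poly,  0.0f, -1.0f, -py);         // y >= py
    return poly.size() < 3 ? 0.0f : PolygonArea(poly);
}
```

The pixel colour would then be the coverage-weighted sum of the visible primitives' colours, which is what gives the mostly-red result in the example above.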
 
Distance to edge AA is completely different to analytic AA ...

Yup, perhaps I shouldn't have used the word there.
Habit of thinking of it that way from the good old days of Wu-line AA.

Fun paper on NSAA, I might have missed it back in the day.
 
Distance to edge AA is completely different to analytic AA ...


My knowledge on the topic was never that detailed, and I admit I understood the two terms to be interchangeable until your distinction just now.
I thought analytical AA was more of a description of (for lack of a better word right now - I'm drunk) a philosophy than an actual well-defined algo.

Knowing that, distance to edge is still already a hell of an improvement over what we have now. Also, even analytical AA, from your description, does not sound like a silver bullet, since, as I understood it, it evaluates the coverage of each primitive individually, but does not precisely consider the effect of how different primitives occlude each other. I'm assuming it tries its best to make an educated guess there, but it's still not 100% mathematically correct, even though I suppose the error is negligible for IQ, especially at modern resolutions.
 
...even analytical AA, from your description, does not sound like a silver bullet, since, as I understood it, it evaluates the coverage of each primitive individually, but does not precisely consider the effect of how different primitives occlude each other.

What do you mean by primitives occluding each other? Transparency?

The paper admittedly brushes aside the problem of transparency and intersecting geometry, but one of the slides mentions the possibility of extending to those cases with the depth-peeling technique from their prior work ...

In terms of quality, an analytic rasterization pipeline is practically equivalent to ground-truth results, so the only limit would be the numeric precision used. Even 256 samples per pixel (256x MSAA) with the standard pipeline will struggle to produce clean results compared to analytic rasterization ...
 
What do you mean by primitives occluding each other? Transparency?
I guess he means there are still failure cases in the process of getting all triangles that affect a pixel.
How do they do it in the paper you listed? (I tried to read it, but I'm blocked by the thought 'nah... sounds too expensive to be worth it', and after that it's hard to focus... :D )

EDIT: Got it, indeed they bin all triangles to each pixel. Completely impractical. And even if we replace this with some MSAA trick to get, say, at most 16 triangles per pixel, then clipping those triangles against each other to find the exact area is nonsense.
When I got my first PC and Watcom C, the first thing I did was make a rotating cube, and then I eliminated overdraw by clipping each triangle against every other, producing many more triangles but no overdraw. Back then I felt clever, but not anymore :) Doing this per pixel really is crazy, and they still have moiré.

I wonder how much it would help if we could jitter the sampling point individually per (sub)pixel in hardware.
 
What do you mean by primitives occluding each other? Transparency?


Never mind. I hadn't taken the time to look at the paper; I was going off of your simplified description and my lousy interpretation of it on top of that. Now with JoeJ's post I see how it handles occlusion. It does clip every triangle against each other. Huh... Interesting. Obviously too expensive, but still an interesting concept, if only as a thought experiment or theoretical benchmark.
 
indeed they bin all triangles to each pixel.
...not sure if I got this right. It's interesting to relate the 'hopeless task' of proper AA to the visibility problem. It all looks much less like nonsense if we try to solve both problems at once: hidden surface removal for a speedup, and AA for better IQ.
If we do this triangle clipping once for all triangles (not per pixel just because that's GPU friendly), we get AA pretty cheaply because occlusion is already resolved. If they did so in the paper, then let me apologize.
Though this has been tried in software rasterizers, and even for low-poly Quake it did not work out, and an additional Z-buffer for dynamic objects was used.

Still, maybe the idea remains interesting even today - either in HW or SW. But there are counterarguments, and all of them grow stronger over time:
* What good is awesome AA for triangles, while our larger problem is to represent stuff with those flat edgy bastards in the first place?
* Because of that, we make triangles as small as we can, and so we will end up with point splatting being simply faster, making former efforts on triangles obsolete. (Or we go ray / sphere tracing and say goodbye to rasterization completely.)

TAA really seems future-proof, because it does not care about those things and works regardless. That's a big plus.
The other argument is that it utilizes temporal accumulation, which is good, even if those who dislike its artifacts also dislike the concept in general.
But we really want it. The big waste in rendering animation is that we compute the same pixel every frame, even if it has not changed much from the (reprojected) previous state. So maybe 90% of any game rendering is redundant?
I really like how TAA utilizes this waste to increase IQ. Makes total sense, and I hope it will be further improved until everybody is happy.

I can also imagine AA won't be necessary at all if we move to something prefiltered, like having thin volumetric shells on the triangles. I've been thinking about this for a long time, and recently it seems more attractive again because it could help with compression and storage issues eventually...

The only example I know is this:


It still has artifacts on mip-level switches, it seems, but there is no hard edge crawling.
 
Knowing that, distance to edge is still already a hell of an improvement over what we have now. Also, even analytical AA, from your description, does not sound like a silver bullet, since, as I understood it, it evaluates the coverage of each primitive individually, but does not precisely consider the effect of how different primitives occlude each other. I'm assuming it tries its best to make an educated guess there, but it's still not 100% mathematically correct, even though I suppose the error is negligible for IQ, especially at modern resolutions.

"Perfect" AA should be an impossible task beyond sampling one platonic solid (a shape that doesn't self intersect from any view) per pixel, thus all contiguous samples can be considered coplanar (and even then you have to take the exact shape of the solid into question and correct). Which is a complicated way of saying, other than mipmapping you either start introducing error more and more, and/or start exploding memory costs for prefiltering with each dimension you add.

Consider two triangles, one red, one blue, in the same pixel but with different normals and different depths. To correctly shade the pixel you need to sample the red triangle and shade it according to its normal and depth (and BRDF), then sample the blue triangle and shade its lighting separately, as the lighting for each triangle could be completely different, and then combine the two signals into one "pixel". Even analytic AA isn't "correct", as you can't combine the signals *before* shading unless you start paying that prefiltering cost (less error, more memory cost).
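A small C++ sketch of that ordering constraint, assuming a toy Lambertian shader (all types and names here are mine, purely for illustration): each fragment is shaded with its own attributes first, and only the shaded results are blended by coverage, because shading is non-linear in its inputs.

```cpp
// Shade each primitive separately, then combine the *shaded* results by
// pixel coverage. Blending albedo/normals first and shading once would
// give a different (wrong) answer.
#include <algorithm>

struct Vec3 { float x, y, z; };

struct Fragment {
    Vec3  albedo;
    Vec3  normal;    // assumed normalized
    float depth;     // would be needed to resolve occlusion (not done here)
    float coverage;  // fraction of the pixel this primitive covers
};

// Placeholder shading: a simple N.L term with a fixed light direction,
// just to keep the example self-contained. A real BRDF goes here.
static Vec3 Shade(const Fragment& f)
{
    const Vec3 L = {0.577f, 0.577f, 0.577f};
    float ndotl = std::max(0.0f, f.normal.x * L.x + f.normal.y * L.y + f.normal.z * L.z);
    return {f.albedo.x * ndotl, f.albedo.y * ndotl, f.albedo.z * ndotl};
}

// Coverage-weighted combine of independently shaded fragments.
Vec3 ResolvePixel(const Fragment* frags, int count)
{
    Vec3 result = {0.0f, 0.0f, 0.0f};
    for (int i = 0; i < count; ++i) {
        Vec3 lit = Shade(frags[i]);
        result.x += frags[i].coverage * lit.x;
        result.y += frags[i].coverage * lit.y;
        result.z += frags[i].coverage * lit.z;
    }
    return result;
}
```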

Ultimately, to match "reference" rendering you need 256 samples per pixel. To match what a camera sees, well, the highest-end ones put out a linear 16-bit signal, thus 64k samples per pixel for perfect "photorealism", as each of those photons potentially came from a completely different pathway. Which is why I appreciate TAA and don't think it's going away anytime soon. We're not even getting close to reference rendering in the near future, but at least with TAA we can get a few more samples that might be "close enough" to correct.
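Reading those numbers as a quantisation argument (my interpretation, not necessarily the poster's exact reasoning): with a box resolve of binary coverage, N samples can only produce N+1 distinct coverage levels, so the sample count has to keep up with the bit depth of the output signal.

```latex
% Samples needed so coverage quantisation matches the output bit depth
\[
N_{8\text{-bit}} \approx 2^{8} = 256, \qquad
N_{16\text{-bit}} \approx 2^{16} = 65536 \approx 64\mathrm{k}
\quad \text{samples per pixel.}
\]
```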
 
Ultimately, to match "reference" rendering you need 256 samples per pixel. To match what a camera sees, well, the highest-end ones put out a linear 16-bit signal, thus 64k samples per pixel for perfect "photorealism", as each of those photons potentially came from a completely different pathway.
If only cameras (or film, then scanned) were even remotely close to this ideal!

Mr Mars' Camera Resolution Diatribe (warrenmars.com)

We have this old thread:

"Pure and Correct AA" | Beyond3D Forum

which, sadly, has lost lots of images.

To do what you describe, you would have to clip "every" polygon against every other polygon passing through the pixel to obtain the truly visible regions of each polygon. After that, you might still have to resort to some form of sampling (within each region) for texture/shading calculations.

Finally, convolving those small areas with something other than a box filter would also be challenging.

I think you're 100% correct in saying "performance would be agonising".:???:

This is one of my all time favourite posts on B3D:

There is actual experimental evidence of aliasing in human vision using experimental lenses constructed from wavefront analysis that corrects for all of the imperfections in the eye, not including the cornea, lens, vitreous humour, shape of retina, etc. There have also been experiments using adaptive optics (a la the Keck telescope). These sorts of lenses can correct human vision down to 20/6. I remember reading a paper years ago in which a professional baseball player with unaided vision of 20/12 was analyzed and given custom-engineered contact lenses which took his vision to 20/6. At this level, the subject reported aliasing effects in his vision (stair-stepped edges, etc.)
 
Consider two triangles, one red, one blue, in the same pixel but with different normals and different depths. To correctly shade the pixel you need to sample the red triangle and shade it according to its normal and depth (and BRDF), then sample the blue triangle and shade its lighting separately, as the lighting for each triangle could be completely different, and then combine the two signals into one "pixel". Even analytic AA isn't "correct", as you can't combine the signals *before* shading unless you start paying that prefiltering cost (less error, more memory cost).

Analytic shading can be done via exact evaluation of convolution integrals, as in one of the authors' previous papers, which goes on to serve as the basis for non-sampled anti-aliasing. The authors also argue that their results are comparable to reference results, which usually require making the standard rasterization pipeline evaluate a number of samples approaching infinity ...
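In general form, and with notation of my own choosing rather than the paper's, such a convolution over the visible regions looks like:

```latex
% I(p): filtered pixel value at pixel centre p
% A_i(p): visible region of primitive i inside the filter support around p
% f: pixel reconstruction filter, c_i: primitive i's shaded colour
\[
I(\mathbf{p}) \;=\; \sum_{i} \int_{A_i(\mathbf{p})} f(\mathbf{x}-\mathbf{p})\, c_i(\mathbf{x})\, d\mathbf{x}
\]
```

With a box filter and a colour that is constant over the pixel, this reduces to the area-weighted sum discussed earlier in the thread.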

Ultimately, to match "reference" rendering you need 256 samples per pixel. To match what a camera sees, well the highest end ones put out a linear 16bit signal, thus 64k samples per pixel for perfect "photorealism" as each of those photons potentially came from a completely different pathway. Which is why I appreciate TAA and don't think it's going away anytime soon. We're not even getting close to reference rendering in the near future, but at least with TAA we can get a few more samples that might be "close enough" to correct.

Even 256 samples per pixel isn't good enough when moiré pattern artifacts (spatial aliasing) still visibly show up, as detailed in the analytic visibility paper. 256 samples might be enough to eliminate any visible geometric aliasing, but it's not effective against other sources of spatial aliasing. Doing just a couple hundred samples won't give you the 'exact' visibility; analytic visibility, on the other hand, will net you the 'exact' visibility, which involves computing the boundaries of the visible regions of the primitives ...

I sincerely hope that TAA, or any other temporal technique, doesn't ultimately end up being the inevitable future, because art pipelines would be simpler without them, which translates to a productivity advantage. I just hope that the industry's exploration of that area will stop altogether; it was mostly a result of finding workarounds or hacks for the limitations of older, inferior, and weaker systems. I truly believe that TAA is a regression in the productivity of art pipelines ...
 

Because now we're stuck in a never-ending task of fine-tuning and maintaining a solution that won't work in a potentially infinite number of cases, and constantly refitting your solution to handle different cases isn't a productive use of time. I really hope that TAA will ultimately just end up being on the wrong side of history and that we'll eventually forget about it in the coming generation. Art pipelines shouldn't be handicapped by such hacks in an ideal future and should evolve towards ease of authoring more content ...
 
Because now we're stuck in a never-ending task of fine-tuning and maintaining a solution that won't work in a potentially infinite number of cases, and constantly refitting your solution to handle different cases isn't a productive use of time.
You mean like placing probes, fill lights, and even portals manually, while keeping draw distance limited, watching triangle counts, number of shaders, RAM usage, etc., etc., etc...
I guess limiting TAA artifacts is mostly the work of a few programmers and some technical artists, and it's pretty much nothing in comparison to the above? What exactly puts a limitation on content authoring?
I really hope that TAA will ultimately just end up being on the wrong side of history
AA may change, but temporal accumulation is not wrong, while ignoring available data is. So TAA will be mostly remembered as progress in the future, not as a failure. I doubt the concept will vanish completely from image processing.
 
Really? We know such (uniform?) hysteresis is wanted scene-wide?
Not sure if that's an argument, because real life is a gradual process of smooth change too. Would you call a sunset hysteresis, just because it happens over some period of time?
Would you call any physics simulation we do wrong too, because it works by taking a previous state and integrating the change over a timestep? Eventually improving cached contact forces over multiple frames? Surely not.

So why is TAA different here, why is it bad or wrong, although it works the same way?
The only answer can be subjective perception of error. But the success of TAA implies only a minority is affected. Still, that's a problem, so what would you propose as an alternative?
 
AA may change, but temporal accumulation is not wrong, while ignoring available data is. So TAA will be mostly remembered as progress in the future, not as a failure. I doubt the concept will vanish completely from image processing.

Heck, why not just concentrate on making TAA better? Epic's goal for their new TAA upsampling is to have the default values work for all content. I imagine AMD's Super Resolution will be much the same.

And it's not like progress isn't being made on quashing artifacts. Resident Evil Village looks pretty damned amazing, and they're temporally upsampling to 4K on next-gen. TAA just solves too many kinds of aliasing too well not to like it.
 