Did anyone ever try uniting Wu AA and Sample-based AA?

Arun

Hello everyone,

Quick question here...
Did anyone ever try to unite Wu Antialiasing (also known as Analytical Antialiasing) with sample-based Antialiasing?
If so, could you give a link or a brief explanation of the idea?

I've personally been trying an idea which sounds quite interesting, but I'd be surprised if no one thought of it before me. I'd hate to do something someone else already did :)


Uttar
 
Analytical AA will always have problems with IMR's.

Of course, you may be able to combine it with sample-based AA such that the sample-based tech ensures a "minimum AA quality" regardless of the rendered scene.

But, off hand, it seems that it's really too complex to be worth it. Not that it couldn't be done, but it would probably be easier to just implement better sample-based techniques.
 
Chalnoth said:
Analytical AA will always have problems with IMR's.

Of course, you may be able to combine it with sample-based AA such that the sample-based tech ensures a "minimum AA quality" regardless of the rendered scene.

But, off hand, it seems that it's really too complex to be worth it. Not that it couldn't be done, but it would probably be easier to just implement better sample-based techniques.

Eh, if only everyone thought like you, I'd get to be the first one to think of these things...
I haven't implemented it yet, so it could be a gigantic failure, but the more I think about it, the more it makes sense.

It isn't about maintaining a minimum AA quality, either. And yes, it does store everything as samples, so there shouldn't be any Z-related problems.

It might cost a fair bit of transistors in Triangle Setup, though - but I think it might be worth it. You don't even really need real RGSS with it anymore, so you also save some transistors on that front. Not sure how important that is.


Uttar
 
Is it anything like this, except with the color weighting retirement issue fixed for transparency?

The handling of transparent edges could easily break in what I outlined, unless it also retired completely covered triangle edge samples: updating the color and transparency weighting of the "covering" sample at the same time as it decides not to add the covered color as a distinct triangle sample. Then the limit of 4 samples I proposed should still retain the advantages of analytical AA even when more than 4 primitives are involved, even in situations of transparency.

It should still always do better than 4x sampled AA, since the error for non-transparent edge interaction would never be worse than for a traditional 4-sample solution.

If your idea addresses any flaws you see in my approach, or if contrasting against it would help the explanation, could you explain your idea in the context of some of the things I propose there (if it isn't too jumbled to follow)?
 
Demalion: Not really anything similar.
My method isn't revolutionary. It's evolutionary. I still *kinda* use "samples".

It might be rather easy to add this system to a MSAA/SSAA architecture.

As for where the cost for this method lies... It's all about putting in more transistors for Triangle Setup.

I'd rather not explain exactly what I'm doing just yet, because I'm not quite sure it's all that good. I haven't finished implementing the algorithm, and right now it's rather expensive per sample (not in memory, but in computation)...
I've got a few ideas to optimize it, but I'm not sure they'll work...


Uttar
 
Uttar, you say you'd hate to do something someone else already did, but if you don't explain your idea, how can anyone know if it's already been done? Anyway, you've got me curious, so please post the idea when it's ready.
 
How are you going to deal with storage? Let's say you want optimal quality with up to 4 surfaces per pixel ... without dynamically allocated storage you are looking at 12 edge equations per pixel.
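To make that arithmetic concrete, here is a small sketch of the storage estimate MfA is pointing at. The bit widths and coefficient counts are my own illustrative assumptions, not numbers from the thread.

```python
# Rough storage estimate: analytic coverage with up to N surfaces per
# pixel needs N * 3 edge equations (one per triangle edge), each with
# three coefficients (a*x + b*y + c). Bit width is an assumption.

def per_pixel_bits(max_surfaces=4, edges_per_surface=3,
                   coeffs_per_edge=3, bits_per_coeff=16):
    """Bits of edge-equation storage per pixel, worst case."""
    edge_eqs = max_surfaces * edges_per_surface          # 4 * 3 = 12
    return edge_eqs * coeffs_per_edge * bits_per_coeff   # 12 * 3 * 16 = 576

print(per_pixel_bits())  # 576 bits per pixel under these assumptions
```

Even at modest precision, that is far more per-pixel state than a fixed 4-sample color/Z layout, which is the point of the objection.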
 
MfA: I'm going to deal with storage using samples.

Okay, so as 3dcgi said, it's hard to know whether someone already thought of it if I don't say what it is :) So let me explain.

In current MSAA/SSAA approaches, you've got a specific number of samples on a grid. The positions of those samples are *static*, and thus they are often very good at some angles and less good at others.

My solution is to add a little something: cheating.
I'd use Analytical Antialiasing on a *per pixel* basis ( not per sample ) to determine coverage. Then, I'd use that info to get the most correct number of samples, in the best possible positions too.

The result? You're going to use the four samples a lot better, and it should look as good in all situations - that means it could, for example, beat ATI's RGSS patterns in their bad cases.
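A toy sketch of the idea as I read it (this is an interpretation of the description above, not Uttar's actual algorithm): compute the analytic coverage of an edge over one pixel, then spend the pixel's fixed sample budget in proportion to that coverage.

```python
# Per-pixel analytic coverage driving sample allocation (interpretive
# sketch). Coverage is approximated here by fine sub-sampling; a real
# implementation would evaluate the edge equation analytically.

def edge_coverage(a, b, c, px, py, res=16):
    """Fraction of unit pixel (px,py) on the inside
    (a*x + b*y + c >= 0) of an edge."""
    inside = 0
    for i in range(res):
        for j in range(res):
            x = px + (i + 0.5) / res
            y = py + (j + 0.5) / res
            if a * x + b * y + c >= 0:
                inside += 1
    return inside / (res * res)

def samples_for_edge(a, b, c, px, py, budget=4):
    """Decide how many of the pixel's samples the covered side earns."""
    cov = edge_coverage(a, b, c, px, py)
    return cov, round(cov * budget)

cov, n = samples_for_edge(1.0, 0.0, -0.5, 0.0, 0.0)  # edge at x = 0.5
print(cov, n)  # 0.5 coverage -> 2 of 4 samples
```

The open question raised later in the thread applies directly to this sketch: the allocation is per primitive, while the pixel's stored samples must be shared by every primitive that touches it.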

As I said, it's not revolutionary. But I believe it is a most interesting evolution.

So, do you know if anyone ever tried this?


Uttar
 
I don't know if anyone tried this, but I have a question: once you've determined a "best pattern" for a primitive, what would you do if that pattern is very bad for the next primitive on the same pixel?
 
pcchen said:
I don't know if anyone tried this, but I have a question: once you've determined a "best pattern" for a primitive, what would you do if that pattern is very bad for the next primitive on the same pixel?
I investigated such a scheme some time ago and found it was fraught with problems.

Probably the 'best' published approach that works along similar lines is the "Exact" algorithm (invented, I think, by Schilling), but even that has flaws according to SGI.
 
Uttar said:
My solution is to add a little something: cheating.
I'd use Analytical Antialiasing on a *per pixel* basis ( not per sample ) to determine coverage. Then, I'd use that info to get the most correct number of samples, in the best possible positions too.
So how would you keep track of where you put each sample? You need to keep track of this data else intersecting polygons won't look correct.
 
OpenGL guy said:
Uttar said:
My solution is to add a little something: cheating.
I'd use Analytical Antialiasing on a *per pixel* basis ( not per sample ) to determine coverage. Then, I'd use that info to get the most correct number of samples, in the best possible positions too.
So how would you keep track of where you put each sample? You need to keep track of this data else intersecting polygons won't look correct.

Let me emphasize part of my quote:
and in the best possible position too


The idea is that you've pretty much got an ordered grid, although it's more logical to see all of this as "rectangles" instead of "points" in this case. I guess that doesn't matter much, though.

The real hard part in the whole algorithm is "getting the best possible position" - it seems the only way to get that is the brute force approach, and that's frankly quite expensive.

My guess is that for a real-time system, you wouldn't care too much if it isn't *exactly* the very best position for the samples; a good one might be sufficient. Or so I hope, because otherwise I'm going to suffer implementing this efficiently...


Uttar

EDIT:
P.S.: Sorry, sounds like I didn't exactly understand your question at first.
Using this algorithm, you could still describe a mode as "4x Antialiasing" or "9x Antialiasing" - the 4 & 9 still represent the number of Color & Z values. Both Z Compression and Color Compression are thus still useful.

The method does have a disadvantage: it wouldn't work very well with a number of samples that can't be split evenly between horizontal and vertical ( 6x AA or 8x AA, for example ). It would work, but I fear there might be some minor quality problems.

The algorithm's goal is simply to say "bye-bye" to bad angles. Sure, with ATI's 4x AA grid, for example, you don't have a lot of them.
But as the number of samples increases, you have to make more sacrifices: if you want excellent quality at the most common angles, you're not going to benefit from the high sample count at rarer angles.

The goal of this algorithm is to get maximum quality with all angles. That's all.


Uttar
 
I think what OpenGL guy means is how does the second triangle know what sample points to use for the z test? It is possible for each triangle to have different ideal patterns, but the same pattern must always be used for a pixel. Do you plan to have a limited number of possible patterns stored in a table so you store an index? These questions assume an immediate mode of operation.
 
I understood what he meant. Sorry if I wasn't very clear in my explanation, but I don't see any way to make it clearer.

I'll release a test program Sunday if all goes as planned, just to prove that it works.
The hard part, I fear, is making it fast.


Uttar
 
3dcgi said:
I think what OpenGL guy means is how does the second triangle know what sample points to use for the z test? It is possible for each triangle to have different ideal patterns, but the same pattern must always be used for a pixel. Do you plan to have a limited number of possible patterns stored in a table so you store an index? These questions assume an immediate mode of operation.

Yes, and the problem then occurs that once you have chosen a pattern for a pixel that is 'ideal' for one primitive (i.e. one that has an edge crossing the pixel), it might turn out to be the absolute worst case for the next edge that crosses the pixel. You can't change your mind, because you can't go back and resample the previous primitives.
 
I don't quite see what this has to do with Wu's line anti-aliasing or analytical anti-aliasing (not quite the same thing).
 
MfA said:
I don't quite see what this has to do with Wu's line anti-aliasing or analytical anti-aliasing (not quite the same thing).

Hmm, now that you mention it, it might not be the best way to say it.

The way it resembles Wu Antialiasing is that Wu's idea is to determine coverage ( using the error value; not exactly that here, since I use FP, but it's the same idea ), then use that coverage to calculate a color value for the pixel.

Here, instead of using that to calculate a color value, you use that to calculate the number of samples.
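A minimal sketch of the contrast being drawn here (my paraphrase, with illustrative names): in Wu-style line antialiasing, the fractional distance of the line from the pixel row is a coverage weight used to blend color between two adjacent pixels; the variation described above feeds that same coverage into a sample count instead of a blend weight.

```python
import math

def wu_pixel_weights(x, y_exact):
    """For an x-major line at exact height y_exact over column x,
    return the two pixels touched and their coverage weights
    (Wu's 'error' term, expressed here in floating point)."""
    y = math.floor(y_exact)
    frac = y_exact - y
    return [((x, y), 1.0 - frac),   # pixel below the line
            ((x, y + 1), frac)]     # pixel above the line

def coverage_to_sample_count(coverage, budget=4):
    """The variation: coverage picks a sample count, not a blend."""
    return round(coverage * budget)

for pixel, w in wu_pixel_weights(3, 7.25):
    print(pixel, w, coverage_to_sample_count(w))
```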

Anyway, I can understand your answers. That's pretty much what I had expected, considering the strangeness of my idea. But talking of changing patterns and stuff isn't exactly correct. The idea here is that you're using OGSS in *all* circumstances. So that's not the problem.
You're using OGSS, and you're doing some nifty & evyl cheating :)

I sincerely think that nothing mentioned in this thread means my approach won't work as intended.
What I do fear, after reading it however, is that many optimization opportunities are simply bad for IQ.

And then the real problem arises: it's probably way too slow ( or expensive to implement, transistor-wise )
Ah well... I'll implement it anyway, and who knows, maybe a miracle will happen...


Uttar
 
Yes, it is worth looking into for those willing, but I still think that there would be too many special case scenarios for it to make much sense in a hardware implementation.
 
Uttar:

When you get the basic algorithm up and running, try this case: 2 polygons that share an edge, in front of 2 other polygons that also share an edge, such that the shared edges of the polygon pairs cross and a portion of the shared edge of the back polygon pair is in view. Try with both opaque and transparent polygons. If it works correctly (record a few frames with a little polygon movement and see what actually happens), you may be on to something.

The next logical step would then be 2 (non-convex) polygon meshes with one in front of the other, possibly intersecting each other as well. If you can make this work without glitches as well (which likely requires tweaking and detection/handling of various special cases), the scheme may be usable for production use.
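For concreteness, here is one way to set up the first stress-test scene described above as data. The specific coordinates and depths are my own illustrative choices, not from the post.

```python
# Two triangle pairs, each pair sharing an edge. The front pair's shared
# edge (vertical, x = 0) crosses the back pair's shared edge (horizontal,
# y = 0), and the back edge extends past the front pair so part of it
# stays in view. Vertices are (x, y, z) with smaller z in front.

front = [  # shares the edge (0,-1)-(0,1) at depth z = 0.3
    [(-1.0, -1.0, 0.3), (0.0, -1.0, 0.3), (0.0, 1.0, 0.3)],
    [(0.0, -1.0, 0.3), (1.0, 1.0, 0.3), (0.0, 1.0, 0.3)],
]
back = [   # shares the edge (-2,0)-(2,0) at depth z = 0.7
    [(-2.0, -1.5, 0.7), (2.0, 0.0, 0.7), (-2.0, 0.0, 0.7)],
    [(-2.0, 0.0, 0.7), (2.0, 0.0, 0.7), (2.0, 1.5, 0.7)],
]

# Render once opaque, once with alpha < 1, jittering vertices slightly
# across a few frames to see how the shared edges resolve.
for alpha in (1.0, 0.5):
    print(len(front) + len(back), "triangles, alpha =", alpha)
```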
 