Alternative AA methods and their comparison with traditional MSAA

I really can't see that working, but I also honestly never even considered it. I think you really need to work on a native size buffer for the edge detection step at least. (It may make sense to use a downscaled buffer for sampling during the blur stage)

Regarding the luminance edge detection problems, grandmaster sent me some uncompressed images and I found what looks like confirmation here:
saboteur-PS3-025_maa.jpg

As you can see the edges are detected on the yellow background, but not the red one (which has a somewhat similar luminance to the tires).
Now, the interesting thing is that this aliasing is also quite hard to spot in the normal size shot. I guess it's not such a big drawback because we're better at seeing luminance aliasing.
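
For what it's worth, here's a quick back-of-the-envelope check with the standard Rec.601 luma weights (an assumption on my part; I don't know which weights the game actually uses) showing why the red background would slip under a luminance-only threshold while the yellow one wouldn't:

```c
#include <stdio.h>

/* Rec.601 luma approximation -- an assumption for illustration,
   not necessarily what the game uses. */
static float luma(float r, float g, float b)
{
    return 0.299f * r + 0.587f * g + 0.114f * b;
}

int main(void)
{
    float yellow = luma(255, 255, 0);  /* ~226 */
    float red    = luma(255,   0, 0);  /* ~ 76 */
    float tire   = luma( 60,  60, 60); /* ~ 60, dark grey rubber */

    /* Yellow vs. tire: ~166, easily over any sane edge threshold.
       Red vs. tire: ~16, likely under it, so no edge is detected
       and no blending happens. */
    printf("yellow vs. tire delta: %.0f\n", yellow - tire);
    printf("red vs. tire delta:    %.0f\n", red - tire);
    return 0;
}
```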
 
That 'simple' technique is very intriguing; it will be interesting to see the real effect in a better shot. If it turns out to be that good, it opens up interesting scenarios for PS3 development.
 
Now, the interesting thing is that this aliasing is also quite hard to spot in the normal size shot. I guess it's not such a big drawback because we're better at seeing luminance aliasing.

That's a cool point. The algorithm has trouble spotting it, and so do we. :smile:
 
I really can't see that working, but I also honestly never even considered it. I think you really need to work on a native size buffer for the edge detection step at least. (It may make sense to use a downscaled buffer for sampling during the blur stage)

Regarding the luminance edge detection problems, grandmaster sent me some uncompressed images and I found what looks like confirmation here:
saboteur-PS3-025_maa.jpg

As you can see the edges are detected on the yellow background, but not the red one (which has a somewhat similar luminance to the tires).
Now, the interesting thing is that this aliasing is also quite hard to spot in the normal size shot. I guess it's not such a big drawback because we're better at seeing luminance aliasing.

Ah, very interesting, that does kinda confirm it. I think a downsized buffer would work if you didn't muck with the numbers. So don't do a fancy downscale with averaging or whatever; perhaps just skip every other line. So maybe DMA rows 0, 2, 4, 6, etc. to the SPUs and leave the data untouched. That would be a half-sized buffer, but edges remain edges. It would be sloppier, but I think it would work. Maybe it's not necessary though, since their luminance buffer could be just one byte per pixel anyway, so sending the full luminance buffer might be just fine. Either way, using luminance is definitely an interesting idea. Cool :)
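
Just to make the row-skipping idea concrete, it would be something like this (purely a sketch of the concept, not anyone's actual code; a real SPU version would DMA each selected row into local store rather than memcpy):

```c
#include <stdint.h>
#include <string.h>

/* Build a half-height working buffer by taking rows 0, 2, 4, ...
   verbatim. No averaging, no filtering -- pixel values are left
   untouched, so a luminance step that was an edge in the full
   buffer is still the same step in the half buffer. */
void half_height_copy(const uint8_t *src, uint8_t *dst,
                      int width, int height)
{
    for (int y = 0; y < height; y += 2)
        memcpy(dst + (y / 2) * width, src + y * width, (size_t)width);
}
```

The trade-off is exactly the sloppiness mentioned above: near-horizontal edges lose half their vertical resolution, so the reconstructed edge positions get coarser.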
 
I really can't see that working, but I also honestly never even considered it. I think you really need to work on a native size buffer for the edge detection step at least. (It may make sense to use a downscaled buffer for sampling during the blur stage)

Regarding the luminance edge detection problems, grandmaster sent me some uncompressed images and I found what looks like confirmation here:
saboteur-PS3-025_maa.jpg

As you can see the edges are detected on the yellow background, but not the red one (which has a somewhat similar luminance to the tires).
Now, the interesting thing is that this aliasing is also quite hard to spot in the normal size shot. I guess it's not such a big drawback because we're better at seeing luminance aliasing.

Would be interesting to see in motion, as in that shot my eye is instantly drawn to all the aliasing; there's quite a bit, actually. And that's just with a quick glance...

Usually things get far worse once you see it in action and can see the aliasing "crawl"...

I'll have to see if anyone I know with a PS3 will be picking this up. I absolutely LOVE it when devs experiment with different ways of doing AA (enough that I unfortunately got an HD 2900 XT instead of an 8800 GTX :???: just due to the new forms of AA they were trying out).

If I had a PS3, just the fact they are trying something different with AA would make this an instant buy for me. :)

Regards,
SB
 
Thinking about it some more, they must be using Z along with luminance, otherwise wouldn't their method blur any texture that had luminance variance within it, whether it was an edge or not? Then again, maybe that's why their game has that overall soft look to it.
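
If they are gating on Z as speculated here, the per-pixel test might look something like this (purely hypothetical on my part, not confirmed to be what the game does): only treat a pixel as an edge when both luminance and depth jump, so high-contrast texture detail on a flat surface is left alone.

```c
#include <math.h>
#include <stdbool.h>

/* Hypothetical combined test -- NOT confirmed to be what the game
   does. A luminance step alone could just be texture detail; a
   luminance step that coincides with a depth step is much more
   likely to be a real geometric silhouette. */
bool is_geometric_edge(float luma_a, float luma_b,
                       float depth_a, float depth_b,
                       float luma_thresh, float depth_thresh)
{
    return fabsf(luma_a - luma_b) > luma_thresh &&
           fabsf(depth_a - depth_b) > depth_thresh;
}
```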
 
This guy claims to be an ex-programmer for Pandemic and had this to say regarding the AA.

Zeenbor said:
It's like any other image-space AA filter out there, except it works off the luminance of the color buffer instead of the depth buffer to generate edges. The SPU version has more passes to generate better edge masks. Don't know if I can go in more detail than that.

Source: GAF
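
Taking that description at face value, the basic single-pass version of such an edge mask presumably looks something like this (a minimal sketch; the threshold and the 4-neighbour test are my guesses, and the SPU version reportedly uses more passes to refine the mask):

```c
#include <stdint.h>
#include <stdlib.h>

/* Single-pass luminance edge mask over an 8-bit luma buffer.
   The neighbour test and threshold are assumptions made for
   illustration, not the game's actual parameters. */
void edge_mask(const uint8_t *luma, uint8_t *mask,
               int width, int height, int threshold)
{
    for (int y = 0; y < height - 1; ++y) {
        for (int x = 0; x < width - 1; ++x) {
            int c  = luma[y * width + x];
            int dx = abs(c - luma[y * width + x + 1]);
            int dy = abs(c - luma[(y + 1) * width + x]);
            mask[y * width + x] =
                (dx > threshold || dy > threshold) ? 255 : 0;
        }
    }
}
```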
 
Would be interesting to see in motion, as in that shot my eye is instantly drawn to all the aliasing; there's quite a bit, actually. And that's just with a quick glance...

Usually things get far worse once you see it in action and can see the aliasing "crawl"...


This guy claims to be an ex-programmer for Pandemic and had this to say regarding the AA.

That link was posted on the previous page.
 
Looking at it in motion, I think it's a real leap. It is fair to say that the general make-up of the game means that the 0xAA on 360 isn't at all ugly, but side-by-side with the PS3 version, it's amazing just how smooth this technique looks. It's a blend, not a blur, and it's hugely impressive with just a few "odd" artefacts.
 
Thinking about it some more, they must be using Z along with luminance, otherwise wouldn't their method blur any texture that had luminance variance within it, whether it was an edge or not? Then again, maybe that's why their game has that overall soft look to it.
Well, normally when you do some kind of morphological AA you are very conservative about what you consider an edge (the staircase patterns), so that shouldn't happen too often in textures, and when it does it's usually something you want to filter anyway. You can of course get sampling issues where lines that shouldn't be connected are connected and things like that (again with subpixel features).


Now, I'm not saying that screen-space AA techniques are a cure-all -- I've worked with them too much for that. There are a number of problems that have been lamented in the relevant literature since the 90s at least, and I'll try to summarize them here:

The one that's simplest to see, and has already been mentioned in this thread, occurs whenever you have a feature of sub-pixel size: you get exactly as much flickering as you would without any AA, since the edge detection doesn't have anything to work with.
This is a general problem with the method, but it doesn't decrease IQ below the previous state - it just doesn't improve it.

The second is harder to see (only in videos), but arguably a bigger inherent problem since it could decrease perceived image stability. What happens is that a slight (1 pixel) change can affect how a whole edge is interpreted. So what you get is a very different edge from one frame to the next, and maybe flickering back and forth between those states. (As opposed to only a single pixel flickering without any AA)

The third problem is not inherent to the technique, but only to the specific implementation used here. Since they use only luminance, they miss hue/saturation edges (see my previous post). I think this is not a bad trade-off on current hardware, since I also had a rather difficult time finding those edges at native resolution. It's also a problem that could be solved easily, at some additional cost, by also looking for hue/saturation edges.

Then there's a fourth problem with this particular version that I don't get at all: they filter the UI. This has some really ugly effects on things like circles and fonts, and is completely unnecessary.

Anyway, the edge quality for sufficiently large feature sizes is still superb particularly considering the computational cost, if you can live with the minor drawbacks.
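
To make the blend concrete (a generic morphological-AA-style recipe, simplified by me; I'm not claiming this is Pandemic's actual maths): once you know the length of a staircase step, each pixel along it gets blended with its across-the-edge neighbour by the coverage the ideal, non-aliased edge would have in that pixel.

```c
/* Blend weight for pixel i (0-based from the step's corner) on a
   staircase step of total length len, assuming the ideal edge
   crosses the step linearly. Simplified: real morphological AA
   classifies L/Z/U edge shapes and integrates exact trapezoid
   areas per pixel. */
float coverage_weight(int i, int len)
{
    float w = 0.5f - (2.0f * i + 1.0f) / (2.0f * len);
    return w > 0.0f ? w : 0.0f;
}
```

Pixels near the corner get up to half of the neighbouring colour and pixels past the midpoint are left untouched, which is why the result reads as a blend rather than a uniform blur.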
 
Can we have an in-depth analysis or a dev interview on this in the near future?
It'd be really interesting to hear about this blend technique and its cost.
 
This could be a must-have feature in the next generation of consoles, though. Combined with at least 2x (but preferably 4x) MSAA, it could almost completely eliminate aliasing, which is one of the main differences between real-time and offline CG image quality.
Hardware engineers should pay very close attention to this issue so that they don't design an architecture that works against its efficient implementation.
 
Can we have an in-depth analysis or a dev interview on this in the near future?
It'd be really interesting to hear about this blend technique and its cost.

Going by what was said on NeoGAF, I wouldn't think we'd get much more than our discussion here.
 
That's a cool point. The algorithm has trouble spotting it, and so do we. :smile:
Well, it makes sense. Jaggies are a function of contrast; if contrast is low, we won't perceive the colour transitions. The question I have is what edge detection they are doing, and why they can't roll hue or RGB into the equation to catch colour aliasing?
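
Rolling the colour channels in needn't be expensive, either. One obvious option (my assumption of how it could be done, not how the game does it) is to take the maximum per-channel difference instead of a luma difference, which would catch the red-next-to-similar-luma case for a few extra operations per pixel:

```c
#include <stdint.h>
#include <stdlib.h>

/* Edge metric on the max per-channel difference. Catches
   iso-luminant edges (e.g. red against a grey of similar luma)
   that a pure luma metric misses. Purely illustrative. */
int max_channel_delta(const uint8_t a[3], const uint8_t b[3])
{
    int d = 0;
    for (int c = 0; c < 3; ++c) {
        int e = abs((int)a[c] - (int)b[c]);
        if (e > d) d = e;
    }
    return d; /* compare against the edge threshold */
}
```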

There's also the question of the future of AA in hardware. This works so well, what changes should happen on GPUs to support advanced edge-dependent blends in future?
 
There's also the question of the future of AA in hardware. This works so well, what changes should happen on GPUs to support advanced edge-dependent blends in future?

The SIGGRAPH '09 presentation from Yang at AMD on edge detection and using shaders for AA (in addition to an MSAA'd buffer) is rather interesting.

Of course, the one factor that may or may not be prohibitive is simply the number of edges in a given scene. An analytical AA algorithm (Quad-A titles, here we go!) will compute as many edges as are needed; instead of needing more bandwidth, it needs more maths.
 
There's also the question of the future of AA in hardware. This works so well, what changes should happen on GPUs to support advanced edge-dependent blends in future?
I think it's funny that there is suddenly such excitement about this.

For example, there is a '99 paper by Isshiki and Kunieda ("Efficient anti-aliasing algorithm for computer generated images") that describes a technique "suitable for low-cost hardware implementation". My own algorithm, intended for an OpenCL implementation, is based on this and somewhat similar, but actually a bit more complicated (and computationally intensive), to deal a bit better with more edge (haha) cases.
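
For anyone curious what that family of algorithms looks like in practice, the step that distinguishes it from a dumb blur is walking along a detected edge to measure the staircase run length, which then drives the blend weights. A bare-bones sketch (my own simplification, not the paper's pseudocode):

```c
#include <stdint.h>

/* Walk right along a horizontal edge in the mask until the
   staircase steps to the next row; the run length is what the
   coverage/blend computation needs. Bare-bones sketch only. */
int run_length_right(const uint8_t *edge_mask, int width,
                     int x, int y)
{
    int len = 0;
    while (x + len < width && edge_mask[y * width + x + len])
        ++len;
    return len;
}
```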
 
There are papers that go back even further, into the early 90s; like many other techniques described (even earlier) in the literature, these ideas are only now being used in real-time interactive situations. :)
 
Am I right in assuming this summarises as "an AA solution that uses the CPU rather than GPU RAM"?

If so it's actually pretty fascinating. If not... wha?

Digital Foundry has published an article comparing the two versions of the game, and I made the following reply to the post above. I'm wondering if it's an accurate assumption.

It might have something to do with Cell being a CPU/GPU hybrid. It's possible this same method could work on DX10 or DX11 GPUs, and I'd think the reason it's not on the 360 is that Xenon is exclusively a general-purpose CPU, while Xenos hasn't got the extra oomph or feature set.

It won't work on just any CPU, basically, or just any GPU for that matter. Just a guess.
 