The Perfect Anti-Aliasing

Reverend said:
What would that be?
Too many variables.

What display device are you using? Since the "light image" is reconstructed by the display from the rendered pixel values, you must take it into account when computing those values.

What would that take?
It would take prefiltering of your object data so that it was (at least) below the Nyquist limit (minus a bit to allow for imperfect reconstruction by the display device) before sampling at the pixel level.

Unfortunately, prefiltering of the object data is, well, probably infeasible.

Supersampling of the original object data is not a solution for perfect AA because edges in the data => infinite frequencies => you need an infinite number of samples.

At, say, 1024x768, what would that be?
A practical (near- but not entirely perfect) AA would probably get away with 128x stochastic (i.e. Poisson disc) supersampling with, say, a windowed sinc function filter. For your example, that'd be roughly 100 million samples (1024 x 768 x 128).
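
Very roughly, the per-pixel resolve could look like the sketch below. Stratified jitter stands in for a true Poisson-disc pattern, a Lanczos window stands in for the windowed sinc, and shadeSample() is a hypothetical hook into the renderer, so treat it as an illustration rather than the actual recipe:

Code:
// A minimal sketch only: stratified jitter instead of a true Poisson-disc
// pattern, Lanczos as the "windowed sinc", and shadeSample() as a
// hypothetical hook returning the scene radiance at a sub-pixel point.
#include <cmath>
#include <cstdlib>

static const double PI = 3.14159265358979323846;

double sinc(double x) { return x == 0.0 ? 1.0 : std::sin(PI * x) / (PI * x); }

// Lanczos-windowed sinc with support 'a' pixels.
double lanczos(double x, double a = 2.0)
{
    return std::fabs(x) < a ? sinc(x) * sinc(x / a) : 0.0;
}

double shadeSample(double x, double y);   // hypothetical renderer hook

// Resolve one pixel from roughly 128 jittered sub-pixel samples. The filter
// is truncated to the pixel footprint here; a full resolve would also gather
// samples from neighbouring pixels over the whole filter support.
double resolvePixel(int px, int py)
{
    const int grid = 11;                  // 11 x 11 = 121, roughly 128 samples
    double sum = 0.0, wsum = 0.0;
    for (int j = 0; j < grid; ++j)
        for (int i = 0; i < grid; ++i)
        {
            double sx = (i + std::rand() / (double)RAND_MAX) / grid - 0.5;
            double sy = (j + std::rand() / (double)RAND_MAX) / grid - 0.5;
            double w  = lanczos(sx) * lanczos(sy);
            sum  += w * shadeSample(px + 0.5 + sx, py + 0.5 + sy);
            wsum += w;
        }
    return sum / wsum;
}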

How can shaders help?
Pass.
 
I'm thinking 1024x768 is a little low. I think the upper-middle folks are mostly running higher than that. I certainly run higher than that, and the fixed-pixel folks are mostly stuck higher than that. I wouldn't be aiming at 16x12 (1600x1200) yet, but possibly by the time you have the other pieces in place, 16x12 would be what you'd want to be shooting for.

Or, if you really wanted something au courant, you'd shoot for 1280x720 for the HD folks.
 
Simon F said:
Supersampling of the original object data is not a solution for perfect AA because edges in the data => infinite frequencies => you need an infinite number of samples.

In principle, yes, but given the rather limited acuity of the human eye, there is some amount of supersampling beyond which the image is as perfect as the eye can perceive. Since this ultimate sampling limit depends on the resolution (both spatial and color) of the display device, and the original question specified a resolution, I suppose this is the sampling limit that was being asked about.

Unfortunately, I have no idea how much supersampling is enough for 1024x768 + 24bpp + average human vision. But I bet this can be calculated with Fourier optics + some experimental data on human perception.
 
Couldn't they subvert this problem by enabling higher resolutions? I would think a resolution of 3200x2400 should be sufficient for the naked eye, and at the same time they could concentrate the extra horsepower (i.e. no FSAA) in other areas. Just a wild thought. :devilish:
 
RingWraith said:
Couldn't they subvert this problem by enabling higher resolutions?
Unfortunately, these monitor-making people are not going to be forced into that anytime soon.


Anyway, if we're working with some sort of tiled renderer, I would think that SSAA would be a good option. It consumes more processing power than MSAA, but it also doesn't break any effects. Plus, in a tiled system, the extra memory required could all be in cache instead of doubling, quadrupling or whatevering framebuffer demands from RAM.

Even plain 4x SSAA would be sufficient for most people. Ideally we'd want much more advanced tech, but graphics is all about spending power where it will be most noticeable.
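
As a rough sketch of the tiled idea (renderTile() is a hypothetical hook that rasterises one screen tile at 2x2 the output resolution into a small buffer that would stay on chip; the tile size and supersampling factor are just picked for the example):

Code:
// Sketch of the tiled-SSAA idea, not any particular chip's pipeline.
#include <vector>

const int TILE = 32;   // output tile size in pixels (assumed)
const int SS   = 2;    // 2x2 sub-samples per pixel = 4x SSAA

// hypothetical hook: fills (TILE*SS)^2 * 3 floats for one tile
void renderTile(int tx, int ty, std::vector<float>& rgb);

// Box-filter one supersampled tile down into the framebuffer.
void resolveTile(int tx, int ty, std::vector<float>& framebuffer, int fbWidth)
{
    std::vector<float> ss(TILE * SS * TILE * SS * 3);
    renderTile(tx, ty, ss);

    for (int y = 0; y < TILE; ++y)
        for (int x = 0; x < TILE; ++x)
            for (int c = 0; c < 3; ++c)
            {
                float sum = 0.0f;
                for (int sy = 0; sy < SS; ++sy)
                    for (int sx = 0; sx < SS; ++sx)
                        sum += ss[((y * SS + sy) * TILE * SS + x * SS + sx) * 3 + c];
                framebuffer[((ty * TILE + y) * fbWidth + tx * TILE + x) * 3 + c]
                    = sum / (SS * SS);
            }
}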
 
It's too bad that the most important patents with respect to AA are held by Pixar. Simon is right that stochastic sampling with a sinc (or truncated sinc) function is the way to go (a Catmull-Rom in RenderMan land).

I wonder when the patents expire, and what other offline renderers do...
 
Tiler or not, doing supersampling on every shader for every pixel is a waste of processing power. Perhaps if SSAA could easily and efficiently be toggled on the fly as needed, it could be a decent solution.

MSAA works well (when there are no alpha tests or such), but I am hoping to see a smarter solution. Consider a sphere consisting of thousands of tiny polys. MSAA will anti-alias every internal edge on the surface, when actually just the outer edges (the sphere's horizon) need it. (I could be wrong here; I'm not sure on all the details of how multisampling works.)
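
Purely as an illustration of the "only where needed" idea, something like the sketch below could flag pixels sitting on a depth discontinuity (a likely silhouette) as candidates for extra samples. The threshold and buffer layout are assumptions, and a real heuristic would probably also look at normals:

Code:
// Illustrative only: flag pixels on a depth discontinuity so extra samples
// could be spent there rather than on every interior edge.
#include <vector>
#include <cmath>

std::vector<bool> findEdgePixels(const std::vector<float>& depth,
                                 int w, int h, float threshold = 0.01f)
{
    std::vector<bool> needsAA(w * h, false);
    for (int y = 0; y < h - 1; ++y)
        for (int x = 0; x < w - 1; ++x)
        {
            float d  = depth[y * w + x];
            float dx = std::fabs(d - depth[y * w + x + 1]);
            float dy = std::fabs(d - depth[(y + 1) * w + x]);
            if (dx > threshold || dy > threshold)
                needsAA[y * w + x] = true;   // candidate for extra samples
        }
    return needsAA;
}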

(edit)
Anyway, I'm not overly concerned about AA for now. IMO, just 2x AA at 1024x768 makes edge aliasing pale compared to many other artifacts in today's games.
 
SSAA has pretty good quality, but it takes huge amounts of memory, and so does MSAA.

Something like LightWave's antialiasing might be nice.
A simple accumulation buffer, perhaps with a stencil bit that tells what needs more work: after a z-first pass, render as many passes as you need with some jitter.

A constant amount of memory, with as many samples as the accumulation buffer's precision allows.

It's slow, but it should work like a charm in puzzle games.. ;)
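
Roughly, the loop would read like this sketch (renderSceneJittered() is a hypothetical hook that renders the frame with a sub-pixel offset baked into the projection; the jitter pattern here is just a simple grid for illustration):

Code:
// Sketch of the jittered accumulation idea above. Memory stays constant no
// matter how many passes are taken; only the buffer's precision limits the
// useful sample count.
#include <vector>
#include <cstddef>

void renderSceneJittered(float dx, float dy, std::vector<float>& rgb);

void accumulationAA(int width, int height, int numPasses,
                    std::vector<float>& result)
{
    std::vector<float> accum(width * height * 3, 0.0f);
    std::vector<float> pass(width * height * 3);

    for (int p = 0; p < numPasses; ++p)
    {
        // simple grid jitter for illustration; any well distributed pattern works
        float dx = ((p % 4) + 0.5f) / 4.0f - 0.5f;
        float dy = ((p / 4 % 4) + 0.5f) / 4.0f - 0.5f;
        renderSceneJittered(dx, dy, pass);
        for (std::size_t i = 0; i < accum.size(); ++i)
            accum[i] += pass[i];
    }
    result.resize(accum.size());
    for (std::size_t i = 0; i < accum.size(); ++i)
        result[i] = accum[i] / numPasses;
}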
 
What about calculating how much of the framebuffer pixel the pixel you're drawing will cover, and blending it into the framebuffer pixel based on the percentage it covers? Then have automatic depth sorting to ensure everything is blended properly, via some kind of per-pixel linked list of depth layers. Of course, this does nothing for AA within the objects, but you get really nice edges. :LOL:
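
A rough sketch of what that per-pixel list of depth layers might look like (essentially an A-buffer; the struct layout and the front-to-back coverage blend are my own assumptions, not a known hardware design):

Code:
// Fragments carry a coverage fraction; layers are kept sorted near-to-far
// and blended front to back.
#include <vector>
#include <algorithm>

struct Fragment
{
    float depth;      // eye-space depth
    float coverage;   // fraction of the pixel this fragment covers, 0..1
    float rgb[3];
};

struct Pixel
{
    std::vector<Fragment> layers;   // the "linked list" of depth layers

    void insert(const Fragment& f)
    {
        layers.push_back(f);
        std::sort(layers.begin(), layers.end(),
                  [](const Fragment& a, const Fragment& b)
                  { return a.depth < b.depth; });
    }

    // Blend front to back, weighting each layer by its coverage and by the
    // fraction of the pixel not yet covered by nearer layers.
    void resolve(float out[3]) const
    {
        float remaining = 1.0f;
        out[0] = out[1] = out[2] = 0.0f;
        for (const Fragment& f : layers)
        {
            float w = f.coverage * remaining;
            for (int c = 0; c < 3; ++c)
                out[c] += w * f.rgb[c];
            remaining *= 1.0f - f.coverage;
        }
        // whatever fraction is left in 'remaining' would get the background
    }
};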
 
Unfortunately, these monitor-making people are not going to be forced into that anytime soon.
I don't know about that, what with the whole UHDTV (7680x4320 @ 60fps) thing. I'm sure when that becomes a standard, there will be an Extended-UHD spec coming up. Then when that's ratified, there'll be a HypereXtended-UHD spec... And then all our bandwidth will disappear for good, especially with all the bittorrent downloads of HyperX-UHD episodes of Naruto. :D

Joking aside, I think the moment you start putting the word "perfect" into the problem, you have to think not only about the results, but about the implementation. Sure, supersampling to hell would work, but implementing it in hardware or software would probably look fairly ludicrous in practice. The quality increase with respect to (internal) resolution increase drops off pretty fast as resolution keeps going up. I also think that stochastic AA is not very feasible unless your primary samples are driven by raycasts.

In addition, raw supersampling with box filtering on the downscale sacrifices sharpness (where sharpness is needed) for the sake of preventing pixellation. Better downscale filtering than a simple box would be in order. Box and piecewise-linear filtering create their own aliasing issues because of the axial bias. Technically, stochastic sampling gets rid of this bias, but again the practicality thereof is... well...
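
As a purely illustrative sketch of a non-box resolve, a small Gaussian kernel spanning a 4x4 sample neighbourhood of a 2x-per-axis supersampled buffer could look like this (the kernel width and the supersampling factor are assumptions made for the example):

Code:
// Illustrative only: resolving a 2x-per-axis supersampled buffer with a
// small Gaussian kernel instead of a plain 2x2 box.
#include <vector>
#include <cmath>
#include <algorithm>

float gaussianWeight(float x, float sigma)
{
    return std::exp(-x * x / (2.0f * sigma * sigma));
}

// src holds (w*2) x (h*2) samples, one float per sample; dst gets w x h pixels.
void resolveGaussian(const std::vector<float>& src, int w, int h,
                     std::vector<float>& dst, float sigma = 0.6f)
{
    dst.assign(w * h, 0.0f);
    const int sw = w * 2, sh = h * 2;
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x)
        {
            float sum = 0.0f, wsum = 0.0f;
            // 4x4 neighbourhood of samples centred on the output pixel
            for (int sy = -1; sy <= 2; ++sy)
                for (int sx = -1; sx <= 2; ++sx)
                {
                    int ix = std::min(std::max(x * 2 + sx, 0), sw - 1);
                    int iy = std::min(std::max(y * 2 + sy, 0), sh - 1);
                    // sample offset from the pixel centre, in output-pixel units
                    float fx = (sx - 0.5f) * 0.5f;
                    float fy = (sy - 0.5f) * 0.5f;
                    float wgt = gaussianWeight(fx, sigma) * gaussianWeight(fy, sigma);
                    sum  += wgt * src[iy * sw + ix];
                    wsum += wgt;
                }
            dst[y * w + x] = sum / wsum;
        }
}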
 
DudeMiester said:
What about calculating how much of the framebuffer pixel the pixel you're drawing will cover, and blending it into the framebuffer pixel based on the percentage it covers? Then have automatic depth sorting to ensure everything is blended properly, via some kind of per-pixel linked list of depth layers. Of course, this does nothing for AA within the objects, but you get really nice edges. :LOL:

"Automatic depth sorting" - exactly what to you mean by that? Sorting every triangle in the scene is impractical for a number of reasons, but a per-object sort with relatively compact objects should give a good enough approximation, given that we most likely don't care about the depth order of non-intersecting (in screen space) triangles.

As for AA using coverage as a blending factor, I seem to remember the Verite 2200 (released in summer 1997) doing something very similar with VQuake, among perhaps some other applications. This idea fell out of favor though, the need to sort by depth rather than by texture state being one of the reasons.
 
Just doubling the X and Y resolution more or less gives the result of 4x ordered-grid supersampling (with human eyes blending the samples instead of the video card). As far as AA solutions go, I don't think you can do much worse than 4xOGSS.
 
I'm a big fan of the accumulation buffer.

For practical purposes 16 samples works well, but for stills I tend to use 64+ samples, and I have various tweaks that I use. There are diminishing returns though, and 64 samples ain't fast. Roll on R520/NV50! :)

e.g. (rendered on a 6800):

http://idisk.mac.com/glwebb-public/antialiasing/anti-aliasing_comparison_01.jpg

I've posted this over-the-top example before (rendered on an R300):
http://idisk.mac.com/glwebb-Public/antialiasing/tank_jrsg_256x06AA_1280x960_01.jpg
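
For reference, the classic OpenGL accumulation-buffer call pattern for this sort of AA is roughly the sketch below; drawScene() and applyJitterToProjection() are hypothetical hooks, and the interesting part is just the glAccum() usage over N jittered passes:

Code:
// Minimal sketch of accumulation-buffer AA with the classic OpenGL 1.x API.
#include <GL/gl.h>

void applyJitterToProjection(float dx, float dy);  // sub-pixel offset, assumed
void drawScene();                                  // hypothetical scene hook

void renderAccumulated(int numSamples)
{
    glClear(GL_ACCUM_BUFFER_BIT);
    for (int i = 0; i < numSamples; ++i)
    {
        // simple grid jitter for illustration; any good pattern works
        applyJitterToProjection(((i % 4) + 0.5f) / 4.0f - 0.5f,
                                ((i / 4 % 4) + 0.5f) / 4.0f - 0.5f);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        drawScene();
        glAccum(GL_ACCUM, 1.0f / numSamples);   // add this pass, pre-scaled
    }
    glAccum(GL_RETURN, 1.0f);                   // write the average back out
}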
 