The Perfect Anti-Aliasing

NocturnDragon said:
Wouldn't analytic antialiasing actually be "perfect antialiasing"?

Yeah, except I doubt you can solve every problem analytically, at least not without some difficulties (say, shaders with conditional iterations that are also affected by the screen-space position and have alpha kills).
 
DudeMiester said:
What about calculating the amount of the framebuffer pixel the pixel you're drawing will cover, and blending it into the framebuffer pixel based on the percentage it covers. Then have automatic depth sorting to ensure everything is blended properly, via some kind of per-pixel linked list of depth layers. Of course, this does nothing for AA within the objects, but you get really nice edges. :LOL:
A form of this has been attempted already. You can't just blend based on coverage percentage though. You can only blend with colors covered by the fragment. So doing this without discrete subpixels would be very difficult and probably not worth the effort.

Edit: Reverend, do you have thoughts on this or are you just trying to start a discussion?
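
For what it's worth, here is a minimal sketch of why the ordering matters for the coverage-blend idea above. This is purely illustrative Python (the fragment list, names and values are made up), modelling one framebuffer pixel that keeps a small depth-sorted list of fragments and treats coverage as a blend weight:

```python
# Illustrative only: one framebuffer pixel holding a depth-sorted fragment
# list, with coverage used as the blend factor (the idea discussed above).
from dataclasses import dataclass

@dataclass
class Fragment:
    depth: float      # smaller = closer to the viewer
    color: tuple      # (r, g, b) in 0..1
    coverage: float   # fraction of the pixel this fragment covers, 0..1

def resolve_pixel(fragments, background=(0.0, 0.0, 0.0)):
    """Composite front-to-back, treating coverage as alpha.

    The depth sort is essential: each fragment may only blend with what is
    still visible behind it, which is exactly why coverage-based AA needs
    per-pixel depth ordering rather than a single blend with the framebuffer."""
    color = [0.0, 0.0, 0.0]
    remaining = 1.0  # portion of the pixel not yet covered by nearer fragments
    for f in sorted(fragments, key=lambda frag: frag.depth):
        weight = remaining * f.coverage
        for i in range(3):
            color[i] += weight * f.color[i]
        remaining *= (1.0 - f.coverage)
    for i in range(3):
        color[i] += remaining * background[i]
    return tuple(color)

# A half-covering red fragment in front of a fully covering blue one.
print(resolve_pixel([Fragment(2.0, (0.0, 0.0, 1.0), 1.0),
                     Fragment(1.0, (1.0, 0.0, 0.0), 0.5)]))  # (0.5, 0.0, 0.5)
```

Note that multiplying coverages like this assumes the fragments' footprints are uncorrelated within the pixel, which is one reason doing it without discrete subpixels gets difficult.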
 
glw said:
I'm a big fan of the accumulation buffer.

For practical purposes 16 samples works well, but for stills I tend to use 64+ samples, and have various different tweaks that I use. There are diminishing returns though, and 64 samples ain't fast. Roll on R520/NV50! :)

e.g. (rendered on a 6800):

http://idisk.mac.com/glwebb-public/antialiasing/anti-aliasing_comparison_01.jpg

I've posted this over-the-top example before (rendered on an R300):
http://idisk.mac.com/glwebb-Public/antialiasing/tank_jrsg_256x06AA_1280x960_01.jpg

Very nice. :)
In the tank example, wouldn't an edge mask help with speed?
Just render it normally first to get the z, then write the edges to stencil or z and use that as a mask for the rest of the passes. (It might be OK to draw the edges as lines 2 or 3 pixels wide.)

It might help in the other image as well, but in that case you really notice the quality improvement even in the shading, so doing edges only might ruin the look.
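
For readers who haven't used it: the accumulation-buffer approach glw describes amounts to rendering the whole scene N times with sub-pixel jitter and averaging the results. A hedged Python sketch follows (the render(dx, dy) callback is a made-up stand-in for one full-scene pass; with classic OpenGL the summing step is what glAccum(GL_ACCUM, 1.0/N) does pass by pass):

```python
import random

def accumulate(render, samples=16):
    """Average `samples` full-scene renders, each offset by a sub-pixel jitter.

    `render(dx, dy)` is assumed to return the frame as a 2D list of values,
    rendered with the projection nudged by (dx, dy) pixels."""
    n = int(round(samples ** 0.5))
    # Stratified jitter on an n x n sub-pixel grid.
    offsets = [((i + random.random()) / n - 0.5,
                (j + random.random()) / n - 0.5)
               for i in range(n) for j in range(n)]
    acc = None
    for dx, dy in offsets:
        frame = render(dx, dy)                      # one pass over the whole scene
        if acc is None:
            acc = [[0.0] * len(frame[0]) for _ in frame]
        for y, row in enumerate(frame):
            for x, value in enumerate(row):
                acc[y][x] += value / len(offsets)   # running average
    return acc

# Toy usage: a fake 4x4 "renderer" whose output depends on the jitter offset.
result = accumulate(lambda dx, dy: [[(x + dx) % 2 for x in range(4)]
                                    for _ in range(4)], samples=16)
```

The stencil edge-mask idea above would simply restrict passes 2..N to the pixels flagged as edges in the first pass, so only edge pixels pay the fill cost of the extra passes (the geometry still has to be resubmitted each time).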
 
akira888 said:
As for AA using coverage as a blending factor, I seem to remember the Verite 2200 doing something very similar (released in summer 1997) with VQuake, among perhaps some other applications. This idea fell out of favor though, the need to sort by depth rather than by texture state being one of the reasons.

Verite did edge AA.
Very good edge quality; even 16 samples with current tech is not as good.
But in addition to requiring depth sorting, it handles triangle corners and thinner-than-a-pixel triangles very badly, which makes it unusable for most games.
 
3dilettante said:
There would need to be a higher resolution display to get "perfect" anti-aliasing, or at least one that would be imperceptible to the human eye.

According to this:

http://graphics.cs.ucdavis.edu/~staadt/ECS289H-WQ02/notes/VR_Human_Factors.pdf

It would take a standard (~20 inch I think) monitor at a resolution of 4800 x 3840 to match human visual acuity at 2 feet.

That sounds like a decent starting point.

20" with what kind of dot pitch and video bandwidth? AFAIK even high-end 21" CRTs (20" viewable) can give up to "real" 1600*1200; anything above that starts to get "stretched". I know it's just a theoretical figure (the link doesn't open for me right now), but even if it were possible right now, isn't that resolution way too high for that viewable space? (I've tried gaming in 2048*1536 and it's occasionally damn hard to aim at opponents near the far clipping distance.)

If I divide 4800*3840 by 4 in each dimension, I get roughly 1200*960; does that imply that I'd get theoretically close to any supposed optimum at 1280*960 with 4x rotated or sparse grid, whatever? (Yes, these are honest questions.)

Reverend,

Layman answer: give me the texture quality of 16xSSAA + 16xAF with the performance penalty of 2xMSAA/16xAF and I'd be a happy champ right now (or else something I can use at 1600*1200 with a ~15-20% performance penalty). Ideally with a 16*16 EER, but I could easily live even with 4*4 in that case, and no, it doesn't have to be supersampling at any price; better texture filtering algorithms (whatever those would be) would most likely help.

Through the years (and after several warnings and the corresponding experiments afterwards) I've found that my eyes are, after all, more sensitive to texture aliasing than to polygon edge aliasing.
 
But if you REALLY want perfect anti-aliasing, and you weren't concerned about efficiency, then you would have a pixel sample per texel in the scene, plus adaptive samples wherever neighbouring samples differ by more than a 3% colour threshold...

All the information in the scene would be sampled into the anti-aliased result.

Admittedly, this would be silly for real time... possibly up to 100x samples in one pixel. But hey... it would be as good as Pixar's movies!!
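
Roughly what that sounds like, as an illustrative sketch rather than anyone's actual renderer: start from a few samples per pixel and keep subdividing wherever they disagree by more than ~3%, up to some cap. The shade(x, y) callback and the per-channel 3% metric are assumptions made up for the example:

```python
def adaptive_pixel(shade, x0, y0, x1, y1, threshold=0.03, depth=0, max_depth=4):
    """Adaptively supersample the region [x0,x1] x [y0,y1] of one pixel.

    `shade(x, y)` returns an (r, g, b) colour at an exact sample position.
    The region is split into quadrants whenever its corner samples differ by
    more than `threshold` in any channel, down to `max_depth` levels."""
    corners = [shade(x0, y0), shade(x1, y0), shade(x0, y1), shade(x1, y1)]
    spread = max(max(c[i] for c in corners) - min(c[i] for c in corners)
                 for i in range(3))
    if depth >= max_depth or spread <= threshold:
        # The samples agree closely enough (or we hit the cap): just average.
        return tuple(sum(c[i] for c in corners) / 4.0 for i in range(3))
    mx, my = (x0 + x1) / 2.0, (y0 + y1) / 2.0
    quads = [adaptive_pixel(shade, x0, y0, mx, my, threshold, depth + 1, max_depth),
             adaptive_pixel(shade, mx, y0, x1, my, threshold, depth + 1, max_depth),
             adaptive_pixel(shade, x0, my, mx, y1, threshold, depth + 1, max_depth),
             adaptive_pixel(shade, mx, my, x1, y1, threshold, depth + 1, max_depth)]
    return tuple(sum(q[i] for q in quads) / 4.0 for i in range(3))

# e.g. a hard diagonal edge across the unit pixel:
print(adaptive_pixel(lambda x, y: (1.0, 1.0, 1.0) if x + y < 1.0 else (0.0, 0.0, 0.0),
                     0.0, 0.0, 1.0, 1.0))
```

As the replies below point out, though, any threshold like this is a heuristic: it can stop refining exactly where it happens not to notice the detail.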
 
Sounds like new technology is on the horizon. Maybe in addition to blending edges, jittering the colors using pixel shaders as well. I am too tired to think beyond that point.
 
kyetech said:
But if you REALLY want perfect anti-aliasing, and you weren't concerned about efficiency, then you would have a pixel sample per texel in the scene, plus adaptive samples wherever neighbouring samples differ by more than a 3% colour threshold...
That would not be "perfect" - just, heuristically, very good.

I'm not trying to single you out - quite a few others have posted similar comments.

Unless you can analytically pre-filter your data, taking samples only means that you get a result that is probably close to the correct answer.

I know I'm being pedantic but Reverend did ask for "perfect AA". :?
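
To make that distinction concrete with a toy case (my own construction, not anything from the thread): for a straight step edge under a box filter, the pre-filtered pixel value can be written down exactly as the covered area, while point sampling only ever estimates it:

```python
import random

def analytic_box_filter(edge_x):
    """Exact box-filtered value of a unit pixel split by a vertical edge at
    edge_x: white (1.0) to the left, black (0.0) to the right. The analytic
    answer is simply the covered area, no sampling involved."""
    return max(0.0, min(1.0, edge_x))

def sampled_estimate(edge_x, samples):
    """The same pixel estimated by random point sampling: only 'probably close'."""
    hits = sum(1 for _ in range(samples) if random.random() < edge_x)
    return hits / samples

edge = 0.37
print("analytic:", analytic_box_filter(edge))
for n in (4, 16, 64, 256):
    print(n, "samples:", sampled_estimate(edge, n))
```

The sampled estimate converges toward the analytic value, but for any finite sample count it is exactly that: probably close, not perfect.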
 
Simon F said:
kyetech said:
But if you REALLY want perfect anti-aliasing, and you weren't concerned about efficiency, then you would have a pixel sample per texel in the scene, plus adaptive samples wherever neighbouring samples differ by more than a 3% colour threshold...
That would not be "perfect" - just, heuristically, very good.

I'm not trying to single you out - quite a few others have posted similar comments.

Unless you can analytically pre-filter your data, taking samples only means that you get a result that is probably close to the correct answer.

I know I'm being pedantic but Reverend did ask for "perfect AA". :?

It's good enough for me then :p

But seriously, if you have every colour value of the scene in the rendered information... then what more is there to sample? Forgive me if I sound silly :oops:
 
kyetech said:
But seriously, if you have every colour value of the scene in the rendered information... then what more is there to sample? Forgive me if I sound silly :oops:

You cannot get every color if there is infinite detail, like a quaternion fractal or just a simple Mandelbrot as a shader on a polygon.
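
A throwaway illustration of that (arbitrary constants, escape-time shading picked just for the example): average more and more random samples over one pixel-sized window on the Mandelbrot boundary and the estimate keeps drifting, because there is structure at every scale inside it:

```python
import random

def mandelbrot_shade(cr, ci, max_iter=256):
    """Escape-time shade in 0..1 for the point c = cr + ci*i (1.0 = in the set)."""
    zr = zi = 0.0
    for i in range(max_iter):
        zr, zi = zr * zr - zi * zi + cr, 2.0 * zr * zi + ci
        if zr * zr + zi * zi > 4.0:
            return i / max_iter
    return 1.0

def pixel_average(x0, y0, size, samples):
    """Stochastically supersample one pixel-sized region of the set."""
    return sum(mandelbrot_shade(x0 + random.random() * size,
                                y0 + random.random() * size)
               for _ in range(samples)) / samples

# A "pixel" straddling the boundary near the seahorse valley.
for n in (4, 16, 64, 256, 1024):
    print(n, pixel_average(-0.7454, 0.1130, 1e-3, n))
```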
 
jlippo said:
kyetech said:
But seriously, if you have every colour value of the scene in the rendered information... then what more is there to sample? Forgive me if I sound silly :oops:

You cannot get every color if there is infinite detail, like a quaternion fractal or just a simple Mandelbrot as a shader on a polygon.

Ah, well in that case, use a 10000x10000 stochastic sample per pixel :devilish: :)
 
kyetech said:
jlippo said:
You cannot get every color if there is infinite detail, like a quaternion fractal or just a simple Mandelbrot as a shader on a polygon.

Ah, well in that case, use a 10000x10000 stochastic sample per pixel :devilish: :)

Yes, it could fool someone with untrained eyes... ;)
 
jlippo said:
kyetech said:
jlippo said:
You cannot get every color if there is infinite detail, like a quaternion fractal or just a simple Mandelbrot as a shader on a polygon.

Ah, well in that case, use a 10000x10000 stochastic sample per pixel :devilish: :)

Yes, it could fool someone with untrained eyes... ;)

Albeit stochastic, that's 100M samples. Can the human eye really distinguish a difference in an image or motion picture (at a decent resolution) between 100M samples and far fewer? You are joking, aren't you?
 
ka·lei·do·scope,

ATI's new AA method: use the unused pixels generated by the pixel pipeline when they fall off the edge of the rendered polygon. Since this only happens on edges, those pipelines can be used to process AA blending with what is currently in the framebuffer using the pixel shader, instead of MSAA or in combination with it. :LOL: yeah right. . .

Webster dict:

1 : an instrument containing loose bits of colored material (as glass or plastic) between two flat plates and two plane mirrors so placed that changes of position of the bits of material are reflected in an endless variety of patterns. . .
2 : something resembling a kaleidoscope: as a : a variegated changing pattern or scene <the lake a kaleidoscope of changing colors -- Robert Gibbings> b : a succession of changing phases or actions <a... kaleidoscope of shifting values, information, fashions -- Frank McLaughlin>
 
kyetech said:
No I seriously think you need 100M samples per pixel..... :oops: ;) LOL

I obviously didn't mean a 10 by 10 pixel render target either when I said decent resolution. Quite some time ago I said that I did notice minor occasional aliasing in Pixar's Finding Nemo, as an example. Given that they use, AFAIK, 64x stochastic sampling and probably an HDTV resolution, I find it hard to believe that I'd still see a difference with, let's say, a 10x higher sample density, unless I wear binoculars in the movies.
 
3dcgi said:
Edit: Reverend, do you have thoughts on this or are you just trying to start a discussion?
I do have thoughts on this (based loosely on correspondence with JC... it has to do with a combination of hardware and pixel shader work) but I am not ready to (nor am I sure I am at liberty to) discuss this. After all, expensive (3D-wise) stuff is usually ignored.
 
Ailuros said:
3dilettante said:
There would need to be a higher resolution display to get "perfect" anti-aliasing, or at least one that would be imperceptible to the human eye.

According to this:

http://graphics.cs.ucdavis.edu/~staadt/ECS289H-WQ02/notes/VR_Human_Factors.pdf

It would take a standard (~20 inch I think) monitor at a resolution of 4800 x 3840 to match human visual acuity at 2 feet.

That sounds like a decent starting point.

20" with what kind of dot pitch and video bandwidth? AFAIK even high-end 21" CRTs (20" viewable) can give up to "real" 1600*1200; anything above that starts to get "stretched". I know it's just a theoretical figure (the link doesn't open for me right now), but even if it were possible right now, isn't that resolution way too high for that viewable space? (I've tried gaming in 2048*1536 and it's occasionally damn hard to aim at opponents near the far clipping distance.)

If I divide 4800*3840 by 4 in each dimension, I get roughly 1200*960; does that imply that I'd get theoretically close to any supposed optimum at 1280*960 with 4x rotated or sparse grid, whatever? (Yes, these are honest questions.)

I don't think that would work, since even if the framebuffer were at the larger resolution, everything would get stuffed back into the lower resolution of the screen. On one hand, this means that perfect anti-aliasing is impossible if the screen is the restriction. On the other, it means video cards can stop at some limit, knowing further work is pointless.

The example given in that pdf states that the human eye can distinguish fine detail one hundredth of an inch in size from a distance of three feet. Whether this will match the user's visual acuity depends on whether their situation is "typical". I don't know how they defined that.

At a distance of two feet from a 20 in. (horizontal) screen, the calculations suggest that a scanline would need to be 5429 pixels wide to match human visual acuity. I think that means that at higher densities, the human eye would perceive multiple pixels as a single point.
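
For what it's worth, here is one way to land on a number in that ballpark. This is my reading of the figure, assuming roughly one arcminute of visual acuity and two pixels per arcminute (Nyquist), not necessarily how the PDF derives it:

```python
import math

VIEW_DISTANCE_IN = 24.0    # 2 feet
SCREEN_WIDTH_IN = 20.0     # horizontal extent assumed in the post above
PIXELS_PER_ARCMIN = 2.0    # Nyquist: two samples per resolvable arcminute

# Horizontal field of view subtended by the screen, in arcminutes.
fov_arcmin = 2.0 * math.degrees(math.atan((SCREEN_WIDTH_IN / 2.0)
                                          / VIEW_DISTANCE_IN)) * 60.0

print(round(fov_arcmin * PIXELS_PER_ARCMIN))  # ~5429 pixels across the scanline
```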
 