Type of AA most beneficial for 360/PS3

I don't follow.
A physical pixel is a rectangle with extents.
But it really isn't. A better qualified man than me, Alvy Ray Smith, sums it up as...
Alvy Ray Smith said:
"A Pixel Is Not A Little Square, A Pixel Is Not A Little Square, A Pixel Is Not A Little Square!"
To determine the color of that pixel, finite samples should be taken to make the best representation of the pixel interior.
Obviously that is just an approximation to a solution. But I don't think we need to go into what would be needed for a correct solution (if one actually existed anyway).

Approximating the sum of all surface areas*surface colors inside the pixel's extents (compensated for gamma) is the target.
This is where you are going wrong. What is a pixel? It doesn't really have a hard boundary.
....
With multiple samples, rotated patterns (and later sparse, ~n-queens patterns) are the most attractive, because they can differentiate more of the edges that might run through the pixel at angles where the impact on the final color is large.
Agreed. A sparse pattern tends to shift the aliasing error from lower frequencies to higher frequencies which is less disturbing to the human visual system.
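To make the n-queens idea concrete, here is a minimal Python sketch (the sample cells below are illustrative, not any particular GPU's pattern): a sparse 4x pattern places exactly one sample in each row and each column of a 4x4 subgrid, so near-horizontal and near-vertical edges cross more distinct sample rows/columns than an ordered grid would allow.

```python
# Sketch of a sparse "n-queens" 4x sample pattern, as discussed above.
# The subgrid cells are illustrative; real hardware patterns differ.

def rotated_grid_4x():
    """Return 4 sample offsets in [0,1)x[0,1) pixel space."""
    # (column, row) cells on a 4x4 subgrid, one sample per row and column.
    cells = [(0, 1), (1, 3), (2, 0), (3, 2)]
    # Place each sample at the centre of its subcell.
    return [((cx + 0.5) / 4.0, (cy + 0.5) / 4.0) for cx, cy in cells]

def is_sparse(offsets, n=4):
    """Check the n-queens property: one sample per subgrid row and column."""
    cols = {int(x * n) for x, _ in offsets}
    rows = {int(y * n) for _, y in offsets}
    return len(cols) == n and len(rows) == n

samples = rotated_grid_4x()
print(samples, is_sparse(samples))
```

An ordered 2x2 grid would fail `is_sparse` with `n=4`: its samples share subgrid rows and columns, which is exactly why it resolves shallow-angle edges more coarsely.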
I can see the value in having multiple carefully selected sample patterns to choose from on a per-pixel basis, but I'm certainly not a proponent of fully randomized sample grids. I don't see where I implied that.
Ahh. You misunderstood my, regrettably, facetious comment. You implied that a tent filter destroys information (which I contest) and a box filter does not, and is therefore better. Scrambling the resulting pixels does not destroy information either, but is of no value. It was a throwaway comment on my part which has only confused the issue.

Contrast ratio between neighbours is all that Quincunx can reduce.
...relative to the result of a box filter, but you need to ask yourself, "was that box-filtered result correct in the first place?".
If you apply a filter across your entire screen, it is no longer possible to produce a sharp 100% edge between neighbours. But that's exactly what should happen (and does happen with MSAA, no matter the sample count) if e.g. the left pixel is fully covered by a white surface and the right pixel by a black surface.
To show you the error of your reasoning, try animating your test scene. You will soon see that, for an animated system (which is what we are interested in), such a black-white result is not desirable.
 
Quincunx was horrible! Equivalent jaggy removal to 4x AA at the cost of 2x (or so nvidia's PR said), but it also severely blurred the image.

Wasn't the most noticeable issue with Quincunx that bitmapped text appeared blurred, making it hard(er) to read?

And speaking of text, ClearType is a good example of a situation where AA using a box filter looks really bad in comparison.

Should one take the positions of each pixel's colour components into consideration when going further with AA for 3D scenes? It would obviously only be practical where those positions are known, which lends it best to real-time applications.
 
But it really isn't. A better qualified man than me, Alvy Ray Smith, sums it up as...
"A Pixel Is Not A Little Square, A Pixel Is Not A Little Square, A Pixel Is Not A Little Square!"
And he's wrong. Sorry if this sounds too knee-jerk, but the display devices we use generate colored rectangular pixels, or they try really hard to do that (and come ever closer), and as that is the physical reality of pixels, it is these rectangular shapes that graphics algorithms must find the best possible color for. One and only one color per rectangle. That is all we have to work with in the physical world.
Pixels aren't round. Pixels don't overlap. Pixels also aren't points on an infinite continuum of signal processing ivory (note though that if you made that same argument for textures, you'd be kicking in open doors).
My LCD displays have no way to display a blended color between two pixels, because there's no gap there that could have a different color. My last remaining CRT (an aperture grille model), even though it can't quite manage as well, also tries very hard to reach that ideal, not to mention all these physicalities are well supported by the scan-out logic on all my graphics cards as far as I'm aware (and my collection includes a Kyro 2!).
Quincunx is the exception. The black sheep. The silly idea. That which lies deep under the rug.

As for Mr Smith, most of the insights into graphics that came out of Microsoft in the mid-to-late nineties have already turned out to be much less than insightful. Like the optimal origins for coordinate systems (to be retracted in a much later revision of Direct3D). I'll proudly wear my skepticism toward anything associated with that era, and that place.
Best case he was just terribly confused about the subtle connections between texels, pixels and AA samples. Because what he said there, not least because of the categorical form he chose, is wrong.

Simon F said:
This is where you are going wrong. What is a pixel? It doesn't really have a hard boundary.
Yes, it has. If you want to compute a color for a shape that bleeds into its neighbours, it simply does not square with display devices that a) compose images out of colored rectangles and b) could themselves let colors bleed across neighbours, but don't, because they would be considered poor displays if they did.

If you wanted to do anything with cross-neighbour filters, it would have to be to negate imperfect separations in the display device (anti-Quincunx with the weights reversed, aka sharpen filters), not exacerbate them.

Simon F said:
...relative to the result of a box filter, but you need to ask yourself, "was that box-filtered result correct in the first place?".
Pretty much.
We're trying to determine the color of pixel 0;0. Whether or not we agree on the "rectangle" issue, why should a color that lies inside the boundaries of pixel 1;1 (whatever shape or size they may be) contribute to that at all? It is not there. It belongs somewhere else. Including it in the color of a pixel whose boundaries it does not even touch is irrational bling-bling.

And what's a box filter anyway? To make it nice and pointed, and to avoid mix-ups with texture filter/scaler terminology (where I wouldn't resist nearly as much against points and signals and frequencies and attenuations), I'd prefer "weighted average". That leaves the door open for maybe slightly lower weights for samples on the far outskirts of a pixel.

I postulate a duality where we construct (framebuffers or textures alike) grids of flush rectangles, but sample from grids of points, with distances between those points.
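A minimal sketch of that "weighted average" resolve, assuming hypothetical sample offsets and an illustrative distance-based weight falloff (not any shipping hardware's filter):

```python
# Sketch of the "weighted average" resolve proposed above: samples near
# the pixel centre count slightly more than samples on the outskirts.
# Offsets and the falloff function are illustrative assumptions.

def weighted_resolve(samples):
    """samples: list of ((dx, dy) offset from pixel centre, color)."""
    total_w = 0.0
    accum = 0.0
    for (dx, dy), color in samples:
        # Simple distance-based falloff; a plain box filter would use w = 1.
        w = 1.0 / (1.0 + (dx * dx + dy * dy))
        total_w += w
        accum += w * color
    return accum / total_w

# Four rotated-grid samples, all white: the resolve must stay white,
# so a fully covered pixel keeps a hard edge against its neighbour.
full = [((-0.25, -0.125), 1.0), ((0.125, -0.25), 1.0),
        ((-0.125, 0.25), 1.0), ((0.25, 0.125), 1.0)]
print(weighted_resolve(full))  # 1.0
```

Because the weights are normalized, a fully covered pixel still resolves to exactly the surface colour; only partially covered pixels are affected by the weighting.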
Simon F said:
To show you the error of your reasoning, try animating your test scene. You will soon see that, for an animated system (which is what we are interested in), such a black-white result is not desireable.
Err, yes, I insist that it actually is. The pixel edge will only be perfectly sharp when the surfaces perfectly coincide with the pixel grid. If a surface moves across such boundaries, the result will be the same old shades of grey, in accordance with the relative contributions of the surfaces to each pixel. That is not aliasing.
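This claim is easy to check with analytic coverage. The sketch below (a simplified 1D setup, not from the thread) sweeps a vertical white/black edge across a row of unit pixels and box-filters each pixel exactly: fully covered pixels stay pure white or black, and only the pixel the edge crosses gets a grey in proportion to coverage.

```python
# Box-filtered coverage of a white/black vertical edge at position edge_x
# sweeping over a 1D row of unit pixels. White is to the left of the edge.

def coverage(pixel_index, edge_x):
    """Fraction of pixel [i, i+1) lying left of edge_x (the white side)."""
    return min(max(edge_x - pixel_index, 0.0), 1.0)

# "Animate" the edge from x = 2.0 to x = 3.0 across pixel 2: the greys
# track coverage smoothly while the other pixels stay pure black/white.
for edge_x in (2.0, 2.25, 2.5, 2.75, 3.0):
    row = [coverage(i, edge_x) for i in range(5)]
    print(edge_x, row)
```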
 
And he's wrong. Sorry if this sounds too knee-jerk, but the display devices we use generate colored rectangular pixels, or they try really hard to do that (and come ever closer), and as that is the physical reality of pixels, it is these rectangular shapes that graphics algorithms must find the best possible color for. One and only one color per rectangle. That is all we have to work with in the physical world.
IMHO you're looking at this problem the wrong way. It's true that common display devices' pixels are rectangular (spatially bounded) entities, but we use them to represent signals that, in the vast majority of cases, don't follow those rules. In a shader we don't render pixels, we fill pixels; that's the big difference, imho. (waiting for Simon, Xmas and Marco to spank me.. :) )

Pixels aren't round. Pixels don't overlap.
Unfortunately we fill pixels all the time with stuff that is round and that overlaps.
 
Pretty much.
We're trying to determine the color of pixel 0;0. Whether or not we agree on the "rectangle" issue, why should a color that lies inside the boundaries of pixel 1;1 (whatever shape or size they may be) contribute to that at all? It is not there. It belongs somewhere else. Including it in the color of a pixel whose boundaries it does not even touch is irrational bling-bling.
Because it reduces aliasing, plain and simple. You could take the perfect integral over the "pixel rectangle" and you would still get aliasing.
This thread contains some very interesting discussions and examples.
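A small sketch of that point: even the exact integral over each pixel rectangle only attenuates frequencies above Nyquist by a sinc factor, it never removes them, so the alias survives. The numbers below are illustrative:

```python
import math

# Even a "perfect" box filter (exact integral over each unit pixel) does
# not eliminate aliasing: integrating sin(2*pi*f*x) over unit pixels
# attenuates it but never to zero, so content above the Nyquist rate of
# 0.5 cycles/pixel still leaks into the pixel values as a low-frequency
# alias pattern.

def box_filtered_sample(i, freq):
    """Exact integral of sin(2*pi*freq*x) over pixel [i, i+1)."""
    a, b = i, i + 1
    # Antiderivative of sin(2*pi*f*x) is -cos(2*pi*f*x) / (2*pi*f).
    return (math.cos(2 * math.pi * freq * a)
            - math.cos(2 * math.pi * freq * b)) / (2 * math.pi * freq)

# freq = 0.9 cycles/pixel is well above the 0.5 Nyquist limit.
pixels = [box_filtered_sample(i, 0.9) for i in range(10)]
amplitude = max(abs(p) for p in pixels)
print(pixels)
print(amplitude)  # clearly nonzero: the alias survives the box filter
```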
 
And he's wrong.
I might believe you if you provide links to all the publications (in respected journals) that you have done. :p
Seriously though...
Sorry if this sounds too knee-jerk, but the display devices we use generate colored rectangular pixels, or they try really hard to do that (and come ever closer), and as that is the physical reality of pixels, it is these rectangular shapes that graphics algorithms must find the best possible color for.
I see a few problems with this.
1) The display devices do not produce little rectangles. See Glassner's (I assume you might have heard of him), "Principles of Digital Image Synthesis" Volume 1, Chapter 3, for the behaviour of CRTs.
2) If you assume that the display device is producing little rectangles of light, then you will actually need to filter even more, because that is further away from the ideal sinc reconstruction.
3) The eye is going to do some low pass filtering on the output anyway (note that this is post-, not pre-, filtering)

As for Mr Smith, most of the insights into graphics that came out of Microsoft in the mid-to-late nineties have already turned out to be much less than insightful.
Here is where you are showing your ignorance and prejudice. "Mr Smith" may be working for Microsoft (which is perfectly fine, IMHO), but he was one of the founders of Pixar (and before that was part of the computer graphics division of LucasFilm), which was perhaps the leading research group in computer graphics of its time. Perhaps you could try doing some research on his publications.
 
I see a few problems with this.
1) The display devices do not produce little rectangles. See Glassner's (I assume you might have heard of him), "Principles of Digital Image Synthesis" Volume 1, Chapter 3, for the behaviour of CRTs.
Who's using CRTs these days?! LCDs do have little rectangles of light, which is why they look all jaggy and crisp. Future techs will continue this trend. The pixel count of the display is the number of little rectangles/squares that can light up. The pixel count of a front buffer is the number of little rectangles/squares as colour locations. It's a 2D array with discrete fields, one for each colour. Each field is a pixel (Picture Element). In creating an image, you can factor in all sorts of imaging models. That doesn't change the fact that you have to write one colour value for each pixel in the 2D array of colours, which is rendered on screen (LCD) as one little box of discrete light in one of its pixel constructions. Render a white angled line on a black background, and those discrete colour values appear as discrete colours on screen, with lovely little stepping.
 
Who's using CRTs these days?! LCDs do have little rectangles of light, which is why they look all jaggy and crisp.

LCDs have red, green and blue sub-pixels offset from each other, which is what ClearType (and similar methods) exploit to smooth out pixels on your LCD screen. The only reason you perceive it as a rectangle is the screen-door effect most LCD computer screens have, and which LCD TV makers go to great lengths to remove.
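For illustration, a toy Python sketch of that subpixel idea (real ClearType additionally runs a colour-fringe reduction filter, which is omitted here): sample coverage at 3x horizontal resolution, then map each triple of values onto one pixel's R, G, B stripes.

```python
# Toy sketch of subpixel rendering as described above: three horizontal
# coverage samples per output pixel, one per colour stripe. Real methods
# filter these values to suppress colour fringes; that step is omitted.

def subpixel_resolve(coverage_3x):
    """coverage_3x: horizontal coverage values, 3 per output pixel."""
    pixels = []
    for i in range(0, len(coverage_3x) - 2, 3):
        r, g, b = coverage_3x[i:i + 3]
        pixels.append((r, g, b))  # each stripe lit by its own sample
    return pixels

# A 1/3-pixel-wide vertical stem lights only one stripe of its pixel,
# tripling effective horizontal resolution at the cost of colour error.
print(subpixel_resolve([0.0, 1.0, 0.0]))  # [(0.0, 1.0, 0.0)]
```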

Cheers
 
Anyone seen examples of CSAA being used? Sounds promising. I was just reading how the new 8600 series cards from NVidia will use CSAA because they have a 128-bit memory bus. Could this be used on RSX?

Here is some info:

"CSAA produces antialiased images that rival the quality of 8x or 16x MSAA, while introducing only a minimal performance hit over standard (typically 4x) MSAA. It works by introducing the concept of a new sample type: a sample that represents coverage. This differs from previous AA techniques where coverage was always inherently tied to another sample type. In supersampling for example, each sample represents shaded color, stored color/z/stencil, and coverage, which essentially amounts to rendering to an oversized buffer and downfiltering. MSAA reduces the shader overhead of this operation by decoupling shaded samples from stored color and coverage; this allows applications using antialiasing to operate with fewer shaded samples while maintaining the same quality color/z/stencil and coverage sampling. CSAA further optimizes this process by decoupling coverage from color/z/stencil, thus reducing bandwidth and storage costs."
 
It depends on if it's a purely software implementation or some hardware was added to the G8x ROPS specifically for it.
 
Anyone seen examples of CSAA being used? Sounds promising. I was just reading how the new 8600 series cards from NVidia will use CSAA because they have a 128-bit memory bus. Could this be used on RSX?

Here is some info:

"CSAA produces antialiased images that rival the quality of 8x or 16x MSAA, while introducing only a minimal performance hit over standard (typically 4x) MSAA. It works by introducing the concept of a new sample type: a sample that represents coverage. This differs from previous AA techniques where coverage was always inherently tied to another sample type. In supersampling for example, each sample represents shaded color, stored color/z/stencil, and coverage, which essentially amounts to rendering to an oversized buffer and downfiltering. MSAA reduces the shader overhead of this operation by decoupling shaded samples from stored color and coverage; this allows applications using antialiasing to operate with fewer shaded samples while maintaining the same quality color/z/stencil and coverage sampling. CSAA further optimizes this process by decoupling coverage from color/z/stencil, thus reducing bandwidth and storage costs."

Sounds awesome in theory for cards with very limited bandwidth. I hope it can be implemented in PS3 games.
 