"Pure and Correct AA"

"Correct AA" would then be 0.1 * red + 0.55 * blue + 0.45 * green.
No, that would be correct coverage. But a box filter isn't particularly good at curing aliasing.

For instance, there are loads of monitors out there with a response curve that's a power function with an exponent anywhere from about 1.8 up to maybe 2.6. sRGB assumes 2.2, and that's reasonable to use in practice, but no monitor out there exactly matches it, particularly not after the user has tweaked the brightness and contrast settings.
As long as you're doing all color blending in linear color space the response curve of the monitor doesn't matter.
Matching the response curve of the monitor is the task of the pixel output pipeline ("RAMDAC"), after AA has been resolved. Everything before that should be processed in linear color space, using sRGB only as an encoding for more efficient storage in fixed point formats.
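
To make the order of operations concrete, here's a minimal Python sketch of blending in linear light with sRGB used only as the storage encoding; the function names are illustrative, not from any particular API:

def srgb_to_linear(c):
    """Decode an sRGB-encoded channel value in [0, 1] to linear light."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    """Encode a linear-light channel value in [0, 1] back to sRGB for storage."""
    return c * 12.92 if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

def blend(a_srgb, b_srgb, t):
    """Blend two sRGB-encoded values with weight t, doing the math in linear space."""
    a, b = srgb_to_linear(a_srgb), srgb_to_linear(b_srgb)
    return linear_to_srgb((1 - t) * a + t * b)

# A 50/50 blend of black and white stores as ~0.735 in sRGB encoding, not 0.5.
print(blend(0.0, 1.0, 0.5))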


If I said "pure and correct AA" is about working on a pixel and not about working/combining/blending neighbouring pixels, would I be crazy?
I wouldn't say crazy, but wrong. ;)
A single pixel won't show spatial aliasing. When you're truly talking about curing aliasing, you're talking about removing the frequencies which cannot be represented in the final image resolution. To do this "pure and correct" you would indeed need a sinc filter on the original signal.

However, it is very important to understand that perfect antialiasing and perfect subjective image quality are two different concepts. In most cases a sinc filter won't give you the best perceived quality, but neither will the other end of the scale, a box filter.

A "good" filter will be somewhere in between, depending on scene content, display technology and pixel density, taking at least some neighbouring samples into account.


I wouldn't use the word "onus" but I feel the responsibility should rest entirely on the shoulders of the IHVs.
I don't think I agree with that. IHVs give developers the tools to work with, and those tools certainly have room for improvement (multisample-aware shaders, for example). But if developers want the flexibility of a programmable pipeline, they also have to take responsibility for side effects.
 
I'm pretty sure he simply padded the image before rotation. If not, just iterate over the pixels, left to right, top to bottom, in the source and target images each time and copy the pixels unchanged ... same perfect result in the end, and still completely irrelevant.
OK, I'm completely confused here, there's no padding being used. The image I posted has no padding. Dersch hasn't used any padding in any of the tests he conducted.

Jawed, I don't actually know of any good resizers for large-ratio downsizing (or rather, I don't know exactly what algorithms the programs use; there might be good ones among them). What you want to do is simply a weighted average over a footprint of each pixel in the source image, in linear color space, with a gaussian with a standard deviation of ~1 for instance. Boxes are a bad idea, they are anisotropic ... not too good with stills, worse in motion. Or even a circular sinc if you really want (it does not give the same result as resampling in the horizontal and vertical directions with the sinc kernel, since in 2D that separable version is effectively also an anisotropic kernel ... the gaussian is the only separable isotropic kernel).
So what you're saying is "magnify (oversample) the image, say 16x, so that each pixel can then be sampled 16x within its own original footprint, then perform a weighted average of those samples solely within the boundary of each original pixel (i.e. the set of 16 samples)".

Jawed
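
A rough Python sketch of the weighted-average downsize described above: each destination pixel gets a gaussian-weighted average over its footprint in the source image, done in linear light. The plain 2.2 power is a crude stand-in for proper sRGB decoding, and sigma is measured in destination pixels, per the ~1 suggestion:

import numpy as np

def downsample(src_srgb, factor, sigma=1.0, radius=3):
    """Large-ratio downsize by an integer factor, averaging in linear light."""
    src = np.power(src_srgb, 2.2)               # crude linearisation for the sketch
    h, w = src.shape[:2]
    dh, dw = h // factor, w // factor
    dst = np.zeros((dh, dw) + src.shape[2:])
    for y in range(dh):
        for x in range(dw):
            cy, cx = (y + 0.5) * factor, (x + 0.5) * factor   # source-space centre
            acc, wsum = 0.0, 0.0
            r = int(radius * sigma * factor)                   # support in source pixels
            for sy in range(max(0, int(cy) - r), min(h, int(cy) + r + 1)):
                for sx in range(max(0, int(cx) - r), min(w, int(cx) + r + 1)):
                    # Distance from the pixel centre, in destination-pixel units.
                    d2 = ((sy + 0.5 - cy) ** 2 + (sx + 0.5 - cx) ** 2) / factor ** 2
                    wgt = np.exp(-d2 / (2 * sigma * sigma))
                    acc += wgt * src[sy, sx]
                    wsum += wgt
            dst[y, x] = acc / wsum
    return np.power(dst, 1 / 2.2)                # back to gamma-encoded storage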
 
Except that it isn't a 360-degree rotation, it's a 10-degree rotation. It's a 10-degree rotation of the 35th image in a series of 10-degree rotations. It's comparable to saying that you minify an image 10000x and then magnify it 10000x, and claiming you can create an interpolator that will do it perfectly with no image quality loss because overall it's a 1x magnification, the same as the original. I'm sure most people here would disagree with you.
Actually it is 36 rotations by 5 degrees. The rotation by 180 degrees at the end (which is "perfect", i.e. non-destructive) is merely to allow the viewer to easily compare the aliased result of the test with the start image.

I've got my fingers crossed that some brave person will implement a 16x16 sinc in linear space to see how well it can work. But that's a bit OT...

Jawed
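
For reference, a rough Python sketch of the test procedure as described: 36 successive 5-degree rotations plus the lossless final 180 degrees, compared against the start image. scipy's spline interpolation stands in here for whatever kernel is actually under test, and the test image and comparison window are made up for the sketch:

import numpy as np
from scipy import ndimage

img = np.random.rand(512, 512)            # stand-in for the reference image
out = img.copy()
for _ in range(36):
    out = ndimage.rotate(out, 5.0, reshape=False, order=3)  # cubic spline resampling
out = np.rot90(out, 2)                     # the final, non-destructive 180 degrees

# Only the central region is meaningful: data near the edges has been rotated
# out of frame and back, so restrict the comparison to the middle of the image.
c = slice(192, 320)
print(np.abs(out[c, c] - img[c, c]).mean())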
 
The image I posted has no padding. Dersch hasn't used any padding in any of the tests he conducted.
You are right, on closer inspection ... he didn't use padding, he simply used only the central part of the image for comparison purposes.
 
You are right, on closer inspection ... he didn't use padding, he simply used only the central part of the image for comparison purposes.
And it's notable, for instance, that the 36 iterations of the 16x16 sinc destroy image data that's further than ~128 pixels from the centre, so he's got no choice about using only the central part of the image for comparison!

Jawed
 
I wouldn't use the word "onus" but I feel the responsibility should rest entirely on the shoulders of the IHVs.

Kirk washed his hands of it a year or so ago. Basically said that if the ISVs come up with a solution, then he'll consider whether there's something the IHVs can do to accelerate it in hardware.
 
I don't think I agree with that. IHVs give developers the tools to work with, and those tools certainly have room for improvement (multisample-aware shaders, for example). But if developers want the flexibility of a programmable pipeline, they also have to take responsibility for side effects.

I'd have said the API owner, as advised by the stakeholders (i.e. IHVs and ISVs). If not enforced through the API (which I agree is probably impossible because of your flexibility point), then at least addressed through SDKs, tools, code samples, and understood and communicated best practices for achieving optimum results... all that good stuff. I haven't seen any evidence that any of that is going on re shader aliasing. But then I'm not necessarily in a position where I would (though I'd expect enough of the people here are in such a position that it would have been mentioned where I'd see it). Is it going on? Is there a software development tools infrastructure in place that is helping ISVs address this issue?

Nvidia owns Cg, for instance, so while Kirk was washing his hands on the hardware side, they could have exhibited some leadership on the software tools side. Though if it were tied too closely to Cg it wouldn't be as useful industry-wide, it would at least be something. And maybe it wouldn't need to be tied that closely to Cg.
 
Couldn't you simply supersample inside the shader? The only advantage of being able to address subpixel samples is that occlusion and polygon edges would be handled more correctly, but that slight approximation is already present with multisampling and normal texturing and doesn't seem to cause huge problems.

Either way it's going to be slow ... there is very little to be done about that.
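
As a sketch of what "supersample inside the shader" might look like: evaluate the aliasing-prone shading term at several jittered sub-pixel offsets and average the results. The shade function, its footprint parameters and the sample count here are placeholders, not anyone's actual shader:

import random

def shade(u, v):
    # Placeholder for an aliasing-prone shading function (e.g. a tight highlight).
    return (u * 10.0) % 1.0

def shade_supersampled(u, v, du, dv, n=4):
    """Average n jittered evaluations within the pixel's footprint (du, dv)."""
    total = 0.0
    for _ in range(n):
        ju = (random.random() - 0.5) * du
        jv = (random.random() - 0.5) * dv
        total += shade(u + ju, v + jv)
    return total / n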
 
Kirk washed his hands of it a year or so ago. Basically said that if the ISVs come up with a solution, then he'll consider whether there's something the IHVs can do to accelerate it in hardware.

Are you by chance referring to the Q&A where he also gave some rather cryptic answer about HDR and MSAA combinations? If so, I'm willing to bet that with G80 on shelves that tune would sound entirely different if you asked him today.
 
Couldn't you simply supersample inside the shader? The only advantage of being able to address subpixel samples is that occlusion and polygon edges would be handled more correctly, but that slight approximation is already present with multisampling and normal texturing and doesn't seem to cause huge problems.

Either way it's going to be slow ... there is very little to be done about that.

How "slow" would it really be if we'd be talking about let's say 2x or 3x sample selective SSAA inside the shader? Would really all shaders need to be supersampled or could one get away with only the most critical cases in a scene?
 
Are you by chance referring to the Q&A where he also gave some rather cryptic answer about HDR and MSAA combinations? If so, I'm willing to bet that with G80 on shelves that tune would sound entirely different if you asked him today.

Yes; maybe. He's welcome to drop in and address it at any time. ;)
 
No, that would be correct coverage. But a box filter isn't particularly good at curing aliasing.

With square pixels, blending according to coverage of the corresponding square is correct, IMHO. That's what you'd get if you took a picture with a camera (ignoring scattering in the lens and other undesirable artifacts): the integral of the incoming light behind that pixel (though of course CCDs suffer from less than 100% coverage, just like pixels on a screen).

As long as you're doing all color blending in linear color space the response curve of the monitor doesn't matter.
Matching the response curve of the monitor is the task of the pixel output pipeline ("RAMDAC"), after AA has been resolved. Everything before that should be processed in linear color space, using sRGB only as an encoding for more efficient storage in fixed point formats.

I strongly disagree. Why do you think ATI introduced gamma-correct AA? Because the response curve matters; that's what gamma-correct AA takes into account, and that's why it looks clearly superior to blending the pixels in linear space. Well, I guess you could call it linear in the sense that you need to blend in linear monitor light emittance space. Correct blending would be done after the RAMDAC, which could be done, but without the proper hardware for it we have to assume the curve is a proper 2.2 one. That's still a lot better than just doing it in linear space. You can see this clearly by taking any antialiased screenshot and boosting or reducing the gamma. The AA effect gets more or less lost if you go high or low.
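
A small Python sketch contrasting a resolve that averages the stored framebuffer values directly with the "gamma-correct" resolve described above, under the assumed 2.2 display curve; the sample values and function names are illustrative:

GAMMA = 2.2  # assumption: no exact monitor response available

def resolve_naive(samples):
    # Average the stored (gamma-encoded) sample values directly.
    return sum(samples) / len(samples)

def resolve_gamma_correct(samples):
    # Decode to linear light, average, then re-encode for the display.
    linear = [s ** GAMMA for s in samples]
    return (sum(linear) / len(linear)) ** (1.0 / GAMMA)

# A 50% covered edge between black and white:
samples = [0.0, 0.0, 1.0, 1.0]
print(resolve_naive(samples))          # 0.5  -> displays darker than half intensity
print(resolve_gamma_correct(samples))  # ~0.73 -> displays at ~half intensity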
 
Humus, between the anti-aliasing filter and the demosaicking with Bayer-pattern CCDs, things don't quite work out like that. The PSF is more like a gaussian with a flattened top than a box, AFAIR.
 
With square pixels, blending according to coverage of the corresponding square is correct, IMHO. That's what you'd get if you took a picture with a camera (ignoring scattering in the lens and other undesirable artifacts): the integral of the incoming light behind that pixel (though of course CCDs suffer from less than 100% coverage, just like pixels on a screen).
Digital colour cameras (with their Bayer sensors) are not a good source of reference images:

http://research.microsoft.com/~rcutler/pub/Demosaicing_ICASSP04.pdf

unless you do a fair amount of down-sampling, which brings us back to our initial problem: what's the best way to sample...

Jawed
 
But if developers want the flexibility of a programmable pipeline, they also have to take responsibility for side effects.
Aliasing in computer graphics is a side effect of flexible programmability, and developers are partly to blame? I'm sorry, but can you repeat that or clarify?
 
With square pixels, blending according to coverage of the corresponding square is correct, IMHO. That's what you'd get if you took a picture with a camera (ignoring scattering in the lens and other undesirable artifacts): the integral of the incoming light behind that pixel (though of course CCDs suffer from less than 100% coverage, just like pixels on a screen).

It's possible that a pixel-sized bounding box is the actual implementation, but that doesn't mean it's the best one.

Once again, when it comes to pure anti-aliasing, a sinc filter is theoretically the best solution. Like the IHV AA expert said: not opinion, just math and science. And that filter has major components outside the bounding box of the pixel. (Unfortunately, those components are negative for the pixels next door, so it's basically impossible to realize in real life.)

It is possible that different filter profiles provide subjectively more 'pleasing' images, but that's orthogonal to pure anti-aliasing. Just like images that come straight out of high-end digital SLR cameras often look duller than those from a cheap snapshot camera, because the latter applies very liberal amounts of (over-)saturation and contrast enhancement.
 
I strongly disagree. Why do you think ATI introduced gamma-correct AA?
Humus, I think you missed the fact that Xmas said to do the calculations in linear space, which means it is being done correctly.
 
Actually it is 36 rotations by 5 degrees. The rotation by 180 degrees at the end (which is "perfect", i.e. non-destructive) is merely to allow the viewer to easily compare the aliased result of the test with the start image.
It's really rather irrelevant though; 36 rotations by 10 degrees would give you a similar test.
 