"Pure and Correct AA"

Is this any good? Done using Paint Shop Pro: resized to 320x240 from a 1280x960 original (the monitor only does 1280x1024); not converted to 16-bit.
(PS: I couldn't capture at 320x240 directly, the game doesn't support that resolution.)

1 = Bicubic
2 = Bilinear
3 = Pixel Resize
4 = Weighted Average
The last one is good; the others are not. The "bicubic", "bilinear" and "pixel resize" methods look as if they are using WAY too small filter kernels for the amount of rescaling being done. The "weighted average", though, looks like a fairly accurate representation of (box-filtered, non-gamma-corrected) 16x supersampling.
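For reference, that "weighted average" result is essentially what a plain box average over each 4x4 block of source pixels produces. A minimal sketch of the idea in Python, under assumptions: a float image array in [0, 1], and `box_downsample` is just an illustrative name:

```python
# Sketch: 1280x960 -> 320x240 via a box filter, i.e. each output pixel
# is the unweighted mean of a 4x4 block of source pixels -- equivalent
# to 16x ordered-grid supersampling with box reconstruction. Operating
# directly on the stored values matches the "non-gamma-corrected" part.
import numpy as np

def box_downsample(src: np.ndarray, factor: int = 4) -> np.ndarray:
    h, w, c = src.shape
    # Regroup the image into (factor x factor) blocks, one per output pixel...
    blocks = src.reshape(h // factor, factor, w // factor, factor, c)
    # ...and average each block; all 16 samples get equal (box) weight.
    return blocks.mean(axis=(1, 3))

src = np.random.rand(960, 1280, 3).astype(np.float32)  # stand-in image
dst = box_downsample(src)                               # (240, 320, 3)
```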
 
This is what you get with Photoshop:

Bilinear:
bilinear.jpg


Bicubic:
bicubic.jpg
 
For comparison I've done bilinear:

lomacjl6BLGC.png


and bicubic:

lomacjl6BCGC.png


resizes of the original 1280 image in PS, using 16-bit linear space. For your viewing pleasure they're saved in sRGB space.
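The exact Photoshop pipeline isn't public, but the idea is straightforward to reproduce: decode sRGB to linear light, filter there, re-encode. A rough sketch in Python with PIL; the filename is a placeholder, and the per-channel resize is just a convenience since PIL's float mode is single-channel:

```python
# Sketch of a gamma-correct (linear-light) downsize of an sRGB image.
import numpy as np
from PIL import Image

def srgb_to_linear(x):
    # piecewise sRGB decode, x in [0, 1]
    return np.where(x <= 0.04045, x / 12.92, ((x + 0.055) / 1.055) ** 2.4)

def linear_to_srgb(y):
    return np.where(y <= 0.0031308, y * 12.92, 1.055 * y ** (1 / 2.4) - 0.055)

src = np.asarray(Image.open("lomac1280.png").convert("RGB"),
                 dtype=np.float32) / 255.0            # hypothetical source
lin = srgb_to_linear(src)

# PIL's resampling filters run on single-channel float ("F" mode)
# images, so filter each channel separately, in linear light.
resized = [
    np.asarray(Image.fromarray(np.ascontiguousarray(lin[..., i]), mode="F")
               .resize((320, 240), Image.BICUBIC))
    for i in range(3)
]
out = linear_to_srgb(np.clip(np.stack(resized, axis=-1), 0.0, 1.0))
Image.fromarray((out * 255 + 0.5).astype(np.uint8)).save("lomac320.png")
```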

Jawed
 

As far as my eyes can tell, the "bicubic" image actually looks noticeably less jaggy than the "bilinear" image; the difference is particularly visible in the upper-right part of the airstrip.

I presume that this is the kind of improvement that one can get from using downsampling filters other than the traditional 1x1 pixel box filter.
 
This kind of downsizing aliases; those filters only interpolate.

As far as I can tell from the pictures posted here, this is true for the PaintShop Pro filters used in Davros's post, but false for the Photoshop filters used in the postings of Humus & Jawed.
 
This kind of downsizing aliases; those filters only interpolate.
I don't have an off-the-shelf sinc filter handy :cry: - so it's a question of how much aliasing is introduced by, for example, the 16:1 bicubic resizing.
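(For what it's worth, an off-the-shelf windowed sinc does exist nowadays, e.g. PIL's LANCZOS filter, a 3-lobe windowed sinc. A minimal sketch, with the filename as a placeholder:)

```python
# Sketch: 16:1-area downsize with PIL's Lanczos (windowed sinc) filter.
# Note this runs on the stored (gamma-space) values unless you convert
# to linear light first, as in the earlier snippet.
from PIL import Image

img = Image.open("lomac1280.png")              # hypothetical 1280x960 source
img.resize((320, 240), Image.LANCZOS).save("lomac320_lanczos.png")
```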

This page has a brutal methodology for testing interpolation, rotating an image in 5 degree intervals 36 times, then a final 180 degree rotation:

http://www.all-in-one.ee/~dersch/interpolator/interpolator.html
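In case anyone wants to replicate it, a rough sketch of that methodology (details assumed: a square test image, PIL's bicubic as the interpolator under test):

```python
# Sketch of the rotation torture test: rotate 36 times by 5 degrees,
# then once by 180 (360 total), and diff against the original image.
# Interpolation error compounds with every resampling pass.
from PIL import Image, ImageChops

img = Image.open("testchart.png")     # hypothetical square test image
out = img
for _ in range(36):
    out = out.rotate(5, resample=Image.BICUBIC)    # 36 * 5 = 180 deg
out = out.rotate(180, resample=Image.BICUBIC)      # back to the start
ImageChops.difference(img, out).save("rotation_error.png")
```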

Compare the PS bicubic result achieved on that page, with the PS 16-bit linear space version of the same process:

z256.png


I imagine that the sinc would look better if it too were done in 16-bit linear space.

Jawed
 
This page has a brutal methodology for testing interpolation
One which I don't particularly agree with. It cannot test the interpolation quality in the intermediate steps ... which is what I'm personally interested in. Why bother rotating at all? Just do nothing in every interpolation step ... hey presto, ultimate performance on this benchmark.

IMO this type of testing should be done by taking a very high resolution image, generating intermediate images through footprint filtering, and then trying to interpolate from one intermediate to the other. That way you have an objective benchmark for the actual quality of the interpolation and not its invertibility.
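Something like this, perhaps (a sketch with assumed details: heavy Lanczos downsizes from a high-res source stand in for the footprint-filtered ground truths, and RMSE is the score):

```python
# Sketch of the proposed benchmark: reconstruct one footprint-filtered
# intermediate from another, and score against the ground truth.
import numpy as np
from PIL import Image

hires = Image.open("hires.png").convert("L")       # hypothetical source

# Two "ground truth" intermediates, footprint-filtered from the source.
truth_a = hires.resize((320, 240), Image.LANCZOS)
truth_b = hires.resize((640, 480), Image.LANCZOS)

# Interpolator under test: reconstruct the larger truth from the smaller.
guess_b = truth_a.resize((640, 480), Image.BICUBIC)

err = np.asarray(guess_b, np.float64) - np.asarray(truth_b, np.float64)
print(f"RMSE vs footprint-filtered truth: {np.sqrt(np.mean(err ** 2)):.2f}")
```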
 
One which I don't particularly agree with. It cannot test the interpolation quality in the intermediate steps ... which is what I'm personally interested in. Why bother rotating at all? Just do nothing in every interpolation step ... hey presto, ultimate performance on this benchmark.
Umm, so what happens when I sample the coordinates at (0.35, 0.2): you'd return what, 0? Or maybe a null pointer? After every rotation you do another render-to-texture and then sample against the new texture. You can't magically go back at the end and sample the original image.
 
One which I don't particularly agree with. It cannot test the interpolation quality in the intermediate steps ... which is what I'm personally interested in. Why bother rotating at all? Just do nothing in every interpolation step ... hey presto, ultimate performance on this benchmark.
The page's author was comparing available image processing software for image rotation, sizing, stretching etc., ultimately to assess the comparative quality of his own Panorama Tools application.

IMO this type of testing should be done by taking a very high resolution image, generating intermediate images through footprint filtering, and then trying to interpolate from one intermediate to the other. That way you have an objective benchmark for the actual quality of the interpolation and not its invertibility.
This started out as an exercise to generate a 320x240 resolution "ideal" target image, to use as a reference against which one might compare the AA algorithms out there.

We're not trying to quantify the quality of interpolation per se, merely to find a readily available good-enough technique to produce a target image.

Without writing an interpolator, how can we generate such a target? Do you have some ultra-high quality interpolators handy?

Jawed
 
Umm, so what happens when I sample the coordinates at (0.35, 0.2): you'd return what, 0? Or maybe a null pointer? After every rotation you do another render-to-texture and then sample against the new texture. You can't magically go back at the end and sample the original image.
I'm pretty sure he simply padded the image before rotation. If not, just iterate over the pixels left to right, top to bottom each time in the source and target images and copy the pixels unchanged ... same perfect result in the end, and still completely irrelevant.

Jawed, I don't actually know of any good resizers for large-ratio downsizing (or rather, I don't know exactly what algorithms programs use; there might be good ones among them). What you want to do is simply a weighted average over each destination pixel's footprint in the source image, in linear color space ... a gaussian with a standard deviation of ~1, for instance. Boxes are a bad idea, they are anisotropic ... not too good with stills, worse in motion. Even a circular sinc if you really want (it does not give the same result as resampling in the horizontal and vertical directions with the sinc kernel, since in 2D that's effectively an anisotropic kernel ... the gaussian is the only separable isotropic kernel).
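A sketch of that, with the details assumed (scipy's separable gaussian does the footprint weighting; sigma is given in destination pixels, so it is scaled by the downsize factor):

```python
# Sketch: gaussian-footprint downsize in linear light. Blur with a
# gaussian of sigma ~ 1 destination pixel (= factor source pixels),
# then take every factor-th sample near each footprint's center.
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_downsample(lin: np.ndarray, factor: int = 4,
                        sigma_dst: float = 1.0) -> np.ndarray:
    """lin: 2-D array already in linear color space."""
    blurred = gaussian_filter(lin, sigma=sigma_dst * factor)
    return blurred[factor // 2::factor, factor // 2::factor]

lin = np.random.rand(960, 1280)        # stand-in linear-light image
small = gaussian_downsample(lin)       # (240, 320)
```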
 
I'm pretty sure he simply padded the image before rotation. If not, just iterate over the pixels left to right, top to bottom each time in the source and target images and copy the pixels unchanged ... same perfect result in the end, and still completely irrelevant.
But that isn't interpolation, is it? He rotates the quad, then samples against the original image. We aren't talking about moving pixels, we are talking about doing texture sampling. Texture sampling doesn't involve shuffling pixels around, and there is no way you can tell what the rotation is until multiple texture samples have been done; and then they could randomize the sample positions and discard the fake ones.

If you really think you can do it, I'll come up with a test program in Java or something and you can fill in the texture sampling code. Once I start doing a random number of rotations and directions and discarding random samples, I think you'll have fun faking it.
 
But that isn't interpolation, is it?
Call it what you want, it still gets perfect results.

I call it a thought experiment which constitutes proof that there is no monotonic relation between quality in the intermediate steps and quality after 360 degrees of rotation.
 
Doesn't Nyquist state that supersampling is the only way to completely solve frequency aliasing? Filtering can never lead to perfection in that respect, according to my understanding of what that sampling theorem states at least.
As Simon commented, pre-filtering is good while post-filtering does nothing but add blur, which must never be regarded as anti-aliasing.
From pg. 643 of "Computer Graphics: Principles and Practice", 2nd edition in C:
Note that, no matter what filter is used to postfilter the samples, damage caused by an inadequate initial sampling rate will not be repaired.
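That quote is easy to demonstrate numerically. A toy example (the frequencies are assumed, chosen for convenience): a sine at 9/16 cycles per sample lands on exactly the same sample values (up to sign) as one at 7/16, so nothing applied after sampling can tell them apart:

```python
# Sketch: an above-Nyquist sine vs. its below-Nyquist alias. Their
# sample values are identical up to sign, so no postfilter can undo
# the damage done by the inadequate sampling rate.
import numpy as np

n = np.arange(16)
high  = np.sin(2 * np.pi * (9 / 16) * n)   # 9/16 cyc/sample, > Nyquist
alias = np.sin(2 * np.pi * (7 / 16) * n)   # its alias, < Nyquist
print(np.allclose(high, -alias))           # True: indistinguishable
```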
 
You have to posit a footprint for the pixel samples and you have to posit the reconstruction filter you use for textures (i.e. the magnification filter). After that, infinite supersampling is the pure and correct way of doing what a real-life camera would do given the same scene ... approaching that as closely as possible is the correct way of doing AA, and that does result in blending multiple samples together in one way or another.
If I said "pure and correct AA" is about working on a pixel and not about working/combining/blending neighbouring pixels, would I be crazy?
 
Actually, maybe Rev could ask JC and TS
Wait, you're asking me to ask them and post their replies here, right?

Refreshing some memories (without any kind of "name-dropping"):
A game programmer/developer once emailed me, saying:
A hardware feature that would be a very helpful component to address all forms of aliasing would be the ability to have the exact pixel centers (probably on a per-quad basis so derivatives are still correct) jittered in a controllable way instead of being on a fixed grid. Just jittering the texel sampling point in a shader does reduce all forms of in-surface aliasing (at the expense of adding noise, of course), but it would be nice to have that unify geometric anti-aliasing as well.
Not "pure/correct AA" but IMO this helps a lot.

what they think about the great "shader aliasing" debate, and how they feel about being left alone to deal with it as if somehow it was their fault/problem alone.
I don't think any knowledgeable person could point fingers at ISVs. See above; at least one of the ISVs has been giving this some thought.

As the leading engine makers, their comments might be interesting on what they feel would be the best way forward for the industry that would be both robust and friendly. Is it an API issue? An IHV issue? An engine-maker issue? An individual ISV issue? Some combination? Where are the relative responsibilities, and who should rightfully have the onus on them to lead the way forward?
I wouldn't use the word "onus" but I feel the responsibility should rest entirely on the shoulders of the IHVs.

IMO, this isn't an "issue" for anyone in the way you think it is (or even a combination). It's "one of those things" when it comes to displayed graphics (as things stand right now).

This is a question that has been bothering me for the last year, as I sense a lot of finger pointing going on, and a severe lack of leadership and cooperation to solve a growing problem that's only going to get worse. . .
Aliasing isn't a growing problem (in case that's what you meant). The problem is "growing" because more gamers have become more educated about graphics. The blaming part is understandable but is a result of folks not understanding where the real problem of not tackling aliasing (or, the real problem of aliasing) lies.
 
Call it what you want, it still gets perfect results.

I call it a thought experiment which constitutes proof that there is no monotonic relation between quality in the intermediate steps and quality after 360 degrees of rotation.

Except that it isn't a 360-degree rotation, it is a 10-degree rotation: a 10-degree rotation applied to the 35th image in a series of 10-degree rotations. It's comparable to saying you can minify an image 10000x and then magnify it 10000x, and claim you can create an interpolator that will do it perfectly with no quality loss because overall it is 1x magnification, the same as the original. I'm sure most people here would disagree with you.
 
With minification the number of pixels changes and the trick can't work. With proper rasterization a square will have the same number of pixels each time regardless of rotation around its center, so there it will.

To make you feel better ... there is no monotonic relation between quality in the intermediate steps and quality after 360 degrees of rotation done in 10 degree steps.
 
Keep the image the same size, and for sampling coordinates outside the bounds use black, as per this image: http://www.all-in-one.ee/~dersch/interpolator/PT_cubic75_z256.GIF
That way you keep the same number of pixels. If you're not happy with that option, feel free to wrap the texture coordinates instead.

If there is a blending of pixels, then there is an attempt to separate that blending, which will occur in all but a few cases during multiple rotations (90-degree multiples, assuming you're using a square pixel system). That is a monotonic relation with quality, as each step further reduces quality.

Funnily enough, there is the same problem of trying to separate out blended information in the minification-then-magnification problem I proposed to you. So either both of them have a monotonic relation with quality, or neither does. So go ahead, solve the problem and prove yourself wrong.
 
This page has a brutal methodology for testing interpolation...
I really wonder what that would look like with the 'geometric pixel coverage' sampler Humus proposed. One part of me says it would produce perfect results, while the other part says it would blend colors from neighboring pixels, smearing things out fairly quickly. An (infinite) sinc filter retains all information, and the negative weights prevent smearing.
 