Using the SPUs to do AA

I think it needs clarifying for the point of the original post, though. When Betanumerical asks about AA on SPEs, is he thinking of higher-information 'proper' AA, or just some form of jaggy reduction which can have a negative impact on IQ elsewhere?
 
I'm baffled; oversampling is not a nice way to do AA, it is A way to do AA.
Edge filtering is not a way to remove aliasing, as it removes everything; maybe it should be called by its name: blur. We already have a name for it, I don't see why we should use another one.
It's clearly not a semantics issue; let's call things by their names instead of applying fancy terms in the wrong way.
 
I'm fully with Marco here; post-filter effects can't compensate for the loss of information.
 
I'm baffled; oversampling is not a nice way to do AA, it is A way to do AA.
Not nice in what sense? It's certainly not nice in that it's brute force and that it doesn't scale well, but it's nice in the sense that you have more information which is about as correct as you can get given how wrongly it's being rasterized in the first place.

Edge filtering is not a way to remove aliasing, as it removes everything; maybe it should be called by its name: blur. We already have a name for it, I don't see why we should use another one.
Can't you use something a little harsher than that :p? Say, "misguided information destruction"? Sure, it's not compensating for the loss of information caused by undersampling, but the idea is to destroy more information so that nobody can tell that there was a problem in the first place. ;)

In any case, as wrong as it is on paper, it all boils down to the visual effect -- if it's a blur, it's at least a blur designed to target the parts complained about... People are easy to fool, and that's why there were so many tragic MSAA implementations over the years. In a purely visual sense, I don't mind calling them "de-artifacting smart blurs" or "AA fake trickery" or something, but I do agree that it's not real AA.
 
Computer gfx people can define AA as they see fit although it would be weird without knowing what the term alias refers to.

Technically speaking edge blur or any kind of low pass filtering after sampling is not AA.

I think part of the confusion arises from the popular belief that AA means getting rid of aliasing jaggies. However, anti-aliasing means preventing the aliasing phenomenon. Since this is a sampling issue, you cannot do that after sampling using the samples that are the result of aliasing.

Still, frequent reference to "loss of information" may be misleading, as it's unavoidable even with infinite or full AA.
 
Sure, it's not compensating for the loss of information caused by undersampling, but the idea is to destroy more information so that nobody can tell that there was a problem in the first place. ;)
The right place to do that is before sampling, when it's possible (see mip maps).
In any case, as wrong as it is on paper, it all boils down to the visual effect -- if it's a blur, it's at least a blur designed to target the parts complained about...
I'm not saying that doing edge blurring is wrong or immoral; I'm saying that it's wrong to call it anti-aliasing.
It's blur; why should we call it by another name which is already used for something different?
People are easy to fool, and that's why there were so many tragic MSAA implementations over the years. In a purely visual sense, I don't mind calling them "de-artifacting smart blurs" or "AA fake trickery" or something, but I do agree that it's not real AA.
People are easy to fool, but even a baby can tell the difference between a properly anti-aliased image and a blurred one, especially if stuff is in motion.
 
I'm not saying that doing edge blurring is wrong or immoral; I'm saying that it's wrong to call it anti-aliasing.
But 2xSaI, for example, isn't a blur filter. :cool:

Yet it smooths out stairstepping artefacts - at least when upsampling from a low-res low-color 2D image. I've no idea how well it could work on truecolor 3D renderings..

However, there might well be other resampling kernels that could indeed work better on such images without necessarily being a blur filter.

Peace.
 
I think it needs clarifying for the point of the original post, though. When Betanumerical asks about AA on SPEs, is he thinking of higher-information 'proper' AA, or just some form of jaggy reduction which can have a negative impact on IQ elsewhere?

My understanding of AA was that it just removed jaggies. Could you use a filter like they do on the new ATI cards (HD 2900s, etc.)?
 
Computer gfx people can define AA as they see fit although it would be weird without knowing what the term alias refers to.
People often forget that aliasing happens in audio as well, even when dealing with frequencies below the Nyquist limit of your sampling rate. As the name implies, it's just the idea of a wave at some frequency that gets perceived as something of a different frequency entirely (hence "aliasing"). We get used to certain examples like videos of cars that move forward but look like their wheels are turning backwards -- just another example of the same thing. In the case of audio, they attack the problem by just oversampling (e.g. 192 kHz sample rates). It doesn't actually get rid of aliasing (discrete sampling can always alias), it just moves the aliasing into a very high frequency range beyond what the human ear can pick up.
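To make that folding concrete, here's a minimal sketch (toy numbers of my own: a 48 kHz sample rate instead of 192, and an arbitrary 43 kHz tone) showing that a tone above the Nyquist limit produces exactly the same sample values, up to a sign flip, as its low-frequency alias:

```cpp
// A tone above the Nyquist limit (fs/2) is indistinguishable, once sampled,
// from its "alias" at fs - fTone. Toy numbers: 43 kHz sampled at 48 kHz.
#include <cmath>
#include <cstdio>

int main() {
    const double kPi = 3.14159265358979323846;
    const double fs = 48000.0;        // sample rate (Hz)
    const double fTone = 43000.0;     // tone above Nyquist (fs/2 = 24 kHz)
    const double fAlias = fs - fTone; // folds down to 5 kHz
    for (int n = 0; n < 8; ++n) {
        const double t = n / fs;
        const double high = std::sin(2.0 * kPi * fTone * t);
        const double low  = std::sin(2.0 * kPi * fAlias * t);
        // high == -low at every sample point: the two tones carry the same
        // information after sampling, so the 43 kHz wave "aliases" to 5 kHz.
        std::printf("n=%d  43 kHz: % .4f   5 kHz: % .4f\n", n, high, low);
    }
    return 0;
}
```

Once the samples are taken, no downstream filter can tell the two apart, which is why the fix has to happen before or during sampling.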

It's blur; why should we call it by another name which is already used for something different?
Well, there are approaches that I don't know if I'd call "blur" filters per se... or at least the techniques themselves aren't really blurs, but there's no way you can avoid the fact that some blurring will result, because that's just the way the universe works.

People are easy to fool, but even a baby can tell the difference between a properly anti-aliased image and a blurred one, especially if stuff is in motion.
Given the way people are, I'm not so sure. Maybe you give people more credit than I do. Well, a baby might be able to, but that ability fades to nothingness upon having developed far enough to be able to post on an Internet forum ;).

For that matter, I can't say I'm sure anybody has ever really seen a properly anti-aliased image. The closest thing they've seen is straight-up supersampling, and usually not sufficient supersampling, and that too with an ordinary box blur to combine your samples.
 
ShootMyMonkey said:
For that matter, I can't say I'm sure anybody has ever really seen a properly anti-aliased image.
Isn't the majority of game media released nowadays exactly that? It's supersampled with ridiculous sample counts, usually using a non-ordered grid (Poisson distributions FTW), and of course better-than-box-filter averaging.

It started as print media only, but these days it's just common practice.
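For illustration, here's a rough self-contained sketch of what that means for a single pixel (entirely my own toy setup: plain pseudo-random jitter rather than a true Poisson-disc distribution, a made-up shadeScene() edge, and a tent weight kept inside one pixel for brevity):

```cpp
// One pixel resolved with jittered samples, combined two ways: a plain box
// average and a tent-weighted average (samples near the pixel centre count
// more). shadeScene() is a made-up stand-in for the renderer.
#include <cmath>
#include <cstdio>
#include <cstdlib>

// Toy "scene": an edge at x = 0.37 in pixel-local coordinates.
static double shadeScene(double x, double /*y*/) {
    return x < 0.37 ? 1.0 : 0.0;
}

int main() {
    const int kSamples = 64;
    double boxSum = 0.0, tentSum = 0.0, tentWeightSum = 0.0;
    std::srand(1234);
    for (int i = 0; i < kSamples; ++i) {
        // Jittered (pseudo-random) positions inside the pixel footprint.
        const double x = std::rand() / (double)RAND_MAX;
        const double y = std::rand() / (double)RAND_MAX;
        const double c = shadeScene(x, y);
        boxSum += c; // box filter: every sample counts equally
        // Tent filter, here confined to this one pixel for brevity; a real
        // implementation would let the kernel overlap neighbouring pixels.
        const double w = (1.0 - 2.0 * std::fabs(x - 0.5)) *
                         (1.0 - 2.0 * std::fabs(y - 0.5));
        tentSum += c * w;
        tentWeightSum += w;
    }
    std::printf("box average : %.3f\n", boxSum / kSamples);
    std::printf("tent average: %.3f\n", tentSum / tentWeightSum);
    return 0;
}
```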
 
Since this is a sampling issue, you cannot do that after sampling using the samples that are the result of aliasing.

That's a good point. But that's not really a matter of before/after, that's more the difference between:
1) methods which use the same screen-resolution sampling definition
2) methods which use another (finer) one

According to Yoda-Nao, the first one is a trickery of the mind and the second one is the bright side of the Force :smile:
 
In the case of audio, they attack the problem by just oversampling (e.g. 192 kHz sample rates). It doesn't actually get rid of aliasing (discrete sampling can always alias), it just moves the aliasing into a very high frequency range beyond what the human ear can pick up.
If you really want to be pedantic, it doesn't. If you tried to sample an audio signal that contained a significant frequency component at, say, (192 - 5) kHz, you'd end up with something that would sound like a 5 kHz signal.

Of course, with audio signals, it's trivial to put in a low-pass analogue filter that kills anything that is close to, or above, the Nyquist limit.

The problem with 3D graphics is that we can't easily put in such a pre-filter. It'd be incredibly difficult to do, and so we just resort to sampling at a higher rate and hope that any higher-frequency components are insignificantly small. You then put in a post-filter to remove frequencies that are above what is displayable at the target resolution.
 
It's blur; why should we call it by another name which is already used for something different?

Imo blur is the method, AA is the goal. Oversampling is nothing more than a blur btw.

What everybody wants is a signal on the screen whose bandwidth matches the one the screen can reproduce: too high brings aliasing, too low creates blur. Two sides of the same thing, really.

Being stuck in that sandwich, there's nothing wrong with transforming some aliasing into blur if that makes the picture nicer. I reckon that "people" are happier with a little bit of extra blur to compensate for aliasing. The reason for that is... TV!

Anyway, the conclusion is that "what you call blur" and "what you call AA" both remove aliasing.

Perhaps Jaggie-reduction, or de-aliasing.

For the sake of the discussion I'm quite happy following this naming convention.
Blur and AA both belong to "de-aliasing". The distinction is that the blur methods allow themselves to go below the highest frequency reproducible by the screen.
 
The worst kind of aliasing is not jagged edges; it is shimmering textures, fences, grids, lattices, vegetation, etc. Edge blurring doesn't do anything about them. It should never, ever be called antialiasing, and, if you ask me, should not be allowed by the first-party certification teams on games using normal, photorealistic renderers. There is never a good reason to do edge blurring, other than that you're running out of time and want to slap on an ugly hack to hide some of the edges. It should take its rightful place in the game visuals hall of shame along with blooming, lens flare and sliding-feet animations.
 
Oversampling is nothing more than a blur btw.

No it's not. Oversampling is shifting the critical Nyquist frequency higher, resulting in fewer alias problems. Blur at the native resolution results only in blurrier aliasing problems. You can't blur a discontinuous trail of pixels that used to be a thin line back into continuity.
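A toy sketch of that last point (the line equation, width and grid size are arbitrary picks of mine): with one sample per pixel centre, a 0.4-pixel-wide line leaves gaps that no post-blur can rejoin, while 4x4 supersampling recovers a continuous trail of fractional coverage:

```cpp
// Toy illustration: a line thinner than a pixel, sampled once per pixel vs 4x4.
#include <cmath>
#include <cstdio>

// Made-up scene: a line of total width 0.4 pixels along y = 0.3*x + 0.1.
static bool insideLine(double x, double y) {
    return std::fabs(y - (0.3 * x + 0.1)) < 0.2; // within half-width 0.2 px
}

int main() {
    const int W = 19, H = 6;

    std::printf("one sample per pixel (centre): the line breaks into pieces\n");
    for (int py = H - 1; py >= 0; --py) {
        for (int px = 0; px < W; ++px)
            std::putchar(insideLine(px + 0.5, py + 0.5) ? '#' : '.');
        std::putchar('\n');
    }

    std::printf("\n4x4 supersampling, coverage shown as 0-9: a continuous trail\n");
    for (int py = H - 1; py >= 0; --py) {
        for (int px = 0; px < W; ++px) {
            int hits = 0;
            for (int sy = 0; sy < 4; ++sy)
                for (int sx = 0; sx < 4; ++sx)
                    hits += insideLine(px + (sx + 0.5) / 4.0,
                                       py + (sy + 0.5) / 4.0) ? 1 : 0;
            std::putchar('0' + (hits * 9) / 16); // map 0..16 hits to one digit
        }
        std::putchar('\n');
    }
    return 0;
}
```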
 
No it's not. Oversampling is shifting the critical Nyquist frequency higher, resulting in fewer alias problems. Blur at the native resolution results only in blurrier aliasing problems.

Imo, oversampling is nothing more than "blurrier aliasing problems", except that the resulting blurred signal still has a "high enough" frequency.

But I might be talking out my ass here :oops:... And to be perfectly honest I'm not quite sure how to apply the "Nyquist–Shannon sampling theorem" here.

Exact reconstruction of a continuous-time baseband signal from its samples is possible if the signal is bandlimited and the sampling frequency is greater than twice the signal bandwidth.

Could you (anybody?) define what the original "baseband signal" is and what the "signal bandwidth" is in the case of "rasterized rendering"?


You can't blur a discontinuous trail of pixels that used to be a thin line back into continuity.

Imo the problem here is that a "line" doesn't really have a 3D existence.
If it does (a 3D rope viewed from far away, for instance), there's nothing wrong with doing so.
 
Exact reconstruction of a continuous-time baseband signal from its samples is possible if the signal is bandlimited and the sampling frequency is greater than twice the signal bandwidth.

Could you (anybody?) define what the original "baseband signal" is and what the "signal bandwidth" is in the case of "rasterized rendering"?

The highest frequency signal you can represent in a pixel grid is a checkerboard pattern - one pixel at maximum value, the other at minimum value.

The "original signal" in the case of rasterized rendering is the abstract mathematically perfect version of the image, rendered to an infinitely high resolution, without any raster grid whatsoever. I don't know how this can be "bandlimited" so it can be represented in a pixel grid without aliasing; maybe in some particular instances you can - e.g. a thick line drawing algorithm where you can calculate pixel coverages at the edges of the line.

Supersampling in that framework of thought is the following: let's say you need a final image of 1280x720. Assume that the "original signal" is not the mathematically perfect, pixel-less version of the image, but simply the twice as big, 2560x1440 image. If you have this image, the only way to fit it into a 1280x720 version would be to band-limit it - if you, e.g., take only one pixel out of each group of 2x2 original pixels, you'll get aliasing. To band-limit it, you subject it to a low-pass filter, a.k.a. blur it; one of the crudest ways to blur it would be to replace the 2x2 different pixels with a group of 4 pixels set to the average color of the group. AFTER that, you can safely drop 3 of them, and get a 1280x720 image. So supersampling involves blurring, but of the (pseudo-)original supersampled image.
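Here's a minimal sketch of exactly that recipe (dimensions shrunk to an 8x8 -> 4x4 stand-in, and the source content is just a one-pixel checkerboard of my own choosing): average each 2x2 block, then keep one value per block:

```cpp
// Band-limit a 2x supersampled image with the crudest possible low-pass
// (a 2x2 box average), then drop three of every four pixels.
#include <cstdio>
#include <vector>

int main() {
    const int srcW = 8, srcH = 8;               // stand-in for 2560x1440
    const int dstW = srcW / 2, dstH = srcH / 2; // stand-in for 1280x720
    // Source: a one-pixel checkerboard, the highest frequency the grid can hold.
    std::vector<float> src(srcW * srcH);
    for (int y = 0; y < srcH; ++y)
        for (int x = 0; x < srcW; ++x)
            src[y * srcW + x] = ((x + y) & 1) ? 1.0f : 0.0f;

    std::vector<float> dst(dstW * dstH);
    for (int y = 0; y < dstH; ++y) {
        for (int x = 0; x < dstW; ++x) {
            // Low-pass first: replace the 2x2 block by its average...
            const float avg = (src[(2 * y)     * srcW + 2 * x]     +
                               src[(2 * y)     * srcW + 2 * x + 1] +
                               src[(2 * y + 1) * srcW + 2 * x]     +
                               src[(2 * y + 1) * srcW + 2 * x + 1]) * 0.25f;
            // ...then "drop three of the four": keep one value per block.
            dst[y * dstW + x] = avg;
        }
    }
    // The checkerboard (above the target grid's Nyquist limit) resolves to
    // flat grey instead of aliasing into a false lower-frequency pattern.
    for (int y = 0; y < dstH; ++y) {
        for (int x = 0; x < dstW; ++x) std::printf("%.2f ", dst[y * dstW + x]);
        std::printf("\n");
    }
    return 0;
}
```

If you instead kept one source pixel per 2x2 block without the average, this particular image would come out solid black, the checkerboard aliasing down to a bogus constant, which is exactly the "take only one pixel and you'll get aliasing" case described above.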
 
No it's not. Oversampling is shifting the critical Nyquist frequency higher, resulting in fewer alias problems.
Oversampling results in fewer aliasing problems, but you still need to blur (or low-pass filter) before subsampling to native resolution to avoid further aliasing. That's what all those averaging masks do.

Exact reconstruction of a continuous-time baseband signal from its samples is possible if the signal is bandlimited and the sampling frequency is greater than twice the signal bandwidth.

Could you (anybody?) define what the original "baseband signal" is and what the "signal bandwidth" is in the case of "rasterized rendering"?

Ideally the original signal is the 2D continuous signal projected from 3D continuous space.
It's not bandlimited and exact reconstruction is not possible.
 
The highest frequency signal you can represent in a pixel grid is a checkerboard pattern - one pixel at maximum value, the other at minimum value.
Not if you want to animate it..

Also
1) You are assuming the reconstruction is little rectangles which is not always the case.
2) This is 2x over Nyquist's limit.
 
I don't know how this can be "bandlimited" so it can be represented in a pixel grid without aliasing

I think this is not bandlimited

Ideally the original signal is the 2D continuous signal projected from 3D continuous space.
It's not bandlimited and exact reconstruction is not possible.

I'm more inclined to consider that point of view.
 