I think it needs clarifying for the point of the original post though. When Betanumerical asks about AA on SPEs, is he thinking higher information 'proper' AA, or just some form of jaggy reduction which can have negative impact on IQ elsewhere?
Quote:
I'm baffled. Oversampling is not a nice way to do AA, it is *a* way to do AA.

Not nice in what sense? It's certainly not nice in that it's brute force and that it doesn't scale well, but it's nice in the sense that you have more information, which is about as correct as you can get given how wrongly it's being rasterized in the first place.
Quote:
Edge filtering is not a way to remove aliasing, as it removes everything; maybe it should be called by its name: blur. We already have a name for it, I don't see why we should use another one.

Can't you use something a little harsher than that? Say, "misguided information destruction"? Sure, it's not compensating for the loss of information caused by undersampling, but the idea is to destroy more information so that nobody can tell there was a problem in the first place.
Quote:
Sure, it's not compensating for the loss of information caused by undersampling, but the idea is to destroy more information so that nobody can tell there was a problem in the first place.

The right place to do that is before filtering, when it's possible (see mip maps).
Quote:
I'm not saying that doing edge blurring is wrong or immoral, I'm saying that it's wrong to call it anti-aliasing.

In any case, as wrong as it is on paper, it all boils down to the visual effect -- if it's a blur, it's at least a blur designed to target the parts people complained about...
Quote:
People are easy to fool, but even a baby can tell the difference between a properly anti-aliased image and a blurred one, especially if stuff is in motion.

People are easy to fool, and that's why there were so many tragic MSAA implementations over the years. In a purely visual sense, I don't mind calling them "de-artifacting smart blurs" or "AA fake trickery" or something, but I do agree that it's not real AA.
Quote:
I'm not saying that doing edge blurring is wrong or immoral, I'm saying that it's wrong to call it anti-aliasing.

But 2xSai, for example, isn't a blur filter.
People often forget that aliasing happens in audio as well, even when dealing with frequencies below the Nyquist limit of your sampling rate. As the name implies, it's just the idea of a wave at some frequency that gets perceived as a wave of a different frequency entirely (hence "aliasing"). We're used to certain examples, like videos of cars that move forward but look like their wheels are turning backwards -- just another example of the same thing. In the case of audio, they attack the problem by just oversampling (e.g. 192 kHz sample rates). It doesn't actually get rid of aliasing (discrete sampling can always alias), it just moves the aliasing into a very high frequency range, beyond what the human ear can pick up.

Computer gfx people can define AA as they see fit, although it would be weird without knowing what the term "alias" refers to.
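The audio analogy is easy to check numerically. A minimal sketch (the sampling rate and tone frequencies below are arbitrary demo values): once sampled, a tone at fs - f produces exactly the same sample values as a tone at f, so the high frequency wears the low one as an "alias":

```python
import numpy as np

fs = 8000                    # sampling rate in Hz (arbitrary for the demo)
t = np.arange(64) / fs       # 64 sample instants

f_low = 500                  # a 500 Hz tone, well below Nyquist (4000 Hz)
f_high = fs - f_low          # 7500 Hz, above Nyquist

low = np.cos(2 * np.pi * f_low * t)
high = np.cos(2 * np.pi * f_high * t)

# cos(2*pi*(fs - f)*n/fs) = cos(2*pi*n - 2*pi*f*n/fs) = cos(2*pi*f*n/fs),
# so the sampled 7500 Hz tone is indistinguishable from the 500 Hz one.
print(np.allclose(low, high))   # True
```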
Quote:
It's blur, why should we call it by another name which is already used for something different?

Well, there are approaches that I don't know if I'd call "blur" filters per se... or at least the techniques themselves aren't really blurs, but there's no way to avoid the fact that some blurring will result, because that's just the way the universe works.
Quote:
People are easy to fool, but even a baby can tell the difference between a properly anti-aliased image and a blurred one, especially if stuff is in motion.

Given the way people are, I'm not so sure. Maybe you give people more credit than I do. Well, a baby might be able to, but that ability fades to nothingness by the time you've developed far enough to be able to post on an Internet forum.
ShootMyMonkey said:
For that matter, I can't say I'm sure anybody has ever really seen a properly anti-aliased image.

Isn't the majority of game media released nowadays exactly that? It's super-sampled with ridiculous sample counts, usually using a non-ordered grid (Poisson distributions FTW), and of course with better-than-box-filter averaging.
Since this is a sampling issue, you cannot fix it after sampling using samples that are already the result of aliasing.
Quote:
In the case of audio, they attack the problem by just oversampling (e.g. 192 kHz sample rates). It doesn't actually get rid of aliasing (discrete sampling can always alias), it just moves the aliasing into a very high frequency range beyond what the human ear can pick up.

If you really want to be pedantic, it doesn't. If you tried to sample an audio signal that contained a significant frequency component at, say, (192 - 5) kHz, you'd end up with something that would sound like a 5 kHz signal.
It's blur, why should we call it by another name which is already used for something different?
Perhaps Jaggie-reduction, or de-aliasing.
Oversampling is nothing more than a blur btw.
No it's not. Oversampling is shifting the critical Nyquist frequency higher, resulting in fewer alias problems. Blur at the native resolution results only in blurrier aliasing problems.
You can't blur a discontinuous trail of pixels that used to be a thin line back into continuity.
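A minimal sketch of that effect (pixel count and line width are made-up demo numbers): point-sampling a line thinner than a pixel makes whole pixels drop out as the line moves, and once a pixel was never lit, no native-resolution blur can recover it:

```python
import numpy as np

def point_sample_line(offset, width=0.6, pixels=8):
    """Rasterize a thin 'line' of sub-pixel width by point sampling:
    a pixel is lit iff its center falls inside [offset, offset + width)."""
    centers = np.arange(pixels) + 0.5
    return ((centers >= offset) & (centers < offset + width)).astype(int)

# Line covering [2.0, 2.6): pixel center 2.5 is inside -> pixel 2 is lit.
print(point_sample_line(2.0))   # [0 0 1 0 0 0 0 0]
# Shift it to [2.6, 3.2): no pixel center inside -> the line vanishes.
print(point_sample_line(2.6))   # [0 0 0 0 0 0 0 0]
```

The information that a line existed at offset 2.6 was never sampled, which is why a post-hoc blur at native resolution cannot restore it.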
Exact reconstruction of a continuous-time baseband signal from its samples is possible if the signal is bandlimited and the sampling frequency is greater than twice the signal bandwidth.
Could you (anybody?) define what the original "baseband signal" and the "signal bandwidth" are in the case of rasterized rendering?
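For what it's worth, the quoted theorem can be checked numerically via Whittaker-Shannon (sinc) interpolation; all constants below are arbitrary demo values, and the only error left is from truncating the infinite sum to a finite window:

```python
import numpy as np

fs = 100.0                      # sampling rate (demo value)
n = np.arange(-200, 200)        # finite window of sample indices

# A bandlimited test signal: 7 Hz and 20 Hz components, both below fs/2 = 50 Hz.
x = np.sin(2 * np.pi * 7 * n / fs) + 0.5 * np.cos(2 * np.pi * 20 * n / fs)

def reconstruct(t):
    """Whittaker-Shannon: x(t) = sum_n x[n] * sinc((t - n*T) / T), T = 1/fs.
    np.sinc is the normalized sinc, sin(pi*u)/(pi*u)."""
    return np.sum(x * np.sinc(fs * t - n))

t0 = 0.123                      # an arbitrary instant between sample points
exact = np.sin(2 * np.pi * 7 * t0) + 0.5 * np.cos(2 * np.pi * 20 * t0)
print(abs(reconstruct(t0) - exact))   # small; limited only by the truncated window
```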
Quote:
No it's not. Oversampling is shifting the critical Nyquist frequency higher, resulting in fewer alias problems.

Oversampling results in fewer aliasing problems, but you still need to blur (or low-pass filter) before subsampling to native resolution to avoid further aliasing. That's what all those averaging masks do.
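As a 1-D sketch of that pipeline (the sizes and the edge position are made-up demo values): render at N times the resolution, then average each group of N samples down to one native pixel; the averaging is exactly the low-pass step before subsampling:

```python
import numpy as np

def supersample_aa(coverage_fn, width, factor=4):
    """Render at factor x resolution, then box-filter (average) down.
    The per-pixel average is the low-pass filter applied before subsampling."""
    hi = np.array([coverage_fn(x / (width * factor))
                   for x in range(width * factor)], dtype=float)
    return hi.reshape(width, factor).mean(axis=1)

# A hard edge at 55% of the screen width (arbitrary, off a pixel boundary).
edge = lambda u: 1.0 if u >= 0.55 else 0.0

naive = np.array([edge(x / 8) for x in range(8)])   # 1 sample per pixel
aa = supersample_aa(edge, 8, factor=4)              # 4 samples per pixel, averaged

print(naive)   # [0. 0. 0. 0. 0. 1. 1. 1.] - hard binary transition
print(aa)      # the edge pixel gets fractional coverage (0.5) instead
```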
Quote:
The highest frequency signal you can represent in a pixel grid is a checkerboard pattern - one pixel at maximum value, the other at minimum value.

Not if you want to animate it.
I don't know how this can be "bandlimited" so that it can be represented in a pixel grid without aliasing.
Ideally the original signal is the 2D continuous signal projected from 3D continuous space.
It's not bandlimited and exact reconstruction is not possible.