High-order realtime image filtering

fearsomepirate

As part of my dissertation work, which is in computational fluid dynamics, I had to implement a 6th-order filter with a non-Gaussian transfer function in order to dealias the solution at each time step. In order to test the filter code, I applied it to image files (filtering an LxMxN mesh of fluid data is quite similar to filtering an MxN image), since it's easy to visually inspect (also, filtering a 200 KB image is a lot faster than filtering a 1.2 GB flow field). And it did indeed do a good job of applying a low amount of smoothing while not blurring out detail.

I wondered, why isn't this sort of thing used for graphics? I know that 2nd-order filters result in undesirable amounts of blurring, but 4th-order and higher don't have such a big problem with this. Anyway, I googled a bit and could not find an answer. However, I assume there must be one. So, you graphics guys...is sub-pixel anti-aliasing just cheaper than a high-order filter? Or does this sort of thing actually get used, and I just don't know it?
 

Just curious:

Which method do you use? Finite difference on Cartesian grids? Compact or standard finite difference? What is your goal application?
 
I'm wondering why it is used in CFD. The pixels represent data, so by filtering it aren't you skewing it???
PS: I know nothing about CFD :D

You can see it basically like this: the 'pixels', i.e. the data points, are discrete representations of an underlying continuous function, i.e. an analog signal. Due to finite resolution, frequencies higher than what can be represented by the number of 'pixels' used are aliased onto representable frequencies, which are thus polluted. (Nyquist says you need at least two points per wavelength, but this holds only for sinusoidal approximations; for polynomial approximations, the theory says it is pi points per wavelength, not two.) Often, aliasing yields an amplification of the energy contained in the representable frequencies.
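A minimal numpy sketch of that folding (the grid size and wavenumbers are arbitrary, purely for illustration):

Code:
import numpy as np

# On a grid of N points, a sine above the Nyquist wavenumber N/2 produces
# exactly the same samples as a lower, representable wavenumber: its energy
# "folds" onto a frequency the grid can represent.
N = 8
x = np.arange(N) * 2.0 * np.pi / N
high = np.sin(11 * x)  # wavenumber 11 > N/2 = 4, not representable
low = np.sin(3 * x)    # 11 aliases to 11 - N = 3 on this grid
print(np.allclose(high, low))  # True: the sampled signals are indistinguishable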

One possible task of a filter is to 'cut back' the amount of aliasing energy in those frequencies; in other words, the task is to clean up the frequencies represented by the data.

Another task of filtering in CFD is to decompose a given solution into low-frequency and high-frequency components. This is often used for analysis of the data, or forms the basis of turbulence modeling.

I don't know for which of those tasks fearsome needs filtering...
 
I wondered, why isn't this sort of thing used for graphics?
Higher order polynomial filters are sometimes used for downsampling and translation in image processing (although generally implemented with a convolution kernel).
So, you graphics guys...is sub-pixel anti-aliasing just cheaper than a high-order filter?
What a strange question ... supersampling and MSAA simply sample at a higher frequency (in the latter case only for geometry, but since we tend to have separate texture AA that's not a huge issue). From a Fourier domain point of view this shifts the part of the spectrum which aliases higher where there is generally less energy, so reducing aliasing overall after downsampling (regardless of what filter you use, a higher sampling rate obviously does things a filter can't do ... or else why would you need any samples at all?). As for why we use rather simplistic downsampling filters, that's because high order filters don't tend to look all that good ...
 
Which method do you use? Finite difference on Cartesian grids? Compact or standard finite difference? What is your goal application?
I used NASA's OVERFLOW code, which is a finite volume structured solver in generalized Cartesian coordinates.
Davros said:
I don't know for which of those tasks fearsome needs filtering...
Both. I had a class of filters that I could use to construct whatever transfer function I needed. I defend on Tuesday. Solutions of the NSEs require dealiasing because of the nonlinear term. If u is represented with N modes, then u^2 requires 2N modes. If your numerical method has N degrees of freedom, then every time step will create aliasing error.
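A small numpy illustration of that mode-doubling (grid size and wavenumber are arbitrary, and it uses a spectral view rather than the actual finite-volume discretization, so it only shows the folding):

Code:
import numpy as np

# Squaring a signal doubles its bandwidth, so on a fixed N-point grid the
# extra modes fold back onto resolved wavenumbers.
N = 16
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
u = np.cos(6 * x)                    # energy only at wavenumber 6 (< N/2 = 8)

u2_hat = np.fft.rfft(u * u) / N      # spectrum of u^2 on the same grid
# Exact result: u^2 = 0.5 + 0.5*cos(12x). Wavenumber 12 exceeds N/2 = 8,
# so on this grid it aliases onto wavenumber 16 - 12 = 4.
print(np.round(np.abs(u2_hat), 3))   # spurious energy appears at index 4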

The filtering method I used would be infeasible for real-time simulations (it requires solving a pentadiagonal matrix for every line and column in the image, plus it needs an edge-detection routine to avoid creating oscillations), but I was wondering if other high-order filters might be less of a fillrate/memory hog than sub-pixel antialiasing while still achieving acceptable results.

In other words, from what I understand, most anti-aliasing works something like this:

Point-sample lots of sub-pixels -> low-order filter and downsample to desired resolution

Since aliasing is largely at the tail end of the spectrum, this makes things nice, depending on the exact method (I know quincunx, for example, is overly dissipative). But I was wondering what would happen if you did this:

Point-sample desired resolution -> High-order filter to eliminate aliasing

This wouldn't have an IQ as high as the first method, but I was wondering if it might obtain an acceptable IQ for substantially less cost, especially if a method with a very low-attenuation transfer function was used. Or maybe not. I was just wondering.
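For concreteness, the first pipeline above with the usual equal-weight ('box') downsample might look roughly like this; the shapes and the ordered k x k subsample grid are simplifying assumptions, since real MSAA uses sparse sample positions:

Code:
import numpy as np

# Resolve a supersampled frame by giving every subsample in a pixel equal
# weight. `img` has shape (H*k, W*k, channels) with k*k subsamples per pixel.
def box_resolve(img, k):
    h, w, c = img.shape
    blocks = img.reshape(h // k, k, w // k, k, c)  # group k x k subsample blocks
    return blocks.mean(axis=(1, 3))                # equal-weight average per pixel

frame = np.random.rand(16, 16, 3)   # stand-in for a 2x2-supersampled 8x8 image
resolved = box_resolve(frame, 2)    # -> shape (8, 8, 3)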
 
but I was wondering if other high-order filters might be less of a fillrate/memory hog than sub-pixel antialiasing while still achieving acceptable results.
I guess you missed the whole MLAA (and all its descendants) thing the last couple of years? :) (Google it.)

Unfortunately this has indeed become popular ... it would be nice in addition to MSAA, but it's really unfortunate it has become a poor replacement instead ...

PS. when you just said higher order filter I assumed you meant a linear filter ...
 
Those blur filters, what a disgrace to call them AA... :p
Even SSAA uses filtering to dealias the image. Probably most of the filters you're thinking about are 2nd-order weighted average filters. I've been using a 6th-order Pade filter. The difference is that a WA filter works like:

U = Au

and a Pade filter works like

BU = Au,

which requires inverting B. This is probably too slow to do on a GPU. Maybe? I don't know. It's pretty fast when I do it, but I have 240 cores in parallel. :)
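As a rough, hedged sketch of what a BU = Au compact filter looks like in code: the version below is a tridiagonal, periodic 6th-order filter in the spirit of Gaitonde and Visbal, not the exact pentadiagonal, edge-detecting filter described in this thread, and alpha = 0.45 is just a typical value:

Code:
import numpy as np

# Implicit (Pade-type) low-pass filter: solve B U = A u along each line, where
#   alpha*U[i-1] + U[i] + alpha*U[i+1] = sum_n (a_n / 2) * (u[i+n] + u[i-n])
def compact_filter_1d(u, alpha=0.45):
    n = len(u)
    a0 = (11.0 + 10.0 * alpha) / 16.0
    a1 = (15.0 + 34.0 * alpha) / 32.0
    a2 = (-3.0 + 6.0 * alpha) / 16.0
    a3 = (1.0 - 2.0 * alpha) / 32.0

    I = np.eye(n)
    shift = lambda m: np.roll(I, m, axis=1)        # periodic shift operator
    B = I + alpha * (shift(1) + shift(-1))         # tridiagonal (circulant) left-hand side
    A = (a0 * I
         + 0.5 * a1 * (shift(1) + shift(-1))
         + 0.5 * a2 * (shift(2) + shift(-2))
         + 0.5 * a3 * (shift(3) + shift(-3)))      # explicit right-hand-side stencil
    return np.linalg.solve(B, A @ u)

# Filter a 2-D field (or image channel) line by line, then column by column.
def compact_filter_2d(f, alpha=0.45):
    f = np.apply_along_axis(compact_filter_1d, 0, f, alpha)
    f = np.apply_along_axis(compact_filter_1d, 1, f, alpha)
    return f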

The code, when I got it, used a 5th-order weighted average filter. Here are the results before and after I replaced it with a 6th-order Pade filter (the mesh is really coarse):

[Attached screenshot: the same solution with the 5th-order weighted average filter (top) and the 6th-order Pade filter with edge detection (bottom)]


And the above is 5th order! 2nd-order filters would just destroy everything (my adviser is really into 2nd-order filtering, but IMO it's pretty worthless, and I won't help him in his quixotic quest). The problem with the filters you're complaining about is they severely attenuate most of the spectrum, hence the blurriness. But there are a lot of filters used in computational physics that don't do that. I was wondering if they might be usable in real-time graphics.
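To put some numbers on "severely attenuate most of the spectrum", here is a quick transfer-function comparison between a 3-point equal-weight average and the tridiagonal compact filter sketched earlier (again only an illustration; alpha = 0.45 is a typical value):

Code:
import numpy as np

alpha = 0.45
a0 = (11 + 10 * alpha) / 16
a1 = (15 + 34 * alpha) / 32
a2 = (-3 + 6 * alpha) / 16
a3 = (1 - 2 * alpha) / 32

w = np.linspace(0.0, np.pi, 5)            # wavenumber 0 .. Nyquist
T_avg = (1 + 2 * np.cos(w)) / 3           # 3-point equal-weight average
T_pade = (a0 + a1 * np.cos(w) + a2 * np.cos(2 * w)
          + a3 * np.cos(3 * w)) / (1 + 2 * alpha * np.cos(w))
print(np.round(T_avg, 3))    # ~[1, 0.81, 0.33, -0.14, -0.33]: falls off fast, goes negative
print(np.round(T_pade, 3))   # ~[1, 1.0, 0.99, 0.83, 0]: near 1 until close to Nyquist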
 
Even SSAA uses filtering to dealias the image.
We don't generally call it de-aliasing, because that assumes such a thing is possible ... the best thing you can do is make statistical assumptions about signal content and use those in reconstruction, either explicitly through, say, edge reconstruction, or implicitly through the spline you choose to interpolate with. If you assume white noise then a weighted average is going to be as good as it gets.

We just use what looks good and is stable enough not to look shite in movement; 6th order filters need not apply ... those are not going to be stable on moving edges.

PS. to me it seems your 6th order filter is just making shit up as it goes along ... if it can make what looks like a flower out of what the 5th order filter thinks is a semicircle I think it's seeing more in the samples than is actually there.

PPS. not to say there might not be value in preserving local turbulence for algorithms which come after it, even if the exact shape can not be derived from the samples ... but that kind of thing is exactly what makes it unstable/unsuitable for moving images.
 
We don't generally call it de-aliasing, because that assumes such a thing is possible
Of course you can't do perfect/exact dealiasing, but then you can't draw a perfect line with square pixels, either.
PS. to me it seems your 6th order filter is just making shit up as it goes along
Nope. Those two screens are taken after letting the system evolve from the same initial condition for about 10,000 time steps. Since turbulence is chaotic, everything you do in your numerical algorithm significantly affects the time-evolution of the system. What you're seeing is

(G_n S)^(10,000) u_0, not

G_n u_0,

where S is the solution operator, G_n is the n-th order filter operator, and u_0 is the initial condition. Also, the filter in the second image is using edge-detection to turn off in places you don't want it to filter. See the waves in front of the shock in the top image? Those would be in the bottom image, too, if I weren't using edge-detection.

Here's some advice: If you don't work in a field, don't assume that what someone is doing is stupid/BS and then criticize it. Assume there's some reason for what you see and ask questions, like I did in my OP.
 
Fundamentally, the issue in antialiasing quality is not so much the resolve step (although you can do better than the box filter resolve that hardware has done for ages). You can't throw away those subpixel samples, though - you are sampling a signal with "infinite" frequency content, and if you don't do some element of sub-pixel sampling, you will simply have triangles miss the sample grid altogether and flicker on and off as they move.

So certainly edge gradients can be improved over box filters (as has been well known for a long time), but that's hardly the most obnoxious artefact, unless you're sitting looking at static screenshots. Filtering/band-limiting the underlying geometry/shading signal itself is the interesting research these days, not the triangle edges.
 
MfA - the second one.

Andrew - By "box filter," do you mean U(i,j) = sum(u(i+I, j+J), I = -1 to 1, J = -1 to 1) / 9? (I hope the pseudo is clear.) Because that is indeed a pretty terrible filter, even for 2nd-order. IIRC, its transfer function actually turns negative past 1/2 your spectrum.
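Spelled out as runnable code (the periodic wrap at the borders is an added assumption; the pseudocode doesn't say how edges are handled):

Code:
import numpy as np

# 3x3 equal-weight neighborhood average, as in the pseudocode above.
def box3x3(u):
    U = np.zeros(u.shape, dtype=float)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            U += np.roll(np.roll(u, di, axis=0), dj, axis=1)
    return U / 9.0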

Filtering prior to sampling sounds like a neat idea.
 
Yes, equally weighted samples (i.e. the filter weight function looks like a "box"). And yeah, it's quite crappy, but it's cheap in hardware ;) I imagine people will start using slightly better ones in the next few years (MfA actually had a post on his blog recently about this I believe), but no amount of fanciness in the resolve step can reconstruct information that is entirely missing from the sample grid.
 
Also I don't have a blog, so I think he was confusing me with Nao on top of that :) (Both named Marco.)
 
Oh whoops, yep thanks for the correction! I just mis-remembered/confused your user names, which are somewhat similar (at least in my brain) :)
 
Yes, equally weighted samples (i.e. the filter weight function looks like a "box"). And yeah, it's quite crappy, but it's cheap in hardware ;)
Would it be significantly more expensive to use different weights? Because if you're using a 2nd-order filter, there are filters with much nicer spectral profiles.
I imagine people will start using slightly better ones in the next few years (MfA actually had a post on his blog recently about this I believe), but no amount of fanciness in the resolve step can reconstruct information that is entirely missing from the sample grid.
You'd be surprised how many people don't understand this who should. There are more than a few CFD researchers who claim their turbulence models do exactly that.

By the way, I successfully defended today, so I am now Dr. Fearsome. :D
 