High-order realtime image filtering

Would it be significantly more expensive to use different weights? Because if you're using a 2nd-order filter, there are filters with much nicer spectral profiles.
Different weights are not expensive, but wider filters require more samples (obviously). And of course wide non-separable filters are the worst. But overall, no I don't think it's a huge deal going forward to use a somewhat wider filter.

By the way, I successfully defended today, so I am now Dr. Fearsome. :D
Congrats!
 
Different weights are not expensive, but wider filters require more samples (obviously). And of course wide non-separable filters are the worst. But overall, no I don't think it's a huge deal going forward to use a somewhat wider filter.
What do you mean by non-separable filter?

If you're using a 2nd-order filter, in a PDE context, your ideal weights are [1/4 1/2 1/4], because you always want to fully attenuate the high end of the spectrum. I would think you'd want to do the same in an image context, since sparklies and whatnot occur due to phenomena only one or two pixels wide. We always apply filters in multiple 1D passes, one for each coordinate axis. If you look at the transfer function of [1/3 1/3 1/3] (the box filter), it's really ugly. But 2nd-order filters in general are pretty bad. Of course, since you're filtering an image rather than a PDE solution, you can get away with more dissipation.
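To make that concrete, here's a quick NumPy sketch (mine, not from the thread) that evaluates each 3-tap kernel's frequency response at DC and Nyquist. The binomial weights [1/4 1/2 1/4] hit unit gain at DC and exactly zero at Nyquist, while the box filter leaves a negative lobe at the high end:

```python
import numpy as np

def transfer(weights, omega):
    """Frequency response of a symmetric, odd-length FIR kernel centered at 0."""
    half = len(weights) // 2
    k = np.arange(-half, half + 1)
    return np.sum(weights[:, None] * np.exp(-1j * np.outer(k, omega)), axis=0).real

omega = np.array([0.0, np.pi])  # DC and Nyquist
binomial = np.array([0.25, 0.5, 0.25])
box = np.array([1/3, 1/3, 1/3])

print(transfer(binomial, omega))  # ≈ [1, 0]: unit gain at DC, full attenuation at Nyquist
print(transfer(box, omega))       # ≈ [1, -1/3]: sign flip at the high end, the "ugly" part
```

The negative gain at Nyquist means the box filter inverts the phase of the highest-frequency content instead of removing it.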

A decent 4th-order filter uses the weights [-1/16 1/4 5/8 1/4 -1/16]. I have a bunch more in some of the papers I've used lately.
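A quick sanity check on those weights (my own sketch, not from the papers mentioned): they sum to one, null Nyquist just like [1/4 1/2 1/4], and hold a noticeably flatter passband at intermediate frequencies, which is the point of going to higher order:

```python
import numpy as np

def response(weights, omega):
    # Exact real frequency response of a symmetric FIR kernel centered at 0.
    k = np.arange(len(weights)) - len(weights) // 2
    return np.cos(np.outer(omega, k)) @ weights

w4 = np.array([-1/16, 1/4, 5/8, 1/4, -1/16])  # 4th-order kernel from above
w2 = np.array([1/4, 1/2, 1/4])                # 2nd-order kernel for comparison

omega = np.array([0.0, np.pi / 4, np.pi])
print(response(w4, omega))  # ≈ [1, 0.98, 0]: flatter passband, still zero at Nyquist
print(response(w2, omega))  # ≈ [1, 0.85, 0]: more passband rolloff (more dissipation)
```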

I should apply some of these to some screens of non-AA'd video game graphics and see what comes up. Maybe it will be good, or maybe it will be crap.
 
Like Andrew says, more samples are obviously more expensive than fewer, but I've also noticed that just performing a custom resolve at all (as opposed to using an API call to let the driver do it) has an associated performance penalty. I'd suspect that the driver is able to make use of hw-specific optimizations when performing the resolve, although I've not yet delved too deeply into the performance side of things.
 
What do you mean by non-separable filter?

Kernels that can't be decomposed into a sequence of 1D convolutions, one per coordinate axis. The separable kernels you get that way aren't radially symmetrical; Mitchell referred to the resulting artifacts as "anisotropic effects" in his paper on cubic spline filters for image rescaling.
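For illustration (my sketch, not from the thread): two 1D passes are equivalent to convolving with the outer product of the 1D weights, and that outer-product kernel weights axis-aligned and diagonal neighbors differently, which is where the anisotropy comes from:

```python
import numpy as np

w = np.array([0.25, 0.5, 0.25])   # weights for one 1D pass
kernel2d = np.outer(w, w)         # the equivalent 2D separable kernel

def blur_axis(a, axis):
    # One 1D convolution pass along the given axis.
    return np.apply_along_axis(lambda v: np.convolve(v, w, mode='same'), axis, a)

img = np.arange(25.0).reshape(5, 5)
two_pass = blur_axis(blur_axis(img, 0), 1)

# At an interior pixel, the two 1D passes match a direct 2D convolution
# with the outer-product kernel exactly.
direct = sum(kernel2d[i, j] * img[1 + i, 1 + j] for i in range(3) for j in range(3))
assert np.isclose(two_pass[2, 2], direct)

# But the kernel isn't radially symmetric: an axis neighbor gets weight 0.125
# while a diagonal neighbor gets 0.0625, regardless of their distances.
print(kernel2d[1, 2], kernel2d[0, 0])
```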

I should apply some of these to some screens of non-AA'd video game graphics and see what comes up.

It's probably not a great test case. You really want to apply the filter on oversampled data rather than the final output samples. Plus with screenshots you can only work with the post-tonemapping colors that are presented to the display, as opposed to the HDR values (which represent the physical amount of light reaching the eye/sensor) that are sampled prior to post-processing. Filtering one is not equivalent to filtering the other, since there are several non-linear transformations that occur when going from HDR to display.
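A toy demonstration of that non-equivalence (the Reinhard-style operator here is a stand-in I chose, not what any particular game's post chain uses): filtering HDR values and then tonemapping gives a different answer than filtering the tonemapped values, and the gap is largest exactly where sparklies live, around very bright isolated samples.

```python
import numpy as np

def tonemap(x):
    # Simple Reinhard-style operator, standing in for a real post-processing chain.
    return x / (1.0 + x)

w = np.array([0.25, 0.5, 0.25])
hdr = np.array([0.1, 0.1, 20.0, 0.1, 0.1])  # one very bright sample (a "sparkly")

filtered_then_mapped = tonemap(np.convolve(hdr, w, mode='same'))
mapped_then_filtered = np.convolve(tonemap(hdr), w, mode='same')

# The bright sample dominates the HDR-space average but is clamped near 1
# in display space, so the two orderings disagree badly at the center pixel.
print(filtered_then_mapped[2], mapped_then_filtered[2])  # ≈ 0.91 vs ≈ 0.52
```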
 
Ah. All the filters I've seen used in my field have that property, which is probably why I'm not familiar with that term. I'm guessing non-separable filters have nice properties for use in the context of computer graphics.
 
I'd suspect that the driver is able to make use of hw-specific optimizations when performing the resolve, although I've not yet delved too deeply into the performance side of things.
Right, usually related to MSAA compression info (i.e. it knows when all the samples are equal, etc.).
 