Contrast mipmapping

Simon F said:
I suggest reading Charles Poynton's excellent Gamma FAQ.

It is indeed a good FAQ. But it does have one (major) error.

Q13 How is gamma handled in video, computer graphics and desktop computing?

It says that "Computer Graphics" store a value proportional to intensity in the frame buffer. While that certainly is possible, it doesn't seem to be the standard. The description of "Video" fits better to CG. That's how ATIs gamma corrected AA assumes the framestore is. (I don't remember their exact gamma value, but at least the idea is the same.)

I did, however, sometimes have my gamma tuned as described for "Computer Graphics" on my GF2. This gives you correct AA without any need for gamma correction in the AA calculation. But it gives poor precision for dark colors, and if the graphics content isn't made for it, the colors will be off.
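
[ed: a minimal sketch, not ATI's actual implementation, of why AA blending needs gamma handling at all. It assumes a simple 2.2 power-law transfer rather than true sRGB: averaging gamma-encoded framebuffer values darkens edges, while averaging in linear light gives the expected coverage.]

Code:
GAMMA = 2.2

def to_linear(v):           # gamma-encoded [0,1] -> linear intensity
    return v ** GAMMA

def to_gamma(v):            # linear intensity -> gamma-encoded [0,1]
    return v ** (1.0 / GAMMA)

def blend_naive(a, b):      # blending framebuffer values directly
    return 0.5 * (a + b)

def blend_correct(a, b):    # linearize, average, re-encode
    return to_gamma(0.5 * (to_linear(a) + to_linear(b)))

# A 50%-covered pixel on a white/black edge:
print(blend_naive(1.0, 0.0))    # 0.5   -> displays at only ~22% intensity
print(blend_correct(1.0, 0.0))  # ~0.73 -> displays at 50% intensity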
 
Ingenu said:
So which way of generating MIP maps do you recommend?
Game Developer magazine had an excellent article a couple of years ago about mipmap generation - using an image processing technique and gamma correction to produce much better-looking mipmaps.
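
[ed: the article itself isn't reproduced here, but a minimal sketch of the gamma-correction half of the idea, assuming a 2.2 power law and NumPy, would be:]

Code:
import numpy as np

GAMMA = 2.2

def downsample_gamma_correct(img):
    """img: 2D float array in [0,1], gamma-encoded, with even dimensions."""
    lin = img ** GAMMA                       # decode to linear intensity
    # box-filter each 2x2 block in linear space
    box = 0.25 * (lin[0::2, 0::2] + lin[1::2, 0::2] +
                  lin[0::2, 1::2] + lin[1::2, 1::2])
    return box ** (1.0 / GAMMA)              # re-encode for storage

# A black/white checkerboard should filter to ~0.73 encoded (mid grey on
# screen); filtering the encoded values directly would give 0.5, far too dark.
checker = (np.indices((4, 4)).sum(axis=0) % 2).astype(float)
print(downsample_gamma_correct(checker))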
 
Dio said:
Ingenu said:
So which way of generating MIP maps do you recommend?
Game Developer magazine had an excellent article a couple of years ago about mipmap generation - using an image processing technique and gamma correction to produce much better-looking mipmaps.

That's one possible approach - the old SGL library allowed you to auto-generate the maps using a Fourier transform, but I suspect that's not actually ideal because the reconstruction of the data doesn't use a sinc function. Funnily enough, I discussed something related to this at Graphics Hardware a few weeks back. The ideal FIR down-filter when you're reconstructing with a (bi)linear function looks a bit like a sinc, but with straight line segments. I don't think it's worth the effort though, as with sincs you need a lot of taps for it to work well.
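
[ed: a hypothetical illustration of the tap-count problem, assuming a Hann-windowed half-band sinc for 2:1 reduction; even at 17 taps the kernel still carries noticeable energy well away from the centre.]

Code:
import numpy as np

def sinc_downfilter(taps):
    """Half-band low-pass for 2:1 reduction: Hann-windowed sinc, unit DC gain."""
    n = np.arange(taps) - (taps - 1) / 2.0
    h = np.sinc(n / 2.0)            # cutoff at half the original Nyquist
    h *= np.hanning(taps)           # window to tame the sidelobes
    return h / h.sum()

print(np.round(sinc_downfilter(17), 4))  # compare with the 2-tap box [0.5, 0.5]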

Instead, I recommended using linear wavelets where you throw away the higher frequency terms, as it's much much cheaper and works well.
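
[ed: a minimal sketch of that idea in one dimension, assuming the linear (CDF 2,2) lifting scheme with periodic boundaries; discarding the detail terms after the update step is what does the down-filtering.]

Code:
import numpy as np

def linear_wavelet_downsample(x):
    """x: 1D float array of even length; returns the half-length approximation."""
    even, odd = x[0::2].astype(float), x[1::2].astype(float)
    # predict: details are what linear interpolation fails to capture
    d = odd - 0.5 * (even + np.roll(even, -1))
    # update: keeps the local averages of the signal correct
    s = even + 0.25 * (d + np.roll(d, 1))
    return s                        # the high-frequency terms d are thrown away

ramp = np.arange(16, dtype=float)
print(linear_wavelet_downsample(ramp))  # interior of a smooth ramp survives
                                        # intact (the ends wrap periodically)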
 
Ah, well, the fact that I actually passed my Fourier theory course at university is still somewhat of a mystery to me, as I don't understand a word of it...
 
Dio said:
Ah, well, the fact that I actually passed my Fourier theory course at university is still somewhat of a mystery to me, as I don't understand a word of it...

well, the story behind it is quite amusing

The Scientist and Engineering Guide to DSP said:
Fourier analysis is named after Jean Baptiste Joseph Fourier (1768-1830), a French mathematician and physicist. (Fourier is pronounced: for-ē-ā, and is always capitalized.) While many contributed to the field, Fourier is honored for his mathematical discoveries and insight into the practical usefulness of the techniques. Fourier was interested in heat propagation, and presented a paper in 1807 to the Institut de France on the use of sinusoids to represent temperature distributions. The paper contained the controversial claim that any continuous periodic signal could be represented as the sum of properly chosen sinusoidal waves. Among the reviewers were two of history's most famous mathematicians, Joseph Louis Lagrange (1736-1813), and Pierre Simon de Laplace (1749-1827).

While Laplace and the other reviewers voted to publish the paper, Lagrange adamantly protested. For nearly 50 years, Lagrange had insisted that such an approach could not be used to represent signals with corners, i.e., discontinuous slopes, such as in square waves. The Institut de France bowed to the prestige of Lagrange, and rejected Fourier's work. It was only after Lagrange died that the paper was finally published, some 15 years later. Luckily, Fourier had other things to keep him busy, political activities, expeditions to Egypt with Napoleon, and trying to avoid the guillotine after the French Revolution (literally!).

Simon F said:
Instead, I recommended using linear wavelets where you throw away the higher frequency terms, as it's much much cheaper and works well.

hmm, I'm wondering: what are the chances that you actually have a sample image of what you were talking about, suitable for showing to closed circles such as this one?
 
Well, when I presented my research, there were graphics experts from universities and the leading graphics companies there (@GH2003) and they didn't throw rotten fruit at me, so I can't be too far off :)

I didn't do signal processing or the Fourier transform* at uni (*instead we did a similar transform in the Pure Maths Honours stream... of which I can remember precisely log(1)); however, I've slowly been trying to catch up on the theory at work.

(AFAIU, according to Nyquist etc.) If you have a correctly band-limited signal that is then sampled, then to reconstruct the original signal exactly you must use weighted sums of sinc functions.

If, however, there are any components above the frequency cut-off, you must filter them out before sampling. One way would be to use a Fourier transform, followed by a box cut-off, followed by the inverse Fourier transform. The mathematical equivalent is to perform a convolution [hope that's the correct term] of a sinc (scaled appropriately in X and Y) with the original signal to get the frequency-limited result. If we assume our signal is already discrete (i.e. already sampled, as a texture's top-level map is) but at a higher sample rate, then you can just use a weighted sum of values to generate each output result.
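
[ed: a small numerical check of that equivalence, assuming periodic (circular) signals: a box cut-off in the DFT domain matches circular convolution with the periodic sinc (Dirichlet) kernel whose spectrum is that box.]

Code:
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(64)

# Route 1: FFT, zero everything above the cut-off bin, inverse FFT.
X = np.fft.fft(x)
keep = 16                                   # keep harmonics |k| <= 16
X[keep + 1 : 64 - keep] = 0.0
y_fft = np.fft.ifft(X).real

# Route 2: circular convolution with the kernel whose spectrum is that box.
box = np.zeros(64)
box[: keep + 1] = 1.0
box[64 - keep :] = 1.0
kernel = np.fft.ifft(box).real              # the periodic sinc
y_conv = np.array([sum(x[m] * kernel[(n - m) % 64] for m in range(64))
                   for n in range(64)])

print(np.allclose(y_fft, y_conv))           # True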

With bilinear filtering, we are clearly reconstructing the signal using linear segments, not a sinc function, so the above is no longer strictly applicable. I did some analysis to find the down-filter with the lowest least-squares error when the result is linearly upscaled again, and it looks vaguely like a sinc but with straight segments between turning points.
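
[ed: a hypothetical reconstruction of that analysis in 1D: for a fixed linear-interpolation upsampler U, the least-squares optimal down-filter is a row of the pseudoinverse of U. The taps alternate in sign and decay away from the centre, i.e. "vaguely like a sinc".]

Code:
import numpy as np

N = 32                                      # fine (periodic) sample count
U = np.zeros((N, N // 2))                   # 2x linear-interpolation upsampler
for j in range(N // 2):
    U[2 * j, j] = 1.0                       # even samples are copied
    U[2 * j + 1, j] += 0.5                  # odd samples: average of the
    U[2 * j + 1, (j + 1) % (N // 2)] += 0.5 # two coarse neighbours

D = np.linalg.pinv(U)                       # minimises ||U @ (D @ x) - x||
print(np.round(D[N // 4], 3))               # one row: the optimal filter taps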

Unfortunately, this research was not in the paper itself, only in the slides I presented. I might have that online in the near future.
 
Simon F said:
Well, when I presented my research, there were graphics experts from universities and the leading graphics companies there (@GH2003) and they didn't throw rotten fruit at me, so I can't be too far off :)

don't say you left things at fate's mercy and didn't actually buy out all rotten fruit from the markets in a 1-mile radius the day before! ;)

If, however, there are any components above the frequency cut-off, you must filter them out before sampling. One way would be to use a Fourier transform, followed by a box cut-off, followed by the inverse Fourier transform. The mathematical equivalent is to perform a convolution [hope that's the correct term] [ed: it is] of a sinc (scaled appropriately in X and Y) with the original signal to get the frequency-limited result. If we assume our signal is already discrete (i.e. already sampled, as a texture's top-level map is) but at a higher sample rate, then you can just use a weighted sum of values to generate each output result.

With bilinear filtering, we are clearly reconstructing the signal using linear segments, not a sinc function, so the above is no longer strictly applicable. I did some analysis to find the down-filter with the lowest least-squares error when the result is linearly upscaled again, and it looks vaguely like a sinc but with straight segments between turning points.

is that at the inflexions or the peaks?

Unfortunately, this research was not in the paper itself, only in the slides I presented. I might have that online in the near future.

that would be real interesting indeed. thanks in advance on behalf of all inquiring minds(tm)
 
darkblu said:
Unfortunately, this research was not in the paper itself, only in the slides I presented. I might have that online in the near future.

that would be real interesting indeed. thanks in advance on behalf of all inquiring minds(tm)
OK, the slides are now available on the GH2003 website; they're on the presentations page.
 
Isn't sonix666's idea similar to an option in Serious Sam? I remember playing around with a texture setting that sharpens the mipmaps. All I remember, though, is that it made bilinear anisotropic filtering look really bad.

Does anyone know what I'm talking about?
 
If I remember correctly, the problem with applying a brick-wall cut-off to the results of a DFT is that the continuous frequency response of the equivalent filter fluctuates wildly in between the centers of the DFT bins, which in the spatial domain probably shows up as lots and lots of ringing. In audio you solve this by tweaking the phase response to fit your amplitude spectrum, to make it a minimum-phase filter, but I don't think that works well on images :)

Brick-wall zero-phase filters cannot be implemented exactly, and naive approximations of them tend to be a poor choice perceptually.
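
[ed: a tiny demonstration of that ringing, assuming a periodic square wave: a brick-wall cut-off in the DFT domain leaves the familiar Gibbs overshoot near the edges.]

Code:
import numpy as np

x = np.zeros(64)
x[16:48] = 1.0                    # one period of a square wave
X = np.fft.fft(x)
X[9:56] = 0.0                     # brick-wall cut-off: keep 8 harmonics
y = np.fft.ifft(X).real
print(round(float(y.max()), 3))   # ~1.09: overshoot (ringing) at the edges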
 