mipmapping vs bilinear and anisotropic

Nick said:
Now, this is at the 'edge' of the Nyquist criterion, with a 2D signal that actually has infinite frequency content (caused by the sharp color transitions). But the above is also true when sampling at slightly more than double the (finite) maximum frequency. Not convinced? Have a look at this:
You get similar things at every integer multiple of the "minimum" sampling frequency, though it does improve at higher frequencies.

Hence, I still believe 2x2 samples per texel is required to eliminate all aliasing effects.
Are you talking bilinear samples? Or single samples?

I believe the standard is one texel per pixel, which would be 2x2 samples per texel.
 
darkblu said:
nope. regardless of whether you use a perfect sinc filter or a humble triangle - you'd inevitably face the same phase-alignment issue each time you try sampling at ~the Nyquist threshold.
So, at what rate should we sample then to be able to reconstruct the exact signal?
 
Nick said:
One of the most important laws in sampling theory is that you can only reconstruct a signal (a texture is a 2D signal) when sampling at two times the maximum frequency of the signal. This is called the Nyquist rate.
If we have the correct filter shape (sinc) and the right reconstruction algorithm (DFT), then we can reconstruct the original signal. However, with current technology we use a triangular filter shape (bilinear) and our eyes (brain) to reconstruct the final signal.

Does it mean that the DFT would be the best reconstruction filter, but too slow to do in real time?
I've heard about the Nyquist criterion but never wondered how to reconstruct the original signal from a table of samples. I assume it has something to do with the Fourier transform/inverse transform. I've read that a signal can be represented as a sum of sinusoids with different frequencies and amplitudes - I guess the maximum frequency of the signal is that of the sinusoid with the highest frequency? Could someone explain it in simple words? (I'll have to study signal theory - I understand half of what you say :oops: I hope I understand half :) )
Thanks!
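For what UPO is asking: in the ideal, theoretical case no DFT is needed at all; the reconstruction the theorem promises is a convolution of the samples with a sinc kernel (the Whittaker-Shannon interpolation formula). A minimal numpy sketch with arbitrary numbers, for a tone well below the Nyquist frequency:

Code:
import numpy as np

def sinc_reconstruct(samples, t, fs):
    """Whittaker-Shannon interpolation: rebuild a band-limited signal
    by summing one shifted sinc kernel per sample."""
    n = np.arange(len(samples))
    return np.sum(samples[:, None] * np.sinc(fs * t[None, :] - n[:, None]), axis=0)

fs = 8.0                                  # sampling rate, Hz (arbitrary)
f = 1.0                                   # tone frequency, well below fs/2
n = np.arange(64)
samples = np.sin(2 * np.pi * f * n / fs)  # the "table of samples"

t = np.linspace(2.0, 6.0, 500)            # evaluate away from the window edges
rebuilt = sinc_reconstruct(samples, t, fs)
print(np.max(np.abs(rebuilt - np.sin(2 * np.pi * f * t))))  # small; shrinks as the window grows

The practical catch, which the rest of the thread keeps running into, is that the sinc has infinite extent, so real-time hardware replaces it with something much cheaper, like the triangle kernel of bilinear filtering.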
 
Nick,
I don't understand why you would choose to sample at the positions you marked with green dots. Why not sample at the tops and bottoms? That would even mean a lower sampling frequency, giving a square-wave output that should be "easily" filtered back to something close to the original form.

Well, I guess I'm stupid :?
 
Nick said:
darkblu said:
nope. regardless of whether you use a perfect sinc filter or a humble triangle - you'd inevitably face the same phase-alignment issue each time you try sampling at ~the Nyquist threshold.
So, at what rate should we sample then to be able to reconstruct the exact signal?

i'm not sure, Nick, i'm not an expert in this area, i lack formal training in it, just some field experience. but you can handle that worst case you originally described (where the phase of the highest frequency gets shifted by pi radians, coming out of the sampler as mere DC) by 4x sampling (that 2x2-to-1 lod biasing you experimentally came to). can't say, though, whether at certain phases 4x wouldn't produce parasitic lower frequencies (frequencies that were not originally there), à la moiré.
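That worst case is easy to see numerically. A minimal sketch (numpy, arbitrary numbers): a sine at exactly half the sampling rate either survives or collapses to DC, depending purely on its phase relative to the sample points.

Code:
import numpy as np

fs = 2.0                        # sample rate: exactly twice the signal frequency
f = 1.0
n = np.arange(8)

for phase in (np.pi / 2, 0.0):  # pi/2: peaks line up with samples; 0: zero crossings do
    s = np.sin(2 * np.pi * f * n / fs + phase)
    print(phase, np.round(s, 3))
# phase pi/2 -> alternating +1/-1: the signal survives
# phase 0    -> all zeros: the sampler sees pure DC and the signal is gone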

UPO said:
Does it mean that the DFT would be the best reconstruction filter, but too slow to do in real time?
I've heard about the Nyquist criterion but never wondered how to reconstruct the original signal from a table of samples. I assume it has something to do with the Fourier transform/inverse transform. I've read that a signal can be represented as a sum of sinusoids with different frequencies and amplitudes - I guess the maximum frequency of the signal is that of the sinusoid with the highest frequency? Could someone explain it in simple words? (I'll have to study signal theory - I understand half of what you say; I hope I understand half.)
Thanks!

i'd recommend a great text to you - The Scientist and Engineer's Guide to Digital Signal Processing by Steven W. Smith. the book's site is http://www.DSPguide.com. this text has the answers to all your questions, so far ; )
 
rubank said:
Nick,
I don't understand why you would choose to sample at the positions you marked with green dots. Why not sample at the tops and bottoms? That would even mean a lower sampling frequency, giving a square-wave output that should be "easily" filtered back to something close to the original form.

Well, I guess I'm stupid :?
The idea is that you are choosing some specific sampling frequency, and you know nothing about the incoming data.

In the graph shown, the incoming data just happened to be the wrong frequency for proper sampling.

What the graph shows is that sampling at twice the frequency of the incoming data is not going to be enough in all cases.
 
Nick said:
darkblu said:
nope. regardless of whether you use a perfect sinc filter or a humble triangle - you'd inevitably face the same phase-alignment issue each time you try sampling at ~the Nyquist threshold.
So, at what rate should we sample then to be able to reconstruct the exact signal?
I would say that the best way to avoid "ringing" would be this (using audio as an example):

1. Digitize the incoming data using two different sampling frequencies. For example, 44 kHz and 48 kHz.
2. Fourier transform both digitized data streams.
3. Scan both streams for Fourier components that appear in one data stream (in the audible range) but are much weaker in the other.
4. If such a signal is detected, calculate what the corresponding "ringing" lower-frequency signal would be in the other data set. If that lower-frequency signal is there, remove it (if it is in the audible range), and strengthen the higher-frequency signal (such that after the averaging it is as strong as it should be).
5. If no lower-frequency ringing counterpart is detected, eliminate the flagged component; it is assumed to be a lower-frequency "ringing" signal that should not be there.
6. Average the Fourier components of both data streams.

The output format would be a Fourier series, or something similar. This is, of course, a fair amount of processing, but I don't see any reason why modern DSPs couldn't handle it in real time.
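For what it's worth, here is a rough, non-authoritative sketch of steps 1-3 of that idea (numpy, with a hypothetical 23 kHz test tone; the spectrum helper is made up for the example, and the repair logic of steps 4-6 is left out):

Code:
import numpy as np

def spectrum(signal_fn, fs, duration=1.0):
    """Sample signal_fn at rate fs for the given duration and FFT it."""
    n = int(fs * duration)
    t = np.arange(n) / fs
    return np.fft.rfftfreq(n, 1.0 / fs), np.abs(np.fft.rfft(signal_fn(t))) / n

# a tone just above the lower stream's Nyquist limit (22 kHz), below the higher one's (24 kHz)
tone = lambda t: np.sin(2 * np.pi * 23000 * t)

f44, m44 = spectrum(tone, 44000)
f48, m48 = spectrum(tone, 48000)

# step 3: a strong component in one stream that is absent from the other is suspect
print("44 kHz stream peaks at", f44[np.argmax(m44)], "Hz")  # 21000.0: an alias, not a real tone
print("48 kHz stream peaks at", f48[np.argmax(m48)], "Hz")  # 23000.0: the actual signal

Comparing the two spectra does flag the alias, as hoped; whether the repair steps can always reconstruct the true component is the harder part.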
 
Nyquist theorem
I wouldn't say that the differences are because bilinear is very bad, but rather because a sinc is very good, and the Nyquist theorem is very theoretical.

The Nyquist theorem talks about the relation between a time-continuous signal and a sampled (time-discrete) version of it, both of them infinite in length. So if you want to reconstruct the sin(x) in Nick's image, you'd need more samples of it. It doesn't have to be sampled at a higher frequency; it can instead be sampled for a longer time. If you do that, you'll see that the signal shows "beating": places where the samples are in phase and show a high amplitude, and other places where the samples are out of phase and just show the mean value.

Now it's time to use the mathemagical function "sinc", which is defined as sinc(x) = sin(x)/x (or sometimes sin(pi*x)/(pi*x)). Note that the envelope of the "tails" of that function is abs(1/x). The integral of abs(1/x) from N to infinity is infinite, and that hints that when reconstructing with a sinc, it can "collect information" from samples very far away from the point you're reconstructing. And in Nick's example (but sampled for a longer time), it will collect information from the areas where the sin(x) is more visible, and it will be able to reconstruct the sin(x) perfectly.
So as long as the frequencies in the signal are below the Nyquist frequency, it can be reconstructed, including the phase.
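A small numpy sketch of that claim (the choice of 0.98 of the Nyquist frequency and the window length are arbitrary): the samples clearly show the beating envelope, yet a long, truncated sinc still recovers the value at an out-of-phase point.

Code:
import numpy as np

fs = 2.0
f = 0.98                                  # just below the Nyquist frequency fs/2 = 1.0
n = np.arange(400)
samples = np.sin(2 * np.pi * f * n / fs)  # inspect these: a "beating" envelope, period 50 samples

# reconstruct at an awkward, out-of-phase point mid-window with a truncated sinc
t = 200.25 / fs
rebuilt = np.sum(samples * np.sinc(fs * t - n))
print(rebuilt, np.sin(2 * np.pi * f * t))  # both ~0.70: distant samples supplied the phase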

But that's all very theoretical, since in practice we can't use a filter with infinite extent. And even if we had the computing power, would we want to? I don't think so.

The Nyquist theorem requires the input signal to be perfectly band-limited before the sampling, and that's not the case with our textures. They most likely have some frequency components above the Nyquist frequency, so a sinc won't do a perfect reconstruction anyway.
And what says that reconstructing all frequency components up to a certain frequency perfectly, and cutting out the frequencies above completely, produces a visually pleasant image? If you've seen how a perfectly low-pass filtered square wave looks (lots of ringing at the edges), you'd probably agree that a simpler filter could give it a much nicer look.
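That ringing is easy to reproduce. A minimal sketch, assuming an ideal low-pass implemented by simply zeroing FFT bins above an arbitrary cutoff:

Code:
import numpy as np

n = 256
square = np.sign(np.sin(2 * np.pi * np.arange(n) / n))  # one period of a +/-1 square wave

spec = np.fft.rfft(square)
spec[20:] = 0                       # "perfect" low-pass: keep only harmonics below bin 20
filtered = np.fft.irfft(spec, n)

print(filtered.max())               # ~1.18: overshoot of ~9% of the jump at each edge (Gibbs)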

So the Nyquist theorem, and the idea of getting all frequencies up to a point and nothing above, is theoretically possible and a nice rule of thumb for what you want. But it's not necessarily the optimal solution, even when optimizing purely for quality.



What about the phase-alignment problems (that darkblu talks about)?
They are of course very real for filters with a smaller extent than sinc(x), like bilinear. And then the out-of-phase areas will look much like a low-pass filtered version of the texture. Or in other words, approximately like the next mipmap, but without gamma correction in the downsampling.

One solution is to convert to linear form before the bilinear filter, keep it linear through the PS, and gamma correct when it's stored in the frame buffer (like Xmas said). The floating point in the PS is plenty good enough to keep the precision until it's stored in the frame buffer. (I think you were a bit too negative, Nick. It's of course nice to have a high-precision frame buffer for fog/dust/smoke/HDR, but that's not directly related to the problem we're talking about now.)
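A tiny sketch of why the linear-space conversion matters, using a plain 2.2 power curve as a stand-in for the actual sRGB transfer function: filtering black and white texels in gamma space yields a stored 0.5, which displays much too dark.

Code:
black, white = 0.0, 1.0            # stored (gamma-encoded) texel values

naive = (black + white) / 2        # filtering gamma-encoded values: stores 0.5

# convert to linear, filter, then gamma-encode for the frame buffer
linear_avg = (black ** 2.2 + white ** 2.2) / 2
correct = linear_avg ** (1 / 2.2)  # stores ~0.73, which displays as true mid gray

print(naive, correct)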

Or you could introduce compact floating-point textures, like an R9G9B9E5 format, where E5 refers to a 5-bit block exponent. That way you could keep textures in linear format, and even get some HDR in there too.
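A sketch of how such a block-exponent format could be packed and unpacked. The bit layout, bias and rounding here are my own assumptions for illustration (roughly along the lines of what later shipped as RGB9E5), not a spec:

Code:
import math

def pack_rgb9e5(r, g, b, n_bits=9, e_bits=5, bias=15):
    """Pack three non-negative floats as 9-bit mantissas sharing one 5-bit exponent."""
    max_c = max(r, g, b, 2.0 ** -bias)                # avoid log2(0)
    exp = max(0, min(2 ** e_bits - 1, int(math.floor(math.log2(max_c))) + 1 + bias))
    scale = 2.0 ** (exp - bias - n_bits)              # value of one mantissa step
    mant = [min(2 ** n_bits - 1, int(round(c / scale))) for c in (r, g, b)]
    return (exp << 27) | (mant[2] << 18) | (mant[1] << 9) | mant[0]

def unpack_rgb9e5(v, n_bits=9, bias=15):
    scale = 2.0 ** ((v >> 27) - bias - n_bits)
    return tuple(((v >> s) & (2 ** n_bits - 1)) * scale for s in (0, 9, 18))

print(unpack_rgb9e5(pack_rgb9e5(0.25, 0.5, 3.0)))     # (0.25, 0.5, 3.0), values above 1 included

Because all three channels share the exponent, channels much dimmer than the brightest one lose precision, which is the usual trade-off of block-exponent formats.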



What mip level is actually used?
I made a test with Xmas' Texture Filter TestApp. (The original of the various AF testers floating around?) My R300 doesn't just go below 2x2 pixels per texel, it even goes slightly below 1x1 pixel per texel in trilinear. In bilinear filtering it of course varies over the mip band, and I've seen it as low as 0.7x0.7 pixels per texel.

I also tested how much LOD bias I needed to get rid of all visible aliasing.
Gamma set to 2.2 (frame buffer in linear format). Texture #0 for maximal torture. 16xAF.
With a negative LOD bias there's some massive moiré. Most of those patterns are gone at bias 0.0, but some remain up to around 0.2. After that, there's some moiré left that won't disappear until LOD bias 1.3 (especially if you rotate the texture).



Doh! Another essay. Sorry 'bout that.
 
Basic said:
After that, there's some moiré left that won't disappear until LOD bias 1.3 (especially if you rotate the texture).
So, if I got this right, that corresponds to one texel covering about 2.5x2.5 pixels (2^1.3 ≈ 2.46)? That's even worse than I expected. An interesting question is, does it require 3x3 samples per pixel to avoid artifacts at LOD 0? It's a very similar situation, but just averaging 9 pixels together isn't completely correct either...
 
Basic said:
So as long as the frequencies in the signal are below the Nyquist frequency, it can be reconstructed, including the phase.

ok, Basic, one question: how would you reconstruct a highest-frequency signal from a texture sampled at a 1-to-1 screen mapping, positioned on the screen in such a manner that the signal under consideration is phase-shifted by pi/2 radians with respect to the pixel centres? essentially what Nick describes here:

Nick said:
Imagine a checkerboard. If you line up the pixel centers exactly with the texel centers, you get the checkerboard on screen. But when you shift the polygon half a texel in the x and y direction, the bilinear filter averages two black and two white texels, resulting in gray. This happens for all pixels, so you get a big gray square. All the information about alternating black and white fields is lost!

note that the above-described signal satisfies the Nyquist criterion (its period is 2 texels in the x/y directions), and for the sake of the argument let's assume you can use whatever convolution kernel you see fit, including an infinite sinc.
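Nick's case is easy to reproduce in 1D. A minimal sketch, assuming texel centers at integer coordinates: sampled in phase, bilinear returns the checkerboard; shifted half a texel, every output is 0.5, and no reconstruction filter downstream can bring the pattern back.

Code:
import numpy as np

texels = np.array([0.0, 1.0, 0.0, 1.0, 0.0, 1.0])  # one row of a "checkerboard"

def bilinear_1d(tex, x):
    """Linear filtering between the two nearest texel centers (at integer x)."""
    i = int(np.floor(x))
    f = x - i
    return tex[i] * (1 - f) + tex[i + 1] * f

print([bilinear_1d(texels, x) for x in (1.0, 2.0, 3.0)])  # [1.0, 0.0, 1.0]: pattern intact
print([bilinear_1d(texels, x) for x in (1.5, 2.5, 3.5)])  # [0.5, 0.5, 0.5]: uniform gray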
 
darkblu:
You can't reconstruct signals at the Nyquist frequency (half the sample frequency) unless they're in phase. But signals ever so slightly below it, you can (using a sinc). I'm sorry if I was sloppy enough to say that you could reconstruct signals exactly at the Nyquist frequency.

A signal exactly at the Nyquist frequency does not satisfy the Nyquist criterion.
 
Nick said:
Basic said:
But the maximum frequency in the signal is half the sampling frequency of the signal. So with a perfect reconstruction function, each texel should cover at least 1x1 pixels. But since bi-/trilinear is hardly perfect, it might be good to use a lower res.
Dang, you're right. :idea: I always blamed it on the sampling frequency while it's actually the filter quality. Anyway, through experimentation I've noticed that 2x2 pixels per texel is required to avoid any artifacts (motion aliasing). So bilinear must be really bad then.

Nyquist's theory also assumes that you are reconstructing with a sinc function and so using linear filters only makes matters worse.
 
Simon F said:
Nyquist's theory also assumes that you are reconstructing with a sinc function and so using linear filters only makes matters worse.
Well, that's exactly what I meant by filter quality. But the rest of the discussion still makes me wonder whether using a sinc filter would solve all problems. I guess not, but then I'm confused about the purpose of Nyquist's criterion. Well, it's clear we shouldn't go below it, but above it we still can't reconstruct the signal?
 