Anisotropic Filtering and LOD Bias

andypski,

I fully agree that with a tent filter the Nyquist rate doesn't fully apply. But what's the 'correct' number of samples then? As far as I know, an infinite number of samples would be needed to avoid aliasing with this filter, so that doesn't make sense. What does make sense is to regard a tent filter as a good approximation of a sinc filter. Under that assumption, we can specify that the minimum number of samples should be in accordance with the Nyquist rate.

You simply have to draw the line somewhere. And as far as I can tell from other people's descriptions, keeping very close to that line works out fine for NVIDIA. Crossing the line by using a negative LOD bias causes problems, but that's perfectly in line with the math: once you go below the Nyquist rate, no matter whether you use a sinc or a tent filter, you get aliasing.

If you really want better sampling than what NV40 does, I think a more effective approach is to improve the filter shape; the advantage of simply using more samples is marginal. And if more samples is what you want, then you need super-sampling both horizontally and vertically.
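The Nyquist point above can be made concrete with a small sketch. The following Python is purely illustrative (it models a 1D signal sampled at unit rate and reconstructed with a tent, i.e. linear, filter; it is not anyone's driver or hardware code): below the Nyquist rate the tent filter tracks the signal with small error, above it the samples themselves alias and no filter shape can recover the original.

```python
import math

UPSAMPLE = 8  # reconstruction points per sample interval

def tent_reconstruct(samples):
    """Reconstruct between unit-spaced samples with a tent (linear) filter."""
    out = []
    for j in range((len(samples) - 1) * UPSAMPLE):
        x = j / UPSAMPLE
        i = int(x)
        t = x - i
        out.append((1 - t) * samples[i] + t * samples[i + 1])
    return out

def max_error(freq, n=64):
    """Max reconstruction error for sin(2*pi*freq*x) sampled at unit rate."""
    samples = [math.sin(2 * math.pi * freq * i) for i in range(n)]
    recon = tent_reconstruct(samples)
    return max(abs(r - math.sin(2 * math.pi * freq * j / UPSAMPLE))
               for j, r in enumerate(recon))

print(max_error(0.1))  # below the Nyquist rate: small interpolation error
print(max_error(0.9))  # above it: the samples alias, any filter shape fails
```

At freq 0.9 the integer-rate samples are indistinguishable from those of a signal at freq 0.1 (with flipped sign), so the reconstruction follows the wrong signal entirely; this is the "crossing the line" case, independent of the filter used.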
 
Nick said:
andypski,

I fully agree that with a tent filter the Nyquist rate doesn't fully apply. But what's the 'correct' number of samples then? As far as I know, an infinite number of samples would be needed to avoid aliasing with this filter, so that doesn't make sense. What does make sense is to regard a tent filter as a good approximation of a sinc filter. Under that assumption, we can specify that the minimum number of samples should be in accordance with the Nyquist rate.
This is certainly the crux of the issue - there is no 'correct' number of samples per se - instead a detail level is chosen that is thought to be a good tradeoff between aliasing and blurring.

As such the problem becomes this: if we all recognise that an issue exists due to the imperfections of the filter kernel, then making an 'optimisation' that reduces the number of samples compared to another piece of hardware will make your aliasing worse compared to that hardware. This may work out well for you in terms of performance, but can hardly be regarded as a superior solution in terms of quality. You can't trim the number of samples closer to a line you've already crossed, if you see what I mean; you just end up further from the line, in the wrong direction.

You simply have to draw the line somewhere. And as far as I can tell from other people's descriptions, keeping very close to that line works out fine for NVIDIA. Crossing the line by using a negative LOD bias causes problems, but that's perfectly in line with the math: once you go below the Nyquist rate, no matter whether you use a sinc or a tent filter, you get aliasing.
As I said, the key problem is that the line is already crossed even by hardware that is taking a more conservative approach than that apparently used by NV40 - so you can't get closer to the 'correct' line without taking _more_ samples, not fewer.

So if a new tradeoff point has been chosen, does it result in a better or worse user experience? Is it gaining enough performance to compensate for the increased aliasing, or would it be better to be more conservative, given the high performance of current hardware? What works out better for the users, rather than the IHV?

Difficult questions to answer.

If you really want better sampling than what NV40 does, I think a more effective approach is to improve the filter shape; the advantage of simply using more samples is marginal. And if more samples is what you want, then you need super-sampling both horizontally and vertically.
No argument from here about wanting to improve the sampling, but that's a question for the future. It doesn't help us with the problem of suddenly needing to clamp the LOD now on some applications, where no such strong need apparently existed in the past.
 
I gather, then, that by optimizing (lessening the workload), one cuts into the very facet (oversampling) that allows an already-approximate model to approach the reference; the results only diverge from it further, and image quality for the consumer suffers.

A very strong and interesting argument. However, it seems NVIDIA was not very concerned with accurately approximating the hardwired filtering model when trying to save transistors for other things like SM 3.0. Tradeoffs are very difficult to qualify, as they usually benefit either the developer (SM 3.0) or the consumer (IQ), although there are some in-betweens like pipeline counts, shader units, etc.
 
A sinc filter is far better at reconstruction than a linear filter
Andy, you mentioned a sinc filter. What are the chances of seeing such an improvement in future ATI hardware, and can you speculate on the performance difference between linear and sinc filters? I apologize for my ignorance.
 
andypski said:
Difficult questions to answer.
Absolutely.

I would like to conclude by saying that the NV40 hardware is most likely not 'broken' in the sense that it fails to comply with specifications. These specifications have been known for many years; NVIDIA simply chose to maximize performance at the risk of losing some over-sampling quality and requiring adjustments at the application or driver level.

Frankly, I'm sure that most consumers don't really care. They will regard the LOD bias clamp option as a real fix. It does, however, prevent applications that use negative LOD biases on low-frequency textures from getting the desired result, so in the long term this fix has to be applied at the application level. Since perfect filtering doesn't exist anyway, the specified quality has to suffice.

We can call it aggressive optimization, and whether that's good or bad is subjective. If it really bothers anybody, get an X800 before they do it in R500 as well. ;)
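The LOD bias clamp being discussed can be sketched in a few lines. This is a hypothetical Python model (the helper `biased_lod` and its parameters are invented for illustration, not taken from any driver): the driver option simply discards negative application biases before the LOD is computed, which prevents undersampling but also defeats intentional sharpening.

```python
def biased_lod(base_lod, app_bias, clamp_negative=False):
    """Apply an application-supplied LOD bias to a computed base LOD.

    With the driver's clamp option on, negative biases are ignored,
    which avoids going below the sampling rate the mip chain assumes,
    but also breaks applications that deliberately sharpen
    low-frequency textures with a negative bias.
    """
    bias = max(app_bias, 0.0) if clamp_negative else app_bias
    # An LOD below 0 would mean magnification; clamp at the base level.
    return max(base_lod + bias, 0.0)

print(biased_lod(1.5, -0.5))        # 1.0: the app's sharpening takes effect
print(biased_lod(1.5, -0.5, True))  # 1.5: the driver clamp ignores the bias
```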
 
Khronus said:
Andy, you mentioned a sinc filter. What are the chances of seeing such an improvement in future ATI hardware, and can you speculate on the performance difference between linear and sinc filters? I apologize for my ignorance.
The performance difference would be infinite - a sinc filter is infinitely wide, which equates to needing an infinite number of samples. Of course, after a while the contributions do become negligible. :)

There will be improvements to texture filtering in the future - we don't need to go all the way to the ideal to do better than we are today, and at some point we can do well enough that the difference from the ideal will be negligible.
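The usual compromise between an infinitely wide sinc and a tent filter is a windowed sinc such as Lanczos, which keeps most of the sinc's reconstruction quality with a finite number of taps. A short illustrative Python sketch (a textbook kernel, not a description of any shipping hardware):

```python
import math

def sinc(x):
    """Normalized sinc: sin(pi*x) / (pi*x), with sinc(0) = 1."""
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def lanczos(x, a=3):
    """Windowed sinc: the sinc truncated to [-a, a] and tapered by a
    wider sinc window, so only 2*a taps per axis are needed instead
    of infinitely many."""
    if abs(x) >= a:
        return 0.0
    return sinc(x) * sinc(x / a)

# Filter weights for resampling at an offset of 0.3 between texels;
# they sum to roughly 1, as a reconstruction kernel's taps should.
taps = [lanczos(0.3 - k) for k in range(-2, 4)]
print([round(w, 4) for w in taps])
```

With a = 3 that is 6 taps per axis (36 texture reads for a 2D lookup) versus 2 per axis for bilinear, which gives a rough feel for the cost of moving the filter shape closer to the ideal.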
 
OpenGL guy said:
Your "logic" is nonsensical. "A looks worse than B, therefore A is more accurate."

The NV40 does accurate (i.e. mathematically correct) isotropic filtering. The resulting filter is more isotropic than the one ATI uses (as shown by test programs), but ATI gets a slight gain in IQ, even though their isotropic filtering is not perfectly symmetric.

There are two issues here. 1) how close you are to the mathematical reference and 2) how good the mathematical reference looks in the first place

3D graphics is about choosing your cheats. Sometimes cheats produce better-looking images even though they are not the physically accurate or mathematically perfect ones.
 
DemoCoder said:
OpenGL guy said:
Your "logic" is nonsensical. "A looks worse than B, therefore A is more accurate."
The NV40 does accurate (i.e. mathematically correct) isotropic filtering. The resulting filter is more isotropic than the one ATI uses (as shown by test programs), but ATI gets a slight gain in IQ, even though their isotropic filtering is not perfectly symmetric.
Thank you for repeating exactly what I said earlier.
There are two issues here. 1) how close you are to the mathematical reference and 2) how good the mathematical reference looks in the first place

3D graphics is about choosing your cheats. Sometimes cheats produce better-looking images even though they are not the physically accurate or mathematically perfect ones.
So what are you contributing to this discussion? I explained how the NV40's improved isotropic LOD calculation should be affected by LOD bias. So far, no one has given me any reason to believe that my logic is wrong.
 
OpenGL guy said:
So what are you contributing to this discussion? I explained how the NV40's improved isotropic LOD calculation should be affected by LOD bias. So far, no one has given me any reason to believe that my logic is wrong.
Okay, I'll bite. Your claim was that due to the more accurate LOD calculation algorithm enabled by the NV40 (it's more circular), the NV40 will be blurrier on off-angles.

I claim that this is not the case. When you have a chip that has an inaccurate LOD calculation, you really need to have the scale set by the sharpest point. That is, an inaccurate LOD selection algorithm should only blur the places where the inaccuracies occur. Said another way, an inaccurate LOD selection algorithm should not result in extra aliasing, just extra blurring (going on the fact that aliasing is more noticeable).

So, the more accurate LOD calculation of the NV40 should, when anisotropic filtering is not used, allow the NV40 to "ride the line" more closely, making the calculated LOD lower than other chips. This may explain the tendency towards texture aliasing that the NV40 has, but it doesn't explain how that doesn't go away with anisotropic filtering enabled. So, I therefore suspect the aliasing issues are related to an incomplete implementation of an asymmetric filtering algorithm (i.e. uses different approximations for the same calculation seen in LOD and anisotropic).
 
Chalnoth said:
OpenGL guy said:
So what are you contributing to this discussion? I explained how the NV40's improved isotropic LOD calculation should be affected by LOD bias. So far, no one has given me any reason to believe that my logic is wrong.
Okay, I'll bite. Your claim was that due to the more accurate LOD calculation algorithm enabled by the NV40 (it's more circular), the NV40 will be blurrier on off-angles.

I claim that this is not the case. When you have a chip that has an inaccurate LOD calculation, you really need to have the scale set by the sharpest point. That is, an inaccurate LOD selection algorithm should only blur the places where the inaccuracies occur. Said another way, an inaccurate LOD selection algorithm should not result in extra aliasing, just extra blurring (going on the fact that aliasing is more noticeable).

So, the more accurate LOD calculation of the NV40 should, when anisotropic filtering is not used, allow the NV40 to "ride the line" more closely, making the calculated LOD lower than other chips. This may explain the tendency towards texture aliasing that the NV40 has, but it doesn't explain how that doesn't go away with anisotropic filtering enabled.
Everything you said would be great if it were correct.

If you compare NV40's isotropic LOD to NV3x's, you'll see that they agree at multiples of 90 degrees. At 45 degrees, the NV3x will be a little sharper because it doesn't maintain a circular LOD (it pinches inward).

This means that at 45 degrees the NV40 is changing to the less detailed mipmap sooner and is therefore computing a higher LOD. As I said before, this is perfectly ok.

This behavior was also seen in 3DMark03 where someone used colored mipmaps and saw that the NV40 was changing to the less detailed mipmap a little sooner than other chips.

Stop basing your arguments on your preconceived notions of what "better" and "more accurate" mean. If you base them on what is actually testable you'll go further.
So, I therefore suspect the aliasing issues are related to an incomplete implementation of an asymmetric filtering algorithm (i.e. uses different approximations for the same calculation seen in LOD and anisotropic).
This may actually be the case, but still doesn't give me any reason to see the "superiority" of NV40's algorithm!
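The circular-versus-pinched distinction above is easy to make concrete. Below is an illustrative Python sketch (neither function is anyone's actual silicon): a Euclidean rho in the style of the OpenGL spec's reference LOD formula, next to a cheaper max-of-absolute-derivatives approximation. The two agree for axis-aligned textures, but for a texture rotated 45 degrees on screen the cheap metric comes out half a mip level sharper, i.e. it "pinches inward" at 45 degrees.

```python
import math

def lod_euclidean(dudx, dvdx, dudy, dvdy):
    """Reference-style LOD: log2 of the length of the longer
    screen-space texture-coordinate gradient (circular footprint)."""
    rho = max(math.hypot(dudx, dvdx), math.hypot(dudy, dvdy))
    return math.log2(rho)

def lod_maxabs(dudx, dvdx, dudy, dvdy):
    """A cheaper approximation: max of the absolute derivatives.
    Matches the Euclidean LOD at 0/90 degrees, but underestimates it
    at 45 degrees, selecting a sharper (lower-LOD) mipmap there."""
    rho = max(abs(dudx), abs(dvdx), abs(dudy), abs(dvdy))
    return math.log2(rho)

# Texture rotated 45 degrees on screen, minified 4x:
d = 4.0 / math.sqrt(2.0)
print(lod_euclidean(d, d, -d, d))  # 2.0
print(lod_maxabs(d, d, -d, d))     # 1.5: half a mip level sharper
```

A sharper (lower) LOD at 45 degrees is exactly the behaviour described for NV3x above, and a chip that keeps the circular rho will transition to the less detailed mipmap slightly sooner at those angles.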
 
OpenGL guy said:
This may actually be the case, but still doesn't give me any reason to see the "superiority" of NV40's algorithm!
Did I ever say it did?

Anyway, I'm going to need to do some looking for some LOD shots to check what you're saying, but not right now. If you want to find 'em for me, be my guest....
 
Randell said:
NV40 behaviour in BF1942 and CoH is far, far worse than on my 9700 Pro or 8500; the moiré on textures is far worse, e.g. the floor grills in D3. And both shimmering and moiré get worse as AF is increased, not better. Increasing the LOD to 0.5 and using SSAA isn't enough either.
Although I agree that NV40 has an excessive texture noise problem, you are definitely wrong about the moiré in Doom 3. I see the same moiré on the 6800U, RX800P, GF4 Ti (!) and even the R9200. So it's not the hardware; it's the software in this case.

Interestingly enough, when I first discovered this noise problem on NV40 (in Max Payne 2), I tried setting a positive texture LOD and found that it doesn't help - textures got blurrier but the noise was still very visible. What's interesting is that the X800 had the same problem, though not as visible as on the 6800. And this problem in MP2 was fixed by NV _and_ ATI in later drivers...
 
Wow you guys are amazing. You managed to contain jimmyjames and radar's bs to only the first page. Keep up the good work.
 