Anisotropic Filtering and LOD Bias

andypski said:
Please go and tell Tim Sweeney that his programming is bad, since UT2003 uses a negative LOD bias on some textures by default - maybe he did this for no reason and simply decided to break his application, but I suspect that he actually wanted it that way.
I'll tell him next time I see him. I'm quite sure these LOD biases were determined experimentally, most probably with hardware that has a high tolerance for negative LOD biases (i.e. it takes more samples than strictly required to avoid aliasing). When using hardware that uses a stricter formula for adaptive anisotropic filtering, there is no other fix than to clamp these negative LOD biases.
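Something along these lines, purely as an illustration of what I mean by the clamp (hypothetical names, not actual driver code):

```cpp
// Illustrative sketch only - a hypothetical driver-side hook, not real driver code.
// With the optional clamp enabled, any LOD bias the application sets below zero
// is raised to zero before it reaches the hardware; positive biases pass through.
float ApplyLodBiasPolicy(float appLodBias, bool clampNegativeBias)
{
    if (clampNegativeBias && appLodBias < 0.0f)
        return 0.0f;   // never select a finer mip than the computed LOD
    return appLodBias;
}
```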

I'm sure it wasn't Tim's intention to cause shimmering.
An application is perfectly within its rights to use a negative LOD bias if it so chooses, otherwise there would not be provision for negative LOD biases in the spec. It's not an application problem, and the same applications basically cause no real problems on ATI hardware - they are behaving within the API specification.
Absolutely. And so is NVIDIA hardware. So it can only be an application problem. The ignorance of the game developers is very easily forgiven, and I'm sure next-generation games won't have this problem. For now, the only way to correct it is the LOD bias clamp in the driver.

Another way to fix this is to define an 'oversampling' quality setting. However, to my knowledge this has exactly the same effect as using a LOD bias closer to zero together with correct adaptive anisotropic filtering. In this sense, NVIDIA's approach is better optimized, and flawless.
 
Nick said:
andypski said:
It is perfectly legal for an application to set a negative LOD bias...
Absolutely. But you can't expect that not to cause shimmering. Set your LOD bias to -1 and it will definitely not look good on ATI chips either. NVIDIA just uses a lower tolerance, for good reason.
This comment can't be serious. What good reason would that be? Didn't Andy already explain cases where negative LOD may be desirable?
...clamping that bias to 0, however, is not legal.
Who's talking about legal here? Any method to fix what an application does wrong is in my eyes a good thing. And it's optional. If you like the effect a negative LOD bias gives, by all means, disable the clamping.
Why blame the app? Does this problem occur on GeForce 4 chips? What about FX? If not, then I don't think the problem lies in the app.
It's removed some artifacts at the expense of breaking the behaviour of the LOD bias control.
It doesn't. Negative LOD bias gives you shimmering on NVIDIA and ATI chips. One is just more sensitive than the other, but not incorrect.
Hold it. If you set a certain LOD bias, then there is a mathematical outcome that one expects. If it's a positive bias, then you should expect things to get blurrier by a certain amount. If it's a negative bias, then you should expect things to get sharper by a certain amount. If one platform is showing far more aliasing with negative biases than is expected based on the mathematics, then that sounds like a platform issue.
Well, that's great - this mysterious 'possibly more accurate' formula whose existence is impossible to prove or disprove results in totally unnecessary shimmering when minor levels of negative LOD are applied. That sounds like a broken formula rather than a more accurate one to me.
I suggest re-reading my explanation of why this happens. If I'm correct then NVIDIA's anisotropic filtering is better optimized, not broken. ATI's implementation is less optimized but has the advantage of showing fewer artifacts when the application is badly written.
So now you're an expert on ATI and nvidia's AF implementations? Please explain.
You could always render at 3200x2400 resolution and downsample that to 800x600.
This has nothing to do with the current discussion.
There won't be any shimmering at all, but this just isn't efficient. I'm sure that ATI wished they had the same optimization so their performance was a few percent higher. And they would gladly add the fix to keep the LOD bias positive.
Ri-i-i-ight. What optimization are you referring to? Personally, I think you are making all of this up on the fly.
ATI hardware must be doing really well to have such good comparative performance with anisotropic filtering while at the same time apparently taking more samples than are strictly necessary to avoid aliasing.
Given the higher clock frequency and fewer features this isn't much of a surprise to me.
What a retort! "I can't refute what he says so I'll just ignore it and insult the competition instead!"
Of course they do - as the LOD bias becomes more negative, aliasing will gradually increase, however our hardware seems to have no need for this negative bias clamping, and our LOD bias control works in accordance with the D3D specification.
Coincidentally these mentioned games use a negative LOD bias that doesn't cause bad effects on ATI chips. Still, NVIDIA's method is totally in accordance with DirectX specifications.
Is it now? As I stated above, all of this stuff is controlled by mathematics. If your chip is not adhering to the proper formulas, then it should be considered broken.

There are formulas where you are allowed some leeway, and the LOD calculation happens to be one of them. (See the OpenGL specs for details.) However, there are limits to how much leeway you get.
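
For reference - quoting the LOD formula from memory, so treat the exact symbols as an approximation of the spec text rather than a citation:

$$\lambda(x, y) = \log_2\big(\rho(x, y)\big) + \mathrm{bias}$$

where rho is the scale factor derived from the texture-coordinate derivatives. The leeway is in rho: an implementation may substitute an approximation for it, but only within bounds the spec sets out, not arbitrarily.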

If the older GeForce 4 and FX chips don't exhibit this problem, why should we all think that the 6800 is somehow better? (Obviously I am only referring to LOD calculations and not other features.) Mythical "optimizations" that you dream up are of no interest.
 
Besides, as noted before, ATI chips also show some shimmering, just less noticeable. This is totally in accordance with my theory of NVIDIA using a more accurate formula. And most of all it clearly shows that it is indeed an application problem.
 
Nick said:
Absolutely. But you can't expect that not to cause shimmering. Set your LOD bias to -1 and it will definitely not look good on ATI chips either. NVIDIA just uses a lower tolerance, for good reason.
There is no proof here that this is a tolerance issue, or that it's for any good reason at all - you have speculated into existence some notionally superior calculation that takes fewer samples, and then said that when it goes wrong by aliasing heavily, this is a result of its superiority. You have offered no proof whatsoever that ATI hardware isn't taking exactly the right number of samples and that NV40 is actually taking too few and this is the cause of the aliasing. This explanation is just as possible, but you have chosen to assume that nVidia's calculation is superior despite the apparently inferior results with negative LOD bias applied - dubious logic.

Who's talking about legal here? Any method to fix what an application does wrong is in my eyes a good thing. And it's optional. If you like the effect a negative LOD bias gives, by all means, disable the clamping.
Except that the application really isn't doing anything wrong per-se, there is nothing fundamentally wrong with using negative LOD biases.
It's removed some artifacts at the expense of breaking the behaviour of the LOD bias control.
It doesn't. Negative LOD bias gives you shimmering on NVIDIA and ATI chips. One is just more sensitive than the other, but not incorrect.
How do you know it is not incorrect to have such severe aliasing such that you can make such a categorical statement? You're making an assumption, not dictating divine wisdom. You have two pieces of hardware, and one is showing worse artifacts than the other - you can't simply state that nothing is wrong and have it accepted as truth.
I suggest re-reading my explanation of why this happens. If I'm correct then NVIDIA's anisotropic filtering is better optimized, not broken. ATI's implementation is less optimized but has the advantage of showing fewer artifacts when the application is badly written.
If you're correct, then yes, but assuming that something is better when it apparently produces worse results is a really strange position to take. Why is it not just as likely (if not more so) that the implementation that is showing severe aliasing is worse but that with normal LOD bias you just don't notice the inadequate number of samples as often?

You could always render at 3200x2400 resolution and downsample that to 800x600. There won't be any shimmering at all, but this just isn't efficient. I'm sure that ATI wished they had the same optimization so their performance was a few percent higher. And they would gladly add the fix to keep the LOD bias positive.
You're so right - many's the time I've thought to myself "I wish we'd set up our anisotropic filtering so that it aliases really badly, then I could hack the LOD bias to be positive all the time"

Coincidentally these mentioned games use a negative LOD bias that doesn't cause bad effects on ATI chips. Still, NVIDIA's method is totally in accordance with DirectX specifications.
I don't think there's anything coincidental about games that use mild negative LOD biases not aliasing horrendously on our hardware.
 
OpenGL guy said:
This comment can't be serious. What good reason would that be? Didn't Andy already explain cases where negative LOD may be desirable?
That good reason would be performance. If two samples suffice according to sampling theory then you don't have to take four.
Why blame the app? Does this problem occur on GeForce 4 chips? What about FX? If not, then I don't think the problem lies in the app.
As far as I know the GeForce 4 doesn't use adaptive anisotropic filtering. Hence it uses more samples than required. This makes a negative LOD bias still look good, even though the hardware isn't required to guarantee that.
Hold it. If you set a certain LOD bias, then there is a mathematical outcome that one expects. If it's a positive bias, then you should expect things to get blurrier by a certain amount. If it's a negative bias, then you should expect things to get sharper by a certain amount.
Oh it does look sharper, as expected. You just get aliasing on top.
If one platform is showing far more aliasing with negative biases than is expected based on the mathematics, then that sounds like a platform issue.
For adaptive anisotropic filtering, it is perfectly "expected based on the mathematics" that negative LOD biases give aliasing. The mathematics are very simple. First you calculate a mipmap level and a required number of samples to completely avoid aliasing. Then, this mipmap level is biased with a certain value. With any negative bias it is no longer guaranteed that this won't cause aliasing. Period.
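In rough code form, this is my own sketch of that argument, not any vendor's actual formula:

```cpp
#include <algorithm>
#include <cmath>

// Sketch of the argument above, not any vendor's actual formula.
// pMax and pMin are the lengths (in texels) of the major and minor axes of
// the pixel's footprint in texture space.
struct AnisoDecision { float lod; int samples; };

AnisoDecision ChooseLodAndSamples(float pMax, float pMin,
                                  float lodBias, int maxAniso)
{
    // Number of taps budgeted so that, at bias 0, the chosen mip level
    // does not alias along the major axis.
    int samples = std::min(maxAniso, (int)std::ceil(pMax / pMin));

    // Base LOD follows the minor axis; the application's bias is added last.
    float lod = std::log2(pMin) + lodBias;

    // The tap count was sized for the unbiased LOD. A negative bias moves
    // 'lod' to a finer mip while 'samples' stays the same, so the
    // no-aliasing guarantee is lost.
    return { lod, samples };
}
```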
So now you're an expert on ATI and nvidia's AF implementations? Please explain.
I never claimed this. I think I was even very careful when explaining my theory. It hasn't been broken yet.
This has nothing to do with the current discussion.
Oh yes it does. If you always take too many samples, you'll never have aliasing. But you also wouldn't have a competitive product. Apparently NVIDIA chose to walk the thin line of using just enough samples, to maximize performance. As soon as the LOD bias goes negative, this causes aliasing, totally within mathematical expectations and the specifications.
What a retort! "I can't refute what he says so I'll just ignore it and insult the competition instead!"
Where's the insult? It's all fact.
 
Nick said:
Besides, as noted before, ATI chips also show some shimmering, just less noticeable. This is totally in accordance with my theory of NVIDIA using a more accurate formula. And most of all it clearly shows that it is indeed an application problem.
Your "logic" is non-sensical. "A looks worse than B, therefor A is more accurate."

The NV40 introduced a new LOD algorithm (compared to the NV2x and NV3x chips). The new algorithm uses a more circular calculation. If one compares the results of the NV40 LOD with the older chips, one sees that at off-angles the NV40 changes miplevels slightly sooner (i.e. things are a little blurrier). This is ok.

Now, when using a negative LOD, I'd expect the same behavior: That is, the NV40 should still be slightly blurrier compared to older chips. However, that doesn't appear to be the case. Ergo, something is wrong with negative LOD handling on the NV40.

That is a logical argument based on observable facts. Nowhere did I appeal to a "mythical" optimization which may or may not exist.

Now you give it a try.
 
As far as I know the GeForce 4 doesn't use adaptive anisotropic filtering. Hence it uses more samples than required. This makes a negative LOD bias still look good, even though the hardware isn't required to guarantee that.

Yes it does in the respect that if you select a maximum anisotropy of, say, 8x, it'll only do 8x the normal number of samples if the position of the texture actually requires it; it'll do less if the texture needs less (this was confirmed by NVIDIA).
 
andypski said:
Clamping all negative LOD Biases to 0 can hardly be regarded as a 'fix'.
It can. Or rather, it's a workaround for a bug, which is what setting a negative LOD bias is in almost every case. There are exceptions, but these are usually not in the domain of "rendering a 3D object in a visually pleasing way", but about other calculations.

andypski said:
Of course they do - as the LOD bias becomes more negative, aliasing will gradually increase, however our hardware seems to have no need for this negative bias clamping, and our LOD bias control works in accordance with the D3D specification.
So you did fix the "positive LOD bias with AF"-bug in R420? ;)


OpenGL guy said:
Hold it. If you set a certain LOD bias, then there is a mathematical outcome that one expects. If it's a positive bias, then you should expect things to get blurrier by a certain amount. If it's a negative bias, then you should expect things to get sharper by a certain amount. If one platform is showing far more aliasing with negative biases than is expected based on the mathematics, then that sounds like a platform issue.
Right. But you might want to prove that there really is "far more aliasing [...] than is expected based on the mathematics" on that platform. If not, it's the application's fault.
 
excuse me for interfering w/o actually observing the issue at hand, but just two unrelated remarks based on what i've read in this thread so far:
  • a sound aniso algo should not produce disproportionately large undersampling by the introduction of a negative lod bias - i.e. a slight negative lod bias should not result in significantly more shimmering than the same slight bias would introduce in an isotropically-sampled surface.
  • artificially clamping the lod bias is wrong. i'll repeat it for a hundred-and-first time on this board: drivers doing stuff behind the back of the developer is a bad practice. 'let's fix this issue by breaking that functionality' is as flawed thinking as it could ever get. on a sidenote, customers who tolerate such behaviour (from any card vendor) should not expect to get proper visuals in any of their 'hard-earned-cash-paid' titles (aside from the one the hack was originally introduced for, that is).
 
Xmas said:
OpenGL guy said:
Hold it. If you set a certain LOD bias, then there is a mathematical outcome that one expects. If it's a positive bias, then you should expect things to get blurrier by a certain amount. If it's a negative bias, then you should expect things to get sharper by a certain amount. If one platform is showing far more aliasing with negative biases than is expected based on the mathematics, then that sounds like a platform issue.
Right. But you might want to prove that there really is "far more aliasing [...] than is expected based on the mathematics" on that platform. If not, it's the application's fault.
Apps shouldn't need to care about what platform they are running on. LOD calculations are very specifically spelled out in the specs and applications rely on that.
 
andypski said:
There is no proof here that this is a tolerance issue, or that it's for any good reason at all - you have speculated into existence some notionally superior calculation that takes less samples, and then said that when it goes wrong by aliasing heavily that this is a result of its superiority. You have offered no proof whatsoever that ATI hardware isn't taking exactly the right number of samples and that NV40 is actually taking too few and this is the cause of the aliasing. This explanation is just as possible, but you have chosen to assume that nVidia's calculation is superior despite the apparently inferior results with negative LOD bias applied - dubious logic.
The proof is exactly the shimmering and the fact that clamping the LOD bias solves it. Please read my mathematical explanation in my previous message.
Except that the application really isn't doing anything wrong per-se, there is nothing fundamentally wrong with using negative LOD biases.
Unfortunately there is. When using a negative LOD bias you select, by definition, a mipmap level that is too detailed for the minification filter. Hence you introduce aliasing. If that doesn't convince you, then please explain to me why ATI chips also show shimmering, albeit at slightly larger biases?
How do you know it is not incorrect to have such severe aliasing such that you can make such a categorical statement?
Because clamping the LOD bias fixes the 'problem'!
You're making an assumption, not dictating divine wisdom. You have two pieces of hardware, and one is showing worse artifacts than the other - you can't simply state that nothing is wrong and have it accepted as truth.
At least I have a theory what is going 'wrong'. If you have a better one, please explain it to me.
If you're correct, then yes, but assuming that something is better when it apparently produces worse results is a really strange position to take.
It would be better in the sense that it takes the minimum required number of samples, and not more, so performance is maximized. Because tent filters aren't perfect, taking more samples always benefits quality a little, but at some point it adds hardly any real benefit. The fact that the LOD bias clamp fixes the aliasing is very convincing to me that they do not use fewer than the minimum number of samples.
I don't think there's anything coincidental about games that use mild negative LOD biases not aliasing horrendously on our hardware.
I won't question whether or not it was a deliberate choice to over-sample on ATI hardware. All I can deduce is that NVIDIA chose to take full advantage of anisotropic filtering optimizations and is still within specifications.
 
darkblu said:
  • artificially clamping the lod bias is wrong. i'll repeat it for a hundred-and-first time on this board: drivers doing stuff behind the back of the developer is a bad practice. 'let's fix this issue by breaking that functionality' is as flawed thinking as it could ever get. on a sidenote, customers who tolerate such behaviour (from any card vendor) should not expect to get proper visuals in any of their 'hard-earned-cash-paid' titles (aside from the one the hack was originally introduced for, that is).
So let's just do away with AA, AF, VSync, LOD bias, triple buffering, Z buffer format, alternate pixel center, etc. settings in the driver control panel, and let applications render "the way they are intended to work", even if that means tearing, stuttering, jagged edges, blurred textures, etc., simply put: an overall worse experience for the user. Because, obviously, all those things happen behind the back of the developer. And let's hope the developer belongs to the small group of clever ones that actually put the ability to control all those settings in-game.

Sorry, but I prefer the option of overriding application settings if it improves the game experience.


OpenGL guy said:
Apps shouldn't need to care about what platform they are running on. LOD calculations are very specifically spelled out in the specs and applications rely on that.
Exactly my point.

Devs should actually rely on the math, and not experimentally decide that a certain negative LOD bias doesn't cause aliasing on a certain platform, since they cannot be sure that another platform that is still inside the spec behaves exactly the same.
 
darkblu said:
a sound aniso algo should not produce disproportionately large undersampling by the introduction of a negative lod bias - i.e. a slight negative lod bias should not result in significantly more shimmering than the same slight bias would introduce in an isotropically-sampled surface.
Good observation, but is there any indication that this is actually the case?
artificially clamping the lod bias is wrong. i'll repeat it for a hundred-and-first time on this board: drivers doing stuff behind the back of the developer is a bad practice. 'let's fix this issue by breaking that functionality' is as flawed thinking as it could ever get. on a sidenote, customers who tolerate such behaviour (from any card vendor) should not expect to get proper visuals in any of their 'hard-earned-cash-paid' titles (aside from the one the hack was originally introduced for, that is).
I completely agree, but unfortunately the only other option is to explain to every programmer why they shouldn't use a negative LOD bias (except for low-frequency textures) and expect them to release a patch instantly (if it isn't already configurable). So I welcome every optional driver change that has the same effect. Has there been any negative criticism of this fix yet?

It's not only game developers who expect the hardware to do closely what they specify, but also the people who buy the games and the cards. Anything that gives me more control over quality and performance is a good thing. Doing something 'behind the back' that is not controllable and has a visible influence (it's not a pure optimization) is indeed bad practice.
 
OpenGL guy said:
Apps shouldn't need to care about what platform they are running on. LOD calculations are very specifically spelled out in the specs and applications rely on that.
What happens with negative LOD biases is actually beyond the specifications. Well, not completely: it still effectively selects a more detailed mipmap level, but whether or not this causes aliasing is not specified there. It depends on the filter, and how many samples are used. Anisotropic filtering is specified to only require enough samples to avoid aliasing at LOD bias zero. So aliasing with negative LOD biases is actually to be expected. And yes, it does occur on ATI hardware as well. NVIDIA just seems to take better advantage of the adaptive anisotropic filtering optimization, but as far as I know it's still perfectly within specification as long as there is no aliasing at LOD bias zero. Hence the 'fix' with the clamping.

Of course it's hard for game developers to have anticipated this. All previous hardware had a high tolerance for negative LOD biases, even though that isn't a requirement. So unless they didn't touch the LOD bias at all, it's very likely that they determined a pleasing value experimentally, at the cost of just slightly more aliasing. Unless they checked the anisotropic filtering math and determined the highest frequency in every texture, it's unavoidable that this causes worse aliasing on hardware that performs more accurate optimizations.

Don't get me wrong. Taking 'too many' samples still provides higher quality due to filter imperfections and such, but this is negligible compared to taking the minimum required number of samples at the correct LOD bias. The Nyquist rate is two samples per cycle of the highest frequency present; anything more is theoretically wasted. In practice, with a tent filter it can still be worth it to avoid minor artifacts, but at this point it becomes totally subjective and is outside the specifications. If you absolutely think this is required, ATI is the better choice. If you just want mathematically sound filtering then NVIDIA is simply flawless, although it sometimes requires the application fix.
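
A quick back-of-the-envelope illustration of that point, with made-up footprint numbers (my own arithmetic, not a vendor specification):

```cpp
#include <cmath>
#include <cstdio>

// Back-of-the-envelope illustration with made-up footprint numbers.
// The tap budget is sized for bias 0; a negative bias selects a finer mip,
// so the footprint covers more texels than the budget accounts for.
int main()
{
    const float pMax = 8.0f, pMin = 2.0f;              // example footprint axes
    const int tapBudget = (int)std::ceil(pMax / pMin); // 4 taps, sized for bias 0

    for (float bias = 0.0f; bias >= -2.0f; bias -= 0.5f) {
        // Texels covered along the major axis at the biased mip level.
        float texelsCovered = (pMax / pMin) * std::exp2(-bias);
        std::printf("bias %+.1f: ~%4.1f texels under the footprint, %d taps -> %s\n",
                    bias, texelsCovered, tapBudget,
                    texelsCovered > tapBudget ? "undersampled (aliasing risk)"
                                              : "covered");
    }
    return 0;
}
```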

It's just sad that the ATI fans think it's a hardware issue 'because' their cards don't have the same characteristics. But considering the advantages of the more aggressive optimizations it's not unlikely that their future chips will use them too. Yes, I would welcome this too.
 
If I understand Nick correctly (and the fact that NV4x filters correctly at a bias of 0 is quite a bit of proof), Nvidia hardware takes the minimum number of samples required so that neutral LOD bias produces no shimmering. They (the IHV) assumed developers would use a bias of 0 if they wanted their apps to show no shimmering.

If the above is true, the issue becomes a question of whether it is correct for an IHV to assume an LOD bias of 0 as the standard (for the sake of saving performance/transistors), or whether it is correct to construct a looser, less optimized, and more hardware-demanding algorithm that allows programmers to experiment loosely with bias values without producing shimmering.

Point is:

Either the programmer conforms to the strict regulations of the hardware, for mathematically proper results, or the hardware conforms to the practices of the programmer and is designed with less optimization and more room for developer error.
 
OpenGL guy said:
Your "logic" is non-sensical. "A looks worse than B, therefor A is more accurate."
Please read more carefully what I'm saying. Yes, the NV40 does look worse with negative LOD biases, but this is caused by the formula that determines the level of anisotropy. In my theory, this formula would be closer to the minimum the specifications require, i.e. more accurate.
Now, when using a negative LOD, I'd expect the same behavior: That is, the NV40 should still be slightly blurrier compared to older chips. However, that doesn't appear to be the case. Ergo, something is wrong with negative LOD handling on the NV40.
I'm sorry, but a negative LOD bias actually means using higher resolution mipmap levels, not blurrier ones.
That is a logical argument based on observable facts. Nowhere did I appeal to a "mythical" optimization which may or may not exist. Now you give it a try.
This was a fact based on specifications. Thanks for giving me a try.
 
DaveBaumann said:
Yes it does in the respect that if you select a maximum anisotropy of, say, 8x, it'll only do 8x the normal number of samples if the position of the texture actually requires it; it'll do less if the texture needs less (this was confirmed by NVIDIA).
Thanks for the information. Could it be possible that the formula they used then was less accurate, more conservative than the one used in NV40?
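
Purely as an illustration of what 'more conservative' could mean here - I have no knowledge of either chip's actual formula - the difference might be as simple as how the tap count is rounded:

```cpp
#include <algorithm>
#include <cmath>

// Purely illustrative - not either chip's actual formula.
// A "tight" implementation takes the exact ceiling of the anisotropy ratio;
// a "conservative" one might round that up further, e.g. to a power of two.
int TightTaps(float pMax, float pMin, int maxAniso)
{
    return std::min(maxAniso, (int)std::ceil(pMax / pMin));
}

int ConservativeTaps(float pMax, float pMin, int maxAniso)
{
    int tight = (int)std::ceil(pMax / pMin);
    int rounded = 1;
    while (rounded < tight) rounded *= 2;   // round up to the next power of two
    return std::min(maxAniso, rounded);
}
// Example: for a ratio of 5, TightTaps gives 5 and ConservativeTaps gives 8.
// The conservative version has more headroom before a negative LOD bias
// starts to alias, at the cost of extra samples.
```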
 
Nick, when you use the term "accurate" in relation to NV40's AF implementation, are you referring to its efficiency at determining the minimum number of samples required for the most practical/notable results?
 
Luminescent said:
Nick, when you use the term "accurate" in relation to NV40's AF implementation, are you referring to its efficiency at determining the minimum number of samples required for the most practical/notable results?
Yes, how closely the formula follows the ideal curve that determines the minimum number of samples.
 
Luminescent said:
Understood, but why, on a given level of bias (particularly the lower levels), does the NV4x's implementation of aniso produce more shimmering than Ati's? What sort of shortcuts to the aniso method of tex filtering would produce such visible artifacts?
Taking too few samples (but from the "right" mipmap level) would cause this issue.

This has nothing to do with the angular dependency thing -- I'm referring to the shape of the AF "flower" here. You can take too few samples and still have a near perfect flower shape.

I'm quite annoyed by the knee-jerk apologetic "that's the ATI way" attempts at an explanation. ATI's AF is not less sharp at default lod bias ... but ...
both IHVs have, on top of this, extra AF "optimizations" that cause shimmering. These can be disabled on both sides of the fence with current drivers. This is not the cause of NV40's texturing issues, because you still get shimmering even if you turn off all "optimizations" and stay at the default lod bias.

These two things must be viewed separately. Competitive texture quality comparisons should be made with these "optimizations" disabled for both NVIDIA and ATI.

OTOH I like the option to disallow negative lod bias settings. It's an effective way to protect users from stupid developer decisions.
I was thinking about implementing this kind of thing myself for a while, but I'd add some more: dynamically increase AF strength in exchange for lod bias. E.g. if an application runs at 4xAF and uses a lod bias of -1, keep the lod bias at zero and bump max AF to 8x. You get the idea.
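
Something like this, as a sketch of the idea (a hypothetical driver-side policy, nothing that exists today):

```cpp
#include <algorithm>
#include <cmath>

// Hypothetical driver-side policy, not an existing feature. Instead of
// honouring a negative LOD bias, keep the bias at zero and raise the
// anisotropy level, so the extra sharpness comes from more samples instead
// of from undersampling a finer mip.
struct SamplerState { float lodBias; int maxAniso; };

SamplerState TradeBiasForAniso(SamplerState requested, int hwMaxAniso)
{
    SamplerState out = requested;
    if (requested.lodBias < 0.0f) {
        // Each -1 of bias roughly doubles the texel frequency, so double the
        // anisotropy per unit of bias that we refuse to pass on.
        float factor = std::exp2(-requested.lodBias);       // bias -1 -> 2x
        out.maxAniso = std::min(hwMaxAniso,
                                (int)std::ceil(requested.maxAniso * factor));
        out.lodBias = 0.0f;
    }
    return out;
}
// With the example from the post: { -1.0f, 4 } becomes { 0.0f, 8 } on
// hardware that supports at least 8x AF.
```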
 