Anisotropic Filtering and LOD Bias

Nvidia released the 67.02 betas today, which include the following release note on a fix for the 6800's supposed shimmer:
Nvidia said:
Added new Performance and Quality option—Negative LOD bias

This control lets the user manually set negative LOD bias to "clamp" for applications that automatically enable anisotropic filtering. Applications sometimes use negative LOD bias to sharpen texture filtering. This sharpens the stationary image but introduces aliasing when the scene is in motion.

Because anisotropic filtering provides texture sharpening without unwanted aliasing, it is desirable to clamp LOD bias when anisotropic filtering is enabled. When the user enables anisotropic filtering through the control panel, the control is automatically set to "Clamp".
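
As I read it, the new control amounts to something like the following (a conceptual sketch only, not NVIDIA's actual driver code; the function and parameter names are made up):

[code]
#include <algorithm>

// Sketch of the "Negative LOD bias: Clamp" behaviour: the application-supplied
// bias is clipped to be non-negative before it is added to the LOD derived from
// the pixel's footprint in texel space.
float computeLod(float footprintLod, float appBias, bool clampNegativeBias)
{
    float bias = clampNegativeBias ? std::max(appBias, 0.0f) : appBias;
    return footprintLod + bias; // a more negative result selects a finer mip: sharper when still, aliased in motion
}
[/code]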
Why is it that a low (negative) LOD bias visibly affects the NV40 line more so than ATI? Is it the anisotropic filtering algorithm implemented in the processor's texture units? What sort of underlying hardware issue could produce such sensitivity to a low LOD bias?
 
It's a sad thing such an option is really necessary... setting a negative LOD bias should be considered a bug (with a few rare exceptions).
 
The problem has to do with software messing with LOD biases, especially when the software developer uses NV hardware and treats it as the de facto standard.
 
I'm just wondering why NV2x, NV3x, R3xx, R4xx suffer from visibly less shimmering than NV4x when filtering low lod bias textures?

Yes we know that low lod biases produce a sharper still image and a more aliased image, when in motion, but what makes this effect more pronounced on the NV4x line?
 
Luminescent said:
I'm just wondering why NV2x, NV3x, R3xx, R4xx suffer from visibly less shimmering than NV4x when filtering low lod bias textures?

Yes we know that low lod biases produce a sharper still image and a more aliased image, when in motion, but what makes this effect more pronounced on the NV4x line?

I wish I knew; hopefully these drivers will cure the mess that is BF1942 and City of Heroes.
 
Luminescent said:
I'm just wondering why NV2x, NV3x, R3xx, R4xx suffer from visibly less shimmering than NV4x when filtering low lod bias textures?

Yes we know that low lod biases produce a sharper still image and a more aliased image, when in motion, but what makes this effect more pronounced on the NV4x line?

nV40 unfortunately uses an ATi-inspired method of anisotropic filtering. This method looks like total garbage compared to what previous nVidia GPUs could output. It's a black mark on nV40, which is otherwise a fine GPU.

We can only hope nVidia will remove this new AF mode for NV5x and go back to the old AF implementation (perhaps they can offer this new mode alongside the old method).
 
radar1200gs said:
Luminescent said:
I'm just wondering why NV2x, NV3x, R3xx, R4xx suffer from visibly less shimmering than NV4x when filtering low lod bias textures?

Yes we know that low lod biases produce a sharper still image and a more aliased image, when in motion, but what makes this effect more pronounced on the NV4x line?
nV40 unfortunately uses an ATi-inspired method of anisotropic filtering. This method looks like total garbage compared to what previous nVidia GPUs could output. It's a black mark on nV40, which is otherwise a fine GPU.
That's all very well, but given that NV40's AF method is supposedly 'ATI inspired' it doesn't really provide any answer to the original question, which was "...why do nv2x, nv3x, R3xx, R4xx suffer from visibly less shimmering than NV4x when filtering..." - if R3xx/R4xx and NV4x are using the same method then why do the ATI parts shimmer less?

You can't blame NV40's aliasing on their use of an ATI method when ATI's cards don't have the same symptoms - that makes no sense at all.

[sarcasm]Maybe it's those extra bits of LOD fraction on NV4x that are causing the increased shimmering? :oops: [/sarcasm]
 
andy pulaski - ATI said:
You can't blame NV40's aliasing on their use of an ATI method when ATI's cards don't have the same symptoms - that makes no sense at all.

Actually, "ATI's cards" (including X800 series) do have some of the same symptoms, just read some of the Half Life 2 reviews. The NV4x cards seem to have it more in Half Life 2, but that doesn't mean the issue doesn't exist on ATI cards. And from what I have heard, NV's newer driver sets greatly help to reduce, if not eliminate for some, shimmering.
 
Understood, but why, on a given level of bias (particularly the lower levels), does the NV4x's implementation of aniso produce more shimmering than Ati's? What sort of shortcuts to the aniso method of tex filtering would produce such visible artifacts?

Perhaps what I'm attempting to determine is whether this is an issue that could be fixed at a lower level - through a driver tweak that addresses the very means by which NV4x filters texels, a layer below the LOD bias clamping fix. In other words, would such a shimmering issue be fixable at the driver/compiler level, or does it most likely result from something hardwired?
 
jimmyjames123 the name mangler said:
Actually, "ATI's cards" (including X800 series) do have some of the same symptoms, just read some of the Half Life 2 reviews. The NV4x cards seem to have it more in Half Life 2, but that doesn't mean the issue doesn't exist on ATI cards. And from what I have heard, NV's newer driver sets greatly help to reduce, if not eliminate for some, shimmering.
I don't believe we suffer from the same symptoms - I am not aware of any apparent texture shimmering on ATI hardware beyond the level that is pretty much unavoidable when using a linear reconstruction filter (and yes, bilinear, trilinear and anisotropic filtering all use a linear filter). This shimmering occurs simply because a linear filter is not very good - you would need to apply a significant positive LOD bias all the time in order to avoid aliasing on all possible texture map content, but if we were to make such a high positive LOD the default then the vast majority of textures would appear far more blurry than they should. (Not to mention that we would fail WHQL, which expects the LOD to be where it is at the moment.)

From my understanding the latest NV drivers only reduce the shimmering by forcing some LOD-clamping using the control panel when anisotropic filtering is enabled. We do not clamp the LOD during anisotropic filtering - we do what the app requests, and yet we apparently do not have anything like the same amount of aliasing when using anisotropic filtering, so why is this control required even with all other optimisations 'disabled' on NV40?
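
For reference, the bias being discussed is just an offset added to the LOD the hardware derives from the pixel's footprint in texel space. A simplified, isotropic version of the textbook D3D/OpenGL-style calculation looks roughly like this (names invented for illustration; real hardware adds per-axis and anisotropic refinements):

[code]
#include <algorithm>
#include <cmath>

// dudx, dvdx, dudy, dvdy: texel-space derivatives of the texture coordinates
// across one pixel. lambda = log2(largest footprint extent) + bias; the integer
// part picks the mip level, the fraction drives trilinear blending.
float mipLambda(float dudx, float dvdx, float dudy, float dvdy, float lodBias)
{
    float lenX = std::sqrt(dudx * dudx + dvdx * dvdx); // footprint extent along screen x
    float lenY = std::sqrt(dudy * dudy + dvdy * dvdy); // footprint extent along screen y
    float rho  = std::max(lenX, lenY);                 // texels covered per pixel
    return std::log2(rho) + lodBias;                   // bias < 0 selects a finer mip than the footprint warrants
}
[/code]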
 
Luminescent said:
Understood, but why, on a given level of bias (particularly the lower levels), does the NV4x's implementation of aniso produce more shimmering than Ati's? What sort of shortcuts to the aniso method of tex filtering would produce such visible artifacts?
It's probably just using another approximation formula to get the anisotropy level. Just like ATI uses bad approximations for mipmap level.

Fixable at the driver level? Yes, that's exactly what they did. But I assume it's not fixable at the hardware level without increased transistor cost. Plain and simple, there are compromises that have to be made. All in all I think the GeForce 6 Series is magnificent.
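
If that guess is right, the difference would come down to how the anisotropy degree is estimated from the same texture-coordinate derivatives. Something along these lines, purely as an illustration of the kind of estimate involved (not either vendor's actual formula; names made up):

[code]
#include <algorithm>
#include <cmath>

// Crude anisotropy estimate: ratio of the longer to the shorter extent of the
// pixel footprint in texel space, capped at the user-selected maximum. A cheaper
// approximation of the footprint ellipse can underestimate the minor axis, pick
// a mip that is too fine, and shimmer once a negative bias is stacked on top.
float anisotropyDegree(float dudx, float dvdx, float dudy, float dvdy, float maxAniso)
{
    float lenX  = std::sqrt(dudx * dudx + dvdx * dvdx);
    float lenY  = std::sqrt(dudy * dudy + dvdy * dvdy);
    float major = std::max(lenX, lenY);
    float minor = std::max(std::min(lenX, lenY), 1e-6f); // avoid division by zero
    return std::min(major / minor, maxAniso);
}
[/code]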
 
nVidia has less experience with this angle-dependent AF style than ATi does (ATi has been using it since R200), and therefore ATi has a lot more experience optimizing and hiding the shortcomings of the method than nVidia.

Just think about how brilinear evolved. At first it was horribly noticeable; now you have great trouble telling it apart from real trilinear.

Even if nVidia can vastly improve the AF quality on nV40, the option should be there to turn it off entirely if the user doesn't want it.

My comments on giving users choice apply to all IHVs, and as I've said from the very start (nV40's debut), the angle-dependent AF was a mistake on nVidia's part.
 
Nick said:
It's probably just using another approximation formula to get the anisotropy level. Just like ATI uses bad approximations for mipmap level.

Fixable at driver level? Yes, that's exactly what they did. But I assume it's not fixable at hardware level without increased transistor cost. Plain and simply, there are compromises that have to be made. All in all I think the Geforce 6 Series are magnificent.
Clamping all negative LOD Biases to 0 can hardly be regarded as a 'fix'.

And if, as you suggest, they are using an approximation formula that is causing such major artifacts then why have they previously claimed that things like an 8-bit LOD fraction are actually important - it just doesn't make any sense at all to me.
 
I agree with both Andy and Nick, in part, for the time being. That said, I have trouble finding any other shortcomings in the NV4x line, just as I had trouble finding any with R3xx (never owned R4xx). It's just that this shimmering issue has become most evident to me this gen and is somewhat bothersome to the perfectionist in me.

I hope texture shimmering, whether in normal maps or on textures with even a neutral LOD bias, goes away in its entirety by next gen.
 
andypski said:
Clamping all negative LOD Biases to 0 can hardly be regarded as a 'fix'.
It can absolutely. Adaptive anisotropic filtering works on the assumption that the application uses a correct LOD bias. In most cases, it shouldn't be negative. The reason ATI's anisotropic filtering might be 'better' is that it's more conservative. It could, even with a slightly negative LOD bias, still take more samples than would be strictly required. Looking at it this way, it's quite possible that NVIDIA's approach is actually superior theoretically. It only fails in practice when negative LOD biases are used.

This is only natural. It's very comparable to multi-sampling. It's an optimization hack, which works well in 99% of all situations and everybody appreciates the good performance characteristics. In cases where it fails, artificially modifying the input is the only fix.
And if, as you suggest, they are using an approximation formula that is causing such major artifacts then why have they previously claimed that things like an 8-bit LOD fraction are actually important - it just doesn't make any sense at all to me.
Well, like I said, it's quite possible they do use a more accurate formula. But this puts them right on the edge of conservative accuracy. When negative LOD biases are used, this fails.

At a certain point ATI chips start showing shimmer too, or don't they?
 
Nick said:
When negative LOD biases are used, this fails.
Actually I can't even call this failing. When you use a negative LOD bias you ask for texture shimmering, whether anisotropic filtering is on or off. You literally use a mipmap level that is too detailed for the filter. In a certain way, if the aliasing doesn't happen, the hardware has failed to exploit the optimization opportunity. So it's an application problem. NVIDIA gave us the option to fix this bad programming at the driver level.
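
To put rough numbers on that (a hypothetical case, just to illustrate the mechanics): if the pixel footprint works out to LOD 3.0, the filter normally reads mip 3, where one texel maps to roughly one pixel. With a -1.0 bias it reads mip 2 instead, so about a 2x2 block of texels falls under each pixel, and as the scene moves a different texel dominates the filtered result from frame to frame - which is exactly the shimmer being described.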
 
Nick said:
andypski said:
Clamping all negative LOD Biases to 0 can hardly be regarded as a 'fix'.
It can absolutely. Adaptive anisotropic filtering works on the assumption that the application uses a correct LOD bias. In most cases, it shouldn't be negative. The reason ATI's anisotropic filtering might be 'better' is that it's more conservative. It could, even with a slightly negative LOD bias, still take more samples than would be strictly required. Looking at it this way, it's quite possible that NVIDIA's approach is actually superior theoretically. It only fails in practice when negative LOD biases are used.
That doesn't make any sense to me - all texture filtering works on the assumption that the application uses a 'correct' LOD bias. If the LOD bias is too negative you will get aliasing; if it is too positive you will get blurring. Ignoring the bias set by the application is not a 'fix' at all - an approach that breaks badly when the application sets a mildly negative LOD bias is definitely not 'superior'. It is perfectly legal for an application to set a negative LOD bias - clamping that bias to 0, however, is not legal.

This is only natural. It's very comparable to multi-sampling. It's an optimization hack, which works well in 99% of all situations and everybody appreciates the good performance characteristics. In cases where it fails, artificially modifying the input is the only fix.
Of course, disabling the 'optimisation hack' could be the fix, rather than ignoring the negative bias so that the optimisation hack still works...

If the optimisation hack can't be disabled then clamping the LOD to zero will remove the aliasing, but it's not a 'fix'. It's removed some artifacts at the expense of breaking the behaviour of the LOD bias control.

And if, as you suggest, they are using an approximation formula that is causing such major artifacts then why have they previously claimed that things like an 8-bit LOD fraction are actually important - it just doesn't make any sense at all to me.
Well, like I said, it's quite possible they do use a more accurate formula. But this puts them right on the edge of conservative accuracy. When negative LOD biases are used, this fails.
Well, that's great - this mysterious 'possibly more accurate' formula whose existence is impossible to prove or disprove results in totally unnecessary shimmering when minor levels of negative LOD are applied. That sounds like a broken formula rather than a more accurate one to me.

ATI hardware must be doing really well to have such good comparative performance with anisotropic filtering while at the same time apparently taking more samples than are strictly necessary to avoid aliasing.

At a certain point ATI chips start showing shimmer too, or don't they?
Of course they do - as the LOD bias becomes more negative, aliasing will gradually increase, however our hardware seems to have no need for this negative bias clamping, and our LOD bias control works in accordance with the D3D specification.
 
I skipped around when reading this thread so please forgive me if I repeat what has already been said.

In another thread on the shimmering topic we were looking at UT2004 and how it uses a negative LOD bias. By altering this negative value in the UT2004.ini configuration file we were able to alleviate the problem. Of course this had to be the test candidate for this new ForceWare option that I have been so desperately waiting for. Sure enough, using the default configuration (DefaultTexMipBias=-0.500000) rather than setting this value to 0 or 0.2 (which I found to be best), and enabling the "clamp" feature in 67.02, the problem is now gone. It looks and plays great. Woohoo!

Now, how can they fix the moiré I am seeing in so many games? Try the Ponyri level in Call of Duty: United Offensive. The train station is a moiré mess when using 'extra' textures, and the iron sights on the MP44 are always a shimmering mess when in the lowered position (not aiming down the sights).
 
Nick said:
Actually I can't even call this failing. When you use a negative LOD bias you ask for texture shimmering, whether anisotropic filtering is on or off. You literally use a mipmap level that is too detailed for the filter. In a certain way, if the aliasing doesn't happen, the hardware has failed to exploit the optimization opportunity. So it's an application problem. NVIDIA gave us the option to fix this bad programming at the driver level.
Please go and tell Tim Sweeney that his programming is bad, since UT2003 uses a negative LOD bias on some textures by default - maybe he did this for no reason and simply decided to break his application, but I suspect that he actually wanted it that way.

If you apply a pre-filter and band-limit your texture data it is certainly possible to apply negative LOD without causing visible aliasing - I guess some programmers might use methods like this to try to manipulate the hardware's filtering effects for what they regard as a more pleasing look. If you then clamp the LOD, you've broken the look that they were trying to create.
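
For anyone wondering what "pre-filter and band-limit" means in practice, here is a minimal sketch (single channel, made-up function name, and a crude 3x3 box blur standing in for a proper low-pass filter): blur the base level before generating mipmaps, so the highest frequencies are gone and a mildly negative bias has much less left to alias on.

[code]
#include <cstdint>
#include <vector>

// Band-limit the base texture level with a 3x3 box blur before mip generation.
// Real content pipelines would use a better low-pass (e.g. Gaussian) filter and
// handle multiple channels, but the principle is the same.
std::vector<std::uint8_t> boxBlur(const std::vector<std::uint8_t>& src, int width, int height)
{
    std::vector<std::uint8_t> dst(src.size());
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x)
        {
            int sum = 0, count = 0;
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx)
                {
                    int sx = x + dx, sy = y + dy;
                    if (sx < 0 || sy < 0 || sx >= width || sy >= height) continue;
                    sum += src[sy * width + sx];
                    ++count;
                }
            dst[y * width + x] = static_cast<std::uint8_t>(sum / count);
        }
    return dst;
}
[/code]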

An application is perfectly within its rights to use a negative LOD bias if it so chooses, otherwise there would not be provision for negative LOD biases in the spec. It's not an application problem, and the same applications basically cause no real problems on ATI hardware - they are behaving within the API specification.
 
andypski said:
It is perfectly legal for an application to set a negative LOD bias...
Absolutely. But you can't expect that not to cause shimmering. Set your LOD bias to -1 and it will definitely not look good on ATI chips either. NVIDIA just uses a lower tolerance, for good reason.
...clamping that bias to 0, however, is not legal.
Who's talking about legal here? Any method to fix what an application does wrong is in my eyes a good thing. And it's optional. If you like the effect a negative LOD bias gives, by all means, disable the clamping.
It's removed some artifacts at the expense of breaking the behaviour of the LOD bias control.
It doesn't. Negative LOD bias gives you shimmering on NVIDIA and ATI chips. One is just more sensitive than the other, but not incorrect.
Well, that's great - this mysterious 'possibly more accurate' formula whose existence is impossible to prove or disprove results in totally unnecessary shimmering when minor levels of negative LOD are applied. That sounds like a broken formula rather than a more accurate one to me.
I suggest reading my explanation again of why this happens. If I'm correct, then NVIDIA's anisotropic filtering is better optimized, not broken. ATI's implementation is less optimized but has the advantage of showing fewer artifacts when the application is badly written.

You could always render at 3200x2400 resolution and downsample that to 800x600. There won't be any shimmering at all, but this just isn't efficient. I'm sure ATI wishes they had the same optimization so their performance would be a few percent higher, and they would gladly add the fix to keep the LOD bias from going negative.
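
For what it's worth, that brute-force approach is just ordered-grid supersampling. A minimal sketch of the 4x4 box downsample it implies (single channel, made-up function name):

[code]
#include <cstddef>
#include <cstdint>
#include <vector>

// Box-filter every 4x4 block of the high-resolution image down to one output
// pixel, e.g. 3200x2400 -> 800x600. This band-limits everything, including
// negatively biased textures, at a very large fill-rate and bandwidth cost.
std::vector<std::uint8_t> downsample4x(const std::vector<std::uint8_t>& hi, int outWidth, int outHeight)
{
    std::vector<std::uint8_t> lo(static_cast<std::size_t>(outWidth) * outHeight);
    const int hiWidth = outWidth * 4;
    for (int y = 0; y < outHeight; ++y)
        for (int x = 0; x < outWidth; ++x)
        {
            int sum = 0;
            for (int sy = 0; sy < 4; ++sy)
                for (int sx = 0; sx < 4; ++sx)
                    sum += hi[(y * 4 + sy) * hiWidth + (x * 4 + sx)];
            lo[static_cast<std::size_t>(y) * outWidth + x] = static_cast<std::uint8_t>(sum / 16);
        }
    return lo;
}
[/code]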
ATI hardware must be doing really well to have such good comparative performance with anisotropic filtering while at the same time apparently taking more samples than are strictly necessary to avoid aliasing.
Given the higher clock frequency and fewer features, this isn't much of a surprise to me.
Of course they do - as the LOD bias becomes more negative, aliasing will gradually increase, however our hardware seems to have no need for this negative bias clamping, and our LOD bias control works in accordance with the D3D specification.
Coincidentally, the games mentioned use a negative LOD bias that doesn't cause bad effects on ATI chips. Still, NVIDIA's method is totally in accordance with the DirectX specification.
 