Anisotropic Filtering and LOD Bias

Nick said:
Besides, as noted before, ATI chips also show some shimmering, just less noticeable. This is totally in accordance with my theory of NVIDIA using a more accurate formula. And most of all it clearly shows that it is indeed an application problem.
How can more shimmering be more accurate?
Texture filtering is not a new idea. A RivaTNT can do perfect trilinear filtering. A Geforce 2 can do perfect 2xAF. They go by the book, and produce the expected shimmer-free results. That's what I call accurate.
Calling a newer method that produces a worse image "more accurate" just doesn't make any sense. The opposite would.
 
I'd be interested in someone trying out various Nvidia drivers on the NV4x, all the way back to the original driver release (61.45 Beta??) that supported NV4x, to see when this aliasing started. I don't remember seeing any mention of aliasing issues when reading the initial 6800 reviews. If the aliasing was there when using the original drivers then maybe it's a hardware problem; if it only shows up in later driver releases then it's probably Nvidia up to their old NV3x tricks again of trading quality for performance. Anyone with a 6800 looking for something to do?
 
Nick said:
OpenGL guy said:
Your "logic" is non-sensical. "A looks worse than B, therefor A is more accurate."
Please read more carefully what I'm saying. Yes, the NV40 does look worse with negative LOD biases, but this is caused by the level-of-anisotropy determination formula. In my theory, this formula would be closer to the minimum specifications, i.e. more accurate.
Your argument is still nonsensical. What you are saying is that because it shows more aliasing, it is more optimized/accurate. Talk about fallacy...
Now, when using a negative LOD, I'd expect the same behavior: That is, the NV40 should still be slightly blurrier compared to older chips. However, that doesn't appear to be the case. Ergo, something is wrong with negative LOD handling on the NV40.
I'm sorry, but a negative LOD bias actually means using higher resolution mipmap levels, not blurrier ones.
Did you read the sentences beforehand? Do you see that I am comparing the output of the NV40 vs. previous generation chips? I've highlighted it to make it clear.
That is a logical argument based on observable facts. Nowhere did I appeal to a "mythical" optimization which may or may not exist. Now you give it a try.
This was a fact based on specifications. Thanks for giving me a try.
You have based nothing on specifications whatsoever! All you have done is claim that "more aliased is more correct". Prove it!
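Much of the disagreement above hinges on what LOD bias actually does to mipmap selection. As a minimal sketch in the style of the textbook OpenGL minification LOD equation (function and parameter names are mine, and `num_levels` is an arbitrary assumption):

```python
import math

def mip_level(texels_per_pixel: float, lod_bias: float = 0.0,
              num_levels: int = 10) -> float:
    # lambda = log2(rho) + bias, clamped to the available mip range,
    # in the style of the OpenGL minification LOD equation.
    lam = math.log2(max(texels_per_pixel, 1e-9)) + lod_bias
    return min(max(lam, 0.0), float(num_levels - 1))

# A pixel covering a 4-texel-wide footprint selects level 2 at bias 0.
print(mip_level(4.0))        # 2.0
# A negative bias selects a lower (higher-resolution) level, i.e. more
# texel detail per pixel and therefore more aliasing risk:
print(mip_level(4.0, -0.5))  # 1.5
```

This is why a negative bias means sharper, not blurrier, sampling: it pushes the selection toward the higher-resolution end of the mip chain.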
 
Khronus said:
I'd be interested in someone trying out various Nvidia drivers on the NV4x, all the way back to the original driver release (61.45 Beta??) that supported NV4x, to see when this aliasing started. I don't remember seeing any mention of aliasing issues when reading the initial 6800 reviews. If the aliasing was there when using the original drivers then maybe it's a hardware problem; if it only shows up in later driver releases then it's probably Nvidia up to their old NV3x tricks again of trading quality for performance. Anyone with a 6800 looking for something to do?
I don't think the aliasing could result in performance gains. On a graphics card it's generally better to prevent aliasing, since that improves texture cache hits. No, it's most likely a result of the angle-dependent anisotropy that they have implemented.
 
Nick said:
Could it be possible that the formula they used then was less accurate, more conservative than the one used in NV40?

While I can't comment on the exact algorithm(s) they used, since I don't know (although IIRC Digit Life / xbit have an article on it), from my knowledge of NV2x (and before) I would say almost certainly not. NV2x appeared to stick fairly rigidly to the methods suggested by the OpenGL specification - I would say that the GF4 is probably one of the closest things you are going to see to a texture filtering reference on a consumer board (when LODs and the like aren't futzed around with).
 
zeckensack said:
How can more shimmering be more accurate?
Sigh. Sorry but please read all my posts again. I explained this more than enough.
Texture filtering is not a new idea. A RivaTNT can do perfect trilinear filtering. A Geforce 2 can do perfect 2xAF. They go by the book, and produce the expected shimmer-free results.
You can 'go by the book' and still over-sample. That's the conservative approach. In that case you'll reduce aliasing for slightly negative LOD biases. It doesn't mean that when aliasing does happen we're out of specification!
That's what I call accurate. Calling a newer method that produces a worse image "more accurate" just doesn't make any sense. The opposite would.
A Geforce 2 will still cause shimmering when the LOD bias is too big. Do you call that accurate? No, it's just false to expect it to be shimmer-free. A negative LOD bias equals selecting a mipmap that will introduce aliasing.
 
OpenGL guy said:
Your argument is still nonsensical. What you are saying is that because it shows more aliasing, it is more optimized/accurate. Talk about fallacy...
How many times do I have to repeat this? One last try: yes, a more accurate formula can cause a lower quality result. It also works the other way around: if the anisotropic filtering formula were horribly 'inaccurate' and always took 256 samples, the image quality would be fantastic. Fallacy? No. Mathematical fact. Period.
Did you read the sentences beforehand? Do you see that I am comparing the output of the NV40 vs. previous generation chips? I've highlighted it to make it clear.
Yes I read what you said. You said NV40 should be more blurred, less aliased than previous chips. There is absolutely nothing that specifies that this is required.
 
Chalnoth said:
I don't think the aliasing could result in performance gains.
The aliasing doesn't occur when the LOD bias is clamped.
Typically it's always better on a graphics card to prevent aliasing to improve texture cache hits. No, it's most likely a result of the angle-dependent anisotropy that they have implemented.
Taking more samples costs more time, doesn't it? And it can still be bandwidth limited internally. So I think it does make a lot of sense to make the anisotropy determination formula as accurate as possible.
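The "clamp" option the thread keeps referring to can be sketched as follows. This is only an illustration of the behavior described here (forcing the application's negative bias to zero), not NVIDIA's actual driver code, and all names are mine:

```python
def effective_lod(base_lod: float, app_bias: float,
                  clamp_negative_bias: bool) -> float:
    # With the clamp on, any negative application bias is forced to
    # zero (blurrier, less shimmer); positive biases pass through.
    bias = max(app_bias, 0.0) if clamp_negative_bias else app_bias
    return base_lod + bias

# An application asking for bias -1.0 on a level-2 footprint:
print(effective_lod(2.0, -1.0, clamp_negative_bias=False))  # 1.0
print(effective_lod(2.0, -1.0, clamp_negative_bias=True))   # 2.0
```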
 
DaveBaumann said:
While I can't comment on the exact algorithm(s) they used, since I don't know (although IIRC Digit Life / xbit have an article on it), from my knowledge of NV2x (and before) I would say almost certainly not. NV2x appeared to stick fairly rigidly to the methods suggested by the OpenGL specification - I would say that the GF4 is probably one of the closest things you are going to see to a texture filtering reference on a consumer board (when LODs and the like aren't futzed around with).
Thanks again for the information! This doesn't explain the observations though. Isn't it likely that the OpenGL specifications suggest an implementation that is still conservative?

If someone has a better explanation please let me know...
 
Nick said:
zeckensack said:
How can more shimmering be more accurate?
Sigh. Sorry but please read all my posts again. I explained this more than enough.
You explained a lot of things, but I didn't find a good explanation for "more accurate". Sorry :?
Nick said:
You can 'go by the book' and still over-sample. That's the conservative approach. In that case you'll reduce aliasing for slightly negative LOD biases. It doesn't mean that when aliasing does happen we're out of specification!

A Geforce 2 will still cause shimmering when the LOD bias is too big. Do you call that accurate? No, it's just false to expect it to be shimmer-free. A negative LOD bias equals selecting a mipmap that will introduce aliasing.
You're missing the point, which is that NV4x has issues with the lod bias where it belongs, at zero.
Besides, you should try. Even with a "reference quality" implementation you're getting into shimmer land very quickly, with high frequency texture content. I still have a Geforce 3 in use and it starts falling apart at -0.25. Honest.

The problem is that you can't take half a sample. Sample counts are inherently integers. Which means that yes, you might have to take "half" a sample too many to avoid artifacts. And you may well call that conservative, no argument here. This still doesn't explain why you believe the solution that produces artifacts at, let me repeat, lod bias zero to be more accurate.

You must round sample counts to integers. You either round up (conservative) or you round down (aggressive). There is no "accurate" middle ground.
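The round-up/round-down distinction above can be made concrete with a toy sketch (names are mine; real hardware derives the anisotropy ratio from the pixel footprint, which is elided here):

```python
import math

def samples_conservative(anisotropy: float) -> int:
    # Round the real-valued anisotropy ratio up: never undersamples,
    # but may take "half a sample too many".
    return math.ceil(anisotropy)

def samples_aggressive(anisotropy: float) -> int:
    # Round down: cheaper, but undersamples any fractional ratio.
    return math.floor(anisotropy)

ratio = 5.7  # true anisotropy of the pixel footprint
print(samples_conservative(ratio))  # 6
print(samples_aggressive(ratio))    # 5
```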
 
zeckensack said:
Texture filtering is not a new idea. A RivaTNT can do perfect trilinear filtering. A Geforce 2 can do perfect 2xAF. They go by the book, and produce the expected shimmer-free results. That's what I call accurate.

Just curious, where is this "book" specification that defines exactly how AF should work? A few years back, when I took one of my rendering technique classes, the professor specifically stated about AF that there is no real standard way of doing it, and that a lot of companies (both software and hardware renderers) keep their AF algorithms secret. Yeah, the basic idea is that you take more samples within the texture area the pixel covers, at a higher-LOD mipmap. Oftentimes it's done with an ellipse instead of the quadrilateral the area really is - so are algorithms using ellipses not perfect? Should the sampling be regular? Should you adjust the number of samples and use lower-LOD mipmaps at certain angles? There are plenty more questions you could ask about how to implement it, and the results could still clearly be called AF.
 
zeckensack said:
You're missing the point, which is that NV4x has issues with the lod bias where it belongs, at zero.
Is that so?
I only know NV4x has "shimmering issues" from quite a lot of reports on the web. However, I haven't seen any thorough analysis yet (I'd do it if I had the hardware ;)). What I know is that it doesn't shimmer everywhere.

But IF this LOD bias clamp setting fixes the shimmering issues, it would indicate that NV4x renders correctly at LOD bias 0, and that its negative LOD bias implementation either does exactly what you should expect it to do, or is broken. Which would be a bad thing, but such things happen, as you can see with the broken implementation of positive LOD bias in R3x0.

Besides, you should try. Even with a "reference quality" implementation you're getting into shimmer land very quickly, with high frequency texture content. I still have a Geforce 3 in use and it starts falling apart at -0.25. Honest.
With the highest frequency texture content, you're already in shimmer land at LOD bias zero, on any card I know of.
 
zeckensack said:
You explained a lot of things, but I didn't find a good explanation for "more accurate". Sorry :?
One of my replies to OpenGL guy explains it quite clearly in my opinion:
Nick said:
Yes, the NV40 does look worse with negative LOD biases, but this is caused by the level-of-anisotropy determination formula. In my theory, this formula would be closer to the minimum specifications, i.e. more accurate.
Or my reply to Luminescent:
Yes, how closely the formula follows the ideal curve that determines the minimum number of samples.
You're missing the point, which is that NV4x has issues with the lod bias where it belongs, at zero.
That's new to me; where did you get this info? Please define 'issues' without referring to previous implementations, but to math and specifications.

Either way, if the NV40's anisotropic formula is more accurate, it's unavoidable that there is a little more aliasing. The Nyquist rate is more than twice the maximum frequency and assumes a sinc filter. As I explained before, taking more samples still improves quality when using tent filters. Anyway, according to jimmyjames123, ATI isn't entirely free of artifacts either. It's simply unavoidable with limited filter sizes. The LOD bias clamp solves all excessive aliasing problems on NVIDIA hardware.
Besides, you should try. Even with a "reference quality" implementation you're getting into shimmer land very quickly, with high frequency texture content. I still have a Geforce 3 in use and it starts falling apart at -0.25. Honest.
Sorry, I currently only have a Radeon 9000. I'd love to have an NV4X to test whether it really breaks specifications. Whether or not a Geforce 3 tolerates a LOD bias of -0.25 is irrelevant. All I can do now is deduce an explanation from what others observed. If someone could perform the analytical tests for me that would be great.
The problem is that you can't take half a sample. Sample counts are inherently integers. Which means that yes, you might have to take "half" a sample too many to avoid artifacts. And you may well call that conservative, no argument here.
Indeed, the number of samples should always be rounded up. But then there can still be many reasons why other implementations are 'less accurate'. For example, it could be that at an anisotropy level of, say, 5.7, ATI rounds this up to the nearest power of two (8), while NV40 rounds to 6. Another possibility is that while the true anisotropy is 5.7, a conservative approximation formula could turn that into, for example, 6.1, so ATI takes at least 7 samples, while NV40's approximation is 5.8 and only 6 samples are taken. Can anyone confirm or disprove either of the two?
You must round sample counts to integers. You either round up (conservative) or you round down (aggressive). There is no "accurate" middle ground.
I hope my detailed explanation shows how an "accurate middle ground" is actually possible. Your definition of 'aggressive', rounding down, is definitely the wrong approach. If anyone can prove that NV40 does this, then it's indeed a real hardware issue. For now, the fact that LOD bias clamping gives aliasing artifacts hardly distinguishable from ATI's artifacts indicates to me that the 'aggressive' optimizations are more likely 'more accurate' optimizations that still round up.
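Nick's two hypothetical mechanisms can be illustrated numerically. This is purely a sketch of the hypotheses stated in the post; neither vendor's actual rounding behavior is known here, and all names are mine:

```python
import math

def samples_pow2_ceil(anisotropy: float) -> int:
    # Hypothesis 1: round the true ratio up to the next power of two.
    return 2 ** math.ceil(math.log2(anisotropy))

def samples_ceil(anisotropy: float, estimate_error: float = 0.0) -> int:
    # Hypothesis 2: round up a conservatively (or tightly) estimated
    # ratio; estimate_error models the approximation formula's slack.
    return math.ceil(anisotropy + estimate_error)

true_ratio = 5.7
print(samples_pow2_ceil(true_ratio))   # 8  (power-of-two rounding)
print(samples_ceil(true_ratio, +0.4))  # 7  (conservative estimate, 6.1)
print(samples_ceil(true_ratio, +0.1))  # 6  (tighter estimate, 5.8)
```

Both variants still round up, yet they take 8, 7, and 6 samples respectively for the same footprint, which is the "accurate middle ground" being argued for.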

Thank you.
 
If you read Nick's posts and ignore any personal semantic conceptions of the word "accurate", you'll find that his use of the word in this context is quite interchangeable with "efficient"; he is using the term to describe a well-optimized solution that does neither too much nor too little to attain shimmer/artifact-free results.

Of course, there is an assumption that has to be made before one decides what is too much or too little (in terms of samples). It seems very plausible that Nvidia's assumption was a neutral LOD bias, and that they optimized (chose the most efficient solution for displaying shimmer-free results) from there.

Perhaps we can come to a more decisive conclusion if we confirm that, at an LOD bias of 0, NV4x displays no shimmering, or at least an amount equivalent to that presented by the R4xx series.
 
I think this is a bit of a cheap shot on NV's part; it forces developers to

A) Go with the negative lod we have been using and deal with angry NV users

B) Set it to zero and force ati users to oversample a little bit

or

C) Expect NV users to turn this feature on to fix the problem

It just seems like the best case for ATI is for things to stay the same, and the worst case is they lose 1 or 2 fps. I don't know if NV did this on purpose, but this is one of those basic things you would hope both cards had implemented the same.
 
Chalnoth said:
Khronus said:
I'd be interested in someone trying out various Nvidia drivers on the NV4x, all the way back to the original driver release (61.45 Beta??) that supported NV4x, to see when this aliasing started. I don't remember seeing any mention of aliasing issues when reading the initial 6800 reviews. If the aliasing was there when using the original drivers then maybe it's a hardware problem; if it only shows up in later driver releases then it's probably Nvidia up to their old NV3x tricks again of trading quality for performance. Anyone with a 6800 looking for something to do?
I don't think the aliasing could result in performance gains. On a graphics card it's generally better to prevent aliasing, since that improves texture cache hits. No, it's most likely a result of the angle-dependent anisotropy that they have implemented.

I'd still like to see someone test the old drivers to see if it occurred. If the old drivers turn up shimmer-free, then, if not for performance, what other reason would Nvidia have to change it? It's not like people are happy with the shimmering, and if it were a bug you'd think they would have tracked it down by now.
 
There is nothing magical about LOD 0.

There isn't even anything special about LOD 0.

It is not the mystical point at which no aliasing occurs. Perhaps with an infinitely wide filter kernel it would be, but we don't have that.

If you feed certain textures into any current piece of 3D hardware (of which I am aware) at LOD 0 you can generate aliasing.

If a piece of hardware aliases worse at LOD -0.5 than another piece of hardware then by extension it will also alias worse at LOD 0.
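The "nothing magical about LOD 0" point can be illustrated with a one-liner: a LOD bias simply scales the texel frequency seen per pixel by a power of two, so changing the bias slides smoothly along one aliasing curve rather than crossing some threshold at zero (function name is mine):

```python
def texel_frequency_scale(lod_bias: float) -> float:
    # Relative spatial frequency of the sampled texture versus bias 0:
    # each -1.0 of bias doubles the texel frequency per pixel.
    return 2.0 ** (-lod_bias)

# At bias -0.5 the texture content is ~1.41x denser per pixel...
print(round(texel_frequency_scale(-0.5), 3))  # 1.414
# ...and bias 0 is just the 1.0x point on the same continuum.
print(texel_frequency_scale(0.0))             # 1.0
```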

Nick said:
Don't get me wrong. Taking 'too many' samples still provides higher quality due to filter imperfections and such, but this is negligible compared to taking the minimum required number of samples at the correct LOD bias. The Nyquist rate is two; anything more is theoretically wasted. In practice, with a tent filter, it can still be worth it to avoid minor artifacts, but at this point it becomes totally subjective and is outside the specifications. If you absolutely think this is required, ATI is the better choice. If you just want mathematically sound filtering, then NVIDIA is simply flawless, although it sometimes requires the application fix.
That's fine, but the problem is that all hardware is already effectively taking too few samples at LOD 0 due to the use of a linear filter, not the 'minimum required number' - you and I have both pointed this out in this thread, and yet in the same paragraph you also claim that NVIDIA's filtering is flawless because it is taking even fewer samples than previous implementations.

Maybe their filtering is flawless, but the same can hardly be said for this line of reasoning. Naturally you would get increased performance from taking fewer samples, but this would come at the expense of increased aliasing all the time when compared to older generations. Hardly a step forward in terms of quality, I should think.

If you want mathematically flawless filtering then you aren't going to get it with a linear filter at LOD 0.

A sinc filter is far better at reconstruction than a linear filter - the artifacts are not "totally subjective", they are visible, and they show up as aliasing. I could just as easily argue that all applications are bugged because they don't set a LOD of +1.0 - this would eliminate even more aliasing than clamping to 0.
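The sinc-versus-linear point can be quantified: the tent (triangle) filter underlying bilinear reconstruction has magnitude response sinc²(f), which still passes roughly 40% at the Nyquist frequency, whereas an ideal sinc (brick-wall) filter would pass nothing there. A small sketch (function name is mine):

```python
import math

def tent_response(f: float) -> float:
    # Magnitude response of the width-2 triangle (tent) filter:
    # sinc(f)^2, with sinc(f) = sin(pi*f) / (pi*f).
    if f == 0.0:
        return 1.0
    s = math.sin(math.pi * f) / (math.pi * f)
    return s * s

# At the Nyquist frequency (0.5 cycles/sample) the tent filter still
# leaks about 40% of the signal, which shows up as visible aliasing.
print(round(tent_response(0.5), 3))  # 0.405
```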

Assuming that nV40's LOD bias control is producing a linear bias of the mip-map calculation as it should (ie. +1 moves you exactly 1 step down the mip chain) then it is inescapable that if it is aliasing more at LOD -0.5 then it is also aliasing more at LOD 0.

Is the shimmering gone with the LOD clamp set, or has it just reduced to about the same level as that in R4xx with the LOD set to -x.y as the application requested?

The proof is exactly the shimmering and the fact that clamping the LOD bias solves it
That is not proof of anything, and certainly is no proof of your theory - the only thing this tells us is that the shimmering is reduced, and of course the shimmering is reduced at LOD 0 compared to LOD -0.5. There would be even less shimmering at LOD 1, but that doesn't prove anything either except that LOD bias is functioning.

Your whole argument is that NV40's solution is more accurate and takes the minimum number of samples and therefore aliases worse. It is completely interchangeable with an argument that other hardware was already taking the minimum number of samples (or, in fact, rather too few samples according to Nyquist) and that the NV40 solution is therefore simply deviating even further from producing the correct sampling by taking even fewer samples, even at LOD 0.

Both theories would give rise to increased aliasing on NV40 at negative LODs, and both would be affected the same way by clamping the LOD to 0.

So, does NV40 alias more at LOD 0 than other hardware?

If it does alias worse, then the clamp to LOD 0 is just mitigating the symptoms. Perhaps a clamp to LOD 0.5 would be better so that users don't get subjected to more aliasing at the standard LOD setting than on other hardware? Might make some textures a bit blurry though.

If it doesn't alias worse than other hardware at LOD 0 then the question is - why does it alias so much worse at negative LOD, when the relationship between the amount of aliasing and the negative LOD bias applied should be basically linear?
 
andypski said:
So, does NV40 alias more at LOD 0 than other hardware?

IMO - absolutely yes.

I am seriously thinking about abandoning the 6800GT for an X800XT, as the hoops I have to go through to reduce shimmering in certain apps, only to end up with blurry textures and still some shimmering, are so annoying.

NV40 behaviour in BF1942 and CoH is far, far worse than on my 9700 Pro or 8500; the moire on textures is far worse, e.g. floor grills in D3. And both shimmering and moire get worse as AF is increased, not better. Increasing the LOD to 0.5 and using SSAA isn't enough either.

I'll try these new drivers and maybe this fix will stay my hand.
 
Khronus said:
I'd still like to see someone test the old drivers to see if it occurred. If the old drivers turn up shimmer-free, then, if not for performance, what other reason would Nvidia have to change it? It's not like people are happy with the shimmering, and if it were a bug you'd think they would have tracked it down by now.
I noticed the problem in the drivers that shipped with my GeForce 6800, so I still contend that it has more to do with the angle-dependent anisotropy than anything.
 
flick556 said:
I think this is a bit of a cheap shot on NV's part, it forces developers to

A) Go with the negative lod we have been using and deal with angry NV users

B) Set it to zero and force ati users to oversample a little bit
B) is the stupidest shit I've ever heard. LOD bias should pretty much always be set to zero. It's the f'in default!
 