Nick said:
andypski said:
It is perfectly legal for an application to set a negative LOD bias...
Absolutely. But you can't expect that not to cause shimmering. Set your LOD bias to -1 and it will definitely not look good on ATI chips either. NVIDIA just uses a lower tolerance, for good reason.
This comment can't be serious. What good reason would that be? Didn't Andy already explain cases where negative LOD may be desirable?
...clamping that bias to 0, however, is not legal.
Who's talking about legal here? Any method to fix what an application does wrong is, in my eyes, a good thing. And it's optional. If you like the effect a negative LOD bias gives, by all means, disable the clamping.
Why blame the app? Does this problem occur on GeForce 4 chips? What about FX? If not, then I don't think the problem lies in the app.
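For clarity, here's a rough sketch (in Direct3D 9 terms; the helper names are mine, not actual driver code) of what we're arguing about: the application asks for a bias through the sampler state, and the driver-side "fix" simply refuses any negative value before it ever reaches the LOD calculation.

#include <d3d9.h>

// How an application requests an LOD bias; 'device' is an already-created IDirect3DDevice9*.
void SetLodBias(IDirect3DDevice9* device, float bias)
{
    // D3D9 takes the float bias bit-cast into a DWORD.
    device->SetSamplerState(0, D3DSAMP_MIPMAPLODBIAS,
                            *reinterpret_cast<DWORD*>(&bias));
}

// What the optional driver-side clamp being discussed effectively does (illustrative only):
float ClampLodBias(float requestedBias)
{
    return requestedBias < 0.0f ? 0.0f : requestedBias; // negative requests become 0
}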
It's removed some artifacts at the expense of breaking the behaviour of the LOD bias control.
It doesn't. Negative LOD bias gives you shimmering on NVIDIA and ATI chips. One is just more sensitive than the other, but not incorrect.
Hold it. If you set a certain LOD bias, then there is a mathematical outcome that one expects. If it's a positive bias, then you should expect things to get blurrier by a certain amount. If it's a negative bias, then you should expect things to get sharper by a certain amount. If one platform is showing far more aliasing with negative biases than is expected based on the mathematics, then that sounds like a platform issue.
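To put numbers on it, the per-pixel level of detail is basically (writing it loosely from memory, with the scaling by texture dimensions omitted):

\lambda(x,y) = \log_2\big(\rho(x,y)\big) + \mathrm{bias}

\rho(x,y) = \max\left( \sqrt{(\partial u/\partial x)^2 + (\partial v/\partial x)^2},\ \sqrt{(\partial u/\partial y)^2 + (\partial v/\partial y)^2} \right)

A bias of -1 shifts the result one full mip level towards the finer maps, so the texture gets sampled below the rate its footprint calls for, and the extra aliasing that follows is a predictable, quantifiable amount - not something that should vary wildly depending on whose chip you run it on.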
Well, that's great - this mysterious 'possibly more accurate' formula whose existence is impossible to prove or disprove results in totally unnecessary shimmering when minor levels of negative LOD are applied. That sounds like a broken formula rather than a more accurate one to me.
I suggest reading my explanation again of why this happens. If I'm correct, then NVIDIA's anisotropic filtering is better optimized, not broken. ATI's implementation is less optimized but has the advantage of showing fewer artifacts when the application is badly written.
So now you're an expert on ATI's and NVIDIA's AF implementations? Please explain.
You could always render at 3200x2400 resolution and downsample that to 800x600.
This has nothing to do with the current discussion.
There won't be any shimmering at all, but this just isn't efficient. I'm sure that ATI wishes they had the same optimization so that their performance would be a few percent higher, and they would gladly add the fix to keep the LOD bias non-negative.
Ri-i-i-ight. What optimization are you referring to? Personally, I think you are making all of this up on the fly.
ATI hardware must be doing really well to have such good comparative performance with anisotropic filtering while at the same time apparently taking more samples than are strictly necessary to avoid aliasing.
Given the higher clock frequency and fewer features, this isn't much of a surprise to me.
What a retort! "I can't refute what he says so I'll just ignore it and insult the competition instead!"
Of course they do - as the LOD bias becomes more negative, aliasing will gradually increase. However, our hardware seems to have no need for this negative bias clamping, and our LOD bias control works in accordance with the D3D specification.
Coincidentally, the games mentioned use a negative LOD bias that doesn't cause bad effects on ATI chips. Still, NVIDIA's method is totally in accordance with the DirectX specifications.
Is it now? As I stated above, all of this stuff is controlled by mathematics. If your chip is not adhering to the proper formulas, then it should be considered broken.
There are formulas where you are allowed some leeway, and the LOD calculation happens to be one of them. (See the OpenGL specs for details.) However, there are limits to how much leeway you get.
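From memory, the condition in the GL spec is roughly that the ideal scale factor \rho may be replaced by an approximation f(x,y) satisfying

\max(m_u, m_v) \le f(x,y) \le m_u + m_v

m_u = \max(|\partial u/\partial x|, |\partial u/\partial y|), \quad m_v = \max(|\partial v/\partial x|, |\partial v/\partial y|)

with \lambda = \log_2(f) + \mathrm{bias} as before. So yes, an implementation may pick a sharper or blurrier estimate inside those bounds, but it cannot shift the result arbitrarily far and still call itself conformant.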
If the older GeForce 4 and FX chips don't exhibit this problem, why should we all think that the 6800 is somehow better? (Obviously I am only referring to LOD calculations and not other features.) Mythical "optimizations" that you dream up are of no interest.