There is nothing magical about LOD 0.
There isn't even anything special about LOD 0.
It is not the mystical point at which no aliasing occurs. Perhaps with an infinitely wide filter kernel it would be, but we don't have that.
If you feed certain textures into any current piece of 3D hardware I'm aware of at LOD 0, you can generate aliasing.
If a piece of hardware aliases worse at LOD -0.5 than another piece of hardware, then by extension it will also alias worse at LOD 0.
Nick said:
Don't get me wrong. Taking 'too many' samples still provides higher quality due to filter imperfections and such, but this is negligible compared to taking the minimum required number of samples at the correct LOD bias. Nyquist's rate is two; anything more is theoretically wasted. In practice with a tent filter it can still be worth it to avoid minor artifacts, but at this point it becomes totally subjective and it's outside the specifications. If you absolutely think this is required, ATI is the better choice. If you just want mathematically sound filtering then NVIDIA is simply flawless, although it sometimes requires an application fix.
That's fine, but the problem is that all hardware is already effectively taking too few samples at LOD 0 due to the use of a linear filter, not the 'minimum required number' - you and I have both pointed this out in this thread. Yet in the same paragraph you also claim that NVIDIA's filtering is flawless because it is taking even fewer samples than previous implementations.
Maybe their filtering is flawless, but the same can hardly be said for this line of reasoning. Naturally you would get increased performance from taking fewer samples, but this would come at the expense of increased aliasing all the time when compared to older generations. Hardly a step forward in terms of quality, I should think.
If you want mathematically flawless filtering then you aren't going to get it with a linear filter at LOD 0.
A sinc filter is far better at reconstruction than a linear filter - the artifacts are not "totally subjective"; they are visible, and they show up as aliasing. I could just as easily argue that all applications are bugged because they don't set a LOD of +1.0 - that would eliminate even more aliasing than clamping to 0.
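To put a number on that, here is a minimal sketch of mine (nothing to do with any vendor's pipeline; the names are illustrative) comparing the frequency response of the tent kernel that bilinear filtering uses, which is sinc²(f), against the ideal brick-wall low-pass that a sinc reconstruction would give:

```c
#include <math.h>
#include <stdio.h>

#define PI 3.14159265358979323846

/* Frequency response of the tent (bilinear) reconstruction kernel is
 * sinc^2(f). An ideal reconstruction filter is a brick wall at the
 * Nyquist frequency of 0.5 cycles/texel; everything the tent filter
 * passes above that folds back into the image as aliasing. */
static double normalized_sinc(double f)
{
    return (f == 0.0) ? 1.0 : sin(PI * f) / (PI * f);
}

int main(void)
{
    for (double f = 0.0; f <= 1.501; f += 0.25) {
        double tent  = normalized_sinc(f) * normalized_sinc(f);
        double ideal = (f <= 0.5) ? 1.0 : 0.0;
        printf("f = %4.2f cycles/texel   tent = %5.3f   ideal = %.0f\n",
               f, tent, ideal);
    }
    return 0;
}
```

Compile with `cc tent.c -lm`. At the Nyquist frequency of 0.5 cycles per texel the tent filter still passes roughly 40% amplitude, and it keeps leaking well beyond it - that leakage is exactly the energy that shows up as shimmer.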
Assuming that NV40's LOD bias control is producing a linear bias of the mip-map calculation, as it should (i.e. +1 moves you exactly one step down the mip chain), it is inescapable that if it aliases more at LOD -0.5 then it also aliases more at LOD 0.
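To make 'linear bias' concrete: the standard mip LOD selection, essentially as the OpenGL spec formulates it, is a log2 of the texel footprint plus the bias. A minimal sketch, with function and parameter names of my own:

```c
#include <math.h>

/* Standard mip LOD selection: lambda = log2(rho) + bias, where rho
 * approximates how many texels the pixel footprint covers. The bias
 * is purely additive in log2 space: +1 is exactly one mip level
 * smaller, -0.5 is half a level larger. Names are illustrative. */
float mip_lod(float dudx, float dvdx, float dudy, float dvdy, float bias)
{
    float rho_x = sqrtf(dudx * dudx + dvdx * dvdx); /* footprint along screen x */
    float rho_y = sqrtf(dudy * dudy + dvdy * dvdy); /* footprint along screen y */
    float rho   = fmaxf(rho_x, rho_y);
    return log2f(rho) + bias;
}
```

Because the bias enters as a simple additive term, any sampling deficit present at -0.5 carries straight through to 0.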
Is the shimmering gone with the LOD clamp set, or has it just been reduced to about the same level as on R4xx with the LOD set to -x.y, as the application requested?
Nick said:
The proof is exactly the shimmering and the fact that clamping the LOD bias solves it.
That is not proof of anything, and it is certainly no proof of your theory - the only thing it tells us is that the shimmering is reduced, and of course the shimmering is reduced at LOD 0 compared to LOD -0.5. There would be even less shimmering at LOD 1, but that doesn't prove anything either, except that LOD bias is functioning.
Your whole argument is that NV40's solution is more accurate, takes the minimum number of samples, and therefore aliases worse. It is completely interchangeable with the argument that other hardware was already taking the minimum number of samples (or, in fact, rather too few according to Nyquist) and that the NV40 solution is therefore simply deviating even further from correct sampling by taking fewer samples still, even at LOD 0.
Both theories would give rise to increased aliasing on NV40 at negative LODs, and both would be affected the same way by clamping the LOD to 0.
So, does NV40 alias more at LOD 0 than other hardware?
If it does alias worse, then the clamp to LOD 0 is just mitigating the symptoms. Perhaps a clamp to LOD +0.5 would be better, so that users aren't subjected to more aliasing at the standard LOD setting than on other hardware? It might make some textures a bit blurry, though.
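In driver terms the difference between those two mitigations is a single constant. A hypothetical sketch built on the mip_lod() function above - not a claim about what any real driver does:

```c
#include <math.h>

/* Hypothetical driver-side clamps. Clamping at 0 simply discards the
 * application's negative bias; clamping at +0.5 would additionally
 * trade a little sharpness at the default setting for less aliasing. */
float lod_clamped_at_zero(float lod) { return fmaxf(lod, 0.0f); }
float lod_clamped_at_half(float lod) { return fmaxf(lod, 0.5f); }
```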
If it doesn't alias worse than other hardware at LOD 0, then the question is: why does it alias so much worse at negative LOD, when the relationship between the amount of aliasing and the negative LOD bias applied should be basically linear?
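For completeness, here is that 'basically linear' expectation written out, under the same textbook lambda = log2(rho) + bias formulation sketched above (an assumption about how the LOD is computed, not a measurement):

```latex
\[
  \lambda = \log_2 \rho + b
  \qquad\Longrightarrow\qquad
  \text{undersampling factor} = 2^{-b}
\]
```

So b = -0.5 undersamples by 2^0.5 ≈ 1.41x and b = -1.0 by exactly 2x: the sampling deficit grows by a constant factor per unit of bias, with no special threshold at b = 0 below which extra aliasing should suddenly appear.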