euank said:
Well, would you care to clarify?
Sure.
From your referring to those articles I got the impression that you know what you're talking about, so that's why I wasn't too specific.
The AF filtering level is reduced with respect to the angle of the surface from the view point. If that's not adaptive, what is?
The ground is not at 0° with respect to the camera position, is it? Looks to me like we're looking down at an angle.
First, a little 3D.
There's no such thing as an angle between a point and a plane.
You can (in this case) calculate the angle between a line segment and a surface; the segment runs from the view point to a point on the surface.
The first thing you should notice is that, because of perspective projection, this angle changes from point to point.
Not viewing a (part of a) surface perpendicularly means that the texture suffers a distortion. (This is a commonly applied oversimplification, assuming the texture is mapped without distortion in the first place, which is probably true for a terrain like this.)
By distortion I mean that the texture is scaled differently in different directions.
This is actually not a problem as long as the texture is being magnified, but it causes trouble when minification is required.
Texture minification can cause undersampling, which has aliasing effects (moiré and the like).
The usual solution is to select a mipmap that doesn't have to be minified.
Unfortunately non-isotropic scaling poses a problem: to avoid aliasing the mipmap has to be chosen for the most minified direction, so too much detail is dropped in the other direction.
One solution would be to store "mipmaps" scaled non-isotropically in different directions and use the best one. This method is usually referred to as rip-mapping. It has serious shortcomings (it requires too much memory, yet it can provide a perfect solution only in special cases).
The solution that is more generic, but at the same time more computationally expensive, is to take multiple samples in the direction in which the texture is squished the most. This oversamples the texture in that direction and therefore allows the selection of a more detailed mipmap without aliasing.
The minimum number of samples needed is at least the "squish ratio" of the texture, or in other words the quotient of the largest scaling and the smallest scaling.
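Here's a rough sketch of how that could be computed from the texel-space derivatives of a pixel. This is just my own illustration with made-up names, roughly along the lines of the approximation in the OpenGL EXT_texture_filter_anisotropic spec, not how any particular chip does it:

/* Rough sketch: pick the sample count and mipmap level from the
   texel-space footprint of one pixel.  The footprint is really an
   ellipse; looking only at its extent along the two screen axes is
   an approximation. */
#include <math.h>
#include <stdio.h>

void choose_aniso(double dudx, double dvdx, double dudy, double dvdy,
                  double max_aniso)
{
    /* how far one pixel step moves in texel space, along screen X and Y */
    double px = sqrt(dudx * dudx + dvdx * dvdx);
    double py = sqrt(dudy * dudy + dvdy * dvdy);

    double pmax = px > py ? px : py;   /* most squished direction  */
    double pmin = px > py ? py : px;   /* least squished direction */

    /* number of samples: at least the squish ratio, clamped to the limit */
    double n = ceil(pmax / pmin);
    if (n > max_aniso) n = max_aniso;

    /* with n samples along the long axis the mipmap can be picked from
       pmax / n instead of pmax alone, so much less detail is dropped */
    double lod = log2(pmax / n);

    printf("ratio %.2f -> %g samples, mip level %.2f\n",
           pmax / pmin, n, lod);
}

int main(void)
{
    choose_aniso(4.0, 0.0, 0.0, 1.0, 16.0);  /* a 4:1 squish */
    return 0;
}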
If the surface is viewed perpendicularly, no anisotropic filtering is required.
If the angle between the surface and the viewing line segment is larger than 30°, then 2x anisotropy is sufficient.
If the angle between the surface and the viewing line segment is larger than 19.47°, then 3x anisotropy is sufficient.
If the angle between the surface and the viewing line segment is larger than 14.48°, then 4x anisotropy is sufficient.
And so on.
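Where do these angles come from? Assuming the simple model where a flat surface seen at angle a to the viewing ray squishes the texture by a factor of 1/sin(a) (ignoring the point-to-point variation mentioned above), Nx anisotropy stops being needed once sin(a) >= 1/N, so the thresholds are just asin(1/N). A tiny back-of-the-envelope check:

#include <math.h>
#include <stdio.h>

int main(void)
{
    const double pi = acos(-1.0);
    /* asin(1/2) = 30.00, asin(1/3) = 19.47, asin(1/4) = 14.48, ... */
    for (int n = 2; n <= 6; n++)
        printf("%dx anisotropy is enough above %.2f degrees\n",
               n, asin(1.0 / n) * 180.0 / pi);
    return 0;
}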
So an implementation doesn't have to apply more than these levels of anisotropy, and an implementation with fine-grained detection of the required level is called adaptive.
Don't worry, nVidia's implementation is adaptive too!
As for the problems of the R200 and R300:
First of all, the R200 has the problem that it turns on the required level of anisotropy too late, which causes aliasing similar to a LOD bias.
This has been fixed in R300, and so the performance mode of R300 has much higher quality than the R200 mode.
The other problem with the ATI cards, however, is the way they detect the need for anisotropy, or in other words "how the texture is squished".
There are always two distinct directions for such a texture: the direction in which it is scaled the most and the direction in which it is scaled the least. (I'm talking about screen-space directions now, so in 2D.)
These two important directions are always perpendicular, and one of them is the direction along which the texture needs to be sampled multiple times during anisotropic filtering.
The problem with the R200 is that it can only detect the correct level of anisotropy when these directions are vertical and horizontal. Between them the detected level of anisotropy drops, until at 45° no anisotropy is applied anymore.
With the R300 the detection is extended to include the ±45° cases, so its worst cases are at 22.5° and 67.5°.
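To illustrate (this is purely a toy model I made up to match the behaviour described above, not the actual hardware logic): treat the pixel footprint as an ellipse and only measure its extent along a few fixed screen directions. With two probe directions (X and Y) the measured ratio collapses to 1 when the footprint is rotated by 45°, and adding the two diagonals moves the worst cases to 22.5° and 67.5°:

/* Toy model: "detected" anisotropy of an elongated footprint when its
   extent is only probed along a few fixed screen directions. */
#include <math.h>
#include <stdio.h>

#define TRUE_RATIO 8.0

/* extent of an ellipse (semi-axes TRUE_RATIO and 1, major axis at angle
   phi) measured along a probe direction at angle t, both in radians */
static double extent(double phi, double t)
{
    double a = TRUE_RATIO, b = 1.0, d = t - phi;
    return a * b / sqrt(b * b * cos(d) * cos(d) + a * a * sin(d) * sin(d));
}

static double detected(double phi, const double *probes, int n)
{
    double lo = 1e30, hi = 0.0;
    for (int i = 0; i < n; i++) {
        double e = extent(phi, probes[i]);
        if (e < lo) lo = e;
        if (e > hi) hi = e;
    }
    return hi / lo;
}

int main(void)
{
    const double pi = acos(-1.0);
    double two[2]  = { 0.0, pi / 2 };                    /* X and Y only */
    double four[4] = { 0.0, pi / 4, pi / 2, 3 * pi / 4 }; /* plus diagonals */

    for (int deg = 0; deg <= 90; deg += 5) {
        double phi = deg * pi / 180.0;
        printf("%2d deg: 2-axis detects %5.2f, 4-axis detects %5.2f (true %.0f)\n",
               deg, detected(phi, two, 2), detected(phi, four, 4), TRUE_RATIO);
    }
    return 0;
}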
But don't confuse these angles, which are 2D screen-space angles measured relative to the X or Y axis of the screen, with the 3D angle between the viewing line segment and the surface, which is the source of the anisotropy.
The easiest way of shifting those 2D angles is to rotate the screen around its center, which means rotating the camera around the Z axis. To determine this angle for a surface, project the surface normal onto the screen and measure the angle between that and (say) the X axis of the screen.
Again, this angle is independent of the other angle that is the source of anisotropy.
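A minimal sketch of that measurement (made-up example values; it ignores projection subtleties by simply dropping the depth component of the camera-space normal, which is good enough for a rough screen direction):

#include <math.h>
#include <stdio.h>

/* angle (degrees, 0..180) between the screen projection of a world-space
   normal and the screen X axis; "view" is the row-major 3x3 rotation
   part of the view matrix */
static double screen_angle(const double view[3][3], const double n[3])
{
    /* normal rotated into camera space, depth component dropped */
    double x = view[0][0]*n[0] + view[0][1]*n[1] + view[0][2]*n[2];
    double y = view[1][0]*n[0] + view[1][1]*n[1] + view[1][2]*n[2];

    const double pi = acos(-1.0);
    double a = atan2(y, x) * 180.0 / pi;   /* -180..180 */
    if (a < 0) a += 180.0;                 /* we only care about the direction */
    return a;
}

int main(void)
{
    /* identity view matrix, ground-plane normal pointing up (+Y) */
    double view[3][3] = { {1,0,0}, {0,1,0}, {0,0,1} };
    double up[3] = { 0, 1, 0 };
    printf("%.1f degrees\n", screen_angle(view, up));  /* prints 90.0 */
    return 0;
}

Rolling the camera around the Z axis changes this angle, which is exactly the rotation mentioned above.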
Hope this explains the issue; feel free to ask questions if it wasn't clear.