Now I know why I decided not to pursue a major in math after taking linear algebra.
You should think the opposite way. With that major in math, you would have understood how simple it really is.
Everything is much clearer if you understand the meaning of what I wrote in my first post, so I'll give a more geometrical explanation.
Imagine that you project the screen pixel grid back to the texture (yes, that's backwards); then Lx=sqrt((du/dx)^2+(dv/dx)^2) (which occurred a few times in my first post) will be the length of one side of a pixel in texture space.
Ly=sqrt((du/dy)^2+(dv/dy)^2) is the length of the other side. If you don't like the maths, just think of it as this projection instead.
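To make the projection concrete, here's a small Python sketch (the names and the finite-difference approach are mine, not from any particular API) that approximates the derivatives by differencing the texture coordinates of neighbouring screen pixels, roughly the way hardware evaluates them within a 2x2 pixel quad:

```python
import math

def footprint_lengths(uv, x, y):
    """uv(x, y) maps a screen pixel to texture coordinates (u, v).

    Approximates du/dx, dv/dx, du/dy, dv/dy with one-pixel finite
    differences and returns (Lx, Ly): the side lengths of the
    back-projected pixel in texture space."""
    u0, v0 = uv(x, y)
    u1, v1 = uv(x + 1, y)      # neighbour in screen x
    u2, v2 = uv(x, y + 1)      # neighbour in screen y
    dudx, dvdx = u1 - u0, v1 - v0
    dudy, dvdy = u2 - u0, v2 - v0
    Lx = math.hypot(dudx, dvdx)
    Ly = math.hypot(dudy, dvdy)
    return Lx, Ly

# Example: a mapping that stretches the texture 8x along screen y,
# like looking along a road receding into the distance.
stretch = lambda x, y: (float(x), 8.0 * y)
print(footprint_lengths(stretch, 10, 10))  # -> (1.0, 8.0)
```

With Lx=1 and Ly=8 the footprint is a thin quad, one texel wide and eight tall: exactly the anisotropic case discussed below.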
The idea is now to select a mipmap small enough that the back-projected pixel covers only one texel. Let's look at some examples of how back-projected pixels look in texture space:
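The "small enough mipmap" idea can be written as a LOD computation. This sketch follows the common textbook convention (lod = log2 of the footprint size); the anisotropic variant spreads N probes along the long axis so each probe only has to cover the short side. The exact formula is implementation specific, so treat this as an illustration rather than what any particular card does:

```python
import math

def isotropic_lod(Lx, Ly):
    # Classic mipmap selection: the whole footprint must fit in one
    # texel, so take log2 of the *longer* side (safe but blurry when
    # the footprint is elongated).
    return max(0.0, math.log2(max(Lx, Ly)))

def anisotropic_lod(Lx, Ly, max_aniso=16):
    # Take several probes along the long axis; each probe then only
    # needs to cover the *short* side, so a sharper mip can be used.
    p_max = max(Lx, Ly)
    p_min = max(min(Lx, Ly), 1e-8)           # guard against division by zero
    n = min(math.ceil(p_max / p_min), max_aniso)  # number of probes
    return max(0.0, math.log2(p_max / n)), n

print(isotropic_lod(1.0, 8.0))    # -> 3.0  (mip level 3: blurry)
print(anisotropic_lod(1.0, 8.0))  # -> (0.0, 8)  (full-res mip, 8 probes)
```

For the stretched footprint (Lx=1, Ly=8), plain mipmapping drops to level 3 and blurs both directions, while the anisotropic version stays at level 0 and averages 8 probes along the stretch.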
The quad represents one pixel; Lx and Ly are the lengths of the red arrows at the quad's edges.
The first image could be from pcchen's filtering demo when looking at the road at a 'good' angle. Notice that Lx is small and Ly is large => the algorithm understands that it can use a high anisotropy. Also notice that the pixel is extended along one of the texture's coordinate axes => rip-mapping is possible.
The second image could be from the 45° rotated view. This time both Lx and Ly are large even though the projected pixel is quite stretched => the algorithm fails to detect the anisotropy. Also notice that the pixel is still stretched along one of the texture coordinate axes => if a better algorithm were used to detect the anisotropy, rip-mapping would still work.
The third image could be from Bamber's example, far away on the ground. Lx is small and Ly is large => the algorithm understands the high anisotropy. But the pixel is stretched diagonally with respect to the texture coordinates => rip-mapping wouldn't work.
The fourth image is from the area above the fire in your example. (The methodology was to look at the image and imagine a back-projected pixel.) Same conclusion as for image three (except that Lx is large and Ly is small). And as Bamber just hinted, I should've used a different formulation of what you quoted: as soon as we've seen that rip-mapping isn't used, the texture orientation doesn't matter.
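The failure in the second case can be shown with numbers. In the sketch below (the derivative values are made up for illustration), the footprint is a thin sheared parallelogram whose edges both run diagonally in texture space: Lx and Ly come out nearly equal, so a detector based only on max(Lx,Ly)/min(Lx,Ly) sees almost no anisotropy, while the parallelogram's area reveals that the footprint is in fact a long thin sliver:

```python
import math

# Edge vectors of the back-projected pixel in texture space:
# both edges are long and almost parallel, i.e. a thin diagonal sliver.
dudx, dvdx = 3.0, 3.0
dudy, dvdy = 3.1, 2.9

Lx = math.hypot(dudx, dvdx)          # ~4.243
Ly = math.hypot(dudy, dvdy)          # ~4.245
ratio = max(Lx, Ly) / min(Lx, Ly)    # ~1.0 -> "no anisotropy" (wrong!)

# The parallelogram's area (a cross product) tells the truth:
area = abs(dudx * dvdy - dvdx * dudy)   # 0.6: tiny for such long edges
true_width = area / max(Lx, Ly)         # ~0.14: the sliver is very thin
true_ratio = max(Lx, Ly) / true_width   # ~30: strong anisotropy

print(round(ratio, 3), round(true_ratio, 1))
```

So the naive side-length test reports a ratio of about 1 while the footprint is really about 30 times longer than it is wide, which is why the 45° case looks blurry even with anisotropic filtering enabled.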
In short, if you want to see:
whether it's rip-mapping or not, compare cases one and three.
whether the anisotropy is detected well, compare cases one and two.
In any case, the effects are most visible if the texture has lines running in the same direction as the pixels are stretched, in other words away from you.