Current AF not actually Anisotropic?

kyleb

I ran across this claim on another forum, and when I questioned it I was asked to investigate the situation further; so I am interested in what everyone here has to say about it:
The correct version of AF sampling is non-square. That is by definition of the word. Check out the filtering/color wheels numerous sites post and compare current parts to one of the boards that does it properly- they aren't close.
Anisotropic filtering has a definition: non-square filtering. There isn't an IEEE standard associated with it. In order to properly perform anisotropic filtering, the sampling must be non-square by definition. I mention the color wheels as you will see that newer parts regularly fail to maintain this standard across the scene. They are, by definition of the word, not doing it properly.
Look up how LOD bias selection works on various hardware- talk to the driver teams about how a LOD bias selection is chosen, ask them whether it is fixed or flexible depending on the samples taken, and then apply that information to the color wheels. You'll figure it out pretty quick.
How do you all respond to that? :D
 
Current 3D cards do not use square sampling in AF; if they did, you wouldn't see any AF effect. They are not simple LOD bias either, otherwise you'd see only aliasing, not AF effects. No, the AF in most 3D cards is not perfect, but it is not that bad.
 
GF 6 & 7 cards manage to do 2x AF correctly, and Radeon 9500+ cards manage to do 2x AF correctly for over 95% of the screen.
It's not achievable with a square filter shape.

The imperfection seen is a failure to do higher levels of AF at some angles, but at least 2x AF works reasonably well.

(Older Radeon cards dropped down to no anisotropy at some angles but I'd say that's the past...)
 
Trouble being that 2x-sample AF is ridiculously low by today's standards; to exaggerate a bit, it's not that much better than garden-variety trilinear.
 
Hyp-X said:
GF 6 & 7 cards manage to do 2x AF correctly, and Radeon 9500+ cards manage to do 2x AF correctly for over 95% of the screen.
Could you elaborate on what exactly is happening on that other 5%?

And yeah, I agree that the current AF implementations are rather lacking in various situations. However, I suppose what I am really interested in understanding is: what exactly is meant by "non-square" in regards to texture filtering?
 
He might mean that the R300s have some "ears" in filter-testing programs (at 45-degree angles?), which is only a slight difference from 2xAF, or else that it's a tad more angle-dependent than on the other GPUs he mentioned. I doubt many could see a difference in real-time, though.
 
Yeah, that is what I gathered too; but do those "ears" contradict the definition of "anisotropic" in any way?
 
kyleb said:
Yeah, that is what I gathered too; but do those "ears" contradict the definition of "anisotropic" in any way?

No, why should they? Any sort of angle-dependency doesn't go against the definition of AF.
 
I just ask because I am looking for insight into the comments I quoted in my original post; specifically:
Anisotropic filtering has a definition- non square filtering.
...
They are by definition of the word not doing it properly.

And again I am curious as to what exactly is meant by "non-square" in regards to texture filtering.
 
And again I am curious as to what exactly is meant by "non-square" in regards to texture filtering.
It means the sample footprint in texture space is not (typically) a square. Instead, it's usually a convex quadrilateral.

The footprint can be a square, in some specific cases (for example, when drawing something with no perspective).
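
To make that concrete, here's a quick Python sketch. The mapping is a made-up toy (not any real rasterizer's math): it just pushes the four corners of one screen pixel through a perspective mapping and prints the footprint's extent in texture space.

Code:
def screen_to_texture(x, y, tilt):
    """Toy perspective mapping from screen coords to (u, v) texture coords;
    'tilt' controls how steeply the textured plane recedes from the viewer."""
    w = 1.0 + tilt * y            # homogeneous divide term grows with depth
    return x / w, y / w

def pixel_footprint(px, py, tilt):
    """Image of one screen pixel's four corners in texture space."""
    corners = [(px, py), (px + 1, py), (px + 1, py + 1), (px, py + 1)]
    return [screen_to_texture(x, y, tilt) for (x, y) in corners]

for tilt in (0.0, 0.5, 2.0):
    quad = pixel_footprint(10, 10, tilt)
    us = [u for u, v in quad]
    vs = [v for u, v in quad]
    print(f"tilt={tilt}: footprint spans {max(us) - min(us):.3f} in u "
          f"by {max(vs) - min(vs):.3f} in v")

Head-on (tilt 0), the footprint is the unit square; tilted, it becomes a long, thin quadrilateral, dozens of times larger in one direction than the other. That elongated quad is the "non-square" shape the filter has to cover.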
 
Warning: I'm probably biting off more than I can chew in trying to be helpful (read: misusing terms and mangling definitions), but if I'm (mostly) right, then maybe this can save someone else some typing.

I believe anisotropic filtering means that the filter takes more samples along one axis than along the perpendicular one. Textures at angles other than perpendicular to the screen, like those on a street running into the horizon, have more texture detail per pixel along the vertical axis. Bilinear and trilinear, however, apply the same amount of filtering to both axes, so the road texture appears to lose detail (looks blurrier) both the farther away it is and the greater its angle away from perpendicular to the camera. Those filtering methods undersample the vertical axis relative to the Nyquist limit, skipping over visual detail that remains evident in parts of the texture closer to the screen, where each pixel covers fewer texels. Anisotropic filtering, then, should take more samples along the axis that requires them, to avoid losing too much texture detail at one angle relative to another.
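
For the curious, the OpenGL EXT_texture_filter_anisotropic extension spec describes roughly this scheme. Here's a simplified Python sketch of its sample-count and LOD selection (real hardware deviates from it, which is where the angle dependence comes from):

Code:
import math

def aniso_params(dudx, dvdx, dudy, dvdy, max_aniso=16):
    """Given the texture-coordinate derivatives across one pixel, return
    (N, lod): how many probes to take along the footprint's major axis,
    and which mip level each probe samples from."""
    px = math.hypot(dudx, dvdx)            # footprint extent along screen x
    py = math.hypot(dudy, dvdy)            # footprint extent along screen y
    p_max, p_min = max(px, py), min(px, py)
    n = min(math.ceil(p_max / max(p_min, 1e-8)), max_aniso)
    lod = math.log2(max(p_max / n, 1e-8))  # each probe covers p_max/n texels
    return n, lod

# Head-on surface: isotropic 1x1 footprint, one probe, plain (tri)linear.
print(aniso_params(1.0, 0.0, 0.0, 1.0))   # -> (1, 0.0)
# Tilted surface with an 8:1 footprint: 8 probes at the sharpest mip level,
# instead of one blurry probe at lod = log2(8) = 3.
print(aniso_params(1.0, 0.0, 0.0, 8.0))   # -> (8, 0.0)

The key point is the second case: rather than dropping to a blurrier mip level to cover the whole footprint, AF keeps the sharp level and strings several probes along the long axis.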

I also believe that the angle-dependent AF both ATI and NV use still provides at least 2xAF at all angles, tho the angle-dependence means that when you ask for, say, 16xAF, you're not getting that level of AF on all textures, just the ones at angles the IHV has determined to require the most AF.

It sounds to me like those posters think that the definition of AF also includes applying the same user-defined level of AF to all textures or angles, which I don't believe is the case. (If it were called full-scene AF, tho, they may have a point.) Anisotropic filtering just concerns itself with unequal/nonsquare/asymmetric filtering, depending on the angle of the texture. I don't think it concerns itself with the entire screen, just one texture.

I'm sure either Kristof or someone else here can explain it more clearly. The pictures on this SGI page made the :idea: go off in my head WRT AF. Keep in mind that the author isn't describing hardware-specific AF but rather using MIP maps to approximate AF. Still, the MIP maps pictured should give you an idea of what's happening: more filtering on one axis and less on another, depending on the angle of incidence from the camera to the texture-mapped polygon.
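
For anyone who wants to poke at the idea, here's a tiny Python sketch of rip-map-style pre-filtering, one classic way to filter a texture more in one axis than the other (this is my own toy and the names are mine; the SGI page may do it differently):

Code:
def downsample_x(tex):
    """Average horizontally adjacent texels: half the width, same height."""
    return [[(row[2*i] + row[2*i + 1]) / 2 for i in range(len(row) // 2)]
            for row in tex]

def downsample_y(tex):
    """Average vertically adjacent texels: same width, half the height."""
    return [[(tex[2*j][i] + tex[2*j + 1][i]) / 2 for i in range(len(tex[0]))]
            for j in range(len(tex) // 2)]

# 8x8 checkerboard as a stand-in texture.
base = [[float((x + y) % 2) for x in range(8)] for y in range(8)]

# A footprint about 4 texels wide but only 1 tall wants a level filtered
# twice in x and not at all in y. A normal mip chain can't express that:
# it always shrinks both axes together.
level = downsample_x(downsample_x(base))
print(len(level[0]), "x", len(level))   # -> 2 x 8

That 2x8 level is exactly "more filtering on one axis and less on another"; the cost is storing many more pre-filtered levels than a square mip chain needs.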
 
I also believe that the angle-dependent AF both ATI and NV use still provides at least 2xAF at all angles, tho the angle-dependence means that when you ask for, say, 16xAF, you're not getting that level of AF on all textures, just the ones at angles the IHV has determined to require the most AF.

Small addition if I may: textures get analyzed by the algorithm and receive as many AF samples as needed according to their "steepness"; on two angles, textures receive anything between 1x and 2x AF, and on the remaining two angles between 1x and 16x samples (albeit I doubt the maximum number of samples applied in the latter case is much higher than 6x).
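
Purely to illustrate, here's a toy Python model of that angle dependence. It is entirely made up by me (not ATI's or NV's actual selection logic): the allowed degree of AF falls from 16x near the screen axes to about 2x near the diagonals.

Code:
def max_aniso_for_angle(theta_deg, full=16, reduced=2):
    """Toy angle-dependent cap on the degree of AF: 'full' near the
    screen axes, 'reduced' near the 45-degree diagonals."""
    d = min(theta_deg % 90, 90 - theta_deg % 90)  # degrees from nearest axis
    t = d / 45.0                                  # 0 at an axis, 1 at 45 deg
    return round(full + (reduced - full) * t)

for theta in (0, 15, 30, 45, 60, 90):
    print(f"{theta:3d} deg -> up to {max_aniso_for_angle(theta)}x AF")

The actual number of samples taken would then be the smaller of this cap and whatever the footprint's elongation calls for, which is why even the "16x" angles often get far fewer samples in practice.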
 
If graphics hardware used square filters for anisotropic filtering (which is possible), it would take tremendous performance hits for even low degrees of anisotropy, because it would be wasting tons of samples.

Anisotropic filtering looks good and doesn't cost huge amounts of performance because non-square filters are used which attempt to approximate the shape of the pixel's footprint in texture space. Current graphics hardware doesn't use the best possible positioning of the texture samples, though, and so you get the angle dependence.
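
A back-of-envelope Python sketch of that waste, under the simplifying assumption that a square filter would have to tile the footprint's bounding square with probes at the sharp mip level:

Code:
# Covering a ratio:1 footprint without blurring: a non-square filter takes
# probes only along the major axis; a square filter wide enough to span the
# major axis also covers `ratio` texels of empty space along the minor axis.
for ratio in (2, 4, 8, 16):
    square_probes = ratio * ratio  # tile the ratio x ratio bounding square
    aniso_probes = ratio           # one probe per texel along the major axis
    print(f"{ratio}:1 footprint -> square: {square_probes} probes, "
          f"non-square: {aniso_probes}")

The gap grows quadratically, which is why even "low degrees of anisotropy" would be painful with square filters.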
 
Thanks for all the responses, everyone; I feel quite a bit more comfortable with my understanding of AF now.

And Pete, that SGI page you linked brought me way back; I remember reading that exact page back on my Voodoo5 and wondering what the stuff really looked like in action. How times have changed. :)
 
Heh, then you saw it before I did. I probably read it on a Radeon 9100, which I bought specifically for its "cheap" AF. :) I found it in a thread H@ wrote describing AF, in the Ars A/V.
 