Anisotropic Debate

hmm.... well, if the user supplies the mipmaps, why wouldn't they just generate the rip maps from each new mip level? You've got the 256x256 original, so generate your 256-texel-wide rip maps from that, generate your 128-texel rip maps from the 128x128, and so on...
 
There's no actual guarantee that the supplied mip levels will be a precise filter - it's probably best to generate the ripmaps directly from the base texture to ensure there are no errors.

Plus, the rip map is only filtering in one direction, whereas pre-supplied mip maps would already be filtered in two.
 
Darkblu is right: throughout the entire rotation of the road the textures are lined up perfectly for rip mapping. In fact ATI's implementation is even fine at angles where the u,v directions of the texture are at 45 degrees to the screen.

Look:

q3aniso.jpg


Note missing ammo box as proof of being taken on an 8500 :-?

That shot is anisoed very nicely even directly ahead, where the texture is at 45 degrees. The reason ATI's implementation (and the one at SGI) goes wrong (regardless of the texture orientation) at the 45 degrees of the road demo is that they use the screen axes to detect the texture ratio needed, not the true derivative axes.

When the road is at 45 degrees, the u and v texture distances covered by the height and width of the pixel will all be the same. With only this limited detection system there is no way to tell from that result whether the surface is parallel or at an angle to the screen plane, so normal mip maps are used.

For games like Q3 and most other FPSes (apart from ATI's lack of trilinear, noticeable on a few textures), you'd find it difficult to tell ATI's aniso from NVIDIA's.
 
rhink said:
hmm.... well, if the user supplies the mipmaps, why wouldn't they just generate the rip maps from each new mip level? You've got the 256x256 original, so generate your 256-texel-wide rip maps from that, generate your 128-texel rip maps from the 128x128, and so on...

Well, if the app provides the 256x256, 128x128, 64x64 etc., where would the driver get, for instance, the 256x128 level from? With your method the only mipmap level that would make sense is the 256x256 level. For the 256x64 you'd still need to downsample it from the 256x256, and this goes down even to 256x1, which means only the base mipmap level is used. But if that were what's done then colored mipmapping wouldn't work, yet it obviously does, as seen in Q3 and SS etc., so it cannot be handled this way. The way I see it, this must mean that it cannot be using ripmapping, but something similar that uses the normal mipmap levels.
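(To illustrate the point, here's a minimal sketch of the full ripmap chain for a 256x256 base, assuming independent halving along u and v - the layout is illustrative, not ATI's actual scheme. Every 256xN level needs the full-resolution u axis, which only the 256x256 base can supply.)

Code:

#include <stdio.h>

/* Enumerate the ripmap chain for a 256x256 base texture.  The
   square levels on the diagonal are the ordinary mipmap chain;
   every other level must be downsampled from the base. */
int main(void)
{
    for (int w = 256; w >= 1; w /= 2)
        for (int h = 256; h >= 1; h /= 2)
            printf("%4dx%-4d%s\n", w, h,
                   (w == h) ? " (square: normal mip level)" : "");
    return 0;
}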
 
Yeah. From the screenshots provided by Bambers, it does not look like ripmapping.

So it is possible that the R200 does not use ripmapping, but has some strange restriction on kernel shape; perhaps it only handles rectangular shapes well (in the screenshots provided by Bambers most footprints are rectangles or trapezoids, but in the rotated "road texture" program most footprints are diamond-shaped).

Another possibility is scanline-based LOD selection, which exhibits similar results.
 
Would be nice if somebody from ATi could give us the final answer on this!

At least one person who frequents this forum works for ATi. Come on, time to spill the beans, ATi people! :D
 
I would be interested to see how anisotropic filtering on the Radeon 8500 renders the target area in Scene 2:

Scene 1:
position_1.jpg


Scene 2:
position_2.jpg


Target area in Scene 2:

Trilinear:
trilinear_angle.jpg


2X Anisotropic:
2x_anisotropic_angle.jpg


4X Anisotropic:
4x_anisotropic_angle.jpg
 
Bambers said:
Darkblu is right: throughout the entire rotation of the road the textures are lined up perfectly for rip mapping. In fact ATI's implementation is even fine at angles where the u,v directions of the texture are at 45 degrees to the screen.

Look:

http://www.jamesbambury.pwp.blueyonder.co.uk/q3aniso.jpg

Note missing ammo box as proof of being taken on an 8500 :-?

That shot is anisoed very nicely even directly ahead, where the texture is at 45 degrees. The reason ATI's implementation (and the one at SGI) goes wrong (regardless of the texture orientation) at the 45 degrees of the road demo is that they use the screen axes to detect the texture ratio needed, not the true derivative axes.

When the road is at 45 degrees, the u and v texture distances covered by the height and width of the pixel will all be the same. With only this limited detection system there is no way to tell from that result whether the surface is parallel or at an angle to the screen plane, so normal mip maps are used.

For games like Q3 and most other FPSes (apart from ATI's lack of trilinear, noticeable on a few textures), you'd find it difficult to tell ATI's aniso from NVIDIA's.

I noticed this too, and that's why I was very skeptical that ATI did rip-mapping. I also have a semi-confirmation that ATI walks along the texture rather than pre-filtering it like rip-mapping does. The very strange thing is the 45-degree behavior on the camera roll axis.
 
darkblu said:
Althornin said:
Ahh, the main downfall of rip-mapping... angle funkiness :)
The aniso effect decreases towards 45 degrees, then drops to no effect, then goes back up to full at 90 degrees, repeat.

the angle between the u/v axes and the view plane is still the same in both the ground-aligned and the 45-degree rolled shots - nothing should have changed as far as rip-mapping is concerned. the exhibited effect seems to me to be more a matter of funky |du, dv|-to-dx/dy ratios... which brings back memories of talk of the R200 not doing proper per-pixel mip LOD selection.

I totally agree with this. The 45-degree rolled shot should have no effect on the mip-map or aniso decisions whatsoever. Whether rip-mapping or true anisotropic filtering is used, the roll axis should be fine. What could possibly be causing this? You're probably right that the du, dv, dx, and dy quantities and ratios are not handled properly by the R200 hardware.

The good thing is that nearly all games use vertical or horizontal surfaces for the objects that really need anisotropic filtering. Even without trilinear filtering, I think ATI's method makes the mip-map boundaries mostly unnoticeable, because the mip maps are just downsampled textures anyway, so you can calculate the texel for the next mip level by simply sampling the current texture at adjacent places. It's not even close to the mip-map banding you get with plain bilinear filtering.
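(As a rough sketch of that last idea - purely illustrative, not ATI's actual hardware path - a texel of mip level n+1 can be approximated on the fly by box-filtering the 2x2 block of level-n texels it covers, so both "trilinear" taps can come from the same texture:)

Code:

/* Approximate the texel of mip level n+1 at (x, y) by averaging
   the 2x2 block of level-n texels it covers.  'tex' is a single-
   channel level-n image of width w (hypothetical layout). */
static float next_mip_texel(const float *tex, int w, int x, int y)
{
    const float *row0 = tex + (2 * y)     * w;
    const float *row1 = tex + (2 * y + 1) * w;
    return 0.25f * (row0[2 * x] + row0[2 * x + 1] +
                    row1[2 * x] + row1[2 * x + 1]);
}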
 
OpenGL texture minification for nearest/bilinear/trilinear filtering works like this:

(u, v) = texture coordinates
(x, y) = screen coordinates

(I omitted some scaling.)

rho = max( sqrt((du/dx)^2+(dv/dx)^2), sqrt((du/dy)^2+(dv/dy)^2) )
lambda = log2(rho)

Rho and lambda are calculated per pixel, but approximations that are less computationally expensive are allowed. The integer part of lambda+K is used for mipmap selection, and the fractional part for weighting the two mipmaps in trilinear. (I won't describe exactly what K is; it's just a small constant.)
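(A minimal sketch of that selection step in C - the derivative inputs and the lodBias parameter standing in for K are illustrative assumptions, not any particular hardware's scheme:)

Code:

#include <math.h>

/* Standard trilinear LOD selection from the per-pixel derivatives.
   lodBias plays the role of the small constant K above. */
void trilinear_lod(float du_dx, float dv_dx, float du_dy, float dv_dy,
                   float lodBias, int *level, float *blend)
{
    float rho = fmaxf(sqrtf(du_dx * du_dx + dv_dx * dv_dx),
                      sqrtf(du_dy * du_dy + dv_dy * dv_dy));
    float lambda = log2f(rho) + lodBias;
    *level = (int)floorf(lambda);      /* integer part: mip pair    */
    *blend = lambda - floorf(lambda);  /* fraction: trilinear blend */
}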

One 'natural' way to extend that idea for anisotropic filtering would be to calc another value (I just changed a 'max' to 'min'):

rho2 = min( sqrt((du/dx)^2+(dv/dx)^2), sqrt((du/dy)^2+(dv/dy)^2) )
lambda2 = log2(rho2)

Use lambda2 for mipmap selection, and rho/rho2 is the level of anisotropy.

Too bad that it doesn't work at all angles. It gives the same kind of errors as shown in this thread. But I'd guess this is what is done, possibly with some approximation of sqrt((du/d*)^2+(dv/d*)^2).
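(Sketched out, the min/max heuristic looks like this - again with illustrative names - and the final comment shows exactly how the 45-degree roll defeats it:)

Code:

#include <math.h>

/* The max/min extension: mip level from the minor footprint extent,
   degree of anisotropy from the major/minor ratio. */
void aniso_minmax(float du_dx, float dv_dx, float du_dy, float dv_dy,
                  float *lambda2, float *aniso)
{
    /* Footprint extents along the screen x and y axes. */
    float lx = sqrtf(du_dx * du_dx + dv_dx * dv_dx);
    float ly = sqrtf(du_dy * du_dy + dv_dy * dv_dy);

    float rho  = fmaxf(lx, ly);
    float rho2 = fminf(lx, ly);

    *lambda2 = log2f(rho2);
    *aniso   = rho / rho2;

    /* On a ground plane rolled 45 degrees on screen, lx == ly even
       though the true footprint is a long diagonal ellipse, so aniso
       collapses to 1 and we fall back to ordinary mip mapping --
       the artifact seen in the rotated road demo. */
}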

What should be done is to set rho and rho2 to the larger and smaller singular values (not eigenvalues, which can be complex for a general 2x2 matrix) of the matrix:
[ du/dx du/dy ]
[ dv/dx dv/dy ]

While I understand that they don't want to do it exactly that way, I still hope they can find a better approximation than what they use now.
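(For a 2x2 matrix the singular values have a cheap closed form - sketch below, with illustrative names; the question is whether hardware wants to spend even this much per pixel:)

Code:

#include <math.h>

/* Major/minor footprint extents as the singular values of the
   Jacobian J = [a b; c d] = [du/dx du/dy; dv/dx dv/dy].
   Singular values are the square roots of the eigenvalues of J^T J. */
void jacobian_extents(float a, float b, float c, float d,
                      float *rho, float *rho2)
{
    float q    = a * a + b * b + c * c + d * d;  /* trace(J^T J) */
    float det  = a * d - b * c;                  /* det(J)       */
    float disc = sqrtf(fmaxf(q * q - 4.0f * det * det, 0.0f));

    *rho  = sqrtf(0.5f * (q + disc));  /* major footprint axis */
    *rho2 = sqrtf(0.5f * (q - disc));  /* minor footprint axis */
}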
 
Sharkfood

If you're having problems with Sharkfood's pictures, try refreshing them until they work. Worked for me :) Or maybe download the pictures manually.

Have to say that Sharkfood's Radeon 8500 seems better at both 2x and 4x and at trilinear, although it's hard to be sure from one small picture...
 
MikeC:
Were you expecting some blurring like on the road in the texture filtering demo? The difference here is that one of the screen axes (y) is parallel to a texture axis that isn't compressed much. So, if I may refer to my previous post, you still get a rho2 that is significantly smaller than rho.

Furthermore, if you're looking for blurriness in the vertical lines above the fire, then you're looking at lines that go in the wrong direction to reveal errors from lack of anisotropy.
 
But it does go to show that, be it rip mapping or whatnot, it's better than nothing :) And pretty darn good in some cases.
 
Basic said:
MikeC:
Were you expecting some blurring like on the road in the texture filtering demo?

Yes. But I'll have to read this thread closely to determine why.

Basic said:
The difference here is that one of the screen axes (y) is parallel to a texture axis that isn't compressed much. So, if I may refer to my previous post, you still get a rho2 that is significantly smaller than rho.

Now I know why I decided not to pursue a major in math after taking linear algebra :)

If I may ask, how were you able to determine that "one of the screen axes (y) is parallel to a texture axis that isn't compressed much"? Was it from looking at the image, or was there some other underlying methodology? How would the scene have to change in order to see the differences between the anisotropic filtering methods?
 
To find a visible difference in aniso methods between the 8500 and the GF3/4 you need to stand next to a wall, look straight up, and turn so that the wall runs diagonally across the screen. The orientation of the texture makes no difference.
 