Anisotropic Debate

Now I know why I decided not to pursue a major in math after taking linear algebra.

You should think the opposite way. With that major in math, you would have understood how simple it really is. :)


Everything is much clearer if you understand the meaning of what I wrote in my first post, so I'll give a more geometrical explanation.

Imagine that you project the screen pixel grid back to the texture (yes, that's backwards), then Lx=sqrt((du/dx)^2+(dv/dx)^2) (that occurred a few times in my first post) will be the length of one side of a pixel in texture space.
Ly=sqrt((du/dy)^2+(dv/dy)^2) is the length of the other side. If you don't like the maths, just think of it as this projection instead.
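For concreteness, here is how those lengths come out in code (a small Python sketch of the formulas above, not any particular hardware's implementation):

```python
import math

def pixel_footprint(du_dx, dv_dx, du_dy, dv_dy):
    """Lengths of the two sides of a screen pixel projected into texture
    space. du_dx etc. are the derivatives of the texture coordinates
    (in texels) with respect to screen x and y."""
    lx = math.sqrt(du_dx**2 + dv_dx**2)
    ly = math.sqrt(du_dy**2 + dv_dy**2)
    return lx, ly

# Example: a ground plane seen at a grazing angle. Moving one pixel
# sideways crosses 1 texel, moving one pixel down crosses 8 texels.
lx, ly = pixel_footprint(1.0, 0.0, 0.0, 8.0)
print(lx, ly)  # 1.0 8.0
```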

The idea is now to select a small enough mipmap that the back projected pixel covers only one texel. Here are some examples of how back projected pixels look in texture space:
antialiasingstretch.gif

If there's problem loading the image, try this:
http://hem.passagen.se/basic3/fora/beyond3d/antialiasingstretch.gif

The quad represents one pixel; Lx and Ly are the lengths of the red arrows at the quad's edges.

The first image could be from pcchen's filtering demo when looking at the road at a 'good' angle. Notice that Lx is small and Ly large => the algorithm understands that it can use a high anisotropy. Also notice that the pixel is extended along one of the texture's coordinate axes => rip-mapping is possible.

The second image could be from the 45º rotated view. This time both Lx and Ly are large even though the projected pixel is quite stretched => the algorithm fails to detect the anisotropy. Also notice that the pixel is still stretched along one of the texture coordinate axes => if a better algorithm to detect the anisotropy were used, rip-mapping would still work.

The third image could be from Bambers' example, far away on the ground. Lx is small and Ly large => the algorithm understands the high anisotropy. But the pixel is stretched diagonally wrt the texture coordinates => rip-mapping wouldn't work.

The fourth image is from the area above the fire in your example. (The methodology was to look at the image, and imagine a back projected pixel.) Same conclusion as for image three (except that Lx is large and Ly is small). And as Bambers just hinted, I should've used a different formulation in what you quoted. As soon as we've seen that rip-mapping isn't used, the texture orientation doesn't matter.


In short, if you want to see:
- whether it's rip-mapping or not, compare cases one and three.
- whether they detect anisotropy in a good way, compare cases one and two.

In any case, the effects are most visible if the texture has lines in the same direction as the pixels are stretched, or in other words away from you.
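To make the comparison concrete, here is one way a mip level and an anisotropy ratio could be derived from Lx and Ly (my own illustrative sketch; real hardware differs in detail):

```python
import math

def mip_and_ratio(lx, ly, max_aniso=16.0):
    """Illustrative mip/anisotropy selection from the footprint sides."""
    longer = max(lx, ly)
    shorter = max(min(lx, ly), 1e-6)       # avoid division by zero
    ratio = min(longer / shorter, max_aniso)
    # Choose the mip level so that detail along the long side is
    # recovered by taking ~ratio samples instead of blurring it away.
    level = max(0.0, math.log2(longer / ratio))
    return level, ratio

# Case one: Lx small, Ly large -> high ratio and a sharp mip level.
print(mip_and_ratio(1.0, 8.0))  # (0.0, 8.0)
# An isotropic selector keyed off max(Lx, Ly) would instead pick
# level log2(8) = 3, i.e. a much blurrier mipmap.
```

Case two is then the failure mode where both Lx and Ly come out large, so the computed ratio is near 1 even though the true footprint is stretched.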

Edit:
Some problems with the image.
 
Basic,

Bambers posted a shot on the previous page that shows rip-mapping might not be what the 8500 is doing; what are your thoughts on this? I know ATI uses some form of adaptive anisotropic filtering, so is that the result we're seeing here?
 
I believe that rip-mapping is a hack that is rather useless except when you're guaranteed to look in one direction. This could work for the road in a driving sim, or the runway in a flight sim. But in more free environments, it wouldn't help that much.

The picture Bambers posted looked like there was no rip-mapping, but it wasn't a very high contrast texture. It would be easier to see if the floor texture had bright lines running 45º against the texture axes, and you then viewed it along those lines.

My first idea about what ATI was doing (a long time ago) was that they do sampling similar to what nVidia does: taking multiple bilinear samples along the texture in the direction the pixel is stretched. (nVidia can of course take trilinear samples instead.) But maybe they sample too sparsely, and that way get the speedup at the cost of quality.
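A sketch of that scheme in Python (hypothetical; `texture_sample` stands in for a single bilinear lookup, which is not defined here):

```python
def aniso_sample(texture_sample, u, v, du, dv, num_samples):
    """Average several bilinear samples spaced along (du, dv), the long
    axis of the projected pixel in texture space. Sketch only."""
    total = 0.0
    for i in range(num_samples):
        t = (i + 0.5) / num_samples - 0.5   # offsets in [-0.5, 0.5)
        total += texture_sample(u + t * du, v + t * dv)
    return total / num_samples

# A constant texture averages to itself regardless of sample count:
print(aniso_sample(lambda u, v: 0.5, 0.0, 0.0, 8.0, 0.0, 4))  # 0.5
```

Sampling more sparsely (fewer samples over the same span) is exactly the speed/quality trade-off speculated about above.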

But I've only got a GF2, and haven't looked much into it. (And don't think I've got time to do it either.)
 
Althorin:
To quote what someone said earlier in this thread :LOL: :
Ahh, the main downfall of rip-mapping,....angle funkiness :)
aniso effect decreases towards 45degrees, then gets to no effect, then goes back up to full at 90degrees, repeat.

Only problem was that the quote was referring to angles in the wrong coordinate system. The plane's angle relative to the screen axes isn't important wrt whether rip-mapping works well. The important angle is between the direction a projected pixel is stretched and the texture axes.

The ground in Bambers' image was viewed at the bad angle for rip-mapping, but still looked rather good. Or did you mean that you don't think it looks good? Then I agree that it would be clearer with a texture with better contrast.
 
Bambers said:
To find a visible difference in aniso methods between the 8500 and gf3/4 you need to stand next to a wall, look straight up and turn so that the wall is going diagonally across the screen. The orientation of the texture makes no difference.

hehe... now, i dunno if that was said in jest or not, but if it was meant seriously, then all I have to say is.... I'll take ATI's implementation any day.

It takes less than 1/3 the speed hit vs. doing it the "proper" way as Nvidia does... and yet for the majority of cases it looks as good or better.

However, i'd temper that with the fact that I would like to have the Option to use the "proper" method if the performance hit wasn't prohibitive in a particular app. (my minimum resolution for game playing is 1024x768... even with really great AA, anything below that is unacceptable).
 
well if you can see Shark's pictures you can see (for that small section anyway) that the aniso is doing a great job, and I concur with Nil: Shark's pictures look sharper at all degrees of aniso. This is GF2 aniso though, rather than GF3/4, right?
 
Something has been puzzling me about the Radeon 8500s anisotropic filtering settings for a couple of days now...

In tweaking programs, such as Radeonator, you can set the level of filtering from 2x all the way up to 128x. It's generally assumed that the "64x" (for example) refers to the maximum number of samples taken, or in other words, number of taps.

The R200 doesn't do trilinear anisotropic filtering - if that's the case, then how does it do 128 or even 64 taps? The setting controls the level of anisotropic samples, yes? What is the maximum number of taps per aniso sample it can take?
 
The 128x notation is not 128-tap, it's a maximum level of anisotropy of 128; the registry key in question is OGLMaxAnisotrophy. If implemented the nVidia way, 128x anisotropy would be the same as 1024-tap. The reality is that the Radeon supports up to 16x anisotropy (not to be confused with 16-tap), but when this key is set to something higher some driver trick kicks in. What the driver is doing I don't know, but turning it up higher than 16 does have an effect.
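The tap arithmetic behind those numbers can be sketched like this (illustrative only; "trilinear" here means 8 taps per sample across two mip levels, bilinear means 4):

```python
def tap_count(aniso_level, trilinear=True):
    """Taps for one lookup: one filtered sample per unit of anisotropy,
    each costing 4 taps (bilinear) or 8 taps (trilinear)."""
    return aniso_level * (8 if trilinear else 4)

print(tap_count(128, trilinear=True))   # 1024: the nVidia-style figure
print(tap_count(128, trilinear=False))  # 512: bilinear-only
print(tap_count(16, trilinear=False))   # 64: the Radeon's real 16x maximum
```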
 
Ichneumon said:
Bambers said:
To find a visible difference in aniso methods between the 8500 and gf3/4 you need to stand next to a wall, look straight up and turn so that the wall is going diagonally across the screen. The orientation of the texture makes no difference.

hehe... now, i dunno if that was said in jest or not, but if it was meant seriously, then all I have to say is.... I'll take ATI's implementation any day.

I think you're missing the point. This is Ichneu's recommended way to look for the difference, i.e. the easiest way that he/she knows of and probably the way that shows the difference the clearest. This does not mean you won't notice it in other cases; it's just that it might not be so clear, or it may not be in all scenes, or may only occur in a specific game or whatever. What I'm saying is it probably is noticeable in other games (especially non-FPSes) to some extent (although how much is a debate which will probably never be solved).

If you still don't get it, go look at my artifacts post. You will see lots of suggestions which involve running the card for a very long period of time. Normally, I'm unlikely to ever do this; however, these methods are probably the simplest and easiest way and would produce the most visible artifacts. If I had to go to such extremes to get artifacts, I wouldn't worry, but the fact is I don't have to. I will likely get artifacts in other cases, but they may not be so noticeable. When I'm overclocking my card, I generally want a way which is easy and virtually guaranteed to produce the desired results...
 
Nil, I don't see the link here. We are talking about aniso quality; what has that to do with artifacts generated by heat & overclocking?
 
Humus - just to clarify what you are referring to when you say "level of anisotropy". Is this just a driver setting a la NVIDIA (e.g. level 8 is 64-tap anisotropic trilinear filtering) or is it an actual measurement of the amount of anisotropy?

Do you have any more information concerning the number of sample points taken during anisotropic filtering on the R200? You mentioned the change after 16x - would this be an alteration of the sampling pattern and/or the number of samples taken?

Edit: something else I've just thought of, Humus: since the R200 can't do trilinear anisotropic filtering, if the card does the setting the same way as an NVIDIA card does, then it would only be 512 taps and not 1024.
 
"Level of anisotrophy" is the ratio length/width of the sampling window, it should be selected by hardware on a perpixel basis but is constrained by a maximum level of anisotrophy which can either be set by the application or by the user in the driver, in this case by setting a value to the OGLMaxAnisotrophy setting. With a level of anisotrophy of 2 you thus get a 2x4 sampling window, that's 8tap, or 16tap with trilinear. Because of this and since the level of anisotrophy varies per pixel the tap notation doesn't really make any sense. The only real measurement of anisotropic filtering is thus the maximum level of anisotrophy, even though it doesn't garantuee any kind of quality or implementation.

I don't know though how the 8500 does its sampling, nor do I know what driver trick kicks in after 16x, but some ATi guy confirmed some time ago that it was something done by the driver, at least I think so.

And yes, if it did it the same way as nVidia cards, but still only did bilinear with anisotropic, then it would only be 512-tap.
 
I don't know though how the 8500 does its sampling, nor do I know what driver trick kicks in after 16x, but some ATi guy confirmed some time ago that it was something done by the driver, at least I think so.

In Serious Sam the 32/64/128 levels are visible, but if any of them are selected and applied it drops back to 16x.
 
Yeah, if you ask the driver what maximum level of anisotropy it supports it'll say 16. I'm certain that if you try to set it to anything above 16 and then query the current value it'll still say 16, even if any driver tricks have kicked in.
 
Well, not like I want to kick a dead horse, but I saw something in SS:SE last night that reminded me of the program that we used to look at the rip-mapping. Anyways, check these out:


No rotation:
ansi_norot_SSSE.jpg


Then when the floor moved:
ansi_rot_SSSE.jpg


And yes, I was too much of a Nancy boy to stay on the moving platform, so I went to the door and fought them there. Not 100% sure of the level, but it's one at the first part of SS:SE...

Anyways, I noticed it going nice, blurry, nice, blurry, just like in that texture filtering demo :)
 
Good example jb

I found that map. Anyone wishing to try this: load up SS:SE, One Player, Custom Level and load "The Pit" - the rotating floor room is about 6 or 7 rooms past the level start in the water (just set 'please god' and chainsaw your way through). It's the room right after the bouncy headbomber room.

It appears to be the same with D3D as well as OGL.
 
Yea, just thought it was neat. Again, I would prefer it to stay clear all the time, but it's good enough for me until the next gen comes around :)
 
Yes, that was a good example; on my GF3 the floor stays sharp all the time. I hope ATI finds a better method in the future without the performance hit that the nVidia cards have.
 