Radeon 8500 Aniso vs Geforce 4 Aniso

Yeah, I checked, and it's still doing aniso on isotropic surfaces. Really disappointing...

It's almost as if nVidia's using the same aniso for all surfaces that are either oblique-angled, or face-on.

As a side note, it should be obvious (To anybody with some knowledge of vector math...) that the anisotropic quality should be based on a dot product between a vector normal to the surface, and a vector pointing straight out toward the screen. This way of selecting aniso LOD would result in no preference over any angle.

However, the way that OpenGL and Direct3D tell the card to do aniso is very different. They just look at the height/width ratio of the texture footprint. This makes it very hard to check for oblique-angled surfaces, and thus very easy to do aniso like ATI does it.
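
Just to make that idea concrete, here is a rough Python sketch of the dot-product selection described above (the function name, the clamp, and the max_aniso value are my own illustrative choices, not anything a real driver exposes):

Code:
import math

def aniso_degree_from_dot(surface_normal, view_dir, max_aniso=8):
    # Both vectors are assumed to be unit length; view_dir points from the
    # eye toward the pixel being shaded.
    cos_angle = abs(sum(n * v for n, v in zip(surface_normal, view_dir)))
    if cos_angle < 1.0 / max_aniso:      # grazing angle: clamp to the maximum
        return float(max_aniso)
    # Face-on (cos = 1) gets 1 sample; the more oblique, the more samples.
    return 1.0 / cos_angle

print(aniso_degree_from_dot((0.0, 0.0, 1.0), (0.0, 0.0, -1.0)))   # face-on -> 1.0
print(aniso_degree_from_dot((0.0, math.sin(math.radians(60)), math.cos(math.radians(60))),
                            (0.0, 0.0, -1.0)))                    # tilted 60 degrees -> 2.0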
 
Chalnoth said:
As a side note, it should be obvious (To anybody with some knowledge of vector math...) that the anisotropic quality should be based on a dot product between a vector normal to the surface, and a vector pointing straight out toward the screen. This way of selecting aniso LOD would result in no preference over any angle.
Not quite right. Consider a scenario with a textured cube in the upper-left corner of the screen, with the front side lined up parallel to the near/far clip planes. Obviously, three sides of the cube will be visible (the bottom and right side visible due to perspective). Also obviously, the front face should be rendered without any form of anisotropic mapping. Now, by 'vector pointing straight out toward the screen', do you mean a vector pointing from the eye to the center of the screen, or a vector pointing from the eye to the individual pixel? In the former case, you get a dot product of about 0 for the visible bottom and right faces of the cube, indicating infinite (or in any case much too large) anisotropy where the correct result is rather finite. In the latter case, you get as a result that the front side of the cube requires anisotropic mapping, whereas the correct result would be 100% isotropic.

A better method is this: Consider the pixel on the screen to be a circular disc. Now project this circle onto the texture that you wish to apply - the result will be an ellipse in texture space. Now, find the major axis of this ellipse in the texture space, project this axis back to screen space, and do multiple texture samples along this axis.
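
Since the projection of the pixel disc is (locally) a linear map, a small sketch of that ellipse construction can be built from the four texture-coordinate derivatives; everything here (function name, example Jacobian) is just my own illustration of the idea:

Code:
import numpy as np

def footprint_ellipse(du_dx, du_dy, dv_dx, dv_dy):
    # The Jacobian maps a step in screen (x, y) to a step in texture (u, v),
    # so the unit pixel circle maps to an ellipse in texture space.
    J = np.array([[du_dx, du_dy],
                  [dv_dx, dv_dy]])
    # Singular values of J are the ellipse semi-axis lengths; the left
    # singular vectors are the axis directions in (u, v) space.
    U, s, _ = np.linalg.svd(J)
    return s[0], s[1], U[:, 0]   # major length, minor length, major axis dir

# Example: an obliquely viewed surface where v changes ~4 texels per pixel
# but u only ~1, giving roughly 4:1 anisotropy along (almost) the v axis.
major, minor, axis = footprint_ellipse(1.0, 0.0, 0.3, 4.0)
print(major, minor, axis)

The ratio major/minor suggests how many samples to take along the major axis, while the minor length would drive the mipmap level used for each of those samples.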
 
arjan de lumens said:
A better method is this: Consider the pixel on the screen to be a circular disc. Now project this circle onto the texture that you wish to apply - the result will be an ellipse in texture space. Now, find the major axis of this ellipse in the texture space, project this axis back to screen space, and do multiple texture samples along this axis.

Yes, that would work...but the dot product system would work as well. Don't forget that the vector from the 'eye' to a surface point and the vector from the 'eye' to that point's pixel representation on the screen should be one and the same (the 'eye' should not be the center of the screen, but instead some imaginary point set back from the screen). Additionally, as long as correct perspective is used, any isotropic surfaces on the screen would be seen in the math to be directly facing the 'eye.'

The problem with the ellipse method is that it requires an extra transformation on a per-pixel basis, a transformation equivalent to transforming at least one vertex without any lights (The ellipse case requires even more...). There may be simplifications based on doing this on a per-fragment basis...
 
Chalnoth said:
Yes, that would work...but the dot product system would work as well. Don't forget that the vector from the 'eye' to a surface point and the vector from the 'eye' to that point's pixel representation on the screen should be one and the same (the 'eye' should not be the center of the screen, but instead some imaginary point set back from the screen).
OK, so the vector in question would be the vector pointing from the eye position, set back from the screen, as you say, to the individual pixel.
Additionally, as long as correct perspective is used, any isotropic surfaces on the screen would be seen in the math to be directly facing the 'eye.'
In that case, you are not taking into account that the vector from the eye to the screen is only perpendicular to the screen at the screen center. As a result, a surface directly facing the eye can still map anisotropically to the screen. This effect becomes particularly severe when a very wide field of view is used, like the ~130 or so degrees used in some Parhelia triple-monitor demonstrations.
The problem with the ellipse method is that it requires an extra transformation on a per-pixel basis, a transformation equivalent to transforming at least one vertex without any lights (The ellipse case requires even more...). There may be simplifications based on doing this on a per-fragment basis...
It is computationally intensive, alright - but it is possible to do approximations with good results, and many of the actual calculations can be shared with those already done for per-pixel mipmapping.
 
Yes, I suppose that's true...but it does make me wonder, is this why nVidia's implementation is incorrectly applying aniso too often?
 
DaveBaumann said:
I'm not sure I agree with that. AFAIK the point of anisotropic filtering is to increase texture clarity at acute angles - what's the point of increasing the quantity of texture sampling on non-acute textures, i.e. ones that are just facing the viewport?
What's an acute angle?

I believe the "original point" of AF is to counteract/cure excessive texture blurring with trilin/bilin.
 
Arjan, now that you mentioned it, I'd love to know whether Parhelia's aniso adapts to the intentional oblique angle of the side displays in surround gaming. Any info?
 
Chalnoth said:
Yes, that would work...but the dot product system would work as well. Don't forget that the vector from the 'eye' to a surface point and the vector from the 'eye' to that point's pixel representation on the screen should be one and the same (the 'eye' should not be the center of the screen, but instead some imaginary point set back from the screen). Additionally, as long as correct perspective is used, any isotropic surfaces on the screen would be seen in the math to be directly facing the 'eye.'

Why not just compute the dot product between the normal to the surface and a normal to the screen?
This gives 0 if the surface is parallel to the screen, and 1 if perpendicular. The higher the number, the more samples - at 1, the surface isn't even displayed, so no samples are even necessary.
 
Actually, the best method would be to use the hardware that already exists for determining MIP levels (or, the hardware that would be used if a vendor uses the OpenGL specification recommended method of determining MIP levels): take the partial derivatives of the texture coordinates with respect to window coordinates. This gives you du/dx, du/dy, dv/dx, dv/dy, which you can use to generate a line in texture space for any incident angle with no additional work.
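
For what it's worth, here is a small sketch of that derivative-based selection, roughly following the approximation described in the EXT_texture_filter_anisotropic extension spec (the variable names and the 8x limit are my own):

Code:
import math

def aniso_from_derivatives(du_dx, dv_dx, du_dy, dv_dy, max_aniso=8):
    # Footprint extent along the screen x and y directions, in texel units.
    px = math.hypot(du_dx, dv_dx)
    py = math.hypot(du_dy, dv_dy)
    p_max, p_min = max(px, py), min(px, py)
    # Number of samples to take along the line of anisotropy.
    n = min(math.ceil(p_max / max(p_min, 1e-8)), max_aniso)
    # Step between samples in (u, v), taken along whichever screen axis the
    # footprint is longest - a line of arbitrary slope in texture space.
    if px >= py:
        step = (du_dx / n, dv_dx / n)
    else:
        step = (du_dy / n, dv_dy / n)
    return n, step

print(aniso_from_derivatives(0.1, 0.0, 0.0, 0.9))   # oblique surface: 8 samples
print(aniso_from_derivatives(0.5, 0.0, 0.0, 0.5))   # face-on surface: 1 sample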

The circular disc method would yield the best image quality; however, the added work in projecting the circle onto the plane, and then computing a volume-weighted filter corresponding to the amount of volume each texel constitutes in the cone wouldn't yield very good framerates with current silicon manufacturing processes.
 
gking said:
The circular disc method would yield the best image quality; however, the added work in projecting the circle onto the plane, and then computing a volume-weighted filter corresponding to the amount of volume each texel constitutes in the cone wouldn't yield very good framerates with current silicon manufacturing processes.

umh, wouldn't forward-mapping through hyperbolic interpolation along the major and minor axes of the ellipse basically do the job?
 
Althornin said:
Why not just compute the dot product between the normal to the surface and a normal to the screen?
This gives 0 if the surface is parallel to the screen, and 1 if perpendicular. The higher the number, the more samples - at 1, the surface isn't even displayed, so no samples are even necessary.

Correction: the dot product will be 1 if the (unit) normal vectors are parallel, and 0 when they are perpendicular. Also, this method only works when you are doing parallel projection, not when you are doing perspective division. To see why, consider a scene where you are looking down a long corridor. In this scene, the (eye-space) normal vector to every wall/floor/ceiling is perpendicular to the normal vector of the screen, giving a dot product of 0, thus indicating erroneously that none of the walls/floor/ceiling of the corridor should be drawn at all.
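
To put a number on that counterexample (the vectors are just illustrative):

Code:
# In eye space, a corridor side wall's normal is perpendicular to the
# screen/near-plane normal, so the proposed dot product is 0 no matter how
# much of the wall is actually visible under perspective.
wall_normal   = (1.0, 0.0, 0.0)   # wall facing sideways, into the corridor
screen_normal = (0.0, 0.0, 1.0)   # direction the screen/near plane faces
print(sum(a * b for a, b in zip(wall_normal, screen_normal)))   # -> 0.0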
 
gking said:
Actually, the best method would be to use the hardware that already exists for determining MIP levels (or, the hardware that would be used if a vendor uses the OpenGL specification recommended method of determining MIP levels): take the partial derivatives of the texture coordinates with respect to window coordinates. This gives you du/dx, du/dy, dv/dx, dv/dy, which you can use to generate a line in texture space for any incident angle with no additional work.

That seems to be what the 8500 is doing at the moment and only works optimally on horizontal and vertical surfaces.
 
Both the dot product and the cross product could work, but the dot product ends up being better in this case because you can automatically throw out back-facing surfaces.
 
That seems to be what the 8500 is doing at the moment and only works optimally on horizontal and vertical surfaces

I wouldn't be so sure. If the 8500 is using the partial derivatives (which I'm not sure it does), it is ignoring the smaller component, which is why the 8500 only samples on lines parallel with U or V. Using all the components of the partial derivatives would yield lines in texture space of any slope, not just along the principal axes.

Also, this system generalizes well for 3D and cubic textures, and the R200 can't do MIP mapping or anisotropic filtering on those.

dot product ends up being better in this case because you can automatically throw out back-facing surfaces.

The issue with the dot product is that while it should yield a value corresponding to the number of samples you need, it doesn't indicate the line of anisotropy. If you want to use dot products, you would probably be better off using the surface tangent and surface binormal vectors, rather than the surface normal, since T and B are (normally) defined relative to the U and V texture space axes.
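
One possible reading of that T/B suggestion - strictly my own sketch, not something any shipping hardware is known to do - is to project the per-pixel view direction into the surface's tangent/binormal basis, which directly gives a (u, v) direction for the line of anisotropy, while the normal component still gives the face-on/edge-on measure:

Code:
import numpy as np

def aniso_direction_tb(view_dir, tangent, binormal, normal):
    # Assumes (tangent, binormal, normal) is an orthonormal frame whose T and
    # B line up with the u and v texture axes.
    v = np.asarray(view_dir, dtype=float)
    v = v / np.linalg.norm(v)
    du = float(np.dot(v, tangent))           # view component along the u axis
    dv = float(np.dot(v, binormal))          # view component along the v axis
    cos_angle = abs(float(np.dot(v, normal)))
    length = (du * du + dv * dv) ** 0.5
    if length < 1e-8:                        # face-on: no preferred direction
        return (0.0, 0.0), cos_angle
    return (du / length, dv / length), cos_angle

# A floor seen at a shallow angle: the line of anisotropy runs along v.
print(aniso_direction_tb((0.0, -0.3, -1.0),   # looking slightly downward
                         (1.0, 0.0, 0.0),     # T -> u
                         (0.0, 0.0, -1.0),    # B -> v
                         (0.0, 1.0, 0.0)))    # N points up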

umh, wouldn't forward-mapping through hyperbolic interpolation along the major and minor axes of the ellipse basically do the job?

You would still need to compute the weights for texels that aren't completely contained in the ellipse, if you want to do it properly, and that requires a whole lot more silicon than just taking the arithmetic mean of all samples.
 
arjan de lumens said:
Althornin said:
Why not just compute the dot product between the normal to the surface and a normal to the screen?
This gives 0 if the surface is parallel to the screen, and 1 if perpendicular. The higher the number, the more samples - at 1, the surface isn't even displayed, so no samples are even necessary.

Correction: the dot product will be 1 if the (unit) normal vectors are parallel, and 0 when they are perpendicular. Also, this method only works when you are doing parallel projection, not when you are doing perspective division. To see why, consider a scene where you are looking down a long corridor. In this scene, the (eye-space) normal vector to every wall/floor/ceiling is perpendicular to the normal vector of the screen, giving a dot product of 0, thus indicating erroneously that none of the walls/floor/ceiling of the corridor should be drawn at all.

Of course, you are right on the parallel/perpendicular thing. Stupid typo on my part.
As for the perspective issue - you aren't understanding what I am saying - I mean the normal of the actual surface as it is displayed by the monitor - after perspective is already done on it. I.e., in your long corridor example, those polygons are displayed not perpendicular to the screen, but already perspective-corrected. If you use that surface as your basis for the normal, then it works fine.
 
Althornin said:
Of course, you are right on the parallel/perpendicular thing. Stupid typo on my part.
As for the perspective issue - you aren't understanding what I am saying - I mean the normal of the actual surface as it is displayed by the monitor - after perspective is already done on it. I.e., in your long corridor example, those polygons are displayed not perpendicular to the screen, but already perspective-corrected. If you use that surface as your basis for the normal, then it works fine.

Makes me wonder in what kind of space you would compute that normal vector. In the corridor example, the walls would come closer and closer to being perpendicular to the screen towards the far end of the corridor, so your normal vector would have to change across the surface of the wall from the near end to the far end of the corridor. Hmmm - how would you compute such a normal vector?

Also, the entire dot product method fails badly when used on a stretched or sheared texture map - in such a case, you should be doing anisotropic mapping even on a surface parallel to the screen for correct result, but the dot product method would tell you not to.
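
To see that numerically (made-up derivative values, purely for illustration):

Code:
# A quad exactly parallel to the screen, but with its texture stretched so
# that u changes four times faster than v across the screen. The
# normal-vs-screen dot product is 1 ("fully isotropic"), yet the per-pixel
# texel footprint is 4:1, so correct filtering still needs anisotropic samples.
du_dx, dv_dx = 4.0, 0.0   # texels of u per pixel step in x
du_dy, dv_dy = 0.0, 1.0   # texels of v per pixel step in y
print(max(du_dx, dv_dy) / min(du_dx, dv_dy))   # -> 4.0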
 
Comparing the shots I see the texture aliasing you guys pointed out, but my personal opinion is that the R8500 looks much better. It looks much richer and more vivid, almost alive, while the GF one looks dead.

I see the same in Chalnoth's UT images; the one showing texture aliasing looks much more alive. Actually, it's especially apparent in those. Check out the face of the small pillars on the right side, especially the first one not covered by the gun. In my opinion it looks much better.

Why isn't there a comparison between nVidia and ATI both using 8x aniso? If that is what people say is causing the seemingly better detail in the Radeon shot, maybe it is also causing the aliasing.

And as for which implementation is "right", I see no conclusive answer. I prefer the look of the Radeon shots in this case (I have neither card to try myself). And seeing that it also has a roughly 5 times smaller performance hit, I would prefer buying a Radeon (it's cheaper too).
 
Texture aliasing is much less noticeable in static screenshots than in motion. Therefore, you need to see it in motion to decide which method is better for you.

Personally, I prefer GF3/4's anisotropic because the Radeon 8500 has some problems with sloped planes. Most games do not have too many sloped planes, but flight simulators do.

IMHO the computation of anisotropic filtering will become better in future products. Many things in real-time 3D graphics have gone through a similar development. For example, early 3D chips had no per-pixel mipmapping, or weird mipmap LOD selection. Now per-pixel mipmapping is normal, and LOD selection is less weird :) Some early 3D chips faked trilinear (linear_mipmap_linear) by dithering. Now that is rare. Therefore, I think that when there is enough transistor budget and bandwidth, IHVs will implement good anisotropic filtering in their 3D chips.
 