Can someone sum up the ATI filtering thread please?

If you are the proud owner of a Radeon 9600/X800, you can download the utility attached in the article above, ixbt-filter.rar, and explore the difference in filtering for yourself, both in static screenshots and in motion. If you need help spotting the differences, just press "D" and the program will help you.

Yes, I'm glad a program can tell me the difference. Sorry to say, I won't be running this program while I'm playing a game, right? Which means it's my eyes that must see the difference, and I can't. With earlier GeForce FX drivers I could, however, and to a small degree still can.

Which is why I really don't understand this witch hunt.

In motion you can't see any difference while playing a game.
 
Clootie said:
Yep, it's the same brilinear, switched on/off by the driver per texture. The driver tries to detect whether a texture's mip levels are different enough, and if they are, it switches off the optimizations. Another thing to note: current drivers (up to Cat 4.5) use brilinear only on DirectX MANAGED textures.

In comparison to NVIDIA:
1) The linear interpolation between MIP levels takes place in a different space (before the LOG(...)) => NV does it "by the book", ATI saves transistors.
2) ATI's filtering always uses MORE data from the HIGH-detail mipmaps compared to NVIDIA (in both standard and optimized modes) - so better screenshots, but it should have more shimmering in motion.
"Should have"? Either it does or it doesn't. It's possible to take samples from higher detail mipmaps and not shimmer you know.
jvd said:
Right, but I've told you there is no difference in motion over a 9700 Pro.
If you are the proud owner of a Radeon 9600/X800, you can download the utility attached in the article above, ixbt-filter.rar, and explore the difference in filtering for yourself, both in static screenshots and in motion. If you need help spotting the differences, just press "D" and the program will help you. :LOL:
Note that by default the program multiplies the differences by 64! Pretty extreme. Since it appears that the driver is looking for visual differences (not differences that are visible after multiplying by 64), maybe it should default to 1. With a difference scale of 1, I can see a very faint "ghost" image in the difference mode. For all intents and purposes, the two images are identical to the naked eye.

-FUDie
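For reference, a difference mode like the one in the utility presumably amounts to something like the sketch below (an assumption based on the description; I haven't seen the ixbt-filter source):

```python
# Per-pixel absolute difference between two screenshots, amplified by a
# user-controlled scale and clamped back to displayable 0..255.
import numpy as np

def difference_image(a, b, scale=64):
    """a, b: HxWx3 uint8 screenshots; returns the amplified difference."""
    diff = np.abs(a.astype(np.int16) - b.astype(np.int16))
    return np.clip(diff * scale, 0, 255).astype(np.uint8)

a = np.full((2, 2, 3), 120, np.uint8)
b = np.full((2, 2, 3), 121, np.uint8)   # one LSB apart: invisible by eye
print(difference_image(a, b, scale=1)[0, 0])   # [1 1 1]   -> faint ghost
print(difference_image(a, b, scale=64)[0, 0])  # [64 64 64] -> obvious
```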
 
My understanding of texture filtering is rudimentary at best, but shouldn't sampling from the more detailed MIPmap lead to less shimmering and blurriness as the screen resolution increases? Is the shimmering (as) noticeable at 12x9/12x10 or 16x12?

I'm looking forward to an English translation of that article, Clootie.
 
Pete said:
My understanding of texture filtering is rudimentary at best, but shouldn't sampling from the more detailed MIPmap lead to less shimmering and blurriness as the screen resolution increases? Is the shimmering (as) noticeable at 12x9/12x10 or 16x12?
If you disable mipmapping, i.e. sample from the base texture all the time, then shimmering becomes very apparent. This is due to undersampling. Basically, anytime you undersample, you can be vulnerable to shimmering.

-FUDie
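A quick way to see the undersampling effect numerically (a toy sketch, nothing GPU-specific):

```python
# A fine stripe pattern point-sampled below its Nyquist rate folds into
# a bogus low frequency -- the root cause of texture shimmering.
import math

def sample_stripes(stripe_freq, sample_rate, n=8):
    # Point-sample sin(2*pi*f*x) at x = k / sample_rate.
    return [round(math.sin(2 * math.pi * stripe_freq * k / sample_rate), 2)
            for k in range(n)]

print(sample_stripes(stripe_freq=9, sample_rate=10))  # 9 cycles/unit...
print(sample_stripes(stripe_freq=1, sample_rate=10))  # ...looks like 1
# The two sample sets differ only in sign: at 10 samples/unit, 9-cycle
# stripes are indistinguishable from 1-cycle stripes. As the sampling
# phase drifts frame to frame (camera motion), the alias crawls around.
```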
 
FUDie said:
"Should have"? Either it does or it doesn't. It's possible to take samples from higher detail mipmaps and not shimmer you know.
Not without taking more samples.
 
Chalnoth said:
FUDie said:
"Should have"? Either it does or it doesn't. It's possible to take samples from higher detail mipmaps and not shimmer you know.
Not without taking more samples.
It also depends on the data you are sampling. Some data can withstand (some) undersampling, other data cannot. Even bilinear filtering can "shimmer" since it's not a perfect reconstruction filter.

-FUDie
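To illustrate that last point (a toy sketch; "shimmer" here is just the reconstructed amplitude changing as the sample positions shift):

```python
# Even with no undersampling, a tent (bilinear) filter is not a perfect
# reconstruction filter: the output amplitude depends on the sampling
# phase, so a moving camera sees the pattern pulse.
import math

def tent_reconstruct(texels, x):
    """Linear interpolation between adjacent texels at position x."""
    i = int(math.floor(x))
    t = x - i
    return texels[i] * (1 - t) + texels[i + 1] * t

# A sine at half the Nyquist limit, baked into texels.
texels = [math.sin(2 * math.pi * 0.25 * i) for i in range(16)]
for phase in (0.0, 0.25, 0.5):
    vals = [tent_reconstruct(texels, i + phase) for i in range(8)]
    print(f"phase {phase}: peak {max(abs(v) for v in vals):.3f}")
# Prints peaks of 1.000, 0.750, 0.500: the amplitude swings by half as
# the phase shifts. Ideal sinc reconstruction would hold it constant.
```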
 
FUDie said:
It also depends on the data you are sampling. Some data can withstand (some) undersampling, other data cannot. Even bilinear filtering can "shimmer" since it's not a perfect reconstruction filter.

-FUDie
Sure, but you can't assume that for a game situation. Your filtering technique should be able to handle any data you throw at it.
 
Chalnoth said:
FUDie said:
It also depends on the data you are sampling. Some data can withstand (some) undersampling, other data cannot. Even bilinear filtering can "shimmer" since it's not a perfect reconstruction filter.
Sure, but you can't assume that for a game situation. Your filtering technique should be able to handle any data you throw at it.
Isn't that why ATI is doing texture analysis? Have we seen evidence that aliasing is now worse than it was? So far, all I have seen are static screenshots from a very difficult texture. Yes, there does appear to be some moiré, but there was some to begin with anyway. Look at the shots Dave provided: all three cards had aliasing with the provided texture. It's not obvious to me that the R420 result would be any more apparent in motion, given that the differences are so small.

-FUDie
 
Thanks, FUDie, but does a resolution of 16x12 begin to remove undersampling as a problem, or will current "ultra high" textures still shimmer? AFAIK, the point is to be at or below a 1:1 [pixel:texel?] ratio, and I'd have to guess that we're approaching that at 16x12. Or maybe just switching from high- to medium-quality textures will solve this?

Sorry for my total lack of filtering knowledge. I'm just tackling RTR (and Nyquist, box filters, etc.) now. =)
 
FUDie said:
Note that by default the program multiplies the differences by 64! Pretty extreme. Since it appears that the driver is looking for visual differences (not differences that are visible after multiplying by 64), maybe it should default to 1.
-FUDie
I wonder if they'll make a benchmark that multiplies the differences by 64... 1 fps faster equals 64 fps faster, cool. :)
 
Here’s a summary regarding ATI’s adaptive-Trilinear …
… Our testing, at least, has indicated that the texture filtering quality of the X800 series was always the same or better than the Radeon 9800 series, which does not use this adaptive technique. And we encourage anyone else who wishes to do their own testing and their own analysis, and see if they can verify the results. …


That quote is from the TR interview, and this page 6 quote sums things up nicely…
TR…

TR: We've recently learned that the X800 series, and in fact the 9600 as well, uses an adaptive algorithm for trilinear filtering. We've also seen that it applies fuller filtering to colored mip maps. Why?

Nalasco: We use an algorithm that performs an analysis on texture maps and mip map levels to determine what type of filtering is necessary to produce the ideal image quality while producing the best performance. The idea is that you don't want to do any more work than is necessary to produce the best quality image. This is the heart of any adaptive algorithm. The amount of filtering that you need to do to achieve that will vary depending on the characteristics of the particular texture that you're working on.
To talk about kind of endpoint cases, if you had a texture where all of your mip levels looked exactly the same, there would be no benefit whatsoever to doing trilinear filtering, and pure bilinear would accomplish the exact same effect.
To go to the other case, if you have a very large difference between adjacent mip map levels, such as when you have colored mip maps, then nothing less than trilinear filtering, where you take eight texel samples for all pixels, will deliver the ideal quality image. Now, most of the textures you see in actual games fall somewhere in between those two extremes, and depending on the characteristic of the texture, you can get the ideal image by using something that lies in between using the full number of samples for each pixel and using something less.

TR: From what I've read over the last few days, there are cases where the algorithm reverts to "legacy trilinear." Two examples are the cases of colored mip maps and dynamically generated textures. Can you offer some examples of where legacy trilinear is used, or is this not a correct reading of the way that the algorithm works?

Nalasco: Unfortunately, we can't give specific details about the way that the algorithm works because it's a proprietary software algorithm and we have to protect our intellectual property. There's currently a patent pending on the technology. However, the general idea is that if we can analyze the texture and determine that we can eliminate the visible boundary between adjacent mip map levels using a given number of samples per pixel, then we will implement a filtering technique which seeks to minimize the number of samples that we need to take to achieve that effect.
What that means is that if you look at textures that have a lot of very fine detail, an example would be a texture that has alternating single-texel bands of color, that's an example of a texture that's not a colored mip map but is very difficult to make look good using only bilinear filtering. Another example is if you see textures that have narrow lines, like lines on a road that extend off into the distance, this is another case where just pure bilinear filtering looks very bad, and if you don't do something much closer to using the full eight samples per pixel, you'll see visible artifacts.
Our algorithm will use an adaptive technique to, based on the characteristics of the texture, use the necessary amount of samples per pixel to deliver the ideal image quality. Our testing, at least, has indicated that the texture filtering quality of the X800 series was always the same or better than the Radeon 9800 series, which does not use this adaptive technique. And we encourage anyone else who wishes to do their own testing and their own analysis, and see if they can verify the results. But we're very confident in the quality level that we're able to achieve with this algorithm. And of course, because we're minimizing the amount of work done to achieve that quality level, we can offer the best performance with trilinear filtering and anisotropic filtering.
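Nalasco's two "endpoint cases" translate into a simple decision rule. A toy illustration (the difference metric and the thresholds below are invented; ATI's actual, patent-pending analysis is unpublished):

```python
# Pick a filtering mode from how different adjacent mip levels are.
import numpy as np

def mip_difference(finer, coarser):
    """Max abs difference between a level and the next, after
    nearest-neighbor upsampling the coarser one to the finer size."""
    up = coarser.repeat(2, axis=0).repeat(2, axis=1)
    return np.abs(finer - up).max()

def choose_filter(mips, low=8.0, high=64.0):
    """mips: list of float luminance arrays, finest level first."""
    worst = max(mip_difference(a, b) for a, b in zip(mips, mips[1:]))
    if worst < low:
        return "bilinear is enough"   # levels look practically identical
    if worst > high:
        return "full trilinear"       # e.g. colored mipmaps
    return "something in between"     # fewer samples than full trilinear

flat = [np.full((8, 8), 128.0), np.full((4, 4), 128.0)]
colored = [np.full((8, 8), 0.0), np.full((4, 4), 255.0)]
print(choose_filter(flat))     # bilinear is enough
print(choose_filter(colored))  # full trilinear
```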
 
Pete said:
Thanks, FUDie, but does a resolution of 16x12 begin to remove undersampling as a problem, or will current "ultra high" textures still shimmer? AFAIK, the point is to be at or below a 1:1 [pixel:texel?] ratio, and I'd have to guess that we're approaching that at 16x12. Or maybe just switching from high- to medium-quality textures will solve this?
Given the same size texture, higher resolution render targets will improve undersampling since the pixel to texel ratio is higher for more pixels. However, things in the distance can still shimmer since you will eventually hit the threshold where the pixel to texel ratio is too low.

-FUDie
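Some back-of-envelope numbers for Pete's question (the on-screen surface sizes are invented, just to show the ratio arithmetic):

```python
# Texels-per-pixel for the same textured surface at two resolutions.
import math

texture_width = 1024          # texels across an "ultra high" texture
surface_px = {                # pixels the surface spans on screen
    "1024x768": 512,
    "1600x1200": 800,         # same surface, higher resolution
}
for mode, px in surface_px.items():
    ratio = texture_width / px      # texels per pixel
    lod = math.log2(ratio)          # mip level needed to get back to ~1:1
    print(f"{mode}: {ratio:.2f} texels/pixel -> mip level {lod:.2f}")
# 1024x768:  2.00 texels/pixel -> mip level 1.00
# 1600x1200: 1.28 texels/pixel -> mip level 0.36
# Higher resolution pushes sampling toward the detailed levels, but a
# distant or oblique surface can always drive the ratio back up.
```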
 
FUDie said:
Isn't that why ATI is doing texture analysis?
And what do you think ATI is actually doing for texture analysis?

If it was simply high-contrast textures, then the test program that ATI sent to Dave would have "brilinear" disabled for the whole thing. No, it appears that right now, ATI's driver doesn't actually analyze the data that makes up the texture at all. It appears that they just decide whether or not to apply optimizations based upon how the texture is called (i.e. whether or not it's a "managed" texture).
 
Chalnoth said:
FUDie said:
Isn't that why ATI is doing texture analysis?
And what do you think ATI is actually doing for texture analysis?

If it was simply high-contrast textures, then the test program that ATI sent to Dave would have "brilinear" disabled for the whole thing. No, it appears that right now, ATI's driver doesn't actually analyze the data that makes up the texture at all. It appears that they just decide whether or not to apply optimizations based upon how the texture is called (i.e. whether or not it's a "managed" texture).

And have you, Mr. Chalnoth, figured this all out by yourself? :) Amazing how much one can bullshit with half-knowledge.

Why don't you write a simple test program, figure this out yourself, and then come back to comment?

The excuses of not having a 9600 or X800, or of being unable to write a D3D program, don't cut it... if either of these conditions is true, please refrain from posting anything authoritative on this topic.

Have you even looked properly at the images Dave put up...?
 
Chalnoth said:
If it was simply high-contrast textures, then the test program that ATI sent to Dave would have "brilinear" disabled for the whole thing. No, it appears that right now, ATI's driver doesn't actually analyze the data that makes up the texture at all. It appears that they just decide whether or not to apply optimizations based upon how the texture is called (i.e. whether or not it's a "managed" texture).
Yes. I'm sure that's a patent-worthy algorithm right there. :rolleyes:
 
Chalnoth said:
It appears that they just decide whether or not to apply optimizations based upon how the texture is called (i.e. whether or not it's a "managed" texture).

Yeah, right.

Since programs are known to change the texture type when enabling coloured mipmaps... :rolleyes:
 
Well, I suppose they may do it by looking at some min/max brightness algorithm, but then that wouldn't always catch colored MIP maps.

Or, it could simply check to see whether or not the higher MIP levels are a simple box filter of the lower ones, but then that would pretty much only cover colored MIP maps.

I somehow doubt they're doing a Fourier transform of each texture to ensure that they only use it on the ones where aliasing can be avoided, as the texture load time would be pretty darned slow. And even then, the algorithm would be better used for adjusting LOD, not for applying brilinear: if the Fourier components are known, then it should be relatively easy to figure out how much one can shift the LOD before aliasing becomes apparent. (This would be really nice, though: have an optimal way to adjust LOD on a per-texture basis, and supply developer tools to accurately calculate the appropriate LOD for each texture.)
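The box-filter check in the second option is at least cheap to write down. A sketch, assuming the driver gets to look at plain luminance arrays (which is itself an assumption):

```python
# Test whether each mip level is (close to) a 2x2 box filter of the
# level above it; hand-painted levels like colored mipmaps fail.
import numpy as np

def is_box_filtered_chain(mips, tolerance=4.0):
    """mips: float arrays, finest level first."""
    for finer, coarser in zip(mips, mips[1:]):
        h, w = coarser.shape
        expected = finer.reshape(h, 2, w, 2).mean(axis=(1, 3))
        if np.abs(expected - coarser).max() > tolerance:
            return False   # level was not derived by box filtering
    return True

rng = np.random.default_rng(1)
base = rng.uniform(0, 255, (8, 8))
auto = base.reshape(4, 2, 4, 2).mean(axis=(1, 3))   # driver-style level
painted = np.full((4, 4), 255.0)                    # hand-authored level
print(is_box_filtered_chain([base, auto]))     # True  -> safe to reduce
print(is_box_filtered_chain([base, painted]))  # False -> full trilinear
```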
 
FUDie said:
Clootie said:
2) ATI's filtering always uses MORE data from the HIGH-detail mipmaps compared to NVIDIA (in both standard and optimized modes) - so better screenshots, but it should have more shimmering in motion.
"Should have"? Either it does or it doesn't. ....
I've said "should have" because it's quite subjective, but math is right where, so just check it.

FUDie said:
Note that by default the program multiplies the differences by 64! Pretty extreme. Since it appears that the driver is looking for visual differences (not differences that are visible after multiplying by 64), maybe it should default to 1. with a difference scale of 1, I can see a very faint "ghost" image in the difference mode. For all intents and purposes, the two images are identical to the naked eye.
The multiplier is set to 64 just for that - so you can spot the differences in a second. But it's NOT fixed; you can and should modify it. It's a feature, not a restriction! The problem is, when you move the slider to 255 you'll see that at some angles more than 80% of the image differs between the 9800 and the 9600/X800.

thatdude90210 said:
I wonder if they'll make a benchmark that multiplies the differences by 64... 1 fps faster equals 64 fps faster, cool. :)
Ooo, we have a pretty intelligent comment here :LOL:

Chalnoth said:
No, it appears that right now, ATI's driver doesn't actually analyze the data that makes up the texture at all. It appears that they just decide whether or not to apply optimizations based upon how the texture is called (i.e. whether or not it's a "managed" texture).
Nope, the ATI driver tries to analyze the texture, and if the mipmaps are different enough, it switches off the optimization mode for that texture. It's hard to tell what algorithm they use, but it's something like "if not MAX(difference)" - probably the same as used in The Compressonator?
 
Clootie said:
Nope, the ATI driver tries to analyze the texture, and if the mipmaps are different enough, it switches off the optimization mode for that texture. It's hard to tell what algorithm they use, but it's something like "if not MAX(difference)" - probably the same as used in The Compressonator?
Well, you couldn't use something like The Compressonator, because MIP maps are of different sizes. And a simple difference technique would disable the optimization when there's a great difference in colors, and so ATI's program wouldn't have worked.
 
Chalnoth said:
FUDie said:
Isn't that why ATI is doing texture analysis?
And what do you think ATI is actually doing for texture analysis?

If it was simply high-contrast textures, then the test program that ATI sent to Dave would have "brilinear" disabled for the whole thing. No, it appears that right now, ATI's driver doesn't actually analyze the data that makes up the texture at all. It appears that they just decide whether or not to apply optimizations based upon how the texture is called (i.e. whether or not it's a "managed" texture).

And do you actually know why? I guess you know, too. YEAH, BECAUSE OF CHEATING WITH COLOURED MIPMAPS!!! HARRHARR.

A simple hint, since you possibly still haven't got a clue: if a texture is managed, it's known to be box-filtered down, as it's the driver's job to do that. That means the driver knows the optimization can work well there. If it's NOT managed, the user (the application) can change any mipmap level at any time; thus, every time it does, the driver would have to do image comparisons over several levels, plus filtering and statistics.

And guess what? That can be costly, especially for a texture that gets updated often. That's why they avoid these comparisons.

End result: if the application says "hey, DX/GL, create the mipmaps for me", the driver knows it can optimize, since it knows what the mipmaps look like.
If the app says "hey, DX/GL, I want to handle them myself", the driver doesn't know, and the work to check would be huge, so it doesn't optimize.
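As a sketch, that whole decision boils down to the rule below (my reading of the behavior described above, not disassembled driver code):

```python
# Hypothetical driver-side decision: only use the reduced ("brilinear")
# filtering when the driver built the mip chain itself, so the chain is
# box-filtered by construction and needs no per-update analysis.
def may_use_brilinear(pool_is_managed: bool, app_supplied_mips: bool) -> bool:
    if pool_is_managed and not app_supplied_mips:
        # Levels are consistent by construction: known-safe to optimize.
        return True
    # The app may rewrite any level at any time; re-analyzing several
    # levels on every update would cost more than the filtering saves.
    return False

print(may_use_brilinear(pool_is_managed=True,  app_supplied_mips=False))  # True
print(may_use_brilinear(pool_is_managed=False, app_supplied_mips=True))   # False
```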

It's all rather simple to understand, but as I said, you're too dumb to do so. You'd rather feel 1337 bashing than learn the difference.

It is not some evil plan ATI had to hide their optimization/cheat. It's NOT. This is not a matter of opinion; it is a fact. You have to learn and accept that.

I'm not against NVIDIA or ATI. I'm just pointing out the huge difference in what the optimizations actually do: NVIDIA's simply lowers quality across the whole image, while ATI's lowers quality where it knows it doesn't really hurt - or tries to.

And besides that, there is still the fact that trilinear is not the holy grail, and that trilinear filtering of managed textures can be optimized losslessly. Both points are independent of how ATI implemented it, but both are true as well.

One day you will mature, Chalnoth. But not in this hardware generation, I guess.
 