Colored Bands and "IQ" measurement

RussSchultz

It seems that colored bands have become the de facto method of measuring "IQ".

While in many cases it is revealing of information like the transition between bands, the shape of mip map selection, and the degree of anisotropy supported, it seems to have become a measurement in its own right. Though it definitely gives some hints about "IQ" in general, can it be misleading?

Does a smooth transition in the colored bands necessarily translate into no static mip map boundaries visible in motion?

Can sampling be precise enough that there shouldn't be any noticeable boundaries without blending two mipmaps? (Or is it a mathematical impossibility?) Could the blending area be reduced without impacting the results?

While the ideal picture on the anisotropic measuring app would be concentric circles, does the square-ish shape that one of the NVIDIA modes results in make much difference? Does the star shape that ATI products have make much difference in practice?

Would a high contrast grid or checkerboard make a more interesting and telling test concerning the visible results?


Inquiring minds want people who are knowledgeable about these things to fiercely debate the topic.
 
All good questions, ones that FanATiCs have been asking since the Radeon 8500 first introduced its new anisotropic mode. Interesting you bring this up now. 8)

The answer is: you just have to run the different settings in different games and make a subjective analysis. That goes for both the mip-map transitions, and the different mip-map boundary "shapes".

Different people are likely to be more or less sensitive to the differences in approach. Different games (or different scenes within the same game) are likely to highlight one particular weakness over another. I just personally don't think there's any real "objective" way to approach it at this point.

Colored mip-maps are certainly useful for figuring out "how" things work, but indeed are not the final say in actual subjective image quality. It's the same with all synthetic tests.

Synthetic tests tell us "how". But you have to leave "how effective" to real-world situations.

And ironically, it's almost self-defeating. Once you look at the colored mips, you "know what to look for" when put in real-world situations. It can probably be argued that a large amount of the time, if you didn't know what to look for, you might not notice any difference in the majority of situations.
 
RussSchultz said:
While in many cases it is revealing of information like the transition between bands, the shape of mip map selection, and the degree of anisotropy supported, it seems to have become a measurement in its own right. Though it definitely gives some hints about "IQ" in general, can it be misleading?

yes, it can. example: the kyro trilinear implementation, where when blending between two LODs n and n+1, the samples for n+1 would actually be box-filtered on-the-fly from LOD n. in this case the colored bands test would show no blending across bands, whereas in practice the kyro would do trilinear just as well by any definition.
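to illustrate the idea (a rough sketch only, not kyro's actual datapath - fetch() is an assumed point-sampling helper):

Code:
/* texel (x,y) of LOD n+1 covers texels (2x..2x+1, 2y..2y+1) of LOD n,
   so a strict 2x box downsample can be reproduced on the fly instead
   of fetching LOD n+1 from memory. */
float fetch(int lod, int x, int y);   /* assumed point-sample helper */

float virtual_coarser_texel(int n, int x, int y)
{
    return 0.25f * (fetch(n, 2*x,   2*y)   + fetch(n, 2*x+1, 2*y) +
                    fetch(n, 2*x,   2*y+1) + fetch(n, 2*x+1, 2*y+1));
}

the blend between LOD n and the synthesized n+1 then proceeds as in ordinary trilinear. note this only matches real trilinear when each mip actually is a 2x box downsample of the previous one.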

Does a smooth transition in the colored bands necessarily translate into no static mip map boundaries visible in motion?

yes. for example, when looking at a floor (or any similar flat surface nearly parallel to the view vector) one can always distinguish the different LOD localities on the surface (more detailed LODs closer to the viewer, blurrier LODs in the distance), but with linear blending between LODs no hard borders between neighbouring LODs occur. this is valid for static images and images in motion alike.
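the reason is that the blend weight is the fractional part of a continuously varying LOD, so the filtered result is continuous across the surface. a minimal sketch (bilinear() is an assumed per-level helper):

Code:
float bilinear(int lod, float u, float v);   /* assumed helper */

float trilinear_sample(float lod, float u, float v)
{
    int   n    = (int)lod;        /* integer part picks the two levels */
    float frac = lod - (float)n;  /* fractional part varies smoothly   */
    float a = bilinear(n,     u, v);
    float b = bilinear(n + 1, u, v);
    return a + frac * (b - a);    /* continuous everywhere -> no hard edges */
}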

Can sampling be precise enough that there shouldn't be any noticeable boundaries without blending two mipmaps? (Or is it a mathematical impossibility?) Could the blending area be reduced without impacting the results?

it is mathematically impossible, as the boundary between LODs forms a continuous curve, which human vision spots easily, i.e. you would need to introduce some jitter (remember 3dfx's famous dithered trilinear?). as for reducing the blending area - it would generally mean artificially "steepening" the linear blend function, flattening both ends of it at the expense of a steeper middle slope, so essentially as steepness -> vertical, filtering -> bilinear.
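a sketch of what that "steepening" could look like (my own toy formulation, not anything a shipping part necessarily does):

Code:
/* remap the fractional LOD about the midpoint with slope k >= 1 and
   clamp.  k = 1 is plain trilinear; as k grows the blend region
   shrinks, and in the limit the weight becomes a step function --
   i.e. bilinear, hard band edges and all. */
float sharpened_blend_weight(float frac, float k)
{
    float w = (frac - 0.5f) * k + 0.5f;
    return w < 0.0f ? 0.0f : (w > 1.0f ? 1.0f : w);
}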

While the ideal picture on the anisotropic measuring app would be concentric circles, does the square-ish shape that one of the NVIDIA modes results in make much difference?

a speculation: imagine an in-game situation where you're looking into a tube whose walls have contrasting dirty/rusty textures with a grid pattern in them (say, metal plates on the inside of the tube). in this case there's a good chance you'd notice an irregular loss of detail with NV's method, just as samx's test shows irregular bands.

Does the star shape that ATI products have make much difference in practice?

as comparably incorrect as it may be, ATI's method would produce the better psycho-optical result with the above example, as the error (i.e. the observable loss of detail in this particular case) would be of higher frequency (star vs. square), to which human vision is less sensitive, i believe.

Would a high contrast grid or checkerboard make a more interesting and telling test concerning the visible results?

yes, using a moire-susceptible pattern at high angles to the view plane would reveal a lot about aniso, IMHO.
 
darkblu said:
RussSchultz said:
While in many cases it is revealing of information like the transition between bands, the shape of mip map selection, and the degree of anisotropy supported, it seems to have become a measurement in its own right. Though it definitely gives some hints about "IQ" in general, can it be misleading?

yes, it can. example: the kyro trilinear implementation, where when blending between two LODs n and n+1, the samples for n+1 would actually be box-filtered on-the-fly from LOD n. in this case the colored bands test would show no blending across bands, whereas in practice the kyro would do trilinear just as well by any definition.
I seem to remember that S3's Savage3D also used this method. It works correctly only when each mipmap (except the largest one) is exactly a 2x downsampled version of the next larger mipmap - and both OpenGL and Direct3D require the renderer to handle correctly the situation where this is not the case.

I also seem to remember from discussions here at B3D that there are cases, involving bump mapping or texture compression, where you don't want mipmaps to be generated through plain downsampling. IIRC, for bump mapping it was claimed that downsampling the height map and then generating a bump map from it at each mipmap level gave better results than generating the bump map at the highest resolution and then 2x-downsampling the bump map directly.
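Roughly what that claim amounts to, as a sketch (the helper names are mine, not any particular tool's API):

Code:
/* Assumed helpers: box_downsample() does a 2x2 average into a
   half-size buffer, heights_to_normals() derives normals from a
   height field via central differences. */
void box_downsample(const float *src, float *dst, int w, int h);
void heights_to_normals(const float *hgt, float *nrm, int w, int h);

void build_bump_mips(float **height, float **normal, int w, int h, int levels)
{
    heights_to_normals(height[0], normal[0], w, h);
    for (int i = 1; i < levels; i++) {
        /* shrink the heights first, then re-derive normals at this
           level; averaging normals directly shortens them and
           flattens the bumps at coarser levels. */
        box_downsample(height[i - 1], height[i], w >> (i - 1), h >> (i - 1));
        heights_to_normals(height[i], normal[i], w >> i, h >> i);
    }
}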
 
Looking at mipmap boundaries doesn't tell the whole story.

The thing to compare is R200 anisotropic with R300 performance anisotropic.
In theory they are the same (on horizontal surfaces at least where I tested it), but in practice the R200 has a LOT more texture aliasing.
 
I think colored mipmaps shouldn't be used as a criterion for determining whether chip X has "better" image quality than chip Y. Especially so when "anisotropic filtering" is as abused as it is with ATI's and NVIDIA's various options (they really shouldn't use AF as the term... maybe just "Texture Filtering" would be better).

I think they can be useful, in their current form as seen in Serious Sam and Quake3 (I can think of at least one other possible implementation, and hence usefulness, of colored mipmaps), for only the following two reasons and only for comparison purposes (they're useless in a "standalone" review):

1) whether a chip is futzing around with/buggering trilinear
2) how far mipmaps are pushed into the distance with varying degrees of proper AF

But, in summary, yeah, they can be misleading because they introduce the possibility of misinformation, with so many hardware review websites around and not every one of them knowing exactly what's going on. They have taken on far too much importance, for the wrong reasons IMO.

BTW, this thread really does not belong in this 3D A & C forum (and I doubt it ever will evolve into one relevant/pertinent to this forum). I'm moving it into the 3DHW&T forum.
 
Simon F said:
Russ,
In my opinion, they could be completely misleading.

The ignorant layman here would think that it's debatable.

Level 8x Aniso + 8xS (RGMS + OGSS) on NV25:

http://users.otenet.gr/~ailuros/8xSAA.JPG

Sidenote: not that it's actually a playable setting, but in an extremely CPU-bound flight sim the difference in texture quality is more than just noticeable. Or I don't have a clue what I'm talking about.... :oops:
 
darkblu said:
RussSchultz said:
Would a high contrast grid or checkerboard make a more interesting and telling test concerning the visible results?
yes, using a moire-susceptible pattern at high angles to the view plane would reveal a lot about aniso, IMHO.
Xmas' texture Filtering TestApp has various options to produce different outputs than "just" the tunnel we're used to. Unfortunately it is seldom used.

Darkblu, I tried these settings on my GF3Ti200/41.09 and got these results for 1xAF (trilinear) and 8xAF.
I'm just an interested layman, so I'm certainly not qualified to choose interesting or revealing settings. Those are just examples to show that there's more than just a colored tunnel in this app, which I agree doesn't tell the whole story about AF.
 
My problem with using the colour bands for measuring how well AF is working is that people seem to concentrate too much on how far back AF pushes the boundaries.

<Sarcasm on>
Well I've got a great way to win that race, I'll just add a huge texture LOD bias...Job done...Oh...it aliases really badly...never mind...you can't see that in a static shot.
My board wins on texture filtering quality in reviews. Whoopee for me!

<Sarcasm off>

(Actually... if you take a GF4 and play with the anisotropy and texture LOD bias in Xmas's tool, you get very similar effects. I'm not saying there's a link, but it shows it's easy to misread.)

Where the coloured bands thing is good is when you want to look at how the filtering level changes with respect to a variable (polygon angle, quality mode, anisotropy level, etc). Then it's providing good, relative data.
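To make the ambiguity concrete (textbook LOD math only, a sketch rather than any card's actual selection logic): with a pixel footprint of major/minor axis lengths t_major and t_minor in texel space, proper aniso and a plain negative bias can put the band boundaries in the same places.

Code:
#include <math.h>

float lod_aniso(float t_major, float t_minor, float max_ratio)
{
    float ratio = fminf(t_major / t_minor, max_ratio);
    return log2f(t_major / ratio);  /* boundary pushed back, the gap
                                       covered by extra samples */
}

float lod_biased(float t_major, float bias)
{
    return log2f(t_major) + bias;   /* boundary pushed back, nothing
                                       covering the gap -> aliasing */
}

With bias = -log2(ratio) the two coincide in a static band shot; only one of them took the extra samples needed to avoid aliasing.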
 
Depends, PSarge; if you combine any form of SSAA with aniso on a GF4 (as "unorthodox" oversampling might be considered), a negative LOD offset will get used without increasing texture aliasing.

If someone wants to cheat, he can pretty much make use of any application out there through tweaks, optimisations or whatever else.

Would you know the difference if I ran 3dmark-whatever and submitted results while having used a +3.0 LOD bias, if I don't supply a screenshot?
 
Well, we've had both the "shouldn't be called anisotropic filtering" and the "colored mip map" discussions at length before.

Short version:

Colored mip map

Well, when we initially discussed the aniso app by Xmas, I pointed out the flaws I found with the way people evaluated it. (Which means: yes, I think it tends to be misleading.)


Shouldn't be called...

I don't think any of the implementations "abuse" the term "anisotropic filtering" at all, just some people's expectations of it. This is assuming the method doesn't actually make things worse than plain bilinear at any time, which AFAIK it doesn't for the techniques being discussed.

I do think that nvidia's non-application methodologies might abuse the term "trilinear filtering", but I don't think it matters a great deal except as it impacts the consumer, and in this case they seem to universally fail to effectively offer the benefits of trilinear filtering (to varying degrees). That is, they do fail, if I correctly recollect someone stating that the mip map boundaries remained highly evident in motion for "Balanced" aniso, and things don't change for these modes.
 
Ailuros said:
Depends, PSarge; if you combine any form of SSAA with aniso on a GF4 (as "unorthodox" oversampling might be considered), a negative LOD offset will get used without increasing texture aliasing.

That's understandable. When you take more texture samples you push the aliases up the frequency range. So fine...push the LOD to take advantage of it.
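As a rough rule of thumb (an idealised bound, not necessarily what any driver actually does):

Code:
#include <math.h>

/* Ordered-grid supersampling raises the per-axis screen sampling
   rate by min(sx, sy), so the LOD can come down by log2 of that
   factor without adding aliasing -- e.g. 2x2 OGSS buys a bias of
   -1.0.  Sparse/rotated grids are murkier. */
float safe_lod_bias(int sx, int sy)
{
    int per_axis = sx < sy ? sx : sy;
    return -log2f((float)per_axis);
}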

If someone wants to cheat, he can pretty much make use of any application out there through tweaks, optimisations or whatever else.

Agreed, but that wasn't my point. All I was saying was that mipmap boundaries at certain locations can mean two (of several) things: 1) anisotropic filtering is managing to preserve detail, or 2) the texture LOD has been pushed. Hence coloured mipmaps by themselves are flawed.

Would you know the difference if I ran 3dmark-whatever and submitted results while having used a +3.0 LOD bias, if I don't supply a screenshot?

Not sure what you're getting at.
 
mr said:
Darkblu, I tried these settings on my GF3Ti200/41.09 and got these results for 1xAF (trilinear) and 8xAF.
I'm just an interested layman, so I'm certainly not qualified to choose interesting or revealing settings. Those are just examples to show that there's more than just a colored tunnel in this app, which I agree doesn't tell the whole story about AF.

actually, mr, you've chosen very revealing settings - the depicted difference is impressive, as it shows how inadequate trilinear is for slanted surfaces compared to AF of as modest a degree as 8x. thanks for the shots!

btw, i do know samx's app does more than the tunnel test (i have it installed here), but i don't have any interesting hardware to perform tests worth discussing.
 
PSarge said:
<Sarcasm on>
Well I've got a great way to win that race, I'll just add a huge texture LOD bias...Job done...Oh...it aliases really badly...never mind...you can't see that in a static shot.
My board wins on texture filtering quality in reviews. Whoopee for me!

<Sarcasm off>

It's funny you should say that, because Tom's Hardware, the review site that many people write off as the least knowledgeable, shows significant moire patterns in the GF FX's aniso quality, resulting from pushing the LOD too high.

http://www6.tomshardware.com/graphic/20030306/radeon9800pro-07.html
 
Well, there are two primary pieces to the equation:

1. MIP map level selection.
2. Texture aliasing.

Aside from a few "outliers" such as those mentioned previously in this thread, these two facets can wholly determine the texture filtering quality of a given implementation.

Of course, texture aliasing is the harder of the two to observe, but observations of MIP LOD simply cannot be made without observing the degree of texture aliasing.
 
Maybe a more interesting test than the colored bands would be colored grids/checkerboards? Two birds with one stone!
 
RussSchultz said:
Maybe a more interesting test than the colored bands would be colored grids/checkerboards? Two birds with one stone!
Well, the "tunnel" anisotropic tests already use a checkerboard texture, so aliasing can at least be detected on the base texture. Presumably the amount of aliasing is roughly equal across all textures.

What I'd really like to see is some sort of mathematical description of the amount of aliasing, from a program that takes many different situations into account (different surface angles, different texture coordinate ratios, etc.). It could be interesting to see how texture aliasing could be mathematically quantified.
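One conceivable starting point, purely as a sketch (not an established metric): render the same view twice with a sub-pixel camera shift and measure the RMS frame-to-frame difference. A well-filtered texture changes smoothly under such a shift; aliasing shows up as shimmer, i.e. large differences.

Code:
#include <math.h>

/* RMS difference between two renders of the same scene offset by a
   sub-pixel camera shift; higher values mean more shimmer, a crude
   proxy for texture aliasing.  Averaging over many shifts, surface
   angles and texture-coordinate ratios would give the broader
   coverage described above. */
double shimmer_metric(const float *a, const float *b, int n_pixels)
{
    double sum = 0.0;
    for (int i = 0; i < n_pixels; i++) {
        double d = (double)a[i] - (double)b[i];
        sum += d * d;
    }
    return sqrt(sum / n_pixels);
}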
 