ATi is ch**t**g in Filtering

Quasar said:
DrawPrim already mentioned that one, but came to a different conclusion as to whether the content being filtered influences the texture filtering behaviour that gets applied.
DrawPrim said:
Mips should be similar; however they could differ enough that they would want to detect the differences and adjust the filtering aggressiveness. I've already said that the MS conformance tests will use wildly different mips levels (vertical bars, horizontal bars, etc) to detect filtering problems. My guess is that if you created mips levels that aren't colored but are just very different, you'd see the same behavior.

As far as I read that, he's saying what I'm saying.

There are several ways mip-map levels can be provided: the graphics board can generate them automatically, the application can generate them at load time from the base texture, or the developer can go to the trouble of supplying each individual mip level pre-made rather than having them generated on the fly from the base texture.

It appears to me that what DrawPrim is saying is this: if you create an app that supplies its own mip levels, but instead of providing normal ones (simply lower-detail versions of the base texture) you provide ones containing very different information, then the output you see would be similar to what you see when coloured mips are enabled. If you supplied normal mips derived from the base texture, or let the graphics board / application generate them, then you would probably see the same results as you do when looking at in-game images.
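To make the distinction concrete, here is a minimal sketch (plain Python, no graphics API) of what "auto-generated" mip levels are: each level is just a 2x2 box-filtered copy of the level above it. Developer-supplied mip levels, by contrast, are whatever the application uploads and need not resemble the base texture at all. The function names and the box filter are illustrative choices, not anything from a real driver.

```python
def downsample(level):
    """Halve a square grayscale image with a 2x2 box filter."""
    n = len(level) // 2
    return [[(level[2*y][2*x] + level[2*y][2*x+1] +
              level[2*y+1][2*x] + level[2*y+1][2*x+1]) / 4.0
             for x in range(n)]
            for y in range(n)]

def auto_mip_chain(base):
    """What a driver/app typically generates: each level filtered from the last."""
    chain = [base]
    while len(chain[-1]) > 1:
        chain.append(downsample(chain[-1]))
    return chain

# Example: a 4x4 checkered base texture.
base = [[0, 0, 255, 255],
        [0, 0, 255, 255],
        [255, 255, 0, 0],
        [255, 255, 0, 0]]
chain = auto_mip_chain(base)
print(len(chain))   # 3 levels: 4x4, 2x2, 1x1
print(chain[1])     # [[0.0, 255.0], [255.0, 0.0]]
print(chain[2])     # [[127.5]]
```

A "coloured mipmap" test simply replaces each `chain[i]` with a solid, unrelated colour, which is why such levels look nothing like filtered copies of the base.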
 
So, in the end, the method ATI is using can be correct, since it adjusts the aggressiveness of the filtering depending on the "gradient" of the mipmaps.

Is Nvidia able to use a similar approach to improve performance in its drivers, or is it fixed in hardware and hence unmodifiable?
 
Martillo1 said:
So, in the end, the method ATI is using can be correct, since it adjusts the aggressiveness of the filtering depending on the "gradient" of the mipmaps.

Is Nvidia able to use a similar approach to improve performance in its drivers, or is it fixed in hardware and hence unmodifiable?

As long as ATI *say* that is what they are doing then fine. If they say 'this is performance with trilinear', and it isn't trilinear, then that is at the very least misleading.


I'm sure Nvidia can/could do the same.
 
I don't have the patience to read all 17 pages, so it'd be good if some kind person could summarize the conclusive findings about ATI's filtering.
 
- The R420's filter seems to differ from what the R3x0 does by default.
- The R420 uses, AFAIR, the same filter as the R3x0 if colored mipmaps are used.
- The filter used with colored mipmaps is widely accepted as trilinear.
- Nobody "wants" to say which (non-colored) screenshot shows better quality (R3x0 vs. R420).
- It must be an optimized filter, if you think about it logically.
- We don't know how that optimized filter really works.
- We are waiting for an official technical response from ATi.
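Nobody outside ATI knows how the driver decides when to use the optimized path, but a purely speculative heuristic consistent with the colored-mipmap observation above can be sketched: check whether each uploaded mip level resembles a box-filtered copy of the level above it. Colored mipmaps fail such a test wildly, so they would get full trilinear; ordinary chains pass and could get the optimized filter. Every name and the threshold here are invented for illustration.

```python
def downsample(level):
    """Halve a square grayscale image with a 2x2 box filter."""
    n = len(level) // 2
    return [[(level[2*y][2*x] + level[2*y][2*x+1] +
              level[2*y+1][2*x] + level[2*y+1][2*x+1]) / 4.0
             for x in range(n)]
            for y in range(n)]

def mean_abs_diff(a, b):
    n = len(a)
    return sum(abs(a[y][x] - b[y][x]) for y in range(n) for x in range(n)) / (n * n)

def chain_looks_standard(chain, threshold=16.0):
    """True if every level is close to a downsampled copy of the one above."""
    return all(mean_abs_diff(downsample(chain[i]), chain[i + 1]) < threshold
               for i in range(len(chain) - 1))

# Ordinary chain: level 1 really is a filtered copy of level 0.
flat = [[100] * 4 for _ in range(4)]
ordinary = [flat, [[100.0, 100.0], [100.0, 100.0]]]

# "Colored mipmap" chain: level 1 is a solid, unrelated color.
colored = [flat, [[255.0, 255.0], [255.0, 255.0]]]

print(chain_looks_standard(ordinary))  # True  -> optimized filter plausible
print(chain_looks_standard(colored))   # False -> serve full trilinear
```

This is only one way the observed behaviour could come about; it is not a claim about what ATI's driver actually does.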
 
Martillo1 said:
So, in the end, the method that ATI is using can be correct since it adjusts the aggressiveness of the filtering depending of the "gradient" of the mipmaps.
No, that's not true. The texture filtering equations are invariant to texture data.
 
Borsti said:
fallguy said:
hovz said:
ati and nvidia both cheat, its pretty simple. i just hope ati will stop here, and not go any further

You're assuming it is a cheat.

I have yet to see any screen shots from games proving there is a cheat.

Why is that?

The problem is that you can't show the lowered filtering quality by using colored mipmaps - because the guess is that this mode is being detected and served with higher quality.

Of course we can discuss whether it's OK if it's not visible. But it's worth finding out what the driver does. That's what is going on at the moment.

Lars - THG

That makes more sense. So it's only visible in motion? Can someone capture it in motion?

And I don't think it's OK, even if you can't tell the difference in a still shot but you can in motion.
 
crushinator said:
- The R420's filter seems to differ from what the R3x0 does by default.
- The R420 uses, AFAIR, the same filter as the R3x0 if colored mipmaps are used.
- The filter used with colored mipmaps is widely accepted as trilinear.
- Nobody "wants" to say which (non-colored) screenshot shows better quality (R3x0 vs. R420).
- It must be an optimized filter, if you think about it logically.
- We don't know how that optimized filter really works.
- We are waiting for an official technical response from ATi.
Thank you, that makes it a lot easier to understand. :)

So, now that I got caught up on my yardwork and got me PC working again should I go and find out what ATi's official word is? :|

edited bits: I just wanted to get a link to me post to include in me e-mail to some ATi folks and this is the easiest way to do it. :p
 
malficar said:
Does brilinear cause that 'crawling' you get on horizontal textures (most noticeable) in a game when you are running? It's hard to describe accurately. It's as if some are focused and some are not, or some suddenly come into focus.

If you are talking about something occurring in "lines" in the texture ("mip map transitions"):

"Naive" bilinear prominently causes that "crawling" issue above a certain level of detail and texture contrast (a level of detail that is very common nowadays, and a contrast level that is pretty likely in a wide variety of situations).

"Naive" trilinear solves that problem directly by doing extra blending between mip levels. It pretty much removes the issue completely.

Theoretically, you could, as an alternative, use "tricks" that aren't "naive" to address the problem with "naive" bilinear without actually doing "naive" trilinear. "Fast trilinear" can be thought of as such a "trick" that (AFAIK, though extensive testing hasn't established it yet) is successful enough to be a faster substitute for trilinear in the general case.

"Brilinear" is the name coined for an intermediate step between bilinear and trilinear: better than bilinear, but not as complete as trilinear. Whether it shows the "line crawl" that bilinear has depends on how "naive" it is and where it falls between them...
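The bilinear/brilinear/trilinear spectrum can be sketched in terms of the blend weight applied between two adjacent mip levels. Full trilinear blends linearly across the whole transition; bilinear never blends; brilinear blends only inside a narrow band around the transition. The band width below is an invented parameter - real drivers don't document their value.

```python
def trilinear_weight(lod_fraction):
    """Full trilinear: blend linearly across the whole transition."""
    return lod_fraction

def bilinear_weight(lod_fraction):
    """Pure bilinear: snap to the nearest mip level, no blending."""
    return 0.0 if lod_fraction < 0.5 else 1.0

def brilinear_weight(lod_fraction, band=0.25):
    """Blend only inside a narrow band around the transition; elsewhere
    behave like bilinear. band=0.5 gives trilinear, band=0 gives bilinear."""
    lo, hi = 0.5 - band, 0.5 + band
    if lod_fraction <= lo:
        return 0.0
    if lod_fraction >= hi:
        return 1.0
    return (lod_fraction - lo) / (hi - lo)

def blend(texel_a, texel_b, w):
    """The filtered result: lerp between the two levels' bilinear samples."""
    return (1 - w) * texel_a + w * texel_b

# Early in the transition, trilinear already blends a little; brilinear
# still behaves like bilinear - which is where "line crawl" can survive.
print(trilinear_weight(0.1))   # 0.1
print(brilinear_weight(0.1))   # 0.0
print(brilinear_weight(0.5))   # 0.5
```

How visible the result is depends on where the band sits, which matches the post's point that brilinear quality depends on "where it falls in between" bilinear and trilinear.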

If done in a "naive" way, and positioned between the two as seems necessary to get performance benefits, "brilinear" tends to fall significantly short of "naive" trilinear, and so far it hasn't had a good track record of being observed as sufficiently close to trilinear in effectiveness. Therefore a "naive" brilinear is quite ill-suited to silently replace trilinear, and this is why the burden is on ATI to show that the indication of brilinear (proven) is not "naive" (unknown).
A "naive" brilinear appears to be what was identified for nVidia in the past (well, for the FX series... the 6800U's "brilinear" hasn't yet been evaluated to see if it is different, AFAIK), where the deficiency relative to trilinear was evident in "moving" video and even showed noticeable issues in still screenshots, which aren't good at showing the problem. We have neither for the current issue at the moment.

However, what seems to be the case is that the X800 brilinear is not quite "naive", and that the RV360 (9600) brilinear, also assumed to be "non-naive", has gone unnoticed by many gamers as lacking in comparison to trilinear. This doesn't mean there isn't still a significant problem with silently replacing trilinear with it - the absence of a problem is something that would have to be proven. It just indicates a possibility that the "non-naive" aspects of it put it closer to "naive" trilinear than past experience with "naive" brilinear would suggest.

This is why there are still a lot of questions, and it is what the article's analysis didn't happen to address completely, though the article calling it a cheat based on past experience rests on a quite logical set of assumptions.
 
demalion said:
However, what seems to be the case is that the X800 brilinear is not quite "naive", and that the RV360 (9600) brilinear, also assumed to be "non-naive", has gone unnoticed by many gamers as lacking in comparison to trilinear. This doesn't mean there isn't still a significant problem with silently replacing trilinear with it - the absence of a problem is something that would have to be proven. It just indicates a possibility that the "non-naive" aspects of it put it closer to "naive" trilinear than past experience with "naive" brilinear would suggest.

:idea:
 
Borsti said:
NV40 Norm/Colored (with TrilOpt On)
16x12: 108.9 / 108.7

NV40 Norm/Colored (with TrilOpt Off)
16x12: 82.6 / 82.4

X800 XT Norm/Colored
16x12: 96.3 / 84.8

The Computerbase story started with the CoD benches. Now we have the note from id that there should be no difference in Q3 (so I assume this also holds for CoD). But judging by the additional comments from JC, there seems to be much more going on...

Lars - THG

So basically when both cards are doing full trilinear at 8x AF they have about the same performance in CoD, and Nvidia's brilinear is a bit faster than ATI's. Interesting. :)
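Taking Lars's quoted numbers at face value, the telling figure is the normal-vs-colored gap: the NV40 shows essentially none in either mode, while the X800 XT loses about 12% when colored mips (which appear to force its full trilinear path) are used. A quick check of that arithmetic:

```python
def gap_percent(norm, colored):
    """Percent drop going from normal to colored mipmaps."""
    return (norm - colored) / norm * 100.0

print(round(gap_percent(108.9, 108.7), 1))  # 0.2  - NV40, TrilOpt on
print(round(gap_percent(82.6, 82.4), 1))    # 0.2  - NV40, TrilOpt off
print(round(gap_percent(96.3, 84.8), 1))    # 11.9 - X800 XT
```

So the X800's colored-mipmap score is roughly the cost of its full trilinear path, which is consistent with the thread's hypothesis that the optimized filter is only active on normal mip chains.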
 
Chalnoth said:
Martillo1 said:
So, in the end, the method that ATI is using can be correct since it adjusts the aggressiveness of the filtering depending of the "gradient" of the mipmaps.
No, that's not true. The texture filtering equations are invariant to texture data.

I think you're right with regard to "trilinear texture filtering" and what has been shown about the behavior, but not necessarily with regard to "image quality for mip map transitions", which is likely what Martillo1 had in mind. Trilinear is quite often done for "image quality", so that can be a criterion for correctness in those cases. However, it seems there would be cases where invariance might be important.

For example, if this "adaptation" is applied to pre-generated mip map levels (I simply don't know whether the mip levels in the established cases where differences appear were auto-generated or not, so this is still a question to me), it seems it could be "incorrect" simply because invariant trilinear filtering has apparent usefulness to the developer for special applications of custom mip maps. Regardless of image quality impact, this case would then seem "incorrect", and the colored mip map behavior is actually an encouraging indication on that front. It would be important to test this, and not just to evaluate whether it depends on a high degree of change between mip levels... I'd think any pre-generated/uploaded mip map set should get invariant trilinear filtering to allow developers this control. I.e., image quality equivalence probably shouldn't matter for this case, and absolute invariance of "trilinear filtering" is important to allow this type of usage.

One interesting question, however, is whether trilinear filtering is defined as invariant, in OpenGL and Direct3D, for auto-generated mip map levels. It makes sense to me that it would not be, since the cases where trilinear serves something other than image quality (where equivalent image quality could still be "correct" even without invariance) would seem to go perfectly with the developer controlling the mip level data specifically. I.e., I don't see how absolute invariance would matter to a developer using auto-generated mip levels, whereas achieving a performance threshold for their game with sufficient image quality would (as it typically does for game developers). And if invariance is guaranteed for uploaded mip levels (as it seems it should be), the developer still has that control when they want it. If the API allows this distinction, both possible priorities for a developer are addressed under developer control... win/win.
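The invariance point can be sketched in miniature: a "correct" trilinear filter computes its blend weight from the LOD alone, so the same weights apply no matter what data the mip levels hold. A content-adaptive filter, whose weight depends on the texel values themselves, breaks that property, which is exactly what a developer relying on custom mip levels could notice. Both functions and the similarity rule below are invented purely for illustration.

```python
def invariant_filter(a, b, lod_fraction):
    """Weight depends only on lod_fraction, never on the texel values."""
    w = lod_fraction
    return (1 - w) * a + w * b

def adaptive_filter(a, b, lod_fraction):
    """Hypothetical content-adaptive filter: skips blending when the two
    levels are already similar (an invented rule, for illustration only)."""
    w = lod_fraction if abs(a - b) > 10 else 0.0
    return (1 - w) * a + w * b

# Invariance check: doubling the texture data should double the result.
a, b, f = 40.0, 46.0, 0.25
assert invariant_filter(2 * a, 2 * b, f) == 2 * invariant_filter(a, b, f)

# The adaptive filter fails: the level difference is 6 (no blend applied),
# but becomes 12 after scaling, so blending suddenly switches on.
print(adaptive_filter(a, b, f))          # 40.0
print(adaptive_filter(2 * a, 2 * b, f))  # 83.0, not 2 * 40.0 = 80.0
```

This is only a toy demonstration of why data-dependence and invariance are mutually exclusive; it does not model what any real driver does.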

All this, of course, hinges on whether the image quality is actually determined to be close enough to reasonably be called "equivalent", so that this latter case isn't still a "cheat" for the performance priority.
 
Yes, Dave and I are saying the same thing.

Personally I think this is all blown out of proportion; ATI was given the IQ crown in nearly every benchmark I read. Obviously their "adaptive filtering" methods are working out fine. If there were some blatant IQ problem with some game then I could see the reason for all this, but that doesn't seem to be where this thread spawned from.
 
Okay, I don't have any of these fancy testing programs.

But I got a few people together. We had an X800 Pro, 9700 Pro, 9600 XT and 5800 Ultra.
We had Unreal 2k3, 2k4, CoD and Quake 3 all running, at the highest quality and highest aniso we could manage.

No one, for the life of us, could see a difference between the Radeons. My little sister commented on the FX though. She said it looked blurrier. I couldn't see a difference though.
 
jvd said:
Okay, I don't have any of these fancy testing programs.

But I got a few people together. We had an X800 Pro, 9700 Pro, 9600 XT and 5800 Ultra.
We had Unreal 2k3, 2k4, CoD and Quake 3 all running, at the highest quality and highest aniso we could manage.

No one, for the life of us, could see a difference between the Radeons. My little sister commented on the FX though. She said it looked blurrier. I couldn't see a difference though.

Did you run it on the same monitor :?:
 