Trilinear Filtering Comparison (R420 vs R360 vs NV40 vs NV38)

One thing needs to be remembered: this is an optimization done at the driver level, not at the hardware level, so any competitor can duplicate it. Of course ATI won't publicly disclose every optimization in their drivers. The same goes for their competitors; they won't reveal theirs either, to keep rivals from gaining speed from that intelligence.

But I think this should have been handled differently. I'm not sure if it's possible, but they could have provided a filtering option as suggested, without explaining what it really does (to protect their IP). That most likely would have required some serious encryption of the drivers..

I haven't decided yet what to make of this, a cheat or a genuinely clever algorithm (I'm leaning towards the latter after reading more about it). The funny thing is that ATI's image quality has been said to be better than the competitors', except for AF, in every (not 100% sure of every, but close) driver release after Cat 3.4. So the adaptive bi/trilinear filtering does produce filtering quality equal to trilinear. Otherwise people would have complained about this a lot. Right?
 
paju said:
One thing needs to be remembered: this is an optimization done at the driver level, not at the hardware level, so any competitor can duplicate it. Of course ATI won't publicly disclose every optimization in their drivers. The same goes for their competitors; they won't reveal theirs either, to keep rivals from gaining speed from that intelligence.

Not true. You need hardware support like the 9600 and X800 have; it's not possible on a 9700/9800, for example.
 
What's done at the driver level is the flagging of textures which they believe can be optimised. I'd assume there is no reason why NV couldn't implement this part as well. They are, however, being very cagey about what exactly is happening at the hardware level.
 
DaveBaumann said:
What's done at the driver level is the flagging of textures which they believe can be optimised. I'd assume there is no reason why NV couldn't implement this part as well. They are, however, being very cagey about what exactly is happening at the hardware level.

Maybe they didn't quite work out the S3 patents implications?
 
DaveBaumann said:
What's done at the driver level is the flagging of textures which they believe can be optimised. I'd assume there is no reason why NV couldn't implement this part as well. They are, however, being very cagey about what exactly is happening at the hardware level.

ATI hints that the aggressiveness (which I take to mean how much sampling to do in the higher mipmap level) is also adaptive. I'm wondering if the aggressiveness is determined at texture upload time or at the display/fill stage.

If NVIDIA wants to implement the per-texture flagging at the driver level, wouldn't they also need hardware support to read the flag and perform the appropriate bri/trilinear filtering?
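The driver-side flagging being discussed could be sketched roughly like this (a hypothetical illustration in Python; the metric and threshold are my assumptions, not ATI's published test): the driver checks whether each mip level looks like a box-filtered copy of the next finer level, and only flags textures that pass.

```python
# Hypothetical sketch of driver-side per-texture flagging. The metric and
# threshold are invented for illustration; ATI has not published the actual
# test. The idea: a texture is "safe" to optimise if each coarse mip level
# looks like a box-filtered copy of the level beneath it.

def flag_texture(mips, threshold=0.05):
    """Return True if every coarse mip texel is (close to) the box
    average of its 2x2 footprint in the next finer level."""
    for fine, coarse in zip(mips, mips[1:]):
        for cy, row in enumerate(coarse):
            for cx, texel in enumerate(row):
                avg = (fine[2*cy][2*cx] + fine[2*cy][2*cx + 1] +
                       fine[2*cy + 1][2*cx] + fine[2*cy + 1][2*cx + 1]) / 4.0
                if abs(texel - avg) > threshold:
                    return False  # hand-authored mips: keep full trilinear
    return True

# A box-filtered chain passes; a hand-painted coarse level does not.
print(flag_texture([[[0, 4], [8, 12]], [[6.0]]]))  # True
print(flag_texture([[[0, 4], [8, 12]], [[9.0]]]))  # False
```

The flag itself is cheap to compute once per upload; the open question in the thread is the hardware side, i.e. what the filtering units do with it.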
 
Chalnoth said:
I'm still not exactly sure what you're saying, but I don't think it would be beneficial to drop to anything below basic bilinear filtering (4 texels linearly interpolated).

Are you saying that with anisotropic, it would be beneficial to take fewer bilinear samples from the lower-detail MIP map, and more samples from the higher-detail MIP map?

The easy way to think about it is to assume that the higher mipmap level has been generated by bilinear filtering. Since that information is already available in the higher mipmap, there is no need to bilinear filter again at the lower mipmap. If you need an intermediate level of filtering between the higher and lower levels, just interpolate between the two texels :LOL:
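A minimal numeric sketch of that argument (names and layout are illustrative; it assumes the coarser level was produced by a 2x2 box filter, which is what makes the shortcut valid): the coarser texel already equals the bilinear average of its footprint in the finer level, so an intermediate blend between the two levels needs no separate filtering pass at the coarser level.

```python
# Sketch: under a 2x2 box-filtered mip chain, the coarse texel already
# holds the bilinear average of its footprint in the finer level, so a
# trilinear-style blend can be built from that one texel plus the fine
# sample, with no extra 4-tap filter at the coarse level.

def box_downsample(tex):
    """Generate the next (coarser) mip level by 2x2 box filtering."""
    return [[(tex[y][x] + tex[y][x + 1] +
              tex[y + 1][x] + tex[y + 1][x + 1]) / 4.0
             for x in range(0, len(tex[0]), 2)]
            for y in range(0, len(tex), 2)]

def blend_at_centre(fine, coarse, cy, cx, frac):
    """Blend between levels at the centre of a 2x2 block, where the
    fine-level bilinear sample is a plain average of the four texels."""
    fine_sample = (fine[2*cy][2*cx] + fine[2*cy][2*cx + 1] +
                   fine[2*cy + 1][2*cx] + fine[2*cy + 1][2*cx + 1]) / 4.0
    return (1.0 - frac) * fine_sample + frac * coarse[cy][cx]

fine = [[0, 4], [8, 12]]
coarse = box_downsample(fine)                     # [[6.0]]
# Both levels agree at the block centre, so the blend is constant in frac:
print(blend_at_centre(fine, coarse, 0, 0, 0.5))  # 6.0
```

Away from the block centre the two levels diverge, which is exactly where a blend between them (rather than reusing one texel) is still needed.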
 