I didn't have time to wade through several hundred posts (I despise this forum format for catching up on any sort of discussion), but my point is simple.
If ATI is "disabling" the bilinear anisotropic or trilinear modes when colored mipmaps are used, this may not be cheating.
It is PERFECTLY ACCEPTABLE to sample from only one mipmap (the larger one) rather than linearly interpolate between two when trilinear is on, provided that enough samples are taken from the larger mipmap, that the smaller mipmaps are linear downsamples of the larger one, and that the algorithm is correct.
---------
Scott's Trilinear optimization correctness theorem:
If a lower-level mipmap is simply a straight linear down-filter of the higher-level map, then all the data required to render a scene can be found in the high-detail map, and using multiple mipmaps IS NOT NECESSARY, provided the algorithm samples enough texels in the high-detail mipmap to retrieve and integrate the information that would otherwise come from the lower-level mipmap. Additionally, to be correct, such an optimized filtering algorithm must mimic filtering methods that interpolate between mipmaps, with no discontinuity at mip-level boundaries.
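The first half of the theorem can be checked numerically. Below is a toy sketch (my own illustration, not anyone's driver code) where level 1 is a 2x2 box-filter downsample of level 0: every level-1 texel is recoverable by averaging four level-0 texels, so the lower level carries no information the higher one lacks.

```python
import numpy as np

rng = np.random.default_rng(0)
mip0 = rng.random((8, 8))  # high-detail level

# A "straight linear down-filter": each level-1 texel is the mean of a
# 2x2 block of level-0 texels.
mip1 = 0.25 * (mip0[0::2, 0::2] + mip0[1::2, 0::2] +
               mip0[0::2, 1::2] + mip0[1::2, 1::2])

# Any level-1 texel can be reconstructed purely from level-0 texels,
# so sampling only level 0 (with enough taps) loses nothing.
y, x = 2, 3
reconstructed = mip0[2*y:2*y+2, 2*x:2*x+2].mean()
assert np.isclose(reconstructed, mip1[y, x])
```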
---------
If their drivers simply detect when the lower-level mipmaps are close enough to simple downfiltered maps and enable this optimization, it is not cheating at all. Your visual quality will not be impacted by using only one mip level at a time, since all the information in the lower-level map is contained in the larger one.
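Here is a sketch of what such a detection pass might look like (purely hypothetical, assuming CPU-side array access to the levels; a real driver would work very differently). A genuinely downfiltered chain passes the check, while an artificial chain such as reviewers' colored mipmaps fails it, so the optimization would switch itself off in exactly the case being tested.

```python
import numpy as np

def box_downsample(level):
    """2x2 box filter: each output texel is the mean of a 2x2 input block."""
    return 0.25 * (level[0::2, 0::2] + level[1::2, 0::2] +
                   level[0::2, 1::2] + level[1::2, 1::2])

def chain_is_downfiltered(levels, tol=1.0 / 255.0):
    """True if every level is (within tol) a box downsample of the one above."""
    return all(np.abs(box_downsample(hi) - lo).max() <= tol
               for hi, lo in zip(levels, levels[1:]))

rng = np.random.default_rng(0)
base = rng.random((8, 8))

# A chain actually built by downfiltering passes...
chain = [base, box_downsample(base), box_downsample(box_downsample(base))]
assert chain_is_downfiltered(chain)

# ...while a chain whose lower level is an artificial flat color fails,
# so the optimization would be disabled for colored test mipmaps.
colored = [base, np.full((4, 4), 0.9)]
assert not chain_is_downfiltered(colored)
```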
That said, I'm not sure whether ATI's specific optimization passes my correctness test. But I've always thought that blending between two mipmaps was a pointless exercise if you already HAVE the data from one mipmap contained in the other and the texels are cached and ready to be read.
This is in no way some sort of fanboyism; I've been an advocate of this approach since the S3 Savage3D did it.
The ONLY reason trilinear existed in the first place was to save bandwidth and processing power: sample and filter only 8 texels (4 from each level) instead of 16 from the top level (assuming isotropic filtering). If processing power is free or nearly so (hardware interpolation), and caching makes bandwidth nearly free for cached data, then such optimizations can make sense.
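To make the texel counting concrete, here is a toy check (my own illustration, with the sample point aligned to a level-1 texel center so the footprints line up): trilinear fetches 4 texels from each of two levels, but when level 1 is a box downsample, its texels expand back into level-0 texels, so the whole blend can be computed from the top level alone.

```python
import numpy as np

rng = np.random.default_rng(1)
mip0 = rng.random((4, 4))
mip1 = 0.25 * (mip0[0::2, 0::2] + mip0[1::2, 0::2] +
               mip0[0::2, 1::2] + mip0[1::2, 1::2])

f = 0.3  # LOD fraction between the two mip levels

# Classic trilinear: 4 texels from each level (8 fetches total).
# The sample point sits at the center of mip1 texel (0, 0), which in
# level-0 coordinates is the shared corner of texels (0,0)..(1,1),
# so the bilinear footprint on level 0 is that same 2x2 block.
bilerp0 = mip0[0:2, 0:2].mean()
bilerp1 = mip1[0, 0]
trilinear = (1 - f) * bilerp0 + f * bilerp1

# The same value using ONLY level-0 texels: substitute the box-filter
# definition of mip1[0, 0]. In general (unaligned) cases this expansion
# touches up to a 4x4 block -- the 16 top-level texels mentioned above.
expanded = ((1 - f) * mip0[0:2, 0:2].mean()
            + f * 0.25 * (mip0[0, 0] + mip0[1, 0] + mip0[0, 1] + mip0[1, 1]))
assert np.isclose(trilinear, expanded)
```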