bit comparisons are useless. the architectures don't need to be bit-equal; everyone is free to choose how to implement something at the bit level (and not only in the filtering unit).
simple difference images are useful. if you have to scale a difference image up before you can see anything, that tells you whether there is any difference at all; if you look at the raw, unscaled difference instead, you can see whether there is any visually relevant artefact.
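the difference-image idea can be sketched roughly like this (the `diff_image` helper, the tiny "screenshots", and the scale factor are all my own illustration, not anyone's actual tool):

```python
def diff_image(a, b, scale=1):
    """per-pixel absolute difference of two 8-bit images.
    scale=1 gives the raw difference; a large scale amplifies
    subtle differences until they become visible (clamped to 255)."""
    return [[min(abs(x - y) * scale, 255) for x, y in zip(ra, rb)]
            for ra, rb in zip(a, b)]

# two hypothetical 8-bit "screenshots" differing by 1 LSB in one pixel
a = [[0, 0, 0], [0, 0, 0]]
b = [[0, 0, 0], [0, 1, 0]]

raw = diff_image(a, b)            # raw diff: max value 1, looks pure black
amplified = diff_image(a, b, 64)  # amplified: the 1-LSB spot jumps to 64
```

the point being: the amplified image only proves there is *some* difference, while the raw image tells you whether a human could ever notice it.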
there is exactly one huge, visible artefact there, and that's the flame. and that's only because it's animated. for the static part of the image there is simply a filter running which looks identical to trilinear, and the difference is not visible to the eye (especially if you can't compare against the "real tri" mode).
people try to map this optimisation onto the brilinear escalation on nvidia's hardware, but by now there is enough proof that it does not behave the same way. instead, it simply works.
and about corner cases: you can't possibly solve all corner cases explicitly, that's true; there are too many pathways. but you can, with 100% certainty, estimate how good or bad a value can be: determine the range in which the value must lie, and evaluate whether the approximation can stay in there. that's simple statistics, and something done all the time in high-end rendering (raytracers, global illumination solutions, etc.). ati can implement such an estimator and, whenever it sees a possible issue, drop back to full trilinear. that way, 100% of all corner cases are handled. it is a conservative solution, because in quite a few corner cases we could use the faster filtering and still not see anything.
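a minimal sketch of what such a conservative estimator could look like, assuming a simple 1D blend between two mip samples (the filters, names, and threshold here are purely illustrative, not ati's actual hardware logic):

```python
def trilinear(samples):
    """full trilinear blend between two mip samples (50/50 for simplicity)"""
    return (samples[0] + samples[1]) / 2

def fast(samples):
    """hypothetical cheap filter: just take the nearer mip sample"""
    return samples[1]

def filtered_sample(samples, threshold=1.0):
    """bound the fast path's worst-case error; drop back to full
    trilinear whenever the bound says the difference could be visible."""
    lo, hi = min(samples), max(samples)     # the correct result must lie in [lo, hi]
    f = fast(samples)
    worst_case_error = max(f - lo, hi - f)  # how far f can be from the true value
    if worst_case_error <= threshold:
        return f                            # provably invisible: keep the fast path
    return trilinear(samples)               # corner case: conservative fallback

filtered_sample([10, 10.5])  # bound is tiny, fast path kept
filtered_sample([10, 14])    # bound too large, falls back to trilinear
```

the fallback never fires on cases it can't prove safe, so every case with a possibly visible error goes through real trilinear. it's conservative in exactly the sense above: some safe cases fall back unnecessarily, but none slip through.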
but that doesn't matter. it guarantees one thing instead: there is no visible difference. every case where there IS one gets handled. the variance is calculable.