Great, this is what we always wanted: aliasing inside textures. Wait, let's just go back to point filtering!
You'd need a very, very large point-filtered texture to get the same effect as SmartFlt. SmartFlt gives you the option of having some areas of the texture be sharper than others. Of course, sharpness is accompanied by aliasing, so it's a double-edged sword. You don't apply the same pattern everywhere, of course, but select the pattern based on contrast differences between texels.
Parts of the texture that were meant to have sharp edges still retain sharp edges after filtering, instead of being blurred out. Conversely, parts of the texture that don't have sharp edges are still bilinearly filtered. You don't lose anything.
What might actually work in a 'scientific' way is to give every texel a weight.
That doesn't always work. You want the filtering equations to be different depending on which area of the subtexel you are in.
What would be the perfect algorithm for texture filtering?
It doesn't exist. All magnification filtering algorithms are based on the idea that they can construct a higher-resolution texture from a lower-resolution one, and that this higher-resolution texture is what was meant to be used, but couldn't be due to memory/speed/misc hardware constraints.
The question is then, how good is this reconstruction?
Let's take the example of a 2x2 black-and-white checkered texture. The same idea generalizes to a texture of any size, but I'll use this small one for demonstration purposes. If you magnify this texture on screen (by, say, applying it to a large triangle), then what do you expect to see? Should it be a larger 2x2 checkered pattern? Or maybe the checkered pattern was really an aliased version of a smooth grey gradient.
No filtering algorithm would be able to discern those two cases. You'd need two different filters, which need to be selected by a human (programmer or artist).
In the more general case, you can apply this idea to any NxM sub-section of any texture. Should this NxM block be point-filtered or linearly filtered? Or maybe the filtering is a more complicated function?
If you apply the same filter to the whole texture, then some regions will be blurrier than they're supposed to be, or more aliased than they need to be.
SmartFlt goes part of the way towards solving this: The filtering equation is partially encoded in the texture itself. That way, the right filtering is selected. SmartFlt is somewhat limited, as it tends to only contain some variations of bilinear and point sampling.
Could pixel shaders be used to improve texture quality?
You can code SmartFlt in a pixel shader; it's just going to be horribly slow. Even worse, SmartFlt really kicks in when you magnify a texture, that is, exactly when a large area of the screen needs to be filtered.
What "flexibility and genericity" are you talking about? Elaborate.
SmartFlt hardware can't really be used for anything but SmartFlt. Dot3 (for example) is much more general: although it was introduced for bump mapping, it can also be used for many other things.