The problem with using different schemes for filtering 2 samples in 1D is that you screw up gradients. Imagine a ramp from white to black over the span of 5 texels. A graph of intensities would yield a wavy profile, and visually it would look ripply and almost quasi-banded.
I was fooling around with a way to improve interpolation for lookup table functions, and found that with some clever preprocessing you could get some fairly flexible quadratic interpolations between texels using one multiply in existing hardware.
For example, suppose your two original texels were A=0.1 and B=0.6. Depending on how you assign values to the texels in two different textures (denoted by A1, B1 and A2, B2), you can get different curves between them after multiplying the bilinearly filtered values:
- A1=B1=1.0, A2=0.1 and B2=0.6. ==> ordinary linear interpolation (midpoint = 0.35).
- A1=1.0, B1=0.6, A2=0.1, B2=1.0 ==> concave down (midpoint = 0.44).
- A1=0.316, B1=0.775, A2=0.316, B2=0.775 ==> concave up (midpoint = 0.297).
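To make the trick concrete, here's a quick numeric check (just simulating in Python what the bilinear filter units plus one multiply would compute; the hardware details are abstracted away):

```python
def lerp(a, b, t):
    """Linear interpolation, i.e. what the bilinear filter does along one axis."""
    return a + (b - a) * t

def curve(a1, b1, a2, b2, t):
    """Product of the two filtered lookups -- one multiply in the shader."""
    return lerp(a1, b1, t) * lerp(a2, b2, t)

# The three texel assignments for endpoints A=0.1, B=0.6
cases = {
    "linear":       (1.0, 1.0, 0.1, 0.6),
    "concave down": (1.0, 0.6, 0.1, 1.0),
    "concave up":   (0.316, 0.775, 0.316, 0.775),
}

for name, (a1, b1, a2, b2) in cases.items():
    e0 = curve(a1, b1, a2, b2, 0.0)
    e1 = curve(a1, b1, a2, b2, 1.0)
    mid = curve(a1, b1, a2, b2, 0.5)
    print(f"{name}: endpoints = {e0:.3f}, {e1:.3f}  midpoint = {mid:.3f}")
```

All three cases hit the same endpoints (up to rounding of the square roots in the concave-up case); only the shape between them changes.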
Things get even more interesting if you allow negative numbers, numbers larger than one, a third texture for a bias, etc. The preprocessing is the tricky part, though. Very tricky.
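To give a flavor of what the preprocessing looks like, here's one way to solve for the texel values (my own parametrization, not necessarily the best one): fix A1, then A2 = A/A1 and B2 = B/B1 pin the endpoints exactly, and the midpoint constraint (A1+B1)(A2+B2)/4 = M reduces to a quadratic in B1.

```python
import math

def texel_pairs(A, B, M, A1=1.0):
    """Find (A1, B1) and (A2, B2) so that the product of the two
    filtered lookups hits endpoints A, B and midpoint M.

    With A1 fixed and A2 = A/A1, B2 = B/B1, the midpoint constraint
    expands to:  (A/A1)*B1^2 + (A + B - 4M)*B1 + A1*B = 0.
    """
    a = A / A1
    b = A + B - 4.0 * M
    c = A1 * B
    disc = b * b - 4.0 * a * c
    if disc < 0:
        raise ValueError("midpoint M unreachable with this parametrization")
    B1 = (-b - math.sqrt(disc)) / (2.0 * a)   # take the smaller root
    return (A1, B1), (A / A1, B / B1)

# Recovers the concave-down assignment: B1 = 0.6, B2 = 1.0
print(texel_pairs(0.1, 0.6, 0.44))
```

This is only per-segment; the genuinely tricky part is making neighboring segments share texels consistently across the whole table, which this sketch ignores.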
EDIT: Back to the main topic: vember, this divide-by-alpha technique is actually quite powerful, with fewer blending limitations than you might think. I thought of it back in December, inspired by nAo's discussions of an alternate HDR format. I wrote a lengthy, math-laden reply for this thread a week or two ago, but a brownout restarted my computer and I was too pissed to rewrite it.
I'll write it up again soon. Basically, additive and multiplicative blending can be done in a mathematically correct way (though sometimes quite susceptible to precision artifacts after a couple of layers), and a LERP or an arbitrary linear combination can be done correctly in certain cases. There are a few other tricks as well.
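As a teaser, here's a small numeric sketch of the flavor of thing I mean. This just simulates the framebuffer math in Python under the assumption that a pixel stores (C*a, a) and the displayed value is color/alpha; the exact blend-state setup (and which of these the hardware can actually express) is for the full write-up.

```python
# Simulated divide-by-alpha pixel: stored as (C*a, a), displayed as color/alpha.

def decode(px):
    c, a = px
    return c / a

def blend_multiply(dst, src):
    """src*dst on both the color and alpha channels.
    The scale factors cancel in the final divide, so the decoded
    result is exactly C1*C2 regardless of the alphas."""
    return (dst[0] * src[0], dst[1] * src[1])

def blend_add(dst, src_color):
    """Additive blend with the source color scaled by destination
    alpha (a la srcFactor = DST_ALPHA), alpha left untouched.
    Decodes to exactly C1 + C2."""
    c, a = dst
    return (c + src_color * a, a)

p1 = (0.5 * 0.25, 0.25)     # C1 = 0.5 stored with scale a = 0.25
p2 = (2.0 * 0.125, 0.125)   # C2 = 2.0 stored with scale a = 0.125

print(decode(blend_multiply(p1, p2)))  # 0.5 * 2.0 = 1.0
print(decode(blend_add(p1, 2.0)))      # 0.5 + 2.0 = 2.5
```

Note the precision caveat in action: each multiplicative layer shrinks both the color and alpha channels, so after a couple of layers you're dividing one tiny number by another.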