DemoCoder is (I assume) talking about D3DX9 texture shader objects (which have only recently become objects; in the current release it's just a special D3DX effect type).
This is a special HLSL/effect type that allows standard HLSL code to be evaluated into a texture. AFAIK it's currently purely a CPU thing, but there's no reason they couldn't implement it as a GPU-accelerated path in the future.
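Conceptually the CPU side is just evaluating the shader function at the centre of every texel and writing the results out. A rough sketch of that idea in Python (the names here are made up for illustration, nothing to do with the actual D3DX API):

```python
import math

def fill_texture(width, height, shader_fn):
    """Evaluate shader_fn at the centre of every texel, like a
    CPU-side texture shader fill. Returns a row-major 2D list."""
    texture = []
    for y in range(height):
        row = []
        for x in range(width):
            # Normalised texel-centre coordinates in [0, 1].
            u = (x + 0.5) / width
            v = (y + 0.5) / height
            row.append(shader_fn(u, v))
        texture.append(row)
    return texture

# Example "shader": a radial falloff, the kind of function you
# might bake into a lookup texture instead of computing per pixel.
falloff = fill_texture(16, 16,
                       lambda u, v: max(0.0, 1.0 - math.hypot(u - 0.5, v - 0.5)))
```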
Why is this handy?
Imagine two different cards that balance texture reads vs ALU operations very differently. Say card A gets a texture read essentially for free, whereas card B performs the actual ALU operations faster than a texture read (or more accurately, etc.). You can then write a single HLSL function that does the operation, detect at runtime which card you're on, and either use a texture shader to bake the function into a lookup texture or just call the function directly in the actual shader.
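The dispatch idea looks something like this (sketched in Python rather than HLSL, with a made-up capability flag and a stand-in function): write the operation once, bake it into a lookup table on hardware where texture reads are cheap, and evaluate it directly where ALU is cheap. Both paths compute the same thing up to LUT quantisation.

```python
def gamma_curve(x):
    """The operation we want, written once (stand-in for the HLSL function)."""
    return x ** (1.0 / 2.2)

def bake_lut(fn, size=256):
    """Texture-shader path: precompute fn into a 1D 'lookup texture'."""
    return [fn(i / (size - 1)) for i in range(size)]

def make_evaluator(prefers_texture_reads, fn=gamma_curve):
    """Pick a strategy per 'card': LUT lookup vs direct ALU evaluation."""
    if prefers_texture_reads:                 # card A: texture reads ~free
        lut = bake_lut(fn)
        return lambda x: lut[round(x * (len(lut) - 1))]
    return fn                                 # card B: ALU is faster

card_a = make_evaluator(True)    # samples a baked lookup table
card_b = make_evaluator(False)   # evaluates the function inline
```

The point is that the function itself is authored once; only the delivery mechanism (lookup texture vs inline code) changes per card.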
At the moment it's totally manual and largely undocumented, but the latest beta is changing things a fair bit in this area.