> The TEVs are programmable. They're not as flexible as DX9+ shaders, but I've seen many developer statements claiming they're comparable to DX8.1 pixel shaders. There's no way that a game like The Conduit could be made otherwise.

The DX7 texture combination system had an upper limit of 4 layers that you could combine with various operations (add/mul/dot product/etc., plus constant multipliers and adds). DX8 pixel shaders have eight instruction slots (but only 4 texture slots). The DX8 system is more flexible (two operations for each texture sample), but it's still very limited. DX7 already introduced support for cube maps, for dependent texture reads (EMBM) and for the DOT3 texture combiner. With those features and the 4-layer texture combination system, you can do many of the same things you can with DX8 shaders. Ten years ago my DX7 engine supported both diffuse and specular normal mapping (a multipass mix of EMBM and DOT3 texture layer tricks). Good-looking per-pixel normal mapping is doable in the pure DX7 fixed-function pipeline. With the proper tricks you can also do blur kernels and get nice-looking bloom effects; some PS2 games even had these. Calling the DX7-style fixed-function texture combiners "programmable shaders" is more a marketing trick than anything else. DX7 is capable of many things, but calling it programmable is a stretch. The Conduit could easily be made with DX7 hardware.
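To show why DOT3 in the fixed-function pipeline is enough for per-pixel lighting: the DOT3 combiner stage treats two RGB arguments as biased vectors (channel = component * 0.5 + 0.5) and computes roughly 4 * sum((a - 0.5) * (b - 0.5)), i.e. the N.L dot product, clamped to the output range. A minimal sketch of that math (the exact per-channel rounding on real hardware differs, and the clamp-to-[0,1] at the end is my simplification):

```python
def dot3_combine(normal_rgb, light_rgb):
    """Approximate what a fixed-function DOT3 texture stage computes.

    Both inputs are 8-bit RGB triples encoding signed [-1, 1] vectors
    with the usual *0.5+0.5 bias (e.g. (128, 128, 255) ~ (0, 0, 1)).
    The stage undoes the bias, dots the vectors, and clamps the result.
    """
    n = [c / 255.0 for c in normal_rgb]   # back to [0, 1]
    l = [c / 255.0 for c in light_rgb]
    # 4 * (a - 0.5) * (b - 0.5) == ((a - 0.5) * 2) * ((b - 0.5) * 2),
    # i.e. the dot product of the two unbiased [-1, 1] vectors.
    d = 4.0 * sum((a - 0.5) * (b - 0.5) for a, b in zip(n, l))
    return min(max(d, 0.0), 1.0)          # combiner output is clamped

# Normal pointing at the light -> full brightness; opposite -> black.
print(dot3_combine((128, 128, 255), (128, 128, 255)))  # ~1.0
print(dot3_combine((128, 128, 255), (128, 128, 0)))    # ~0.0
```

Feed a tangent-space normal map as one argument and a per-vertex light vector (baked into the diffuse color) as the other, and this single stage gives you per-pixel diffuse lighting with no shaders at all.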
I personally wouldn't call DX8 fully programmable either. DX8 shaders had a really limited instruction set, only 8 instruction slots, and severe limits on how you can use the texture data you have sampled. Both the internal calculations and the data input/output are in fixed-point format (10 bits if I remember correctly, with a range from -4 to +4), limiting the usability even more. DX9 shaders can be up to 65536 instructions long, can fully use texture sampling results in all calculations, and can input, output and calculate results in floating-point formats. It's a huge difference.
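A quick sketch of what that fixed-point [-4, +4] range does to intermediate values. The exact bit layout varied per GPU, so the fraction_bits value below is purely an assumption for illustration; the point is that large results saturate at +/-4 and small results get crushed to zero:

```python
def quantize_dx8(x, fraction_bits=8):
    """Simulate a DX8-era fixed-point register: clamp to [-4, 4] and
    snap to a fixed-point grid. fraction_bits is an assumed precision,
    not the spec of any particular GPU."""
    x = min(max(x, -4.0), 4.0)          # range saturates at +/-4
    step = 1.0 / (1 << fraction_bits)   # smallest representable increment
    return round(x / step) * step

print(quantize_dx8(10.0))    # saturates to 4.0
print(quantize_dx8(-7.0))    # saturates to -4.0
print(quantize_dx8(0.001))   # below one grid step -> 0.0
```

Anything like an HDR bloom accumulator or a long chain of dependent math quickly runs into both walls, which is why the DX9 floating-point pipeline was such a big step.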