Well, they were maths processors, but they worked on pixel data instead of polygon vertices. Back in the day, before shader processors were unified, each type of shader processor had its own rule-set: maximum number of allowed instructions, differences in instruction sets, registers and such. Pixel shaders were quite primitive back then, often described as register combiners on steroids.
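To give a feel for how limited that model was, here's a rough Python sketch of a single combiner-style stage (this is an illustration of the general idea, not the actual NV2A hardware interface): each stage could essentially only compute a couple of per-channel products and a sum from fixed input slots like texture samples and vertex colors. The function and variable names are made up for the example.

```python
# Sketch of one register-combiner-style stage, roughly out = A*B + C*D
# per channel. You chained a handful of such fixed stages; there was no
# arbitrary program flow, just wiring inputs into slots.
def combiner_stage(a, b, c, d):
    # per-channel multiply-add, clamped to the representable [0, 1] range
    return tuple(max(0.0, min(1.0, ai * bi + ci * di))
                 for ai, bi, ci, di in zip(a, b, c, d))

tex0    = (0.8, 0.2, 0.1)  # hypothetical texture sample (RGB)
diffuse = (1.0, 0.5, 0.5)  # hypothetical interpolated vertex color
one     = (1.0, 1.0, 1.0)
zero    = (0.0, 0.0, 0.0)

# plain "modulate" (tex * diffuse) is one stage with the C*D half zeroed out
print(combiner_stage(tex0, diffuse, zero, one))
```

Anything fancier than a few of these multiply-adds simply didn't fit in the hardware's stage budget.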
You didn't have many instruction slots per pixel shader program, and if you ran more than a few instructions per pixel your performance crashed and burned anyway. It's not like these days, when high-end GPUs and cutting-edge game engines routinely run dozens, sometimes hundreds, of instructions per pixel without any visible slowdown. With NV2A-era hardware, for anything even slightly advanced chances were you'd have to fall back to multipass rendering, which could tank performance; on top of that, 8-bit integer precision per color channel meant banding from accumulated precision loss.
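The banding problem is easy to demonstrate. Here's a small Python sketch (my own toy model, not anything from real hardware) that darkens a smooth gradient and then brightens it back up, once in float and once with an 8-bit round-trip between the two "passes", the way a multipass renderer writing to an 8-bit framebuffer would:

```python
# Why 8-bit-per-channel intermediates cause banding: darken a gradient
# to 25% then brighten it 4x, with and without an 8-bit round-trip
# between the two steps.
def quantize8(x):
    # clamp to [0, 1] and snap to the nearest of 256 representable levels
    return round(max(0.0, min(1.0, x)) * 255) / 255

gradient = [i / 1023 for i in range(1024)]  # smooth 1024-step ramp

float_result = [g * 0.25 * 4.0 for g in gradient]
int8_result  = [quantize8(quantize8(g * 0.25) * 4.0) for g in gradient]

# the float path keeps all ~1024 distinct shades; the 8-bit path
# collapses to far fewer, which shows up on screen as visible bands
print(len(set(float_result)), len(set(int8_result)))
```

Darkening to 25% squeezes the whole gradient into the bottom quarter of the 8-bit range, so most of the distinct shades are destroyed before the brightening pass can get them back.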
It wasn't until floating-point color buffers became fast enough to be practical that pixel shading could really start to stretch its legs, effects-wise. Modern games run an entire laundry list of special effects over much if not all of the screen, and still show no visible banding or other artefacts...