Don't mean to derail the graphics discussion too much, but my experience is mainly with software synths. There you have samplers (which would be analogous to textures); there are also generators such as subtractive synthesizers, and effects processors (which, in this context, are mostly analogous to shaders). Then, as mentioned above, you have wave-table or grain-table synths, which hold a set of sampled sound 'packets' that are mathematically operated on to generate the waveform (a bit like a shader operating on texture data).
Lots of similarities in my mind.
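To make the analogy concrete, here's a toy sketch (my own illustration, not any real synth's code) of a wavetable oscillator in Python: a single-cycle 'packet' stored in a table, read back at pitch with interpolation, much like a shader sampling and filtering a texture. All the names here are made up.

```python
import math

TABLE_SIZE = 2048
SAMPLE_RATE = 44100

# The stored "packet": one cycle of a band-limited saw, built from 16 harmonics.
table = [
    sum(math.sin(2 * math.pi * h * i / TABLE_SIZE) / h for h in range(1, 17))
    for i in range(TABLE_SIZE)
]

def render(freq_hz, num_samples):
    """Read through the table at the given pitch, like a shader sampling a texture."""
    out = []
    phase = 0.0
    step = freq_hz * TABLE_SIZE / SAMPLE_RATE  # table cells advanced per output sample
    for _ in range(num_samples):
        i = int(phase)
        frac = phase - i
        # Linear interpolation between neighboring entries (cf. bilinear texture filtering).
        out.append(table[i] * (1 - frac) + table[(i + 1) % TABLE_SIZE] * frac)
        phase = (phase + step) % TABLE_SIZE
    return out

samples = render(220.0, 1024)  # a short burst of an A3 saw
```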
(BS hat on)
As for graphics, I think it will be very difficult to approach the limits of realism and immersion without orders-of-magnitude increases in texture use. Let me qualify that: those textures may only exist in the 'art' generation stage, while space and bandwidth limitations force the raw data to be compressed by various means: fractals, procedurals, lossy compression, etc. The 'compressed' versions would be what's used for real-time rendering. The challenge along these lines will be finding efficient methods to fit the raw textures with an approximating procedural that retains enough detail.
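For what I mean by 'fitting', here's a toy 1-D sketch, assuming the procedural is a fixed bank of value-noise octaves and only the least-squares-fitted amplitudes get stored; the 'raw texture' here is just synthetic stand-in data, and the whole setup is illustrative rather than how any actual tool does it.

```python
import numpy as np

rng = np.random.default_rng(0)

def value_noise(n, period, rng):
    """Cheap 1-D value noise: random lattice values, linearly interpolated."""
    lattice = rng.uniform(-1, 1, n // period + 2)
    x = np.arange(n) / period
    i = x.astype(int)
    frac = x - i
    return lattice[i] * (1 - frac) + lattice[i + 1] * frac

N = 4096
target = rng.uniform(-1, 1, N)                               # stand-in for one scanline of a raw texture
target = np.convolve(target, np.ones(32) / 32, mode="same")  # smooth it so it has some structure

# Fixed noise octaves; only their amplitudes are stored (the "compressed" form).
octaves = np.stack([value_noise(N, 2 ** k, rng) for k in range(3, 9)])

# Least-squares fit of octave amplitudes to the raw data.
amps, *_ = np.linalg.lstsq(octaves.T, target, rcond=None)
approx = amps @ octaves
print("stored parameters:", amps.size, "vs raw samples:", N)
print("RMS error:", np.sqrt(np.mean((approx - target) ** 2)))
```

The residual error is the crux: a handful of parameters can capture the broad statistics, but getting 'enough detail' back out is exactly the hard part.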
I also think we may see more and more textures based on real-world photographs, and fewer created as art from scratch.
Of course, that's a view from the outside, since I'm not in the graphics industry.
(hat off)
ERK