Raytracing would be nice. The answer depends entirely on the technology available at the time. With sufficient processing power (and that is many, many years away) you could do away with the GPU completely, or simply build a dual-CPU system with one processor dedicated to graphics. Given enough horsepower, any effect, any feature, any method of rendering can be emulated by a general-purpose processor; it would "simply" (what an understatement) be a matter of writing drivers to implement the desired functions in software.
For example, with a suitable driver, any video card should be able to shunt its programmable vertex and pixel shading work over to the CPU. Today this would be insanely slow, due to limits on processing power, bus bandwidth, and memory size (not to mention driver developer intelligence).
Twenty-five years from now... who knows? Perhaps there will be no video cards at all; a powerful enough CPU simply wouldn't need one. Then again, perhaps developers will choose to stick with a GPU model similar to the current one for the sake of simpler programming... develop a high-level API that helps simplify programming a desired effect instead of coding it manually. Of course, there's no reason such an API couldn't be a general "driver plug-in" that breaks the high-level instructions down for a general-purpose CPU to process.
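For what it's worth, here's a hedged sketch of what such a "driver plug-in" arrangement might look like: the application programs against a high-level effect interface, and the backend decides whether the request goes to dedicated hardware or gets broken down into ordinary CPU loops. All the names here (EffectBackend, GpuBackend, apply_blur, and so on) are hypothetical, not any real API.

```cpp
// Hypothetical sketch of the "driver plug-in" idea: the application asks a
// high-level API for an effect, and the chosen backend decides how to do it.
#include <cstdio>

// High-level interface the application programs against.
struct EffectBackend {
    virtual void apply_blur(float radius) = 0;
    virtual ~EffectBackend() = default;
};

// One plug-in might hand the work to dedicated hardware...
struct GpuBackend : EffectBackend {
    void apply_blur(float radius) override {
        std::printf("GPU: dispatch blur shader, radius %.1f\n", radius);
    }
};

// ...while another breaks the same request down into ordinary CPU loops.
struct CpuBackend : EffectBackend {
    void apply_blur(float radius) override {
        std::printf("CPU: run convolution loop, radius %.1f\n", radius);
    }
};

int main()
{
    GpuBackend gpu;
    CpuBackend cpu;

    // The same high-level call works either way; only the plug-in changes.
    EffectBackend* backends[] = { &gpu, &cpu };
    for (EffectBackend* backend : backends)
        backend->apply_blur(4.0f);
    return 0;
}
```

The application code never cares which backend it got, which is exactly the point: the "GPU" could be real silicon or just another thread on a general-purpose CPU.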
It's all a matter of who does the programming, and how to streamline the process so that absurd repetitiveness is avoided. The only reason for application-specific hardware, IMO, is a lack of processing power.