If you ignore Vista for a moment, then yeah, what's required to draw your typical 2D GUI would be pretty insignificant. But at the same time, how much CPU would really be required to take user input and apply it to the word processor's/email client's/web browser's memory?
The key is that a CPU can be made to run a basic software renderer.
A GPU can't be made to take user input and drive the system.
The lower the need for a GPU, the more acceptable it is having the CPU or some basic video hardware to do some rendering.
There is no corresponding increase in the utility of a GPU if the CPU can do less and less.
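To put the "basic software renderer" point in concrete terms, here's a minimal sketch of the kind of work a 2D desktop actually asks for: filling rectangles in a framebuffer. The resolution, colours, and PPM dump are made-up placeholders and there's no window system involved; it's only meant to show how little arithmetic sits behind a typical GUI frame.

```c
/* Minimal CPU-side 2D rendering sketch: fill a framebuffer, draw a
 * "window" and its title bar as opaque rectangles, then dump the result
 * as a PPM so it can be inspected.  All values here are illustrative. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define W 1024
#define H 768

static void fill_rect(uint32_t *fb, int x0, int y0, int w, int h, uint32_t argb)
{
    for (int y = y0; y < y0 + h; y++)
        for (int x = x0; x < x0 + w; x++)
            fb[y * W + x] = argb;              /* one store per pixel */
}

int main(void)
{
    uint32_t *fb = malloc((size_t)W * H * sizeof *fb);
    if (!fb) return 1;

    fill_rect(fb, 0, 0, W, H, 0xFFC0C0C0);          /* grey desktop   */
    fill_rect(fb, 100, 100, 400, 300, 0xFFFFFFFF);  /* a "window"     */
    fill_rect(fb, 100, 100, 400, 24, 0xFF000080);   /* its title bar  */

    /* Dump as binary PPM purely so the output can be looked at. */
    FILE *f = fopen("frame.ppm", "wb");
    if (!f) return 1;
    fprintf(f, "P6\n%d %d\n255\n", W, H);
    for (int i = 0; i < W * H; i++) {
        unsigned char rgb[3] = { fb[i] >> 16, fb[i] >> 8, fb[i] };
        fwrite(rgb, 1, 3, f);
    }
    fclose(f);
    free(fb);
    return 0;
}
```

A full redraw at 1024x768 is under a million pixel writes, which is why even a very modest CPU, or the most basic video hardware, can paint a desktop without breaking a sweat.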
The cost of low-end CPUs is such that there is little difference in price to most buyers whether it's a Celeron or a Centaur, besides the fact that a bunch of IT departments only buy Intel due to the branding.
We buy newer cores just because Intel's not going to make Pentium Pros on its latest and greatest fabs. It's not worth maintaining that platform when the costs of the units are so low anyway.
In which case neither is replaceable with nothing.
Not exactly; the CPU can struggle along pathetically without a GPU.
The GPU cannot do the same thing.
But my point still stands: if this is all that's required, Intel and AMD should be building and selling far more integrated, much simpler, and much cheaper products instead of the clock-speed and IPC behemoths that we have today.
The high end needs more performance, and the high end brings margins. It is cheaper to amortize the development costs of the high end by using it again for the mid and low end, as well as suppressing any upstarts that might try to weasel in from below.
Which makes me think that the only reason we have the CPUs we do today is because high-end products create interest, and interest sells.
We haven't run out of a need for performance growth, though Intel and AMD would slow core introductions if they could be confident the other would do the same.
The high-cost variants exist more as marketing strategies than as income sources. It's the less-than-insane versions that sell well.
What is unfortunate is that they don't seem to have realized that the same can be made true for GPUs. That is ultimately what has been holding back IGPs, and it will probably cause the death of high-end GPUs if Nvidia goes out of business or gets bought.
GPUs already use high-end SKUs to generate interest and create price segregation. If Nvidia and ATI and everyone else magically disappeared, then AMD (let's say the merger didn't happen) or Intel would step in, because the primary obstacle to their entry is the great lead in expertise that Nvidia has on its own turf.
That would be something interesting to look at: how many GPUs are sold to gamers versus how many CPUs are sold to the Top500 supercomputers, and then breaking GPUs down into single-chip and multi-chip/board configurations.
The Top500 aren't the only consumers of large numbers of processors. The entire low to mid-end server market is a heavy user of CPUs.
I honestly haven't the foggiest idea how that comparison would truly look, but I think it would be safe to assume that GPUs are at least as profitable as those supercomputers are.
The top supercomputers often get discounted rates; direct profits aren't the only concern in that market.
You may notice that I'm intentionally ignoring web servers; this is because they are primarily IO-bound and much like my hypothetical office machine above, except that now you might want to scale to more cores, à la Niagara.
Not every server only has lightweight threads. Niagara's niche is a bit broader than some thought it would be, but not that broad.
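For what it's worth, here's a rough sketch of what that "primarily IO" shape looks like in code: a thread-per-connection echo server (POSIX sockets and pthreads; the port and the echo behaviour are stand-ins for real HTTP handling) where nearly every thread is asleep in read() rather than burning a core. That's the workload Niagara's many slow hardware threads suit; threads that do real computation per request are a different story.

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

static void *handle(void *arg)
{
    int fd = (int)(intptr_t)arg;
    char buf[4096];
    ssize_t n;
    /* Most of this thread's life is spent blocked in read(); the CPU
     * work per request is tiny, which is the Niagara sweet spot. */
    while ((n = read(fd, buf, sizeof buf)) > 0)
        write(fd, buf, (size_t)n);
    close(fd);
    return NULL;
}

int main(void)
{
    int lfd = socket(AF_INET, SOCK_STREAM, 0);
    int one = 1;
    struct sockaddr_in addr = { 0 };

    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(8080);              /* arbitrary example port */

    setsockopt(lfd, SOL_SOCKET, SO_REUSEADDR, &one, sizeof one);
    if (bind(lfd, (struct sockaddr *)&addr, sizeof addr) < 0 || listen(lfd, 128) < 0) {
        perror("bind/listen");
        return 1;
    }

    for (;;) {
        int cfd = accept(lfd, NULL, NULL);
        if (cfd < 0)
            continue;
        pthread_t t;
        /* One cheap, mostly-idle thread per connection. */
        pthread_create(&t, NULL, handle, (void *)(intptr_t)cfd);
        pthread_detach(t);
    }
}
```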
Are the Xbox 360 and PS3 examples of the future? I certainly wouldn't consider either of their CPUs particularly high-end compared to a C2D or X2 on current programs.
I don't think you can put CELL in the low-end category, and Xenon isn't entirely that bad.
What's to say that a GPU won't progress to being programmable enough, and with virtualized memory, to be capable of performing low-end CPU tasks? Just as high-end CPUs today are capable of doing all the rendering a GPU can do.
There's no reason why not, but is it really a GPU or just a CPU that's good at graphics? If it becomes the central processing unit of the system, the GPU moniker would be even more meaningless than it is now.
But going into the future, what type of workloads are we expecting? Will they be TLP-heavy and parallel, or do we still need more increases in clock speed and IPC? Personally I say we need both, but from the appearance of everything we will be getting processors (CPU and GPU) that fall firmly into the first type of workload, and the only reason the CPU will swallow the GPU is because the CPU companies are bigger.
Neither type is going to stagnate entirely on IPC and clock speed. There will always be a need for single-threaded performance, if only to fully utilize the parallel units in less-than-ideal conditions.
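One way to see why the serial part never stops mattering is plain old Amdahl's law; the numbers below are purely illustrative:

```
speedup(n) = 1 / ((1 - p) + p/n)        p = parallel fraction, n = parallel units

p = 0.95, n = 32:   1 / (0.05 + 0.95/32) ≈ 12.5x   (nowhere near 32x)
p = 0.95, n → ∞:    at most 1 / 0.05 = 20x
```

The serial slice caps the whole machine, so faster single-threaded execution is the only way to actually get your money's worth out of the parallel units.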
If CPU companies swallow up the GPU companies, it's in no small part because the market for CPUs is just bigger and because GPUs need CPUs far more than the other way around.