From the point of view of a programmer, a CPU is very 'flat': it offers a set of linear functions, performed in sequence. A GPU, by contrast, is highly pipelined by design. It is more like a few highly specialized DSPs embedded in between some generic fixed-function electronics.
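A minimal sketch of that 'outside' view (my own illustration, not part of the original post): the GPU behaves like a fixed chain of specialized stages that every piece of data is pushed through, while the CPU simply runs whatever instruction comes next.

```c
/* Rough sketch of a (grossly simplified) fixed-function GPU pipeline.
 * The stage names and their behaviour are illustrative only. */
#include <stdio.h>

typedef struct { float x, y; } vertex;

/* Stand-ins for specialized pipeline stages. */
static vertex transform(vertex v) { v.x *= 2.0f; v.y *= 2.0f; return v; }
static vertex clip(vertex v)      { if (v.x > 1.0f) v.x = 1.0f; return v; }
static void   rasterize(vertex v) { printf("pixel at %f,%f\n", v.x, v.y); }

int main(void)
{
    vertex verts[3] = { {0.1f, 0.2f}, {0.6f, 0.4f}, {0.9f, 0.9f} };

    /* Every vertex flows through the same fixed sequence of stages; on real
     * hardware each stage is a separate specialized unit working in parallel
     * on different vertices, which is what "highly pipelined" means here. */
    for (int i = 0; i < 3; i++)
        rasterize(clip(transform(verts[i])));

    return 0;
}
```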
But that is only on the outside! Inside, from the POV of the chip designer, a CPU and a GPU have more in common than they differ from one another. And they keep growing closer.
While a lot of functions like an FPU, an MMU, DMA and others have been incorporated into the CPU, the CPU itself has been broken down into a pipeline composed of highly specialized fixed-function units. And both use multiple pipelines nowadays.
So, what are the big differences?
Well, for starters, the datatypes used are quite different, although (generally speaking) a GPU uses the same ones as some of the sub-units of a CPU (the FPU, the multimedia extensions). And while the CPU gains ever more specialized units, the GPU evolves towards a 'flatter', more general-purpose CPU.
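To make that datatype overlap concrete, here is a rough sketch (assuming an x86 CPU with SSE; the values are made up): the 4-wide single-precision float vector that a GPU processes per vertex or per pixel is essentially the same datatype the CPU's multimedia extensions operate on.

```c
/* Sketch: one 128-bit SSE register holds four floats, the same layout a
 * GPU uses for an RGBA colour or an XYZW vertex. */
#include <xmmintrin.h>
#include <stdio.h>

int main(void)
{
    /* Two four-component colours, one register each. */
    __m128 a = _mm_set_ps(1.0f, 0.5f, 0.25f, 0.0f);
    __m128 b = _mm_set_ps(0.0f, 0.5f, 0.75f, 1.0f);

    /* One instruction adds all four components at once, much like a GPU's
     * per-component colour blend. */
    __m128 sum = _mm_add_ps(a, b);

    float out[4];
    _mm_storeu_ps(out, sum);
    printf("%f %f %f %f\n", out[0], out[1], out[2], out[3]);
    return 0;
}
```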
We all know a CPU can be used as a GPU. But is the opposite possible as well? And would the introduction of a general vector unit, preferably as a macro to incorporate into a processor design, create a hybrid? Or do we need more for that?
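For clarity, "a CPU used as a GPU" just means doing the GPU's fixed-function work in software. A minimal sketch of what a vertex unit (or a general vector unit) would otherwise do in hardware (my own example, with made-up values):

```c
/* Software vertex transform: multiply a vertex by a 4x4 matrix, the core
 * operation of a GPU's vertex pipeline, done here on the CPU. */
#include <stdio.h>

typedef struct { float x, y, z, w; } vec4;

static vec4 transform(const float m[4][4], vec4 v)
{
    vec4 r;
    r.x = m[0][0]*v.x + m[0][1]*v.y + m[0][2]*v.z + m[0][3]*v.w;
    r.y = m[1][0]*v.x + m[1][1]*v.y + m[1][2]*v.z + m[1][3]*v.w;
    r.z = m[2][0]*v.x + m[2][1]*v.y + m[2][2]*v.z + m[2][3]*v.w;
    r.w = m[3][0]*v.x + m[3][1]*v.y + m[3][2]*v.z + m[3][3]*v.w;
    return r;
}

int main(void)
{
    /* Simple translation matrix: move the vertex by (1, 2, 3). */
    float m[4][4] = {
        {1, 0, 0, 1},
        {0, 1, 0, 2},
        {0, 0, 1, 3},
        {0, 0, 0, 1},
    };
    vec4 v = { 1.0f, 1.0f, 1.0f, 1.0f };
    vec4 r = transform(m, v);
    printf("%f %f %f %f\n", r.x, r.y, r.z, r.w);
    return 0;
}
```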
They use large (and cheap) PS2 clusters as 'poor man's' vector-processing supercomputers nowadays...
EDIT: It would (I think) at the same time allow GPUs to cut down significantly on the number of transistors used, as they could benefit from the building blocks used to design CPUs, thereby abandoning the ASIC approach. (Edit 2: which would *GREATLY* increase the clock speed of the GPU!)