Well, I guess that large data sets greatly reduce cache usefulness, so bandwidth becomes more important. As CPUs tend to crawl through large (parallel) data sets much more slowly than GPUs, it isn't as much of a constraint for pure CPUs as it is for GPUs (or APUs).
There is also the GPU's ability to hide latency by working on many requests at a time.
What I don't get is why a CPU would not benefit from the extra bandwidth.
As for the number of pending requests handled in parallel, I guess GPU behaviour can be mimicked in software.
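To illustrate the idea of mimicking GPU-style latency hiding in software, here is a toy sketch (my own, not from the thread): each "memory request" is simulated with a fixed delay, and a thread pool keeps many of them in flight at once, so the total wall time approaches one latency instead of the sum of all latencies.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fetch(i, latency=0.05):
    # Stand-in for a long-latency memory request (simulated with sleep).
    time.sleep(latency)
    return i

requests = range(8)

# One request at a time: total time is roughly 8 * latency.
start = time.perf_counter()
serial = [fetch(i) for i in requests]
serial_time = time.perf_counter() - start

# Many requests in flight at once: total time is roughly 1 * latency.
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=8) as pool:
    overlapped = list(pool.map(fetch, requests))
overlapped_time = time.perf_counter() - start
```

A GPU does this in hardware with thousands of resident threads; on a CPU you approximate it with software threads or prefetching, which is exactly why it scales less far.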
Another thing I'm not sure I get is "pure CPU". A single-core, single-thread chip is a pure CPU, I guess.
Multi-core chips are also pure CPUs, I would guess, so why would many cores not be considered "pure CPU"?
CPUs are only just entering the many-core era; the range of what they can do efficiently is going to widen, so I would guess that bandwidth requirements are going up.
Not what I meant: the memory technology used by CPUs and GPUs is going to converge soon.

Because for certain workloads you need that bandwidth. If you scale CPU bandwidth up, your FLOPs/watt number is going to drop. If you want to see how disingenuous the comparison is, imagine Nvidia released a TITAN GPU with only a single 64-bit GDDR5 memory channel clocked very low (iow modern CPU bandwidth). The FLOPs/watt of such a chip would look very good on paper, but I doubt they would have customers lining up for such a device.
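A quick back-of-the-envelope on that hypothetical crippled TITAN (my numbers, not from the thread): peak bandwidth is roughly the bus width in bytes times the transfer rate, so cutting the real card's 384-bit bus down to one slow 64-bit channel costs over 20x in bandwidth while leaving the on-paper FLOPs untouched.

```python
def peak_bandwidth_gbs(bus_width_bits, transfer_rate_gtps):
    # GB/s = (bus width in bytes) * (billions of transfers per second)
    return bus_width_bits / 8 * transfer_rate_gtps

# Real GTX TITAN: 384-bit GDDR5 at 6 GT/s -> 288 GB/s.
titan = peak_bandwidth_gbs(384, 6.0)

# Hypothetical crippled TITAN: a single 64-bit channel at a
# DDR3-1600-like 1.6 GT/s -> about 12.8 GB/s, i.e. one CPU memory channel.
crippled = peak_bandwidth_gbs(64, 1.6)

ratio = titan / crippled  # roughly 22x less bandwidth
```

Same compute units, same FLOPs/watt on paper, but most bandwidth-bound workloads would collapse, which is the point about such comparisons being disingenuous.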
Indeed, the power GPUs spend per GB/s of bandwidth will be significantly reduced, making comparisons more accurate.
So it is not relevant to compare a CPU (not to mention a 2/4 big-core chip) with a GPU by looking back at things like DDR3 or GDDR5.
And the power will be the same for the CPU and the GPU.
Now, I do not believe that one size fits all. The nice thing is I could see somebody like Intel shipping multiple 8+ core products for different workloads, based on different cores but supporting the same functionality (same ISA) and able to run the same code.
Now, if the point is to compare Ivy Bridge to a modern GPU, it is a bit moot, as imo CPUs are about to enter a new era.
Again, I don't think that using only big cores is the way to go, but on the plus side it is not like Intel (or IBM, or competitors on the ARM side) can only design one architecture.
I could see Intel dealing with three types of cores for a while:
the big cores (SB, IB, Haswell, etc.)
the middle-of-the-road cores (the Atom line)
the throughput cores (Xeon Phi)