There is a very large market that needs more CPU performance. It's bigger than the one that needs more GPU performance.

Where I was trying to lead that thought was that Intel keeps its IGP small, since no one 'needs' that performance, to minimise cost and maximise profit. The same could have been done for CPUs: they could have stayed with a Pentium classic, kept shrinking the die, ramping the clock speed, and adding cache and features until they had a SoC. After all, no one 'needs' that performance, in the very same sense that no one 'needs' a fast IGP.
If there were no CPU competition, Intel wouldn't have cycled through so many cores as fast as it has.
In the case of graphics, Intel's focus on the low end was such that neither Nvidia nor ATI really tried to go there. Intel went exactly as far as it knew no one would challenge it: integrated graphics for its value segment.
It remained focused on what it does best, so why push it?
Oh, and I do realize that there is a lower bound on how little silicon one would want to use, and that going with a single high-end design can help cut costs. I just think that the way Intel has treated the GPU is kind of silly when you compare their apparent reasoning for it with the world in which most of their CPUs live.
Most of their CPUs don't go to gamers. The volumes x86 chips ship in are much bigger than that market.
Not all servers do all their talking on the internet.

I'll admit I don't know a lot about web serving, but what tasks would a web server be doing that aren't very latency tolerant and don't involve a lot of concurrency? If you have to send data over the internet, isn't that latency going to let you mask the delay from any heavyweight threads run on a CPU like Niagara? I honestly don't know, but it seems like it should.
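To make that intuition concrete, here's a toy Python sketch (the numbers and names are made up, not a benchmark of anything real) of why lots of concurrent, latency-bound requests can tolerate slow individual cores: the network waits overlap, so only the on-CPU portion of each request adds to the total time.

```python
# Toy sketch (made-up numbers, not a benchmark): if each request spends most
# of its time waiting on the network, the waits of many concurrent requests
# overlap, so only the on-CPU portion of each one adds to the total time.
import asyncio
import time

NETWORK_DELAY = 0.100   # assumed network round-trip per request, in seconds
CPU_WORK = 0.010        # assumed serial CPU time per request on a slow core

async def handle_request(i: int) -> int:
    await asyncio.sleep(NETWORK_DELAY)   # latency we can hide by overlapping
    time.sleep(CPU_WORK)                 # serial work we cannot hide
    return i

async def main() -> None:
    start = time.perf_counter()
    await asyncio.gather(*(handle_request(i) for i in range(100)))
    elapsed = time.perf_counter() - start
    # Roughly NETWORK_DELAY + 100 * CPU_WORK (~1.1 s), not
    # 100 * (NETWORK_DELAY + CPU_WORK) (~11 s), because the waits overlap.
    print(f"100 requests served in {elapsed:.2f}s")

asyncio.run(main())
```

The CPU_WORK term is the part that never overlaps, and that's the part a throughput design doesn't speed up, which is where the reply below is going.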
A lightweight front-end server can have any number of application and data servers behind it. Niagara is somewhat better in some places than previously thought, but a lot of what happens behind what's visible on the web takes serious work.
I don't think the system that runs Google's indexing service would do too well on it, considering there's a fair amount of heavy matrix math that goes on after a page has been parsed, and Niagara doesn't do multi-socket.
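For a sense of what that matrix math could look like, assuming it's PageRank-style link analysis (the function and data below are purely illustrative, not anything Google publishes), a minimal power-iteration sketch in Python:

```python
# Minimal PageRank-style power iteration (illustrative only; assumes every
# page appears as a key in `links`). Real indexing does this over billions
# of pages, which is why per-socket FP and memory throughput matter.
def pagerank(links: dict[str, list[str]], damping: float = 0.85,
             iters: int = 50) -> dict[str, float]:
    n = len(links)
    rank = {page: 1.0 / n for page in links}
    for _ in range(iters):
        new_rank = {page: (1.0 - damping) / n for page in links}
        for page, outlinks in links.items():
            if not outlinks:
                continue          # dangling page: its rank isn't redistributed here
            share = damping * rank[page] / len(outlinks)
            for target in outlinks:
                new_rank[target] += share
        rank = new_rank
    return rank

print(pagerank({"a": ["b"], "b": ["a", "c"], "c": ["a"]}))
```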
They opted away from heavy cores because they felt the gains weren't worth the costs. It wasn't until recently that that was actually the case.

I'm not trying to say that they are necessarily low-end or bad, but they made trade-offs that hurt serial general-purpose computing very badly.
You're asking "why did Intel increase performance on its cores all those years ago when it was worthwhile to do so?"
It's not really that revolutionary. Heterogeneous computing systems have been done before. They've been done with general-purpose cores tied to smaller cores with high DSP or FP performance and local store.

And if the future goes one way, CELL will go down fondly in the history books as revolutionary; if the future goes the other way, it will go down in the history books right next to Alpha, Itanium, and many others.
It's just that it's all on one chip, and that's not really anything more than moving things a couple of inches closer together.
What would the GPU do? If it's not being fed a command list from somewhere, what can it do?

They will probably become one and the same, but I imagine the name will still indicate what something is going to be good at. What would still differentiate a chip called a GPU from one called a CPU is how the internal data paths are configured, which functional units are emphasised, batch sizes, etc.
It definitely sucks at that, so where's the gain in having 48 shader pipes when it can only produce enough commands to keep less than one of them fed?
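A toy sketch of the split being described (all the names and commands here are invented for illustration): the serial, branchy side decides what to do and builds the command list; the wide side can only execute what it's handed.

```python
# Toy sketch of the command-list split (all names are made up): the "CPU"
# side does the serial, branchy work of deciding what to draw; the "GPU"
# side is wide and fast but can only execute commands it has been handed.
from dataclasses import dataclass

@dataclass
class Command:
    op: str
    args: tuple

def cpu_build_frame() -> list[Command]:
    # Serial decision-making: state setup, choosing what to draw.
    commands = [Command("set_state", ("blend", "alpha"))]
    for mesh in ("terrain", "player", "skybox"):
        commands.append(Command("draw", (mesh,)))
    return commands

def gpu_execute(commands: list[Command]) -> None:
    # Wide execution: each command would fan out across many shader pipes,
    # but nothing happens until a command arrives.
    for cmd in commands:
        print(f"executing {cmd.op}{cmd.args}")

gpu_execute(cpu_build_frame())
```

If the cpu_build_frame side is too weak to keep the queue full, the wide gpu_execute side simply starves, which is the point above.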