Intel to make bid for nVidia? **Reuters**

Where I was trying to lead that thought was that Intel keeps their IGP small, since no one 'needs' that performance, to minimise cost and maximise profit. The same could be done for CPUs: they could have stayed with a Pentium classic and kept shrinking the die, ramping the clock speed, and adding cache and features until they had a SoC. After all, no one 'needs' that performance, in the very same sense that no one 'needs' a fast IGP.
There is a very large market that needs more CPU performance. It's bigger than the one that needs more GPU performance.
If there were no CPU competition, Intel wouldn't have cycled through so many cores as fast as it has.

In the case of graphics, Intel's focus on the low end was such that neither Nvidia nor ATI really tried to go there. It went exactly as far as it knew no one would challenge it: integrated graphics for its value segment.
It remained focused on what it does best; why push it?

Oh, and I do realize that there is a lower bound on how little silicon one would want to use, and that going with a single high-end design can help cut costs. I just think that the way Intel has treated the GPU is kind of silly when you compare their apparent reasoning for it with the world in which most of their CPUs live.

Most of their CPUs don't go to gamers; the volumes x86 chips ship in are bigger than that market.

I'll admit I don't know a lot about web serving, but what tasks would a web server be doing that aren't very latency tolerant and don't have a lot of concurrent tasks? If you have to send data over the internet, isn't that latency going to let you mask the delay from any heavyweight threads running on a CPU like Niagara? I honestly don't know, but it seems like it should.
Not all servers do all their talking on the internet.
A lightweight front-end server can have any number of application and data servers behind it. Niagara is somewhat better in some places than previously thought, but a lot of what happens behind what's visible to the web takes serious work.
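
Some back-of-the-envelope arithmetic makes that split concrete. The Python sketch below uses made-up numbers (every figure is an assumption, not a measurement) to show why internet latency can hide a slow core on an I/O-bound front end, while a compute-bound job behind it gets no such help:

```python
# Back-of-the-envelope sketch; every number here is an assumption.
network_rtt_ms = 80.0   # assumed client round-trip time over the internet
fast_core_ms = 1.0      # assumed time to build a response on a big core
slow_core_ms = 4.0      # same work on a slow, throughput-oriented core

def perceived_latency(core_ms: float) -> float:
    """Latency the client sees: network RTT plus server compute time."""
    return network_rtt_ms + core_ms

# Front-end work: the 3 ms core difference vanishes next to the 80 ms RTT.
print(perceived_latency(fast_core_ms))   # 81.0 ms
print(perceived_latency(slow_core_ms))   # 84.0 ms

# Back-end work: a 500 ms compute-bound job behind the front end is pure
# CPU time, so a 4x slower core means a 4x slower job; nothing masks it.
backend_job_ms = 500.0
print(backend_job_ms * slow_core_ms / fast_core_ms)  # 2000.0 ms
```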

I don't think the system that runs Google's indexing service would do too well, considering there's a fair amount of heavy matrix math that goes on after a page has been parsed, and Niagara doesn't do multi-socket.
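
As a hedged illustration of the kind of matrix math meant here: the best-known published piece of Google's ranking is PageRank, which boils down to repeated matrix-vector multiplication. The sketch below is a textbook power iteration in Python/NumPy, not Google's actual code:

```python
import numpy as np

def pagerank(link_matrix: np.ndarray, damping: float = 0.85,
             iters: int = 50) -> np.ndarray:
    """Power iteration: link_matrix[i, j] is the probability of moving
    from page j to page i, so each column sums to 1."""
    n = link_matrix.shape[0]
    rank = np.full(n, 1.0 / n)          # start from a uniform distribution
    for _ in range(iters):
        rank = (1.0 - damping) / n + damping * (link_matrix @ rank)
    return rank

# Toy 3-page web: page 0 links to 1 and 2, page 1 to 2, page 2 back to 0.
links = np.array([[0.0, 0.0, 1.0],
                  [0.5, 0.0, 0.0],
                  [0.5, 1.0, 0.0]])
print(pagerank(links))  # dense FP work that a single shared FPU would crawl through
```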

I'm not trying to say that they are necessarily low-end or bad, but they made trade-offs that hurt serial general-purpose computing very badly.
They opted away from heavy cores because they felt the gains weren't worth the costs. It wasn't until recently that that was the case.
You're asking "why did Intel increase performance on its cores all those years ago when it was worthwhile to do so?"

And if the future goes one way, CELL will go down fondly in the history books as revolutionary; if the future goes the other way, it will go down in the history books right next to Alpha, Itanium, and many others.
It's not really that revolutionary. Heterogeneous computing systems have been done before. They've been done with general-purpose cores tied to smaller cores with high DSP or FP performance and local store.

It's just that it's all on one chip, and that's not really anything more than changing a physical location by a couple of inches.
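
A minimal sketch of that older pattern, with a hypothetical kernel: the small core can only touch its own local store, so the host explicitly stages data in and out in chunks (on CELL that staging is DMA into each SPE's 256 KiB local store):

```python
# Hypothetical offload kernel; the "local store" is just a small buffer
# the accelerator core can address, staged explicitly by the host.
def offload_square(host_buffer: list[float], chunk: int = 4096) -> list[float]:
    results = []
    for start in range(0, len(host_buffer), chunk):
        local_store = host_buffer[start:start + chunk]   # stage in ("DMA get")
        local_store = [x * x for x in local_store]       # compute on the local copy
        results.extend(local_store)                      # stage out ("DMA put")
    return results

print(offload_square([1.0, 2.0, 3.0, 4.0, 5.0], chunk=2))
```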

They will probably become one and the same, but I imagine the name will still indicate what something is going to be good at. But what would still differentiate a chip called a GPU from one called a CPU is a matter of how the internal data paths are configured, what functional units are emphasised, batch sizes, etc...
What would the GPU do? If it's not being fed a command list from somewhere, what can it do?
It definitely sucks at that, so where's the gain in having 48 shader pipes and the ability to produce enough commands to feed less than one of them?
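
To put the command-list point in concrete terms, here's a toy sketch (hypothetical names throughout, not any real API): the serial, branchy decisions about what to draw happen on a CPU, and the GPU's wide shader array only ever consumes the finished list:

```python
# Toy command buffer; all names are hypothetical.
commands: list[tuple] = []

def record_draw(pipeline: str, vertex_count: int) -> None:
    """CPU-side work: deciding what to draw and encoding it as commands."""
    commands.append(("SET_PIPELINE", pipeline))
    commands.append(("DRAW", vertex_count))

# The serial, branchy logic lives here, on the CPU...
for mesh, verts in [("terrain", 48000), ("player", 1200), ("ui", 96)]:
    record_draw(f"shader_for_{mesh}", verts)

# ...and the GPU just consumes the result in order.
def gpu_consume(cmds: list[tuple]) -> None:
    for op, arg in cmds:
        print(f"GPU executes {op} {arg}")

gpu_consume(commands)
```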
 
Given that I don't write in Chinese, and therefore have no credibility. . .

. . .this deal as a buyout makes less sense to me than the AMD/ATI deal did. At least with that one I always saw why AMD would want to make that deal, though I did not understand at the time what ATI saw in it.

On this one, I don't see why it would make sense for Intel. As much duplication as there was with ATI, for Intel there is, as a percentage of the company (NV, that is), even more duplication/overlap with NV, and at twice the price. That is a snotload of money to be spending when there is that much duplication. Edit: This is an NV-willingness point too, actually. In retrospect it is clear that part of ATI's willingness to entertain AMD as a suitor was an understanding that it wouldn't require a major thwacking of staff/resources due to duplication.

Some kind of alliance or JV maybe, but I'm not seeing the logic for a pure merger/acquisition here.
 
Well, we'll see. . .but unless they've changed their minds from the original announcement, almost all duplication cuts will be aimed at sales/marketing and general office staff types (maybe some Finance people?) rather than the engineering departments.
 