Firms like Intel/AMD making such a move just to "piss off" NVIDIA sounds silly, to say the least;
I don't think so. Engineers don't make the decisions, managers do. And if the managers think it would be nice to screw nVidia out of the market, then that's what happens. It's much the same as nVidia blocking SLI on other chipsets so that every SLI setup sells an nForce. Of course it's mainly about higher margins, but if AMD and Intel can screw nVidia at the same time, all the better for them.
Ailuros said:
Last but not least I severely doubt that INTEL started years ago the Larrabee project without having any relevant prospects to integrate those cores into their CPUs in the less foreseeable future. In the meantime I wouldn't be surprised either if they use Imagination IP for their first steps.
Larrabee is not a GPU built from scratch using another company's know-how. It's more of a many-core CPU with vector computing units, plus some simple texturing-related logic and display output logic. So I doubt they used Imagination's technology for it; I always assumed that was meant for their IGPs only.
As to the question of integrating Larrabee into the CPU, that sure as hell will happen, but we're not talking Fusion or Havendale here; we're talking CPUs similar to what Larrabee already is. Oh, did I mention Larrabee uses P54C-based x86 cores?
Ailuros said:
If OpenCL makes a startup from firms that mostly deal with the PDA/mobile space, why wouldn't it make sense for the lowest end PC space too? As I said above it might even be that Intel uses IMG IP for their first moves and in such a case SGX should be close to ideal for anything GPGPU.
Using GPUs for general computing would make a lot of sense. But it won't happen until there's a unified API. Right now, if you want to write something and have it run on a GPU, you need separate code for CUDA, the Stream SDK and Toshiba's SpursEngine, plus a separate CPU path. If everybody agreed on one API (be it OpenCL, CUDA or whatever), you'd only need to write things twice: a GPU path and a CPU path. And the future is only having to write it once, with a compiler/runtime layer deciding what runs on the CPU and what runs on the GPU. The latter (a runtime layer) would be especially great, as it could take into account how powerful your CPU and GPU are and what their actual load is.
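To make the "write once, let a runtime decide" idea concrete, here's a minimal sketch in Python. Everything in it is hypothetical (the `Device`, `pick_device` and `run_kernel` names are invented for illustration, not from any real API): the programmer supplies one kernel, and a runtime layer picks whichever device currently has the most free throughput, weighing both raw power and current load as described above.

```python
# Hypothetical sketch of a runtime dispatch layer: one kernel,
# and the runtime chooses a backend based on device power and load.
# All names here are invented for illustration.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Device:
    name: str           # e.g. "cpu" or "gpu"
    peak_gflops: float  # rough measure of raw throughput
    load: float         # current utilisation, 0.0 .. 1.0

def pick_device(devices: List[Device]) -> Device:
    # Prefer the device with the most *free* throughput right now --
    # exactly the kind of thing a smart runtime layer could consider.
    return max(devices, key=lambda d: d.peak_gflops * (1.0 - d.load))

def run_kernel(kernel: Callable, data, devices: List[Device]):
    device = pick_device(devices)
    # A real runtime would compile the kernel for the chosen backend;
    # here we just execute a plain Python fallback.
    return device.name, [kernel(x) for x in data]

devices = [
    Device("cpu", peak_gflops=50.0, load=0.8),   # busy CPU
    Device("gpu", peak_gflops=500.0, load=0.1),  # mostly idle GPU
]
chosen, result = run_kernel(lambda x: x * x, [1, 2, 3], devices)
print(chosen, result)  # the mostly idle GPU wins the comparison
```

With a dumber "write it twice" model, the programmer would have to hard-code that choice; the point of the runtime layer is that the same program picks differently on a machine where the GPU is weak or saturated.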