Various thoughts about the previous post.
First, about the "all-AMD" hypothesis (related to some of Joker's comments).
I guess it's tied to AMD's financial situation.
AMD fares pretty well with its current GPUs. For their own sake, their CPUs had better improve.
It's clear that AMD would be willing to sign a contract even if its profitability is low. My point is that, depending on their overall situation (finances plus how competitive their products are), they could be slightly more greedy. Thus, at least for the CPU, IBM may end up competitive.
In regard to the performance of such a CPU compared to, say, a "Cell2", I don't think the gap would be as small as 25%. I mean, given time and good utilization, the current Cell on some demanding workloads could give a run for its money to current CPUs with at least four times Cell's transistor budget.
But I do get Joker454's point, as such a CPU is likely to be included in a GPU-heavy design (in regard to silicon budget) where some number-crunching tasks could be offloaded to the GPU.
And in that case, for a not-that-great CPU, IBM may be more interesting cost-wise.
I see nothing wrong with a launch price of $399. It's clear that manufacturers will subsidize less (if at all) than in the past generation, but that also means they might be willing to drop the price quicker. Even while subsidizing this gen, MS should have reached an affordable price point earlier than they will. The RROD cost them over $1 billion; spread across ~20 million units, that's ~$50 per system of "unexpected" costs.
In regard to the Larrabee pros/cons (related to some of Joshua's comments).
If I understand properly, having only one type of resource will help with the learning curve.
Coders in charge of optimization will have to deal with only one architecture. In Larrabee's case the scalar part is already well known, so the focus will be on the SIMD units.
It's new, but Intel claims to have done it right, consulting a bunch of people and coming up with something "compiler friendly".
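To make the SIMD point concrete, here is a minimal sketch of the kind of work that shifts onto the vector units. The scalar version is plain, well-understood x86; the second version uses standard 4-wide SSE intrinsics purely as a stand-in, since the details of Larrabee's own 16-wide vector instructions aren't public, so take the names as illustrative:

```c
#include <xmmintrin.h>  /* SSE intrinsics, used here as a stand-in */

/* Plain scalar version: ordinary, well-understood x86 code. */
void scale_add_scalar(float *dst, const float *a, const float *b,
                      float k, int n)
{
    for (int i = 0; i < n; ++i)
        dst[i] = a[i] * k + b[i];
}

/* SIMD version: 4 floats per iteration with SSE; on Larrabee the
   same idea would just be wider. Assumes n is a multiple of 4. */
void scale_add_simd(float *dst, const float *a, const float *b,
                    float k, int n)
{
    __m128 vk = _mm_set1_ps(k);
    for (int i = 0; i < n; i += 4) {
        __m128 va = _mm_loadu_ps(a + i);
        __m128 vb = _mm_loadu_ps(b + i);
        _mm_storeu_ps(dst + i, _mm_add_ps(_mm_mul_ps(va, vk), vb));
    }
}
```

The scalar half needs no learning at all, which is exactly the point: the optimization effort concentrates on the vector half.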
Larrabee would also benefit from the same advantage the first unified GPUs enjoyed: it is easier to balance your different workloads across a homogeneous type of resource (see the sketch below).
These factors are likely to help the few dev houses that will try custom solutions, as well as the teams in charge of providing tools, i.e. a software layer close enough to DirectX 11 not to hinder multiplatform game development.
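As a toy illustration of that balancing point (my own model, not anything Intel has described): with homogeneous cores, vertex, pixel, and other work can all go through one queue that any core drains, so no unit type sits idle while another is swamped:

```c
#include <stddef.h>

/* Toy model of load balancing on homogeneous cores. All names
   (task, task_queue, worker_drain) are illustrative, not a real API. */
enum task_kind { TASK_VERTEX, TASK_PIXEL, TASK_PHYSICS };

struct task {
    enum task_kind kind;
    void (*run)(void *data);
    void *data;
};

#define QUEUE_CAP 1024

struct task_queue {
    struct task items[QUEUE_CAP];
    size_t head, tail;  /* single-threaded toy: no locking shown */
};

/* A worker on a homogeneous core just runs whatever comes next;
   there is no fixed "vertex unit" that can starve while pixel
   work piles up, or vice versa. */
void worker_drain(struct task_queue *q)
{
    while (q->head != q->tail) {
        struct task *t = &q->items[q->head++ % QUEUE_CAP];
        t->run(t->data);
    }
}
```

That is essentially the same benefit unified shaders brought over the old split vertex/pixel pipelines, pushed one level further.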
(Related to some of Aaron's comments, and somewhat a response to Joshua too.) One other advantage is that if Intel makes it in the console market (meaning their solution is good enough, so this rests on an "if"), it will be cemented as an important actor in the GPU space. As Aaron has pointed out (multiple times, I would dare to say), the programming models tied to current GPUs or Cell, for example, are pretty much dead ends.
While Intel could reach high volumes through the IGP/GPU/console/GPGPU spaces, one could also notice that while Larrabee requires some extra effort, the gains are likely to last. That's a huge advantage of Larrabee: it's the first "graphics" chip based on an ISA likely to be long-lived (x86 plus its SIMD instruction sets). I mean, further iterations of the chip are likely to run a given piece of software better without any work from the developers.
If Larrabee performs properly (it doesn't need to be the best), I feel that its main strength over time will be that it provides a durable environment for different kinds of developers. And given Intel's overall weight, I expect them to make that clear to everyone / try to force it through, the same way they did with the SSE instructions for example.
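The SSE precedent also shows the mechanism by which an x86 base lasts: old binaries keep running, and software probes for newer vector extensions at runtime and uses them where present. A minimal sketch, assuming GCC and its __builtin_cpu_supports builtin; the kernel_* functions are hypothetical placeholders:

```c
#include <stdio.h>

/* Two implementations of the same routine; the hypothetical
   kernel_sse4 variant would use the newer vector instructions. */
void kernel_baseline(void) { puts("baseline x86 path"); }
void kernel_sse4(void)     { puts("SSE4.1 path"); }

int main(void)
{
    __builtin_cpu_init();  /* GCC: populate the CPU feature flags */

    /* Older chips still run the program; newer chips pick up the
       faster path with no change to source shipped years earlier. */
    if (__builtin_cpu_supports("sse4.1"))
        kernel_sse4();
    else
        kernel_baseline();
    return 0;
}
```

That stability is what "gains likely to last" means in practice: effort invested against the ISA keeps paying off across chip generations.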
Nao, as I agree with your response, I have nothing relevant to add.
Anyway, thanks for your answer.