Hey Hannibal,
First, sorry for the delay: you just triggered the anti-spam system. It's really effective, just a tad overly aggressive with new posters!
You'll only hit this problem under certain circumstances (posting links, among other things) for your first few posts, and then it'll go away. We tend to be pretty quick to approve posts, anyway!
And I agree with you: perhaps in the future I should just link to my pieces instead of trying to summarize them, because after rereading some of what I said on the subject, I definitely summarized too much!
What I meant (and that's what I said in this news piece) is that the *first* iteration of Larrabee will not be aimed at the gaming market at all, as far as I can see.
I tend to speak of Larrabee as the "chip", but you're probably right that it is the name of the core, and as such refers to the whole range of chips that will be manufactured on the Larrabee architecture. And I agree that Intel is probably hoping this will include gaming down the road; in that case, the fixed-function units aimed at the GPGPU market would presumably be replaced by traditional GPU units. I'd presume a delay of 9-12 months between the GPGPU iteration and the GPU iteration, but I could be wrong.
The big question in my mind, however, is whether Intel wants to implement rasterization as a fixed-function process, or whether they are still (blindly, imo) hoping for raytracing to become viable. We'll see. For their own sake, I hope they are not naive enough to believe raytracing will be a viable *replacement* for rasterization in the 2010 timeframe. As an added bonus for some effects, though, it could be quite interesting by then, I think.
Also, another very important factor is Larrabee's perf/mm² for gaming tasks, if it wants to enter that market. In the GPGPU space, your margins are going to be ridiculously high anyway, so who cares whether the chip costs you $100 or $300 to manufacture? Perf/watt will matter, but perf/mm² won't. In the gaming market, on the other hand, you cannot hope to compete with a less efficient architecture (unless you've got fab space to burn, but that's not a viable long-term strategy!)
If a Larrabee derivative for gaming can get within 35-40% of the perf/transistor of G100/R800 for gaming tasks, then that could get very interesting, because of Intel's fab advantage and their willingness to burn some cash to make the project successful initially. If they can't get that efficient, then it is largely irrelevant and both NVIDIA and AMD will have wasted hundreds of millions of dollars trying to compete with a threat that doesn't exist. Heh.
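To make that concrete, here's a back-of-envelope sketch of the argument. All the numbers below are hypothetical illustrations (not actual figures for Larrabee, G100, or R800): the idea is just that if a one-node process lead roughly doubles transistor density, then even ~40% of a competitor's perf/transistor can land you near-competitive perf/mm².

```python
# Back-of-envelope sketch; every number here is an assumed illustration,
# not a real figure for any of the chips discussed.
competitor_perf_per_transistor = 1.0   # normalized baseline (a G100/R800-class GPU)
competitor_density = 1.0               # normalized transistors per mm^2

larrabee_perf_per_transistor = 0.40    # the ~35-40% efficiency floor discussed above
larrabee_density = 2.0                 # assumed density gain from a one-node fab lead

# perf/mm^2 = (perf per transistor) * (transistors per mm^2)
competitor_perf_per_mm2 = competitor_perf_per_transistor * competitor_density
larrabee_perf_per_mm2 = larrabee_perf_per_transistor * larrabee_density

ratio = larrabee_perf_per_mm2 / competitor_perf_per_mm2
print(f"Larrabee perf/mm^2 relative to competitor: {ratio:.2f}")
# → Larrabee perf/mm^2 relative to competitor: 0.80
```

Under those assumed numbers, the fab advantage closes most of the architectural efficiency gap, and a willingness to eat some margin could cover the rest, which is the scenario I was gesturing at.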
One major advantage Larrabee has on its side is that Paul Otellini recently said he wanted to move more products to "node n" rather than "node n+1" in the future. This is a tidbit I didn't see anyone else mention, but I think it's very significant, because it implies that discrete graphics chips would always be on the cutting-edge node, rather than the previous one.