That's an interesting retrospective, thanks for posting.
But I think he's kind of missing the point here. When people ask him why Larrabee failed, they're not dismissing its utility in HPC; a lot of them know full well that it went on to become Xeon Phi. What they're asking is why it never materialized as a viable graphics part. As far as pretty much anyone is concerned, that's what Larrabee was supposed to be, regardless of what Intel's internal project guidelines or roadmaps dictated, and Intel made a lot of claims about why Larrabee was the right design for graphics. So why wasn't it?
This is a legitimate and, in my opinion, good question. He does address it somewhat, but I find the answer a little lacking. Suggesting that they were never really serious about graphics doesn't fit with the extensive software effort that had to have been underway, or with their public outreach, which consisted not just of empty marketing claims but of a lot of detailed academic description. The fact that they got by with little graphics-specific fixed-function hardware is moot if they were never competitive. So is the defense that the card ran DX11 titles all on its own, or that he played some particular game satisfactorily, or that it had some impressive compute or niche applications. If it wasn't within at least striking distance of nVidia and AMD in perf, perf/W, and perf/$ across a broad selection of games, then none of this mattered. And I suspect that after everything Intel invested in this effort, they wouldn't have pulled the plug if it had shown a clear path to success in that capacity.
The closest we get to an explanation is that maybe it could have used more texture samplers, or that it was held back by hardware bugs (for which Intel would have had a good sense of the fix timeline), or that it could have seen huge advances with more software development (based on what, and with how many more years?). There doesn't seem to be much consideration of the possibility that the whole concept was unsuited for competitive consumer graphics. It may well be that a sea of x86 cores with fairly conventional 512-bit SIMD and texture samplers bolted on was enough to deliver great gaming performance, but I'm not yet convinced. Since Larrabee, some pure-compute GPU renderers have been developed, but they've shown some pretty big gaps in performance in areas that aren't attributable to texturing, and that's without being saddled with an architecture that's more CPU-friendly than graphics-friendly.
Oh, and the argument that they must not have cared that much about graphics because they could have just built a big discrete Gen-based GPU seems out of touch. Everyone else knows that Intel's GPU designs were garbage in 2005, and even with a scaled-up area/power budget they would have been terrible, or at the very least far from great. I'm not even totally sure they could pull that off now. And even if they could, we're talking about huge changes that came with a lot of time and investment (and, I'm sure, many new hires). Ironically, some of their biggest improvements came from moving to more fixed-function hardware.