<strike>That's pretty uncalled-for.</strike>
Edit: I've thought about it some more, and here's what I have to say about this. I've never been comfortable in the GPU beat because it's not really my area. CPUs are my area, and the reason I was able to do a good job with the PS2 is that the Emotion Engine was just a CPU (albeit a weird one).
I don't think anyone questions that you do a good job with CPUs. I certainly enjoy all your CPU articles and find them quite informative, but I don't think you have enough of a background in graphics (or at least, you didn't, but to be fair that was some months ago) to be able to comment meaningfully on raytracing versus rasterization.
As such, you'll be spared my graphics commentary until one or more of the following things happen: GPGPU starts to look interesting again (it won't);
Well, that depends on where you look, doesn't it? GPGPU is still young. AMD just introduced its own SDK, NV has either just reached or is about to reach the point where the dev environment is friendly enough for programmers who aren't in the ultra-high-end to look at it, and now we've got GPGPU support (CTM from AMD, CUDA from NV) in the main driver revisions. Saying that GPGPU just isn't going to look interesting in the next few months is probably very silly. All it takes is one killer app to turn GPGPU from "interesting idea, nobody will ever use it" to "everyone wants to buy a G92/RV670 because this makes Photoshop three times as fast" or whatever.
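To make the "killer app" point concrete: here's a bare-bones C++ sketch (purely illustrative, made up by me, not code from any real app or SDK) of the kind of per-pixel, embarrassingly parallel loop that something like CUDA or CTM would hand off to the GPU as one lightweight thread per pixel:
<pre>
#include <cstddef>
#include <cstdint>
#include <vector>

// Per-pixel brightness adjustment (hypothetical example). Every pixel is
// independent, so on a GPU this loop body becomes one thread per pixel.
void brighten(std::vector<std::uint8_t>& pixels, int delta) {
    for (std::size_t i = 0; i < pixels.size(); ++i) {
        int v = static_cast<int>(pixels[i]) + delta;
        pixels[i] = static_cast<std::uint8_t>(v < 0 ? 0 : (v > 255 ? 255 : v));
    }
}

int main() {
    std::vector<std::uint8_t> image(1920 * 1080, 100); // dummy 8-bit image
    brighten(image, 20); // CPU version; a GPGPU port runs the same math
                         // across thousands of threads at once
    return 0;
}
</pre>
That's the whole trick: every iteration is independent, which is exactly the shape of work an image-heavy app has lying around everywhere.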
As far as Larrabee goes, do you actually think anyone will care about it as a GPU, at least for the first generation? Let's say it's insanely great, the true heir to R300 because it makes everything else look slow and ugly. It won't matter much. First, the shipping drivers will be buggy. Application developers don't always follow the DX specs, and Intel will have to scramble to put in app-specific workarounds just like NV and AMD have been doing for years. But that's only part of the problem! Then they need developers to actually develop and test using Larrabee (running stable drivers) as well as AMD and NV chips. We saw this with R300, and we'll see it with any serious competitor that appears. The second- or third-generation Larrabee is when it could start having a real impact on the graphics market. And realtime ray tracing? By the time it could be in serious competition for use, every card from every IHV will be able to do it and there will be some well-defined portion of Direct3D for it (and the Khronos group will be arguing over some minute portion of it before it can be included in OGL...)
So the real question is how well it will do in terms of GPGPU, and I don't know. It depends on the language it uses (I assume that will be Ct, but nobody's actually said so, and there's always TBB; sigh, why does Intel introduce twenty different things to solve the same problem? I don't want to have to know OpenMP, TBB, and Ct simultaneously), the quality of the compilers, and the ease of the development environment. I don't think these are just going to appear fully developed and wonderful upon the release of the hardware. CUDA and the AMD SCSDK will have had significant time to mature by then, and may well be much friendlier to developers. I honestly think we'll be talking about Larrabee as a GPGPU chip in terms of what Intel itself provides (say, what the Havok acquisition gets them) rather than what third parties actually write.
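To show why the too-many-models gripe matters in practice, here's the same trivial parallel loop written twice, once with OpenMP and once with TBB (Ct left out, since nothing concrete about its API is public). Again, this is just my own illustrative sketch with made-up function names, not anything from Intel's toolchains:
<pre>
#include <cstddef>
#include <vector>
#include <tbb/blocked_range.h>
#include <tbb/parallel_for.h>

// Build with something like: g++ -std=c++11 -fopenmp example.cpp -ltbb

// OpenMP flavor: annotate a plain loop with a pragma and let the runtime
// split the iterations across threads.
void scale_openmp(std::vector<float>& v, float s) {
    #pragma omp parallel for
    for (long i = 0; i < static_cast<long>(v.size()); ++i)
        v[i] *= s;
}

// TBB flavor: express the same thing as a library call over a range object.
void scale_tbb(std::vector<float>& v, float s) {
    tbb::parallel_for(tbb::blocked_range<std::size_t>(0, v.size()),
        [&](const tbb::blocked_range<std::size_t>& r) {
            for (std::size_t i = r.begin(); i != r.end(); ++i)
                v[i] *= s;
        });
}
</pre>
Same loop, two completely different idioms; Ct would presumably be a third, and whichever one Larrabee's GPGPU story standardizes on is the one developers will have to learn.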
So, if I've come across as a jerk in previous postings, sorry--not my real intention. I just think you are missing numerous considerations, both technological and historical, in your analyses of Larrabee (at least as a GPU) and realtime ray tracing.
(insert standard note about "these are only my opinions and not those of anyone else" here. I've tried to avoid public speculation and long posts since I accepted the job, partly for fear of "NV shill" accusations or shit like that cluttering up the forums. But I don't start until June, and nothing I've signed, nor anyone I will eventually work for, has said not to post until then, so I'll make an exception for this. But yeah, this is basically "I think Hannibal is radically underestimating the effects of GPGPU over the next few years" more than anything specific. Is that enough of a disclaimer? I sure hope so!)