What would the alternative method to rasterization for Larrabee have been, exactly?
I'm late to the party, but didn't see this thread earlier. In case you're really interested: there are several alternatives, and Intel has pushed some research on them. (Keep in mind that Larrabee started somewhere in DX9 times, way before there was any access to compute on GPUs, or anything close to as flexible.)
One alternative is irregular z-buffering, e.g.
http://www.eweek.com/c/a/IT-Infrastructure/Inside-Intel-Larrabee/10/
Instead of the usual problem of mapping 'some' depth pixel from the light view to the eye-view depth, you take the actual sample positions from the eye view and use them for rasterization of the shadow map. It's not much more than custom sample offsets for the rasterizer, and it can be quite effective, especially if you consider the workarounds you usually need to get decent shadows (e.g. higher resolution, multiple cascades, filtering, etc.).
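To make that a bit more concrete, here's a tiny CPU-side sketch (all type and function names are made up for illustration; a real version would run inside the rasterizer): phase one scatters the visible eye-view samples into a light-space grid at their exact projected positions, phase two rasterizes shadow casters in light space and tests caster depth against those stored samples instead of against a regular shadow-map texel.

```cpp
// Minimal sketch of an irregular z-buffer shadow pass (illustrative only).
#include <vector>

struct LightSample {
    int   screen_x, screen_y; // eye-view pixel this sample came from
    float light_depth;        // its depth as seen from the light
    bool  shadowed = false;
};

struct IrregularZBuffer {
    int width, height;                           // light-space grid resolution
    std::vector<std::vector<LightSample>> cells; // per-cell sample lists

    IrregularZBuffer(int w, int h) : width(w), height(h), cells(w * h) {}

    // Phase 1: insert each visible eye-view sample at its exact
    // light-space position (this is what makes the buffer "irregular").
    void insert(float lx, float ly, const LightSample& s) {
        int cx = static_cast<int>(lx * width);
        int cy = static_cast<int>(ly * height);
        if (cx < 0 || cy < 0 || cx >= width || cy >= height) return;
        cells[cy * width + cx].push_back(s);
    }

    // Phase 2: while rasterizing shadow casters in light space, each covered
    // cell tests its stored samples against the caster's depth. Because the
    // samples are the exact eye-view positions, there is no resolution
    // mismatch between shadow map and screen.
    void shade_cell(int cx, int cy, float caster_depth) {
        for (LightSample& s : cells[cy * width + cx])
            if (caster_depth < s.light_depth)
                s.shadowed = true;
    }
};
```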
Another alternative is point rendering. With a good reconstruction filter, you could use it to get some kind of proper transparency, motion blur, depth of field, etc. It's of course not without flaws, but our current way of rendering also has insane flaws that we've just gotten way too used to.
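As a rough illustration of what the reconstruction filter does (purely a toy; names and structure are mine, and real point renderers handle visibility and footprints far more carefully): each point splats its color into nearby pixels with a Gaussian weight, and the final image is the weight-normalized sum.

```cpp
// Toy CPU sketch of point splatting with a Gaussian reconstruction filter.
#include <algorithm>
#include <cmath>
#include <vector>

struct SplatPoint { float x, y; float r, g, b; }; // screen position + color

void splat(const std::vector<SplatPoint>& points,
           int width, int height, float radius,
           std::vector<float>& rgb /* width*height*3, pre-zeroed */) {
    std::vector<float> weight(width * height, 0.0f);
    const float sigma2 = radius * radius * 0.25f;

    for (const SplatPoint& p : points) {
        int x0 = std::max(0, (int)std::floor(p.x - radius));
        int x1 = std::min(width - 1, (int)std::ceil(p.x + radius));
        int y0 = std::max(0, (int)std::floor(p.y - radius));
        int y1 = std::min(height - 1, (int)std::ceil(p.y + radius));
        for (int y = y0; y <= y1; ++y)
            for (int x = x0; x <= x1; ++x) {
                float dx = x - p.x, dy = y - p.y;
                float w = std::exp(-(dx * dx + dy * dy) / (2.0f * sigma2));
                int i = y * width + x;
                rgb[i * 3 + 0] += w * p.r;
                rgb[i * 3 + 1] += w * p.g;
                rgb[i * 3 + 2] += w * p.b;
                weight[i] += w;
            }
    }
    // Normalize: the reconstruction filter turns scattered points into a
    // continuous image; uncovered pixels keep weight 0 and stay black.
    for (int i = 0; i < width * height; ++i)
        if (weight[i] > 0.0f)
            for (int c = 0; c < 3; ++c)
                rgb[i * 3 + c] /= weight[i];
}
```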
Adaptive sampling: in movies, it's common to render parts of the screen with increasing sampling resolution until you are under some noise threshold, controlled either by the artist, by statistics, or by error metrics. In DX11 you have a way to output a coverage mask for your custom multisampling in the shader, but you cannot really do e.g. deferred shading based on the noise frequency of the g-buffer; you have to do it explicitly in the shader. But even then it's tricky, as you don't have control over the sampling positions (not in a way that would spawn threads). If the rasterizer spawned the needed amount of shading threads, you'd get for shading what you get for geometry anti-aliasing with MSAA.
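For the offline-style adaptive sampling part, here's a minimal sketch (shade_sample, the thresholds, and the sample budgets are all placeholders, not any actual API): keep taking jittered samples per pixel while tracking the running variance, and stop once the standard error falls below a noise threshold or the budget runs out.

```cpp
// Minimal sketch of adaptive sampling: add samples to a pixel until its
// estimated noise drops below a threshold (or a sample budget runs out).
#include <cmath>
#include <random>

struct Color { float r, g, b; };

// Hypothetical per-sample shading: here just a noisy constant so the example
// is self-contained; in a real renderer this is the expensive part.
Color shade_sample(int /*px*/, int /*py*/, std::mt19937& rng) {
    std::normal_distribution<float> noise(0.5f, 0.1f);
    float v = noise(rng);
    return {v, v, v};
}

Color adaptive_pixel(int px, int py, std::mt19937& rng,
                     float noise_threshold = 0.01f,
                     int min_samples = 4, int max_samples = 256) {
    Color mean{0, 0, 0};
    float lum_mean = 0.0f;
    float m2 = 0.0f;   // running sum of squared deviations (Welford, luminance)
    int n = 0;

    while (n < max_samples) {
        Color s = shade_sample(px, py, rng);
        ++n;
        mean.r += (s.r - mean.r) / n;
        mean.g += (s.g - mean.g) / n;
        mean.b += (s.b - mean.b) / n;

        // Online variance estimate on luminance.
        float lum = 0.2126f * s.r + 0.7152f * s.g + 0.0722f * s.b;
        float delta = lum - lum_mean;
        lum_mean += delta / n;
        m2 += delta * (lum - lum_mean);

        if (n >= min_samples) {
            float std_error = std::sqrt(m2 / (n - 1)) / std::sqrt((float)n);
            if (std_error < noise_threshold)
                break;     // converged: this region is smooth enough
        }
    }
    return mean;
}
```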