Not sure how that's relevant to the potential for LRB scaling?
The range is so large that Intel may choose to focus on a sub-range.
Maybe we're being blind-sided, but the noises coming out of Intel hint that it won't compete with NVidia's $650 GPU - though there's always the multi-GPU approach...
Not forgetting, of course, that IGPs are a rising tide eating into the <$75 market.
Well, that's precisely my point. Since all it's doing is rasterization, it may have a perf/watt and perf/mm^2 disadvantage versus older architectures with dedicated hardware.
It seems Intel is turning off individual ALU lanes on predication, so power saving in Larrabee runs way, way deeper than we're used to seeing with any GPU.
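To make the predication point concrete, here's a minimal conceptual sketch in plain C++ (an assumed 16-wide vector unit; this is not Larrabee's actual ISA or any real intrinsics) of how a per-lane predicate mask lets the unit skip work on inactive lanes, which is what makes gating those lanes for power plausible:

#include <array>
#include <cstdio>

// Conceptual model of one 16-wide vector instruction under predication.
// Lanes whose predicate bit is clear do no work; in hardware such lanes
// could, in principle, be clock/power-gated for the duration of the op.
constexpr int LANES = 16;

void predicated_add(std::array<float, LANES>& dst,
                    const std::array<float, LANES>& a,
                    const std::array<float, LANES>& b,
                    const std::array<bool, LANES>& pred)
{
    for (int lane = 0; lane < LANES; ++lane) {
        if (pred[lane])                      // active lane: does the ALU work
            dst[lane] = a[lane] + b[lane];
        // inactive lane: result left untouched, no ALU activity required
    }
}

int main()
{
    std::array<float, LANES> a{}, b{}, dst{};
    std::array<bool, LANES> pred{};
    for (int i = 0; i < LANES; ++i) {
        a[i] = float(i);
        b[i] = 2.0f * i;
        pred[i] = (i % 2 == 0);              // half the lanes idle this op
    }
    predicated_add(dst, a, b, pred);
    for (int i = 0; i < LANES; ++i) std::printf("%g ", dst[i]);
    std::printf("\n");
}

In real hardware the inactive lanes would simply see no activity for that instruction rather than being skipped in software, but the masking idea is the same.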
It'll be interesting to see which has the higher maximum framerates, conventional GPUs or Larrabee. Notably, 3DMark2001 has re-emerged as a great way to test the maximum power draw of the latest GPUs (though not quite as good as FurMark, it seems).
Rasterisation will cost more energy in Larrabee, but z/stencil testing, MSAA, blending and so on will cost dramatically less, because those operations never go off-die (only the final pixels do). So unless AMD and NVidia are about to switch to tiled rendering, the power spent thrashing data in an off-die render target is going to vastly outweigh rasterisation per se.
Though it's worth pointing out that binned geometry does go off-die, so it's not completely free of memory round trips per frame/rendering pass.
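As a rough back-of-envelope comparison of the off-die traffic involved (every figure below is an assumption picked purely for illustration, not a measurement, and caches/compression are ignored):

#include <cstdio>

// Back-of-envelope comparison of off-die render-target traffic per frame.
// All parameters are illustrative assumptions, not measured numbers.
int main()
{
    const double width  = 1920, height = 1200;     // assumed resolution
    const double pixels = width * height;
    const double overdraw     = 3.0;               // assumed average overdraw
    const double msaa         = 4.0;               // assumed 4x MSAA
    const double colour_bytes = 4.0;               // RGBA8
    const double z_bytes      = 4.0;               // 32-bit depth/stencil

    // Immediate-mode GPU: every fragment's z test/write and colour blend
    // touches the off-die render target.
    double imr_traffic = pixels * msaa * overdraw *
                         (z_bytes * 2      /* z read + write      */ +
                          colour_bytes * 2 /* colour read + write */);

    // Tiler (Larrabee-style): z, MSAA and blending stay in on-die tile
    // memory; only resolved final pixels go out, plus the binned geometry
    // that is written once and read back once per frame.
    const double triangles         = 1.0e6;        // assumed scene complexity
    const double bin_bytes_per_tri = 64.0;         // assumed bin record size
    double tiler_traffic = pixels * colour_bytes               // resolved pixels out
                         + triangles * bin_bytes_per_tri * 2;  // bin write + read

    std::printf("Immediate-mode RT traffic: %.1f MB/frame\n", imr_traffic  / 1e6);
    std::printf("Tiler RT traffic:          %.1f MB/frame\n", tiler_traffic / 1e6);
}

Even with a fairly generous allowance for the binned geometry being written out and read back, the render-target thrashing of the immediate-mode case dominates in this toy model.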
Perhaps, but how many LRB-specific rendering pipelines are we gonna get?
Sorry, I should have been more explicit in mentioning D3D11: I'm suggesting that the flexibility and fluidity of Larrabee will give it an advantage on the new kinds of code written for D3D11, rather than on Larrabee-specific code. The hardware is more like a prairie than a patchwork of fields separated by hedges, ditches, gates and grids.
Jawed