On what basis would LRB fall short on texture filtering quality?
I wasn't saying they will or would, only that the potential is there for Intel to fall short on performance and/or IQ. The GPU market in general bears out how difficult it is to attain good performance with high image quality (or, better, a good balance of the two) within a given budget of die area, power, heat, etc. If AMD (ATI) and NV can still struggle in these areas after a decade-plus of experience, I have reservations about assuming Larrabee will be competitive in performance while also being roughly comparable in AA and AF quality. Beyond the issues of experience, iteration, and refinement, there's the fact that Intel's past efforts, half-hearted as they were, sounded smart and had some good ideas but in the end couldn't compete.
I will be shocked if Larrabee #1, at a given die size, can get within 10% of AMD/NV's DX10/11 performance with similar IQ. The reason I would be shocked is the difficulty proven GPU makers have shown in getting these things right; without an iterative process to refine and fix issues, I doubt Intel will be able to address all of them with competitive performance on their first offering.
What I expect from Larrabee, personally, is some bottlenecks Intel is currently downplaying that hurt mainstream GPU performance, but also the opening up of some new techniques for those with the time and money to invest in them, along with an investment in new approaches to computing. Having 32+ x86 cores with large vector units and gobs of bandwidth should allow for some neat experiments (a rough sketch of that programming model is below). But will it be neck-and-neck with NV/AMD in DX10 performance out of the box? Getting it "right" in the performance segment of the GPU market seems to be a pretty tall task that many, including Intel, have failed at.
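To make that concrete, here's the kind of toy kernel I have in mind. This is a minimal sketch of my own, assuming one thread per core and compiler auto-vectorization; the core count, lane width, and the saxpy kernel itself are hypothetical stand-ins, not anything based on Intel's actual toolchain or the LRBni ISA:

/* Sketch of the "many x86 cores x wide vector units" model: split an
 * embarrassingly parallel kernel across threads (one per core) and keep
 * the inner loop simple enough for the compiler to map onto each core's
 * SIMD lanes. Build with e.g. gcc -O3 -fopenmp saxpy.c */
#include <stddef.h>

void saxpy(float a, const float *x, float *y, size_t n)
{
    /* Each OpenMP thread stands in for one core; the loop body is plain
     * scalar C that the compiler can auto-vectorize across wide vector
     * registers. */
    #pragma omp parallel for schedule(static)
    for (size_t i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];
}

The point is just that the throughput story is cores times vector lanes times clock; whether that translates into competitive DX10 rendering, rather than neat GPGPU-style experiments, is exactly the open question.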
I dunno, maybe you think it will and I'll be shocked.
I would love to be wrong, of course: a huge array of CPUs capable of class-leading GPU performance? Who wouldn't want that?