Intel pushing Larrabee console deal with Microsoft

I'm sorry if this has already been posted in one of the other Larrabee threads, but as far as I could see it hasn't -- except for earlier similar speculation by Beyond3D forum members :)

Via Engadget, Joystiq and TweakTown I landed on this article in The Inquirer suggesting Intel is having a hard time getting developers to work on Larrabee, and is lobbying Microsoft for inclusion in the successor to the Xbox 360, so as to pave the way for the PC.
 
Microsoft burned itself pretty badly with the first Xbox, and the Wii's incredible success is a strong argument that hardware is secondary in the console business. There's just no reason for them to spend money on buying a better CPU/GPU.

This will be a very hard deal for Intel to pull off. They'd have to sell the chips without a profit, or something like that, to make it happen.
 
Intel would have to sell (loan) the IP to Microsoft. I don't know that they would be willing to do that.
 
Intel may be willing to do so to gain market share.
 
MS has no reason to own/loan Larrabee.
Larrabee is likely to be produced by Intel (due to the process used).

And by the way, this has already been posted in the "predict the next gen etc." topic.
 
They'll run into the same problem they had with Intel the first time around. Intel will want to make a profit and have no incentive to drive down costs.
 
They'll run into the same problem they had with Intel the first time around. Intel will want to make a profit and have no incentive to drive down costs.
This time things would be quite different: Intel has a vested interest in pushing LRB, while 10 years ago they didn't really care about pushing their Celerons, as they were already established in the market.
 
I have some things in mind that MS could see as very beneficial against the competition by adopting a powerful Intel product.
 
As you implied, it's the same untested, dodgy and revolutionary territory in both origin and application. LRB can go either way, just like the Cell in the PS3, and that hasn't turned out so great for the latter.

To put it simply: will it be powerful enough in the end to warrant the "change"? Cell wasn't.
 
If you think that CELL and LRB have a lot in common, perhaps you shouldn't comment on either chip.
 
P.S. If it turns out Larrabee sucks at DirectX...then forget I said anything;)

That would be the big unknown. I may be cynical, but I have concerns that Intel will fall short in both DX performance and IQ with their first board. Who wants to endure another generation with poor texture filtering and no AF?
 
That would be the big unknown. I may be cynical, but I have concerns that Intel will fall short in both DX performance and IQ with their first board. Who wants to endure another generation with poor texture filtering and no AF?
On what basis would LRB fall short on texture filtering quality?
 
Isn't LRB supposed to be compliant with the next DX version? If that's the case then LRB should be quite amazing for a CPU when it comes to graphics.
 
This time things would be quite different: Intel has a vested interest in pushing LRB, while 10 years ago they didn't really care about pushing their Celerons, as they were already established in the market.

I'm sure they'll talk sweet and all, but 3 years after launch they'll be busting Microsoft's balls again. Especially if we're talking about a part coming off their more advanced lines. MS is gonna be looking for shrink after shrink to drive their BOM down and Intel isn't going to want to waste their best fabs on a product with a razor thin profit margin.
 
On what basis would LRB fall short on texture filtering quality?

I wasn't saying they will/would, only that the potential is there for Intel to fall short in regards to performance and/or IQ. The GPU market in general bears out how difficult it is to attain good performance with high-quality IQ (or better: a good balance) within a specific threshold of die area, power, heat, etc. If AMD (ATI) and NV can struggle in these areas after a decade+ of experience, I have reservations about assuming Larrabee will be competitive in performance as well as roughly comparable in AA and AF quality. Not only are there issues of experience, iteration, and refinements and efficiencies; Intel's past efforts, although half-hearted, sounded smart and had some good ideas but in the end couldn't compete.

I will be shocked if Larrabee #1, at a specific die size, can be within 10% of AMD/NV's DX10/11 performance with similar IQ. The reason I would be shocked is the difficulty proven GPU makers have shown in getting these things right; the lack of an iterative process to refine and fix issues makes me doubtful Intel will be able to address all of them with competitive performance in their first offering.

What I expect from Larrabee, personally, is some bottlenecks Intel is currently downplaying which hurt mainstream GPU performance, but also an architecture that opens up new techniques for those with the time and money to invest in them, as well as an investment in new approaches to computing. Having 32+ x86 cores with large vector units and gobs of bandwidth should allow for some neat experiments. But will it be neck-and-neck with NV/AMD in DX10 performance out of the box? Getting it "right" in the performance segment of the GPU market seems to be a pretty tall task that many, including Intel, have failed at.
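
To make that concrete, here's a rough sketch of the kind of experiment I mean, in plain C++ with standard threads (nothing Larrabee-specific, since the real programming model and intrinsics haven't been published): split a framebuffer into tiles, let 32 simple cores pull tiles off a shared counter, and keep the inner loop branch-free so a wide vector unit (or the autovectorizer) can chew through it.

// Rough sketch: SPMD-style tiling across many simple x86 cores.
// Plain C++ threads stand in for whatever native tasking Larrabee ships with.
#include <atomic>
#include <thread>
#include <vector>

static const int kWidth  = 1280;
static const int kHeight = 768;
static const int kTile   = 64;   // one tile per work item
static const int kCores  = 32;   // hypothetical core count

void shade_tile(std::vector<float>& fb, int tx, int ty)
{
    // Straight-line, branch-free inner loop: the sort of thing a 16-wide
    // vector unit (or an autovectorizer) can process many pixels at a time.
    for (int y = ty; y < ty + kTile; ++y)
        for (int x = tx; x < tx + kTile; ++x)
            fb[y * kWidth + x] = 0.5f * float(x + y);   // placeholder "shading"
}

int main()
{
    std::vector<float> framebuffer(kWidth * kHeight, 0.0f);
    std::atomic<int> next_tile(0);
    const int tiles_x = kWidth / kTile;
    const int tile_count = tiles_x * (kHeight / kTile);

    std::vector<std::thread> cores;
    for (int c = 0; c < kCores; ++c)
        cores.emplace_back([&] {
            // Each "core" grabs tiles from a shared counter until none are left.
            for (int t = next_tile++; t < tile_count; t = next_tile++)
                shade_tile(framebuffer, (t % tiles_x) * kTile, (t / tiles_x) * kTile);
        });
    for (std::thread& t : cores)
        t.join();
    return 0;
}

Nothing fancy, but with gobs of bandwidth behind it that basic pattern (and variations on it for ray tracing, physics, etc.) is where the interesting experiments would be.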

I dunno, maybe you think it will and I will be shocked ;) I would love to be wrong of course: huge array of CPUs capable of leading class GPU performance? Who wouldn't want that?
 
I'm sure they'll talk sweet and all, but 3 years after launch they'll be busting Microsoft's balls again. Especially if we're talking about a part coming off their more advanced lines. MS is gonna be looking for shrink after shrink to drive their BOM down and Intel isn't going to want to waste their best fabs on a product with a razor thin profit margin.

Well, in "theory" Intel could give MS a pretty sweet deal if their main concern is trend-setting and making a footprint with Larrabee. They could potentially migrate the Larrabee vector units to their traditional CPUs and begin integrating Larrabee variants into CPU/chipset products. This could also pave the way for Intel-based GPUs (and "other" products) in the consumer space. This could be their way of buying into not only the console market, but also the GPU market and beyond. If Intel played it smart they could use a Larrabee 2 design for both the Xbox 3 and a PC card.

But probably most importantly for Intel, they would be taking the wind out of AMD's and NV's sails. Not only would Intel be cutting them out of some potential sales (and production), Intel would also have a competitive product in the GPGPU and HPC market and an edge from using x86 cores.

At some point consumers aren't going to care how fast their browsers open and how fast Word documents respond, as there won't be much of a difference between a $500 CPU and a $50 CPU (some would argue that day has long since passed). As the performance dollars shift to extreme media-processing requirements, Intel really needs a product to compete. By pushing a product into this segment with x86--and especially if they can make it high profile--they would position themselves as the dominant player. Intel has a lot to lose, so it could be in their best interest to offer MS a sweetheart deal that helps justify the investment in Larrabee, gives it exposure, and moves the technology down channel into markets Intel currently isn't active in. Intel could think of Larrabee as a "loss leader" if they can get it into the Xbox 3. Of course the question is "will they?"

And from MS's perspective: what does MS have to gain? Intel has Havok, but Havok should be available to developers anyhow even without an Intel chip (if not: AGEIA or another upstart, in-house tools, etc.). Intel will have an edge in process node and costs, but how much of that will be needed to offset any performance disparity? Intel will probably be the safest bet for the most aggressive process-node reductions. And it appears a number of developers like the idea of CPU and GPU systems being able to easily share code (though how much rewriting will be needed to shift x86 "CPU" code to a Larrabee core (in-order, slower frequency, different SIMD set) to get good efficiency?)
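
Just to illustrate what I mean by "rewriting": a toy example in plain C++, nothing Larrabee-specific, of the same culling job written the typical branchy "CPU" way versus restructured so a wide in-order vector core (or the compiler's vectorizer) has something to work with. It's the data-layout and branching changes, not the instruction set per se, that eat the engineering time.

// Toy illustration: same sphere-overlap test, "CPU-style" vs. SIMD-friendly.
#include <vector>

struct Sphere { float x, y, z, r; };

// 1) Typical CPU code: array-of-structs, a branch taken per object.
int cull_scalar(const std::vector<Sphere>& objs, float px, float py, float pz, float pr)
{
    int visible = 0;
    for (const Sphere& s : objs) {
        float dx = s.x - px, dy = s.y - py, dz = s.z - pz;
        if (dx*dx + dy*dy + dz*dz < (s.r + pr) * (s.r + pr))
            ++visible;                  // per-object branch, awkward for 16-wide SIMD
    }
    return visible;
}

// 2) Restructured: struct-of-arrays, branchless hot loop, so many lanes
//    can be processed at once by wide vectors or an autovectorizer.
struct Spheres { std::vector<float> x, y, z, r; };

int cull_simd_friendly(const Spheres& o, float px, float py, float pz, float pr)
{
    int visible = 0;
    for (size_t i = 0; i < o.x.size(); ++i) {
        float dx = o.x[i] - px, dy = o.y[i] - py, dz = o.z[i] - pz;
        float rr = (o.r[i] + pr) * (o.r[i] + pr);
        visible += (dx*dx + dy*dy + dz*dz < rr) ? 1 : 0;   // branchless accumulate
    }
    return visible;
}

int main()
{
    std::vector<Sphere> aos;
    Spheres soa;
    for (int i = 0; i < 1000; ++i) {
        Sphere s = { float(i % 50), float(i % 30), float(i % 20), 1.0f };
        aos.push_back(s);
        soa.x.push_back(s.x); soa.y.push_back(s.y);
        soa.z.push_back(s.z); soa.r.push_back(s.r);
    }
    // Both paths should agree on the count.
    return cull_scalar(aos, 10, 10, 10, 5) == cull_simd_friendly(soa, 10, 10, 10, 5) ? 0 : 1;
}

Multiply that kind of restructuring across an engine and "easily share code" starts to look a lot less easy.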

We will find out soon how desperate Intel really is.
 