Larrabee Die-Shot - Analyze this

Of course, it'd also nicely imply that the "our bandwidth efficiency is way better than anybody else's!" claims were, to put it nicely and in terms even a bankrupt banker could understand, mark-to-model fantasy. Or this is yet another R600-like bandwidth overshoot. Heh.


I vote for BW overshoot, because that's what they need to properly introduce it to the market. Even Intel still doesn't want to speculate on how it'll scale; its first job is to get it onto the market, as Intel's own guys stated to tgd. They should have a launch no matter what, and the only thing to do is to launch before AMD can recover, since AMD is still developing its SDK.

It's a damn CPU, which makes me wonder what the hell of a socket it would use. Or will it be a slot, which sounds more reasonable if we assume a 512-bit GDDR5 bus? How many pins does just a 512-bit GDDR5 memory interface need? On the other hand, a 12-core-only part could easily end up in LGA1366, probably pre-prepared for easy integration of the quad-channel 64-bit DDR3 needed in "future products", so we could theoretically end up with an asymmetrical dual-socket workstation: a setup where the basic CPU shares some of its memory bandwidth with our new-bred Larrabee, alongside next year's Sandy Bridge "classic" CPU counterpart (quad-channel DDR3/4).

As we can all see now, even Westmere is going to have an integrated GPU, moved from the ex-MCH (northbridge) directly onto the die, which is a great move, or even a leap ahead of AMD's still-under-construction Fusion. And they all proclaim the extinction of the IGP by 2013, so this kind of dual-socket setup could work easily for Intel... one part "classic CPU", the other part "media terascale" CPU, all connected through the long-developed QPI, which still needs a lot of polishing, and every opportunity to polish it in a new CPU design certainly wouldn't be passed up. At least not by tidy Intel. Tick-tock, tick-tock. Well, it's AMD's idea, isn't it? At least they went public with it first, but "Where is Fusion?" End of 2008? Q2 2009? 2011???
 
Intel has stated its desire not to use Larrabee I in any socketed platform.

It could always change its mind, but the current position is a somewhat artificial relegation of Larrabee to the PCI-E add-on board space.
 
I wonder if the compute cores are organized in quads tied to the texturing hardware. There are eight structures (in the red boxes), presumably each containing a texturing quad (8 × 4 = 32), so that's a ratio of 32:32 (x86:TEX, huh?), or four x86 compute cores per TEX quad!?
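The back-of-the-envelope arithmetic above can be sketched out; note the structure count and quad width are just read off the die shot, so treat both as assumptions:

```python
# Hypothetical die-shot reading: eight red-box structures,
# each assumed to hold one 4-wide texturing quad.
tex_structures = 8         # red boxes counted on the die shot (assumption)
samplers_per_quad = 4      # one texturing quad = 4 samplers
x86_cores = 32             # rumored core count for the full die (assumption)

tex_units = tex_structures * samplers_per_quad    # total TEX units
ratio = x86_cores / tex_units                     # x86:TEX ratio
cores_per_quad = x86_cores // tex_structures      # x86 cores per TEX quad

print(tex_units, ratio, cores_per_quad)  # 32 1.0 4
```

So under those assumptions the die works out to a 1:1 x86-core-to-sampler ratio, with each group of four cores sharing one texture quad.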
 
Okay, so I know there was a PCWatch article that said there would be 32-core and 24-core variants of Larrabee initially, all on a 45nm process. The 24-core part would be a 32-core die with 8 cores disabled. There would have been a 48-core version later, albeit with a shrink.

If it's true, 600mm² for 32 cores, hmm.
 
I imagine Larrabee would take its place in the big socket for Nehalem-EX and Itanium, not the 1366 socket.
Then again, even that might be short on memory bandwidth (256-bit DDR3, if I'm not mistaken).

It's a specialty part anyway. It feels like it's made for political reasons: to counter GPGPU and Cell, and assert x86 full-spectrum domination.

By just putting it on a PCIe 16x card, they won't restrict it to some high-end server ghetto.
 
It's a specialty part anyway. It feels like it's made for political reasons: to counter GPGPU and Cell, and assert x86 full-spectrum domination.

It is a political move (so to speak). It's Intel's foot in the door in the single-socket highly parallel performance sector heretofore dominated by GPGPU.

By just putting it on a PCIe 16x card, they won't restrict it to some high-end server ghetto.

I imagine said "ghetto", being more established than GPGPU, is also therefore more profitable.
 