Larrabee delayed to 2011?

Maybe it's my old age and/or ADD kicking in, but I'm missing the bit where this push-back has been acknowledged, purely or in significant part, to be down to h/w cache coherency and/or x86 "overhead". I wonder if there might be some quasi-religious projection going on in this thread?!
They are not going to tell us... some people who like the architecture will say it's because of the delays, and some people who don't like it will say it's because it didn't perform up to snuff.

As I said, I wonder if the developer kits will come with the rendering stack so we can judge the performance per mm² properly...
 
Given that it was little more than a science project, no I do not think they had multiple teams devoted to future iterations for the last 18 months. Why would they dedicate so many resources to an unproven product?


Because that's what's required if you really want to enter the graphics market, and Intel are one of the few companies that can afford it. They have to find something to do with all those cores they are churning out, and more cores are the only direction they can go in order to increase performance.
 
Other than the clockspeed/thermal wall consumer CPUs hit in 2003/2004, it seems like every other wall gets broken down before it becomes an issue. I guess time will tell but if history is our guide I think bandwidth will continue to scale as it always has. It may take some new development, but remember that DDR and GDDR were once new technologies as well.
 
Because that's what's required if you really want to enter the graphics market, and Intel are one of the few companies that can afford it. They have to find something to do with all those cores they are churning out, and more cores are the only direction they can go in order to increase performance.

Having lots of money isn't a good enough reason. How can they work on the evolution of an architecture if the baseline is still half-baked? We don't even know how committed they were to it in the first place - sounds like there was a considerable difference of opinion about the whole thing internally. But it is refreshing to see such abundant optimism after the doom and gloom in every other post in the Fermi thread :)
 
Other than the clockspeed/thermal wall consumer CPUs hit in 2003/2004, it seems like every other wall gets broken down before it becomes an issue. I guess time will tell but if history is our guide I think bandwidth will continue to scale as it always has.

5 Gbps/pin is out on the market; 7 Gbps/pin (afaik the fastest part out yet) is already pushing the limits of copper, which are about 10 Gbps. Light Peak, with similar bandwidth, is optical, and afaik so is 10GbE.

Only the walls that exist in our minds can be broken down/scaled. The rest have to be respected.
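
For a rough sense of scale on those per-pin rates, here's a back-of-the-envelope sketch (the 256-bit bus width is just an assumed figure for illustration, not anything tied to Larrabee):

```python
# Peak theoretical DRAM bandwidth: bus width (bits) * per-pin rate (Gbit/s) / 8
def peak_bandwidth_gb_s(bus_width_bits: int, gbit_per_pin: float) -> float:
    return bus_width_bits * gbit_per_pin / 8.0

# Assumed 256-bit GDDR5-style bus at the per-pin rates mentioned above
print(peak_bandwidth_gb_s(256, 5.0))   # 160.0 GB/s
print(peak_bandwidth_gb_s(256, 7.0))   # 224.0 GB/s
print(peak_bandwidth_gb_s(256, 10.0))  # 320.0 GB/s at the ~10 Gbps copper limit
```

So even at the copper limit, per-pin rates alone only buy about a 2x step over today's shipping parts unless the bus gets wider or something new (optical, stacking, whatever) comes along.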
 
I think there's too much of an investment in rasterization for any IHV to move away from it entirely, so new programming paradigms are unlikely. More flexible and extensible programming models, however, are absolutely likely. TBDR doesn't seem like the answer to me; the last GPU that used tiling was Xenos, and we all know how much devs love tiling ;)
just like everything else, TBDR is a trade-off - you give up something somewhere to gain something else, elsewhere. but TBDR is not a more flexible scanconverter per se (not versus an IMR, that is).

as re xenos (which, apropos, is not a TBDR, at least not in hw), the fact that its mountain-top-touted 'free fsaa' was designed for SD resolutions, while devs were expected to deliver at HD surely helped that gpu gain sympathy. /off-topic sarcasm

does anybody else feel like there's a certain amount of fear of non-x86 tech among intel's higher echelons? i mean, LRB fell (partially) victim to intel's stubbornness with x86 (yes, i do believe that), and similarly with atom (relatively poor vertical vocational mobility, but at least running windows is a valid goal). maybe it's because each time intel have tried to break away with something new, they have stumbled and eventually failed (the shot-by-friendly-fire 960 notwithstanding). i'm starting to wonder, could intel ever produce a viable (as in market-viable), non-x86-derived and/or x86-bolted isa?
 
Having lots of money isn't a good enough reason. How can they work on the evolution of an architecture if the baseline is still half-baked? We don't even know how committed they were to it in the first place - sounds like there was a considerable difference of opinion about the whole thing internally. But it is refreshing to see such abundant optimism after the doom and gloom in every other post in the Fermi thread :)

LRB is a matter of survival for Intel's monopoly in the cpu space. As gpus merge with cpus, Intel needs to have a high-throughput part. The only way to sell it in volume is to make it rock at graphics.

AMD with its Fusion and nv with its gpgpu push will eat intel's high volumes and margins from both ends. Winning at 3D is going to be critical for any company that wishes to dominate in cpus as well.
 
just like everything else, TBDR is a trade-off - you give up something somewhere to gain something else, elsewhere. but TBDR is not a more flexible scanconverter per se (not versus an IMR, that is).

How so? And if it is just as flexible a method as IMR, how is that a problem?

i'm starting to wonder, could intel ever produce a viable (as in market-viable), non-x86-derived and/or x86-bolted isa?

Well, so far their record is spectacularly craptacular on this front.
 
Having lots of money isn't a good enough reason. How can they work on the evolution of an architecture if the baseline is still half-baked? We don't even know how committed they were to it in the first place - sounds like there was a considerable difference of opinion about the whole thing internally. But it is refreshing to see such abundant optimism after the doom and gloom in every other post in the Fermi thread :)


Intel, like AMD, are chasing the full platform. They want all that revenue. They are not going to let the graphics cash get away from them so easily.

And who says v3 has to be an evolution of v2? There have been many instances where a new generation of CPU or GPU has come from a different internal team and been markedly different from its predecessor. You don't need a baseline if by definition what you are doing is new.

As for the optimism, in general Intel has been executing pretty well in the last few years, whereas Nvidia has not. It's no surprise people expect Intel to bring something to the graphics party as GPUs and CPUs merge.
 
And who says v3 has to be an evolution of v2? There have been many instances where a new generation of CPU or GPU has come from a different internal team and been markedly different from its predecessor. You don't need a baseline if by definition what you are doing is new.

Agree completely. So I guess we need to agree on what we consider "Larrabee" to be. If it's referring to a specific chip or architecture then it was killed/cancelled. If it represents the idea of a discrete Intel GPU then it can never die.
 
How so? And if it is just as flexible a method as IMR, how is that a problem?
i never said it was a problem ; ) i was just commenting on ShaidarHaran's mentioning it in the context of 'we need something more flexible'. TBDR was not designed to provide higher flexibility, but to exploit a particular trait of the rasterization process.

Well, so far their record is spectacularly craptacular on this front.
it shows, eh?
 
Just to clarify: I didn't mean TBDRs are more flexible than IMRs. Just saying that the traditional rasterizer is here to stay.
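
To make the tiling trade-off a little more concrete, here's a minimal, purely illustrative sketch of the binning idea behind a TBDR (the tile size and data structures are made up; no real hardware works this simply):

```python
# Toy sketch of a tile-based deferred renderer's first pass: bin triangles into
# screen-space tiles so each tile's depth/colour working set can stay on-chip
# while visibility is resolved before any shading happens. Illustrative only.

TILE = 32  # tile edge in pixels (an arbitrary choice for this sketch)

def bounding_tiles(tri, screen_w, screen_h):
    """Yield (tx, ty) coordinates of tiles overlapped by a triangle's bounding box."""
    xs = [v[0] for v in tri]
    ys = [v[1] for v in tri]
    x0 = max(0, int(min(xs))) // TILE
    x1 = min(screen_w - 1, int(max(xs))) // TILE
    y0 = max(0, int(min(ys))) // TILE
    y1 = min(screen_h - 1, int(max(ys))) // TILE
    for ty in range(y0, y1 + 1):
        for tx in range(x0, x1 + 1):
            yield tx, ty

def bin_triangles(triangles, screen_w, screen_h):
    """Pass 1: build, per tile, the list of triangles that may touch it."""
    bins = {}
    for tri in triangles:
        for tile in bounding_tiles(tri, screen_w, screen_h):
            bins.setdefault(tile, []).append(tri)
    return bins

# Pass 2 (conceptually, per tile): rasterize the tile's bin against an on-chip
# depth buffer, keep only the front-most fragment per pixel, then shade just
# those fragments; this is where the "deferred" part saves overdraw work.

if __name__ == "__main__":
    tris = [((10.0, 10.0), (100.0, 20.0), (40.0, 90.0))]
    print(bin_triangles(tris, 1280, 720))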
 
Agree completely. So I guess we need to agree on what we consider "Larrabee" to be. If it's referring to a specific chip or architecture then it was killed/cancelled. If it represents the idea of a discrete Intel GPU then it can never die.

I don't really think it matters a whole lot what we think...;) Intel coined the nomenclature "Larrabee", and Intel has pronounced Larrabee stillborn--if we can even say that much, since Larrabee never made it far beyond a vague concept. A "discrete Intel gpu" could be anything, and as the next discrete Intel gpu won't be Intel's first discrete gpu, calling something like that "Larrabee" seems fairly bizarre. If Intel ever does get back into the discrete gpu business I'm sure they'll coin another name for the project--and it won't be "Larrabee". I'm thinking that right now Intel would like to distance itself from the name "Larrabee" entirely. I mean, the very fact that Intel waited for the weekend to announce Larrabee's demise indicates to me that the company isn't interested in a lot of further publicity on the subject.
 
If it represents the idea of a discrete Intel GPU then it can never die.

But it's not alive unless it's something that Intel is pursuing, no matter what the internal code name is. Intel wasn't pursuing this until a couple of years back, and now they are. Just because they've reorganised their focus onto the thing that looks like the best bet, I don't think we can discount them or claim that they have abandoned the Larrabee/Fusion idea.

As I keep saying, Intel want that graphics/platform/HPC money. They are committed to multiple cores, and they need to find things to do with those cores. Graphics will be one of the major uses.
 
If they are abandoning Larrabee for graphics, what does this mean for the companies they've bought to showcase it?
I.e. they bought a company in Lund, Sweden, to work on rasterisation techniques that take advantage of the architecture. How will that affect them? (Of course, some of it could possibly be used with the SSC, but still)
 
If they are abandoning Larrabee for graphics, what does this mean for the companies they've bought to showcase it?
I.e. they bought a company in Lund, Sweden, to work on rasterisation techniques that take advantage of the architecture. How will that affect them? (Of course, some of it could possibly be used with the SSC, but still)

I doubt it'll affect them at all - Intel's goal still hasn't changed. Intel still wishes to become a major player in the GPU market, while leveraging an x86-like design as much as they possibly can. Regardless of when this finally manifests, it will still require software, so these companies are still in the clear, I'd say.
 
Are they abandoning Larrabee version 1 (the 32-core version?), or the Larrabee architecture altogether, in favour of something more traditional like an NV- or AMD-style GPU?
 
These two weren't the show-stoppers, I agree, just two outright bonkers decisions with no technologically redeeming features.

They're not bonkers decisions; they're very sensible decisions if they can pull them off without a meaningful negative impact on performance, however you might want to measure that (FLOPS per programmer dollar might be an interesting metric).
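
Purely to illustrate how such a metric could flip the usual ranking, here's a trivial sketch with completely made-up numbers:

```python
# Toy "FLOPS per programmer dollar" comparison. All figures are invented
# purely for illustration; nothing here reflects real parts or budgets.
def flops_per_dev_dollar(peak_gflops: float, dev_cost_dollars: float) -> float:
    return peak_gflops * 1e9 / dev_cost_dollars

# Hypothetical exotic part: higher peak, but much higher software cost.
exotic = flops_per_dev_dollar(peak_gflops=2000, dev_cost_dollars=5_000_000)
# Hypothetical coherent, x86-style part: lower peak, lower software cost.
friendly = flops_per_dev_dollar(peak_gflops=1000, dev_cost_dollars=1_000_000)

print(exotic, friendly)  # the "slower" part wins on this metric
```

On that kind of accounting, giving up some raw throughput for coherency and a familiar ISA can come out ahead, which is exactly the bet being described here.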
 