Larrabee delayed to 2011?

Is this in some way related to the announcement of the 48-core processor?
It strongly resembles Larrabee; maybe they will end up as the same project, differentiated only by the specs of the individual cores.
 
Wow, evolution sure seems to be the right way to do things lately... why is that?
Itanium is dying. Cell had to step aside. Larrabee is currently yet another lots-a-cores experiment. AMD has cancelled an architecture that was "too radical".

I hope AMD and Intel are able to pull off their current CPU roadmaps and the industry is able to move forward. It would really suck if the bottom line turned out to be: "That's it, because everything new starts so far behind that nobody would buy it."

The hope would be that the industry can somehow finance the way to 22/16 nm (and smaller); by that time the current evolutionary path will be a dead end, and maybe the compiler/software guys can come up with something that makes these new things fly? Somewhat depressing...
 
In a way the 48-core processor is more revolutionary... Larrabee was always clinging to the past.

SMP-like coherency with a correspondingly inefficient on-chip network (the ordering constraints make more efficient switched mesh networks impossible). Traditional caches making divergent memory accesses for the vector pipeline extremely expensive (unlike in, say, Fermi, where I assume the L1 is banked, since it can be configured as shared memory). No DMA engines for efficient stream processing.

It certainly was never my dream architecture.
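
To make the divergent-access point a bit more concrete, here's a toy CUDA sketch. It's purely my own illustration (made-up kernel name and sizes; the only Fermi-specific bit is the publicly documented configurable split of the same on-chip SRAM between L1 and shared memory), not anything published about either chip. The irregular gather is staged through explicitly managed, banked shared memory instead of being thrown straight at the cache hierarchy:

Code:
// Toy illustration only: launch with 256 threads per block, and assume the
// permuted indices stay within each block's tile.
#include <cuda_runtime.h>

__global__ void gather_via_shared(const float *src, const int *idx,
                                  float *dst, int n)
{
    __shared__ float tile[256];               // banked on-chip scratchpad
    int i = blockIdx.x * blockDim.x + threadIdx.x;

    // Coalesced, contiguous load of the block's tile into shared memory.
    tile[threadIdx.x] = (i < n) ? src[i] : 0.0f;
    __syncthreads();

    // The divergent/permuted access then happens on-chip, where the banked
    // storage tolerates it far better than a line-based cache would.
    if (i < n)
        dst[i] = tile[idx[i] % blockDim.x];
}

int main()
{
    // Fermi lets you pick the 48 KB shared / 16 KB L1 split per kernel.
    cudaFuncSetCacheConfig(gather_via_shared, cudaFuncCachePreferShared);
    // ... allocate buffers and launch gather_via_shared<<<blocks, 256>>>(...)
    return 0;
}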
 
In a way the 48-core processor is more revolutionary
Indeed. And apart from technical merits - Intel is seeking feedback and testing the waters long before any of this 2nd-gen research platform is packaged into an actual product. As if they've learned from the Itanium (and now Larrabee) mindset of "from great ideas straight to product".

DailyTech: [Intel] plans to work with several dozen industry and academic research partners around the world next year by manufacturing and sharing 100 or more SCC chips...
 
When I watched the presentation about the SCC I couldn't help but think that some rough arbitration between the two projects was bound to happen. It happened sooner than expected, but the bright side is that SCC/Polaris 2 looks promising.
R.I.P. Larrabee
 
The irony here is that AMD's and NVidia's GPUs are not going anywhere stratospheric in terms of game performance in the next 18 months, quite contrary to Kanter's point about how Moore's law applies to GPUs - forward rendering GPUs have pretty much exhausted that line of development since they are desperately dependent upon bandwidth, and the bandwidth fairies are in retirement.

The traditional GPUs are on the cusp of the stage where the graphics-specific functionality should be such a small part of the die (<25%) that generalism dominates. Graphics performance that relies upon compute passes and task parallelism, working smarter not harder, is where we're headed. The forward renderers offer nothing special in that direction.

If Larrabee was HD4890/GTX285 performance, then AMD and NVidia have had a small reprieve. I bet it's much closer than they were expecting.

Jawed
 
Now, do I really believe that Intel is still "dead serious" about competing in the high-end discrete graphics card business after sinking so much time, marketing and, of course, money into LRB? I honestly couldn't answer that question, since it makes as much business sense for Intel to let go and stop that costly venture as it does to try and expand into new markets.


I don't think they'll give up on it this time around. Intel are at the point where they can't keep making CPUs faster and smaller for much longer - all they can do is have loads of cores. To make that useful, you need things to do with all those parallel cores, and graphics and HPC are two obvious areas. One is making lots of money for AMD and Nvidia, and the other is a potentially new market to expand into.

Given the fast pace of the GPU market and huge initial investment, I think it's likely that Intel simply looked at what they had as not good enough for today's market, and shifted their resources to 12-18 months down the line where they will be more effective and competitive.
 
I still don't buy the graciousness a lot of you guys are offering to Intel. Assuming they've scrapped LRB in its current form and are going to reset, then 12-18 months isn't nearly enough time to get something viable to market. That's how long it takes the big GPU guys to release evolutions of their architectures.
 
The irony here is that AMD's and NVidia's GPUs are not going anywhere stratospheric in terms of game performance in the next 18 months
Intel saved face either way:
a) Currently they can have only 32 cores, and that is too few to beat the competition. By delaying a year, they hope to get more cores out of a newer process. If AMD and NVidia have hit the wall, Intel would come out on top, being faster and "better".

b) If they can borrow some ideas from their 48-core prototype, then Larrabee 2 would be faster and "better" than the competition (Larrabee 1 would have been slower, but "better").
 
I still don't buy the graciousness a lot of you guys are offering to Intel. Assuming they've scrapped LRB in its current form and are going to reset, then 12-18 months isn't nearly enough time to get something viable to market. That's how long it takes the big GPU guys to release evolutions of their architectures.

You don't think that Intel have been working on Larrabee v3 for the last 12-18 months? Like AMD and Nvidia, I bet they've had multiple teams working on the next few products at the same time, for release down a timeline. When the 2009/2010 v2 products became non-viable as retail products, they were effectively repurposed as development platforms whilst Intel shifted to v3 - and no doubt to whatever is coming after that.
 
Wow, evolution sure seems to be the right way to do things lately... why is that?
Because it makes sense.
Itanium is dying.
It was dead the day it launched.
Cell had to step aside.

STI's fault really. They designed an uber chip and threw it at programmers without any programming model to speak of.
Larrabee is currently yet another lots-a-cores experiment.
Many B3D'ers, including me, never saw the point of full hw cache coherency or the x86 overhead. It had its good points, mind you. Unification of the cache hierarchy, context storage and shared memory is a great idea. With a somewhat more restricted programming model, it could (possibly) have had much better perf/mm. Its TBDR pipeline was crying out to be built in the desktop space.
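
Just to spell out what I mean by a more restricted model, here's a minimal CUDA-flavoured sketch; it's my own toy example, nothing to do with Larrabee's actual ISA or instruction extensions. All inter-thread communication goes through a per-block scratchpad and explicit barriers, so correctness never depends on cross-core cache coherency:

Code:
// Toy block-level sum, launched with 256 threads per block. Ordering comes
// from barriers over an explicitly managed scratchpad, not from a coherent
// cache hierarchy; that is the property that lets the hardware stay simpler.
__global__ void block_sum(const float *in, float *block_out, int n)
{
    __shared__ float buf[256];                    // per-block on-chip storage
    int i = blockIdx.x * blockDim.x + threadIdx.x;

    buf[threadIdx.x] = (i < n) ? in[i] : 0.0f;    // stage inputs on-chip
    __syncthreads();

    // Tree reduction entirely in shared memory.
    for (int stride = blockDim.x / 2; stride > 0; stride >>= 1) {
        if (threadIdx.x < stride)
            buf[threadIdx.x] += buf[threadIdx.x + stride];
        __syncthreads();
    }

    if (threadIdx.x == 0)
        block_out[blockIdx.x] = buf[0];           // one partial sum per block
}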
AMD has cancelled an architecture that was "too radical".

Which one was that? Care to elaborate?

I hope AMD and Intel are able to pull off their current CPU roadmaps and the industry is able to move forward. It would really suck if the bottom line turned out to be: "That's it, because everything new starts so far behind that nobody would buy it."
The hope would be that the industry can somehow finance the way to 22/16 nm (and smaller); by that time the current evolutionary path will be a dead end, and maybe the compiler/software guys can come up with something that makes these new things fly? Somewhat depressing...

Dude, this is the golden era of computer architecture. When the dust settles some 10 years from now, it will be said that the changes being made today in hw and programming models were just as revolutionary as the invention of computers themselves.

Massively parallel hw, extensive use of hw threading to hide latency, on-chip sw-managed/assisted storage, dedicated ff hw wherever it makes sense, desktops becoming more and more single-chip, desktop apps being replaced by webapps/netbooks, massive virtualization, pervasive JIT compilation even for performance-critical apps: in fact, whatever the PC industry/technology used to look like, all of it is being thrown away, or at least rethought inside out.

And guess what, lots of legacy code is going to be rewritten just to take advantage of the shift in hw. :LOL: Who could have imagined that in the RISC vs CISC days? :D
 
The irony here is that AMD's and NVidia's GPUs are not going anywhere stratospheric in terms of game performance in the next 18 months, quite contrary to Kanter's point about how Moore's law applies to GPUs - forward rendering GPUs have pretty much exhausted that line of development since they are desperately dependent upon bandwidth, and the bandwidth fairies are in retirement.

Moore's law is about transistor density, not about bandwidth. ;)
If Larrabee was HD4890/GTX285 performance, then AMD and NVidia have had a small reprieve. I bet it's much closer than they were expecting.

Well, if bandwidth-per-pin growth has indeed stalled (as seems likely), then next-gen GPUs will also migrate towards deferred renderers, if not TBDRs outright, and Intel's major advantage will be wiped out as well. The big question is what AMD and NV can come up with.
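
For anyone who hasn't stared at a tiler before, here's a rough sketch of the front half of that kind of pipeline, in plain C++ with made-up screen/tile sizes (an illustration, not any vendor's actual implementation): triangles get binned to screen tiles up front, so each tile can later be shaded with its colour/depth held on-chip and written to DRAM exactly once, which is where the bandwidth saving over a forward renderer comes from.

Code:
// Toy binning pass for a tile-based (deferred) renderer. Illustration only.
#include <algorithm>
#include <vector>

struct Tri { float x0, y0, x1, y1, x2, y2; };      // screen-space vertices

const int SCREEN_W = 1920, SCREEN_H = 1080;
const int TILE     = 32;                           // 32x32-pixel tiles
const int TILES_X  = (SCREEN_W + TILE - 1) / TILE;
const int TILES_Y  = (SCREEN_H + TILE - 1) / TILE;

// Pass 1: append each triangle's index to every tile its bounding box
// overlaps. Only compact index lists hit memory in this pass.
std::vector<std::vector<int>> bin_triangles(const std::vector<Tri> &tris)
{
    std::vector<std::vector<int>> bins(TILES_X * TILES_Y);
    for (int i = 0; i < (int)tris.size(); ++i) {
        const Tri &t = tris[i];
        int tx0 = std::max(0, (int)std::min({t.x0, t.x1, t.x2}) / TILE);
        int tx1 = std::min(TILES_X - 1, (int)std::max({t.x0, t.x1, t.x2}) / TILE);
        int ty0 = std::max(0, (int)std::min({t.y0, t.y1, t.y2}) / TILE);
        int ty1 = std::min(TILES_Y - 1, (int)std::max({t.y0, t.y1, t.y2}) / TILE);
        for (int ty = ty0; ty <= ty1; ++ty)
            for (int tx = tx0; tx <= tx1; ++tx)
                bins[ty * TILES_X + tx].push_back(i);
    }
    return bins;
}

// Pass 2 (per tile, not shown): rasterize only the binned triangles with
// depth/colour kept in on-chip storage, resolve visibility, then write the
// finished 32x32 block out once. Overdraw and blending never touch DRAM.

A real TBDR would bin post-transform vertices and do exact triangle/tile tests rather than bounding boxes, but the bandwidth argument is the same.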
 
Maybe it's my old age and/or ADD kicking in, but I'm missing the bit where this push-back has been acknowledged, purely or in significant part, to be down to h/w cache coherency and/or x86 "overhead". I wonder if there might be some quasi-religious projection going on in this thread?!
 
I still don't buy the graciousness a lot of you guys are offering to Intel. Assuming they've scrapped LRB in its current form and are going to reset, then 12-18 months isn't nearly enough time to get something viable to market. That's how long it takes the big GPU guys to release evolutions of their architectures.

From what I've read, not only is all "Larrabee" hardware dead and buried, but Intel is even abandoning the in-house project name "Larrabee" itself. What's left from the Larrabee project is the software, which Intel says it will continue to use--exclusively as an x86 multithreaded software development platform. As it dovetails nicely with Intel's ongoing cpu R&D, this makes all the sense in the world--certainly a lot more sense than Larrabee ever made before, imo.

I also think Intel realized that to continue to talk up canned, in-house TFLOP benchmarks, and the selected snippets of "real-time ray tracing" that we've all seen, was creating expectations in the public mind that Larrabee was never going to meet. Just like with RDRAM, Itanium on the desktop, and Prescott shipping at 5 GHz, Intel has hit yet another dead-end brick wall. This is a classic example of what happens when your PR runs amok and has little relationship with the state of your hardware development: expectations are created that cannot be fulfilled.

I don't think it's entirely Intel's fault, though. I can't remember a time when I've read so much hype and over-the-top speculation from the tech journalist community about a piece of vaporware. Larrabee has got to take the cake in that regard, imo.

Last, I surely do not think Larrabee will be "reset" in 18 months...;) That's purely wishful thinking, and it comes mostly from those tech journalists who have been telling us every chance they got in the last two years how "revolutionary" Larrabee was going to be--any day now--as soon as it is released--ASAP, etc. ad infinitum. It seems to me a very insincere and smarmy way to CYA. Hopefully, though, the Larrabee debacle will make these folks a lot more prudent in the future, so that they don't get so excited about PR. When you've got the functioning hardware in your hand, and you've got a firm release date from the manufacturer--well, *that's* the time to get excited.
 
I think there's too much of an investment in rasterization for any IHV to move away from it entirely, so new programming paradigms are unlikely. More flexible and extensible programming models however, are absolutely likely. TBDR doesn't seem like the answer to me, the last GPU that used tiling was Xenos and we all know how much devs love tiling ;)
 
You don't think that Intel have been working on Larrabee v3 for the last 12-18 months?

Given that it was little more than a science project, no I do not think they had multiple teams devoted to future iterations for the last 18 months. Why would they dedicate so many resources to an unproven product?
 
Maybe it's my old age and/or ADD kicking in, but I'm missing the bit where this push-back has been acknowledged, purely or in significant part, to be down to h/w cache coherency and/or x86 "overhead".

These two weren't the show-stoppers, I agree, just two outright bonkers decisions with no technologically redeeming features.

I wonder if there might be some quasi-religious projection going on in this thread ?!

A bit, yeah..;)
 
I think there's too much of an investment in rasterization for any IHV to move away from it entirely, so new programming paradigms are unlikely. More flexible and extensible programming models however, are absolutely likely.

Yes.

TBDR doesn't seem like the answer to me, the last GPU that used tiling was Xenos and we all know how much devs love tiling ;)

They don't have a choice in this regard. AFAICS, it is the only way forward, though approaches like IMG's may hide the pain, if any.

Sooner rather than later, they'll have to stop drinking the forward rendering kool-aid.
 