Larrabee delayed to 2011?

Discussion in 'Architecture and Products' started by rpg.314, Sep 22, 2009.

  1. MfA

    MfA
    Legend

    Joined:
    Feb 6, 2002
    Messages:
    7,392
    Likes Received:
    720
If it had decent performance per mm², why couldn't a cut-down version at 32nm be a decent IGP?
     
  2. fehu

    Veteran Regular

    Joined:
    Nov 15, 2006
    Messages:
    2,027
    Likes Received:
    973
    Location:
    Somewhere over the ocean
Is this in some way related to the announcement of the 48-core processor?
It closely resembles Larrabee; maybe they will end up as the same project, differentiated only by the specs of the individual cores.
     
  3. Lux_

    Newcomer

    Joined:
    Sep 22, 2005
    Messages:
    206
    Likes Received:
    1
    Wow, evolution sure seems to be the right way to do things lately... why is that?
Itanium is dying. Cell had to step aside. Larrabee is currently yet another lots-of-cores experiment. AMD has cancelled an architecture that was "too radical".

    I hope AMD and Intel are able to pull off their current CPU roadmaps and the industry is able to move forward. It would really suck if the bottom line turned out to be: "That's it, because everything new starts so far behind that nobody would buy it."

The hope would be that somehow the industry is able to finance the way to 22/16nm (and smaller); by that time the current evolution will be a dead end, and the compiler/software guys can come up with something that makes these new things fly? Somewhat depressing...
     
    #223 Lux_, Dec 5, 2009
    Last edited by a moderator: Dec 5, 2009
  4. MfA

    MfA
    Legend

    Joined:
    Feb 6, 2002
    Messages:
    7,392
    Likes Received:
    720
    In a way the 48 core processor is more revolutionary ... Larrabee was always clinging to the past in a way.

SMP-like coherency with a correspondingly inefficient on-chip network (the ordering constraints make more efficient switched mesh networks impossible). Traditional caches making divergent memory accesses for the vector pipeline extremely expensive (unlike in, say, Fermi, where I assume the L1 is banked, since it can be configured as shared memory). No DMA engines for efficient stream processing.
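The divergence cost mentioned above can be made concrete with a toy counting model. This is purely illustrative (the cache-line size and lane count are generic assumptions, not Larrabee's actual parameters): a wide vector gather whose lanes land in distinct cache lines costs one cache access per line touched, while a unit-stride load of the same width touches a single line.

```python
# Toy cost model: count distinct cache lines touched by a 16-lane
# vector memory operation. Assumed parameters (illustrative only):
CACHE_LINE_BYTES = 64
ELEM_BYTES = 4
LANES = 16

def cache_lines_touched(addresses):
    """Number of distinct cache lines a set of lane addresses touches."""
    return len({addr // CACHE_LINE_BYTES for addr in addresses})

# Unit-stride load: 16 consecutive 4-byte elements = 64 bytes = 1 line.
stride1 = [i * ELEM_BYTES for i in range(LANES)]

# Divergent gather: lanes 256 bytes apart = 16 distinct lines.
divergent = [i * 256 for i in range(LANES)]

print(cache_lines_touched(stride1))    # 1
print(cache_lines_touched(divergent))  # 16
```

Under this model the worst-case gather is 16x more expensive than the coalesced load, which is why banking the L1 (so several lines can be serviced per cycle) matters so much for a vector pipeline.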

    It certainly was never my dream architecture.
     
  5. Lux_

    Newcomer

    Joined:
    Sep 22, 2005
    Messages:
    206
    Likes Received:
    1
Indeed. And apart from technical merits - Intel is seeking feedback and testing the waters long before any of this 2nd-gen research platform is packaged into an actual product. As if they've learned from the Itanium (and now Larrabee) mindset of going "from great ideas straight to product".

    DailyTech: [Intel] plans to work with several dozen industry and academic research partners around the world next year by manufacturing and sharing 100 or more SCC chips...
     
  6. Silus

    Banned

    Joined:
    Nov 17, 2009
    Messages:
    375
    Likes Received:
    0
    Location:
    Portugal
  7. liolio

    liolio Aquoiboniste
    Legend

    Joined:
    Jun 28, 2005
    Messages:
    5,724
    Likes Received:
    194
    Location:
    Stateless
When I watched the presentation about the SCC I couldn't help but think that some rough arbitration between the two projects was bound to happen. It happened sooner than expected, but the bright side is that SCC/Polaris 2 looks promising.
R.I.P. Larrabee
     
  8. Jawed

    Legend

    Joined:
    Oct 2, 2004
    Messages:
    11,493
    Likes Received:
    1,853
    Location:
    London
    The irony here is that AMD's and NVidia's GPUs are not going anywhere stratospheric in terms of game performance in the next 18 months, quite contrary to Kanter's point about how Moore's law applies to GPUs - forward rendering GPUs have pretty much exhausted that line of development since they are desperately dependent upon bandwidth, and the bandwidth fairies are in retirement.

    The traditional GPUs are on the cusp of the stage where the graphics-specific functionality should be such a small part of the die (<25%) that generalism dominates. Graphics performance that relies upon compute passes and task parallelism, working smarter not harder, is where we're headed. The forward renderers offer nothing special in that direction.

    If Larrabee was HD4890/GTX285 performance, then AMD and NVidia have had a small reprieve. I bet it's much closer than they were expecting.

    Jawed
     
  9. Bouncing Zabaglione Bros.

    Legend

    Joined:
    Jun 24, 2003
    Messages:
    6,363
    Likes Received:
    83

    I don't think they'll give up on it this time around. Intel are at the point where they can't keep making CPUs faster and smaller for much longer - all they can do is have loads of cores. To make that useful, you need things to do with all those parallel cores, and graphics and HPC are two obvious areas. One is making lots of money for AMD and Nvidia, and the other is a potentially new market to expand into.

    Given the fast pace of the GPU market and huge initial investment, I think it's likely that Intel simply looked at what they had as not good enough for today's market, and shifted their resources to 12-18 months down the line where they will be more effective and competitive.
     
  10. trinibwoy

    trinibwoy Meh
    Legend

    Joined:
    Mar 17, 2004
    Messages:
    11,696
    Likes Received:
    2,627
    Location:
    New York
I still don't buy the graciousness a lot of you guys are offering to Intel. Assuming they've scrapped LRB in its current form and are going to reset, then 12-18 months isn't nearly enough time to get something viable to market. That's how long it takes the big GPU guys to release evolutions of their architectures.
     
  11. Lux_

    Newcomer

    Joined:
    Sep 22, 2005
    Messages:
    206
    Likes Received:
    1
Intel saved face either way:
a) Currently they can have only 32 cores, and that is too few to beat the competition. By delaying a year, they hope to have more cores on a newer process. If AMD and NVidia have hit the wall, Intel would come out on top, being faster and "better".

b) If they can borrow some ideas from their 48-core prototype, then Larrabee 2 would be faster and "better" than the competition (Larrabee 1 would have been slower, but "better").
     
  12. Bouncing Zabaglione Bros.

    Legend

    Joined:
    Jun 24, 2003
    Messages:
    6,363
    Likes Received:
    83
You don't think that Intel have been working on Larrabee v3 for the last 12-18 months? Like AMD and Nvidia, I bet they've had multiple teams working on the next few products at the same time, for release down a timeline. When the 2009/2010 v2 products became non-viable as retail products, they were effectively repurposed as development platforms whilst Intel shifted to v3 - and no doubt whatever is coming after that.
     
  13. rpg.314

    Veteran

    Joined:
    Jul 21, 2008
    Messages:
    4,298
    Likes Received:
    0
    Location:
    /
    Because it makes sense.
    It was dead the day it launched.
    STI's fault really. They designed an uber chip and threw it at programmers without any programming model to speak of.
Many B3D'ers, including me, never saw the point of full hw cache coherency and the x86 overhead. It had its good points, mind you. Unification of cache hierarchy, context storage and shared memory is a great idea. With a somewhat more restricted programming model, it could (possibly) have had much better perf/mm². Its TBDR pipeline was crying out to be built in the desktop space.
    Which one was that? Care to elaborate?

    Dude, this is the golden era of computer architecture. When the dust settles on it some 10 years from now, it will be said that the changes that are being made today in hw and programming models were just as revolutionary as the invention of computers themselves.

Massively parallel hw, extensive use of hw threading to hide latency, on-chip sw-managed/assisted storage, dedicated FF hw wherever it makes sense, desktops becoming more and more single-chip, desktop apps being replaced by webapps/netbooks, massive virtualization, pervasive JIT compilation even for performance-critical apps - in fact, whatever the PC industry/technology looked like, all of it is being thrown away, or at least rethought inside-out.

And guess what, lots of legacy code is going to be rewritten just to take advantage of the shift in hw. :lol: Who could have imagined that in the RISC vs CISC days? :grin:
     
  14. rpg.314

    Veteran

    Joined:
    Jul 21, 2008
    Messages:
    4,298
    Likes Received:
    0
    Location:
    /
Moore's law is about transistor density, not about bandwidth. :wink:
Well, if bandwidth-per-pin growth has indeed stalled (as seems likely), then next-gen GPUs will also migrate towards deferred renderers, if not TBDRs outright, and Intel's major advantage will be wiped out as well. The big question is what AMD and NV can come up with.
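The compute-vs-bandwidth gap behind this argument can be sketched with some back-of-the-envelope arithmetic. The growth rates here are hypothetical round numbers chosen for illustration, not measured figures: assume peak FLOPS roughly doubles every two years (tracking transistor density) while off-chip bandwidth grows only ~20% per year.

```python
# Back-of-the-envelope compound-growth sketch. Both rates are assumed,
# illustrative values, not measurements:
#   - compute: ~2x every 2 years (i.e. sqrt(2) per year)
#   - off-chip bandwidth: ~1.2x per year

def growth(initial, annual_factor, years):
    """Compound growth of a quantity over a number of years."""
    return initial * annual_factor ** years

years = 6
flops = growth(1.0, 2 ** 0.5, years)  # 2x per 2 years -> 8x over 6 years
bw = growth(1.0, 1.2, years)          # ~3x over 6 years

print(round(flops, 1))       # 8.0
print(round(bw, 2))          # 2.99
print(round(flops / bw, 1))  # 2.7 -> bytes-per-FLOP shrinks ~2.7x
```

Under these (assumed) rates, bytes of bandwidth available per FLOP shrink by roughly 2.7x in six years, which is the pressure pushing architectures toward tiling/deferred approaches that keep more traffic on-chip.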
     
  15. nutball

    Veteran Subscriber

    Joined:
    Jan 10, 2003
    Messages:
    2,415
    Likes Received:
    869
    Location:
    en.gb.uk
    Maybe it's my old age and/or ADD kicking in, but I'm missing the bit where this push-back has been acknowledged, purely or in significant part, to be down to h/w cache coherency and/or x86 "overhead". I wonder if there might be some quasi-religious projection going on in this thread ?!
     
    #235 nutball, Dec 5, 2009
    Last edited by a moderator: Dec 5, 2009
  16. WaltC

    Veteran

    Joined:
    Jul 22, 2002
    Messages:
    2,710
    Likes Received:
    8
    Location:
    BelleVue Sanatorium, Billary, NY. Patient privile
    From what I've read, not only is all "Larrabee" hardware dead and buried, but Intel is even abandoning the in-house project name "Larrabee" itself. What's left from the Larrabee project is the software, which Intel says it will continue to use--as exclusively an x86 multithreaded software development platform. As it dovetails nicely with Intel's ongoing cpu R&D, this makes all the sense in the world--certainly a lot more sense than Larrabee ever made before, imo.

I also think Intel realized that continuing to talk up canned, in-house TFLOP benchmarks, and the selected snippets of "real-time ray tracing" that we've all seen, was creating expectations in the public mind that Larrabee was never going to meet. Just like with RDRAM, Itanium on the desktop, and Prescott shipping at 5GHz, Intel has hit yet another dead-end brick wall. This is a classic example of what happens when your PR runs amok and has little relationship with the state of your hardware development: expectations are created that cannot be fulfilled.

    I don't think it's entirely Intel's fault, though. I can't remember a time when I've read so much hype and over-the-top speculation from the tech journalist community about a piece of vaporware. Larrabee has got to take the cake in that regard, imo.

    Last, I surely do not think Larrabee will be "reset" in 18 months...;) That's purely wishful thinking, and it comes mostly from those tech journalists who have been telling us every chance they got in the last two years how "revolutionary" Larrabee was going to be--any day now--as soon as it is released--ASAP, etc. ad infinitum. It seems to me a very insincere and smarmy way to CYA. Hopefully, though, the Larrabee debacle will make these folks a lot more prudent in the future, so that they don't get so excited about PR. When you've got the functioning hardware in your hand, and you've got a firm release date from the manufacturer--well, *that's* the time to get excited.
     
  17. ShaidarHaran

    ShaidarHaran hardware monkey
    Veteran

    Joined:
    Mar 31, 2007
    Messages:
    4,027
    Likes Received:
    90
    I think there's too much of an investment in rasterization for any IHV to move away from it entirely, so new programming paradigms are unlikely. More flexible and extensible programming models however, are absolutely likely. TBDR doesn't seem like the answer to me, the last GPU that used tiling was Xenos and we all know how much devs love tiling ;)
     
  18. trinibwoy

    trinibwoy Meh
    Legend

    Joined:
    Mar 17, 2004
    Messages:
    11,696
    Likes Received:
    2,627
    Location:
    New York
    Given that it was little more than a science project, no I do not think they had multiple teams devoted to future iterations for the last 18 months. Why would they dedicate so many resources to an unproven product?
     
  19. rpg.314

    Veteran

    Joined:
    Jul 21, 2008
    Messages:
    4,298
    Likes Received:
    0
    Location:
    /
    These two weren't the show-stoppers, I agree, just 2 outright bonkers decisions with no technologically redeeming features.

    A bit, yeah..:wink:
     
  20. rpg.314

    Veteran

    Joined:
    Jul 21, 2008
    Messages:
    4,298
    Likes Received:
    0
    Location:
    /
    Yes.

    They don't have a choice in this regard. AFAICS, it is the only way forward, though approaches like IMG's may hide the pain, if any.

    Sooner rather than later, they'll have to stop drinking the forward rendering kool-aid.
     