Intel Larrabee set for release in 2010

Discussion in 'Architecture and Products' started by B3D News, Jun 22, 2007.

  1. B3D News

    Information Week reports that during a conference at their headquarters, Intel spoke to analysts and reporters about their future energy-efficient products, including Larrabee and wireless solutions. Interestingly, Justin Rattner noted that Larrabee will be their "first tera-scale processor" and that it is aimed at a 2010 release, or possibly 2009 if things go especially smoothly.


    Read the full news item
     
  2. Arun

    One thing I didn't point out in the news piece that is incredibly important from my point of view: by 2010, GPGPU will be an established, super-high-volume, super-high-margins market with an existing infrastructure and existing solution providers partially tied to current hardware.

    That's not an advantage for Intel: it's one hell of a disadvantage if they're "just competitive". Assuming they are much faster, however, then it doesn't really matter. But it does up the stakes a lot, IMO.
     
  3. Bob

    It's 2010 now? I can't wait to get my hands on it!

    *Bob marks his calendar*

    So where do I start a line to buy this thing?

    ----

    More seriously though, why do companies divulge their roadmaps three years ahead? That never made sense to me, unless you're in a situation where you have nothing else to lose. AMD, I can understand. Intel doesn't seem to be in that position though.
     
  4. Razor1

    it can also be used to throw others off the mark too. Larrabee now is not what larrabee will be 3 years down the road, its just a name. But it gives the markets something to talk about in the meantime.
     
  5. Arun

    Well, to their credit, Intel has always released information on research projects pretty damn early. However, in this case, I think the problem is that both NVIDIA and AMD managed to make investors and analysts believe that GPUs are an increasingly important part of the ecosystem. There are two sides to that: one is integration and the other is GPGPU. Both are getting hyped up like crazy by NVIDIA and AMD during investor briefings, although in slightly different ways for both companies.

     So, either Intel reveals that they do have a roadmap that will directly compete with those solutions and gives some basic details about it to create some basic confidence, or they risk having analysts and investors predict that, long-term, they will lose significant market share in both HPC and the entry-level part of the market. That's obviously not an option.

    Another thing to consider is that in the initial phases of the project, it wasn't just about selling the concept to investors. It was also about selling the project to management. Why should management invest in a GPU? Why should they even care, rather than... you know... just pump up their CPUs to 10GHz instead? Presumably and as far as I can tell, some of the initial presentations given at universities were related to the necessity of showing management that there would be momentum behind the idea.

    The internal company politics around Larrabee are obviously not 'public' per-se, but some of the key pieces already leaked out a long time ago, and just slipped through everyone's radars. Basically, Keifer (core of Kevet, or was it the other way around? heh) was cancelled in favour of Larrabee. The former was less FP-heavy than the latter, and obviously could never have dreamed of replacing a GPU. But it would have been an interesting and direct competitor to Sun's Niagara.
    http://www.theinquirer.net/default.aspx?article=32776
    http://www.tomshardware.com/2006/07/10/project_keifer_32_core/

    Also, it's funny you say this, because Jen-Hsun said pretty much the same thing at analyst day on the 20th. More interestingly, he added that while he didn't want to speak about their integration strategies in that timeframe, "they do have them".

     I guess if you're ready to invest seriously in it (and hey, this is the same company that is putting *500 employees* on a single application processor project, for a business unit that is currently significantly loss-making!), there's nothing that prevents you from putting together an x86 processor in 3 years or maybe even less. With the major condition that you won't be able to compete in the high-end with it, and even less so on a foundry process. However, for a single-chip solution, you don't need to either.

     P.S.: I still have an old transcript I put together of Jen-Hsun talking about his opinion on single-chip integration and related issues for 10 minutes or more. It's an interesting opinion, and it certainly makes sense from their point of view. Maybe I should release that transcript into the wild eventually, since I don't think the audio is even publicly available anymore... I really do love the point where Jen-Hsun was jokingly arguing that building an x86 core is easier than walking 2 miles! :eek:
    P.P.S.: Errr, apparently, this post got quite a bit bigger than I thought it would. Oopsie. Do I get a cookie at least?
     
  6. Techno+

     We would love to read it, no matter how big!
     
  7. Nick

     If you think about it, x86 CPU design hasn't changed all that much since the Pentium Pro. GPU design, on the other hand, has changed tremendously in the last ten years. So building an x86 core is probably not that hard. Building one that can compete with Intel's chips on price/performance/wattage is a different thing. But I'd love to see the context in which Jen-Hsun said this...
     
  8. Arun

     It is not my impression that Jen-Hsun ever worked on an x86 design, however. I think he worked at AMD on non-x86 microcontrollers/CPUs, at least based on the little information I could find on the subject - and that was in the 80s, way before the Pentium Pro. He does have a lot more first-hand experience than most CEOs, though, and I agree that's not insignificant.
     The context was pretty much single-chip integration for the entry-level market, at least in the conference call where he talked about it the most (which is the one I wrote a transcript for). And obviously, you're not looking for performance leadership there, so it's pretty much what you meant when you said building *an* x86 core isn't that hard. Building a simple x86 core with fairly low IPC is hardly incredible nowadays.

    What is important there, however, is perf/mm² and perf/watt. But most of the time, a CPU with lower IPC will have higher perf/mm² and higher perf/watt. That's not really a law at all, but it is observable, and it is true mostly because, much of the time, getting incremental IPC improvements is just exponentially hard. Just looking at the size of the control logic between the original P6 and Conroe should make that quite obvious... But of course, you also need your IPC to be high enough, because even in that kind of market segment you do have minimum performance requirements.
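     To put toy numbers on that tradeoff (a hypothetical sketch; every figure below is invented purely for illustration and is not a measurement of any real core):

```python
# Toy illustration of the IPC vs. perf/mm^2 and perf/watt tradeoff described
# above. All numbers are invented for illustration only.

def perf(ipc, clock_ghz):
    """Throughput in billions of instructions per second."""
    return ipc * clock_ghz

# A simple low-IPC core vs. a complex high-IPC core at the same clock.
simple = {"ipc": 1.0, "clock_ghz": 2.0, "area_mm2": 10.0, "watts": 5.0}
complex_ = {"ipc": 2.0, "clock_ghz": 2.0, "area_mm2": 40.0, "watts": 15.0}

for name, core in [("simple", simple), ("complex", complex_)]:
    p = perf(core["ipc"], core["clock_ghz"])
    print(name,
          "perf:", p,
          "perf/mm^2:", round(p / core["area_mm2"], 3),
          "perf/watt:", round(p / core["watts"], 3))

# The complex core is 2x faster per core, but because control logic grows
# much faster than IPC, it pays 4x the area and 3x the power here, so its
# perf/mm^2 and perf/watt both come out worse.
```

     With those made-up numbers, the simple core gets 0.2 perf/mm² and 0.4 perf/watt against the complex core's 0.1 and ~0.27, which is the observable (not lawful) pattern described above.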

    Anyway, here is one part of that transcript, it's quite long and probably more fit for a future analysis than just a copy-paste of the whole thing, but this should give you a better idea of what the context was:
    Afterwards, an analyst asked him if that meant creating a x86 core and if that was even possible for them to do, which is when Jen-Hsun responded the following:
     
  9. Geo

    I think it's pretty clear that AMD started talking about Fusion immediately because they had a $5B merger to justify to folks, long before the financials really went to hell. But Intel has neither reason.

    2010 does feel pretty late to the party. I think if I was them I'd start pushing hard for an industry standard API.
     
  10. Voltron

     After the AMD merger closed, a senior person at AMD (can't remember who) said 4 cores were the most they would go to for desktop CPUs.

     The programming challenges for massively parallel x86 are the same as for GPUs, are they not? So what is the point of adding more x86 cores to desktops when you can utilize the massive parallelism that already exists in GPUs?

     So if that is the case, then what will happen to the CPU? It will shrink over time, unless the gains from adding much more cache are worthwhile. As Jen-Hsun has intimated by calling CPUs just about perfect, it seems like Moore's law may be about to stop working in their favor.

     Obviously the opposite is the case for the GPU. Look at NVIDIA's integrated graphics: as other components have shrunk, they have turned it into a single chip. It seems highly probable this will eventually happen with the CPU, and clearly NVIDIA has said they will pursue this if and when the time comes, but the question is when.

     What is the current die size of a single-chip NVIDIA motherboard GPU? Is it economical/feasible for them to increase the die size in the future, or is it a matter of Moore's law shrinking the CPU enough to fit in the current silicon envelope?

     The implications are very interesting because NVIDIA already makes very good margins on these products. Intel's entire business is predicated on much higher-priced CPUs. If the industry moves this way, with Intel moving into graphics, then unless people start paying a lot more for graphics, Intel may have a lot of trouble adjusting to the new economics. It's clear they're aware of this - Otellini brought up the need to adjust the business to selling silicon for $25 rather than $150 or whatever their current ASP is.
     
  11. 3dilettante

     There are fundamental roadblocks both share.
     What I'm curious to see is where the differences in x86 come into play, given that GPUs are not burdened with system management and exception handling.
     I'm also curious how much of a penalty x86 will incur on account of its heftier threads, and where they fit in the overall system with Larrabee's GPU variant.

    Moore's law hasn't yet stopped working in favor of incremental gains in CPU performance.
    They don't get the crazy performance scaling GPUs get, but the demand for CPU performance has not diminished.
    Since GPUs are even with CUDA still subsidiary to a CPU, I wonder how often it is safe to scale the CPU back without throttling the GPU.

    Another future question is whether GPUs are going to take note of CPUs hitting performance walls due to heat.
    A lot of CPU designs are not limited by clock speeds so much as they are by TDP limits.

    Moore's law says nothing about performance scaling if heat becomes a more dominant concern for GPUs.
    When it comes to branching and granularity, CPUs are still more efficient in more complex usage scenarios.
    That is one problem that throwing on more functional units will not solve.

    GPUs as they stand seem to have a good amount of low-hanging fruit when it comes to power management.
    The current coarse clock management is not particularly impressive.

    Similarly, Moore's law doesn't apply to overall pinout and external bandwidth.
    Here, efficiency may become more important later on.

    Part of my opinion about Larrabee having a tough time in 2010 is based on the assumption that future GPUs become more efficient on a broader range of scenarios.
     
  12. Voltron

     But if AMD indeed has no roadmap beyond 4 cores on the desktop, that says something very interesting about the future, which is what we are talking about, doesn't it?

     If they are not going for more than 4 cores, how much performance can be gained from more cache, and where do those gains hit a wall?
     
    #12 Voltron, Jun 25, 2007
    Last edited by a moderator: Jun 25, 2007
  13. 3dilettante

    Gains in transistor density haven't all gone into cache.

     The desktop isn't the only portion of the CPU market AMD's roadmap addresses.
    Servers will love having more cores.

     The limited amount of threading in desktop workloads means that having many cores of any type is unnecessary there.
    Those roadmaps definitely do point out that the number of transistors per CPU core will still go up.

    AMD has also disclosed zero information on the successor architecture to its 10h family, so saying the CPU will shrink is without any known basis.
    It seems unlikely they'll scale back the features for that design, assuming AMD is still in the high-end game by that time.
     
  14. Rys

    Jon Stokes, by way of The Tech Report, says that Information Week got it irresponsibly wrong and Rattner didn't say 2010 whatsoever. I tend to trust the mighty Hannibal here, especially since he was there!

    http://www.techreport.com/onearticle.x/12765
     
  15. Geo

     Well, isn't that interesting. I think at this point it shouldn't have to come down to trust issues... you've got two major sources, InformationWeek and Ars, reporting two diametrically opposite things that Rattner supposedly said. Intel PR or Rattner himself should address the matter.
     
  16. Hannibal

     Oh boy...

    Arun et al,

     The quoted Infoweek article is just not correct. I was at Research@Intel day, and I was at the Rattner keynote, and he said that we would be in the "era of tera" or some such in 2010... i.e., that is when we'll start to see real products based on the Terascale stuff they were demoing that day.

    Larrabee is /not/ Terascale. Larrabee, for instance, is a ring bus architecture... just like Nehalem, and it's probably coming in the Nehalem timeframe.

     This Infoweek guy heard "Terascale products in 2010," made the erroneous connection to Larrabee, and then published "Larrabee in 2010."

    Edit: Also, I just want to add this: Intel was not, under any circumstances, willing to talk about the L-word that day. It was just not a topic that anyone was cleared to comment on, and it did not come up... not at the keynote, not in the demos... never.

    Also, I hear that Tim Sweeney and Michael Abrash are NDA'd on Larrabee and that it's rocking their world. And really, what's not to love? Unlike NVIDIA's parts, Larrabee has a real ISA that can do context-switching and exception handling. It's what Sweeney has said he wanted since the 90's.

    I'm about to do an article in which I suggest that we're due for another turn on Sutherland's wheel of reincarnation, and that NVIDIA is going to get trapped under that wheel and crushed.
     
  17. Arun

     That's interesting. However, are you really so sure that Larrabee is not part of Tera-scale, on an equal footing with Polaris? Everything I've seen and read so far indicates that Larrabee and Polaris are unrelated, but that both are part of the same overall initiative, which is named Tera-scale. You insist that Larrabee and Tera-scale should not be confused, but the same is true for Polaris and Tera-scale.

    The davis.pdf presentation made that obvious IMO, at least until the Larrabee slides were removed, heh. This news.com article also nicely implies what I'm thinking of here:
     This both directly contradicts Larrabee being aimed at the Nehalem timeframe and clearly indicates that it is part of the Tera-scale initiative (aka 'research projects'). So unless I'm missing something, it seems to me that this confirms Infoweek's inference.

    EDIT: And with all due respect, Tim Sweeney and Michael Abrash have been unable to predict industry trends for years. Why their opinion should be taken into consideration more than that of people having actually proved their understanding of the industry's dynamics is beyond me.

    And it is also beyond me why you believe NVIDIA is, for some kind of magical reason, unable to implement context switching and exception handling if they found it necessary in that timeframe. The former is what WDDM 2.0 is all about anyway.
     
  18. sonix666

     Larrabee and Terascale are different Intel projects. Terascale AFAIK isn't even based on x86.
     
  19. Voltron

    Sweeney seems like he has been at least somewhat on point with regard to future hardware trends and programmability and CPU and GPU convergence.

    That said, a company as focused on the future of 3D graphics as NVIDIA is, with a top to bottom product stack, seems very unlikely to be blindsided by a competitor's first effort, even if that company is Intel.

    NVIDIA's GPUs are, after all, increasing in programmability. Does anybody (Hannibal?) expect that trend to stop?
     
  20. Hannibal

    I'm sorry, but I don't think that CNET's Tom Krazit is any more reliable than the Infoweek guy. A case in point:

    And of course there's this quote, which you also highlighted:

    So in the above quotes, Gelsinger appears to have positioned Larrabee in opposition to Fusion. It also appears that Larrabee will involve an on-die GPU. But then there's also this passage in the article:

    So in this quote, Gelsinger never actually mentioned Fusion explicitly... so did he or did he not intend to position Larrabee in opposition to Fusion? And even better, it now appears that Larrabee does /not/ involve an on-die GPU....

    My point here is that Talmudically parsing second-hand summaries from people like Tom Krazit to get time-frame information (or any kind of real information, for that matter) is a complete waste of time, and gets you nowhere.

     That having been said, I'm willing to concede that Larrabee may fall under the general auspices of "Terascale," insofar as Terascale is so huge and vaguely defined that Intel can call anything that involves any aspect of many-core, interconnects, 3D die stacking, etc. "Terascale." But I'm not willing to concede that Larrabee is coming out in 2010.

     And as someone who was at the event, I can tell you that it is definitely a fact that Larrabee was not mentioned in that keynote. So I think this 2010 business is bollocks.

    Also, you can think what you will about Sweeney and Abrash. I only mention their names because I heard about what they think second-hand. As for people who I've heard from first-hand who're NDA'd on Larrabee and are impressed, I won't get into that... but I do know three of them, for what it's worth.

     About NVIDIA not being able to magically address these ISA issues: sure they can! They can give their parts a real ISA and have a go at it. But unless that ISA is x86, I honestly don't think it's going to be that compelling in light of the advantages of Larrabee's x86 compatibility. x86 compatibility is the killer feature that makes Larrabee truly interesting from both a graphics and HPC standpoint.

    Edit: Just to be clear, what I'm objecting to with this "Larrabee is not the first Terascale product" thing is the mistaken idea that Terascale is the codename for a specific product family, and that the first commercial instance of that product family is a particular part that's codenamed Larrabee. That's what the author of the Infoweek article thinks, and he's incorrect.
     
    #20 Hannibal, Jun 29, 2007
    Last edited by a moderator: Jun 29, 2007
