GPGPU and 3D luminaries join 3D graphics heavyweights

Discussion in 'Rendering Technology and APIs' started by B3D News, Jan 7, 2008.

  1. dkanter

    Regular

    Joined:
    Jan 19, 2008
    Messages:
    360
    Likes Received:
    20
    I'd strongly disagree with this point of view. Perhaps some (maybe even many) do, but there are some who do not. In general, I think the most aggressive architects have not given up on higher performance; in fact, they haven't even given up on higher single-threaded performance.

    Yes Intel needs to figure out how to get 16 cores to work on a die from a SW perspective. However, that doesn't require a product.

    There is a huge difference in cost between an internal research project and a product that you sell to end-users.

    Additionally, doing something too early can be very costly if the ROI isn't high enough.

    Right now AMD's primary concern should be Nehalem and Sandy Bridge, not Larrabee.

    That would be a disaster, the good news is that Intel can survive disasters.

    I certainly hope that Larrabee turns out better than Itanium. The last time Intel took their eyes off the ball, they ended up letting x86-64 slip through the door.

    DK
     
  2. 3dilettante

    Legend Alpha

    Joined:
    Sep 15, 2003
    Messages:
    8,579
    Likes Received:
    4,799
    Location:
    Well within 3d
    I'm just going by various, admittedly dated, presentations from a while back in which both AMD and Intel drew the line for homogeneous multicore of big OoO cores at 4-8 for the desktop, with the proviso that server and other segments could do with more.

    But it does mean Larrabee's design cost is incremental to research that would be ongoing anyway.
    Intel has to put the pedal to the metal sometime.
    Itanium already showed what can happen if you only rely on internally formulated means of evaluation for a design.

    There's a decent chance that it will gain a foothold in a few areas, and it can serve as a bulwark against competitors that are rapidly running out of choices when it comes to competing.
    AMD and Nvidia each have a high-FLOPS part that can establish a niche if left uncontested, and ceding a potentially lucrative market would be handing them free money.

    There's more gradation on the scale from internal project to a massive cross-market product ramp.
    The plans are tentative enough and far enough ahead that Intel can scale back production and expectations as needed (unless its ray-tracing uber alles marketers go unchecked for the next year or so, that is).

    I suspect this is the likely outcome, but if that's the case they may only be off by a year or so.

    Sure, but it's not in Intel's best interest to refrain from adding more worries.
    It's not Intel's fault that Bulldozer's been delayed into that timeframe.

    For AMD's GPGPU line, Nehalem and Sandy Bridge are obstacles, but they are not directly targeted at similar workloads. The order of magnitude performance gap for customers that have tasks amenable to GPU boards won't be removed by the next CPU generation or two.

    Larrabee, if targeted correctly, can undercut both Firestream and AMD's attempt to appeal to HPC with SSE5 and its new instructions. This of course assumes that AMD's delay of Bulldozer isn't because AMD chickened out and is yanking its second set of FP extensions Intel refused to recognize.

    It would be a disaster if all of that fails, but it's more of a form of graceful degradation from "not so good" to "pretty bad" to "utter disaster".
    The failure would be mitigated in part by the fact that a fraction of its expense overlaps with otherwise unavoidable expenditures.

    Intel has to fail to gain a foothold in multiple market segments for Larrabee to fail.
    For a complete disaster, it has to fail to garner any devrel or mindshare among developers that serves as groundwork for future software that will most likely tread the same path again.

    Depending on the competitive environment, it may only need to keep its eye on the ball part of the time.

    I don't see any yawning gaps Larrabee creates in Intel's product lines, and it goes a long way toward plugging up a few avenues of attack.
     
  3. Geo

    Geo Mostly Harmless
    Legend

    Joined:
    Apr 22, 2002
    Messages:
    9,116
    Likes Received:
    215
    Location:
    Uffda-land
    The available evidence suggests to me that Larrabee was aimed more at Nvidia and ATI than at AMD, and that AMD's acquisition of ATI and launch of the Fusion project was, in part, a reaction to figuring out where Intel was going.

    We just knew more, sooner, about AMD's efforts because they involved a multi-billion-dollar acquisition, which almost certainly forced AMD to talk about its plans at length in public much sooner than it otherwise would have.
     
  4. heliosphere

    Newcomer

    Joined:
    Jun 15, 2005
    Messages:
    142
    Likes Received:
    15
    I heard from a reliable source that Abrash is working on Larrabee as well.
     
  5. Frank

    Frank Certified not a majority
    Veteran

    Joined:
    Sep 21, 2003
    Messages:
    3,187
    Likes Received:
    59
    Location:
    Sittard, the Netherlands
    Well, we have one "proven" design (Cell) and one that goes in the same direction but forgoes the things that make Cell work: it uses a unified (cached) memory structure instead of large local stores, and an existing ISA instead of a new one.

    Without a close look at a first sample in action, there isn't much we can say about the validity of having all those cores crunch away in a meaningful manner.
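    The programming-model difference being described can be sketched in C. This is an illustrative toy only: real Cell SPE code stages data into its 256 KiB local store with MFC DMA intrinsics (`mfc_get`/`mfc_put`), while `memcpy` stands in for the DMA transfer here; the buffer size is scaled down and the function names are invented for the example.

    ```c
    #include <stdio.h>
    #include <string.h>

    #define LOCAL_STORE_FLOATS 256  /* stand-in for a small, software-managed local store */

    /* Cell-style model: software explicitly stages each chunk of main memory
       into the local store, operates on it there, then moves on. The
       programmer owns all data movement. */
    float sum_with_local_store(const float *main_mem, int n) {
        float local_store[LOCAL_STORE_FLOATS];
        float total = 0.0f;
        for (int off = 0; off < n; off += LOCAL_STORE_FLOATS) {
            int chunk = (n - off < LOCAL_STORE_FLOATS) ? n - off : LOCAL_STORE_FLOATS;
            memcpy(local_store, main_mem + off, chunk * sizeof(float)); /* "DMA in" */
            for (int i = 0; i < chunk; i++)
                total += local_store[i];
        }
        return total;
    }

    /* Unified cached-memory model: code simply loads from the shared address
       space and hardware caching moves data transparently. */
    float sum_with_cache(const float *main_mem, int n) {
        float total = 0.0f;
        for (int i = 0; i < n; i++)
            total += main_mem[i];
        return total;
    }

    int main(void) {
        float data[1000];
        for (int i = 0; i < 1000; i++) data[i] = 1.0f;
        printf("%.1f %.1f\n", sum_with_local_store(data, 1000),
               sum_with_cache(data, 1000));
        return 0;
    }
    ```

    Both compute the same result; the point is who manages data movement, the programmer (Cell) or the cache hierarchy (the Larrabee direction).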
     
