Larrabee delayed to 2011?

Discussion in 'Architecture and Products' started by rpg.314, Sep 22, 2009.

  1. rpg.314

    Veteran

    Joined:
    Jul 21, 2008
    Messages:
    4,298
    Likes Received:
    0
    Location:
    /
    http://www.brightsideofnews.com/new...er-left-intel-because-of-larrabee-fiasco.aspx
    WTH is going on with LRB and Intel? Are they gonna make their 2010 H1 deadline or not? If they slip to H2, they'll quite likely have to put up with 32 nm shrinks of R8xx and prolly a 32 nm shrink of GT300 too. Fighting those on a 45 nm chip would be hard, but maybe the delay is because they will go straight to 32 nm, in which case they'll have an advantage if they are able to launch in, say, Q1.

    EDIT:

    A bit later I find this,

    http://www.semiaccurate.com/2009/09/16/new-larrabee-silicon-taped-out-weeks-ago/

    :shock:

    If they have indeed quashed a number of bugs, I suppose they could launch in early 2010.
     
  2. fellix

    fellix Hey, You!
    Veteran

    Joined:
    Dec 4, 2004
    Messages:
    3,479
    Likes Received:
    386
    Location:
    Varna, Bulgaria
    Most probably Intel wants a smaller die footprint for the monster, to be more competitive on the market, and consequently this will give them some extra time to polish the software graphics "pipeline" and the driver model.
    After all, there are so many APIs to validate for. :D
     
  3. compres

    Regular

    Joined:
    Jun 16, 2003
    Messages:
    553
    Likes Received:
    3
    Location:
    Germany
    I have high hopes for this project, but realistically I can't see them outperforming nVidia or ATI in graphics-only applications.

    It would be interesting if we could have Larrabee and ATI/nVidia + some nice AMD or Intel CPU next year. Masochists like me can have fun programming three processors XD
     
  4. rpg.314

    Veteran

    Joined:
    Jul 21, 2008
    Messages:
    4,298
    Likes Received:
    0
    Location:
    /
    OpenCL FTW :)
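    The point being that one code base could drive all three chips. A minimal sketch (assuming any OpenCL 1.x SDK installed and linking with -lOpenCL) that just enumerates whatever CPU and GPU devices the vendor drivers expose, in plain C:

    /* Enumerate every OpenCL platform and device (CPU, GPU, accelerator)
     * and print its name. Illustrative only; error handling kept minimal. */
    #include <stdio.h>
    #include <CL/cl.h>

    int main(void)
    {
        cl_platform_id platforms[8];
        cl_uint num_platforms = 0;

        if (clGetPlatformIDs(8, platforms, &num_platforms) != CL_SUCCESS ||
            num_platforms == 0) {
            fprintf(stderr, "No OpenCL platforms found\n");
            return 1;
        }

        for (cl_uint p = 0; p < num_platforms; ++p) {
            cl_device_id devices[16];
            cl_uint num_devices = 0;

            /* CL_DEVICE_TYPE_ALL picks up CPUs and GPUs alike */
            if (clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_ALL,
                               16, devices, &num_devices) != CL_SUCCESS)
                continue;

            for (cl_uint d = 0; d < num_devices; ++d) {
                char name[256] = {0};
                clGetDeviceInfo(devices[d], CL_DEVICE_NAME,
                                sizeof(name), name, NULL);
                printf("platform %u, device %u: %s\n", p, d, name);
            }
        }
        return 0;
    }

    Whether Larrabee's driver would actually show up through an OpenCL ICD like that is, of course, an assumption on my part.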
     
  5. Ailuros

    Ailuros Epsilon plus three
    Legend Subscriber

    Joined:
    Feb 7, 2002
    Messages:
    9,413
    Likes Received:
    174
    Location:
    Chania
    Will AMD even use 32nm or will they go directly to 28nm?
     
  6. rpg.314

    Veteran

    Joined:
    Jul 21, 2008
    Messages:
    4,298
    Likes Received:
    0
    Location:
    /
    I think we'll prolly see 32 nm "dumb shrinks", mixed into the same SKUs as the 40 nm parts if need be.
     
  7. Ailuros

    Ailuros Epsilon plus three
    Legend Subscriber

    Joined:
    Feb 7, 2002
    Messages:
    9,413
    Likes Received:
    174
    Location:
    Chania
    What for? I mean, there's little doubt that AMD will go for 28nm at Globalfoundries. Is the hassle of changing to 32nm@TSMC libraries worth it even for a dumb shrink?
     
  8. SiliconAbyss

    Newcomer

    Joined:
    Mar 28, 2004
    Messages:
    75
    Likes Received:
    0
    Location:
    Canada
    Larrabee is chasing a fast-moving target; the chip seems more and more like an albatross. By the time it comes out, AMD might be coming to market with their Fusion GPGPU, and by then Nvidia will have who knows what.

    Intel should stick to what they are good at: bribing OEMs... um, I mean making CPUs.
     
  9. Voxilla

    Regular

    Joined:
    Jun 23, 2007
    Messages:
    708
    Likes Received:
    279
    By the time it arrives, they probably will have ditched the pathetic x86 ISA.
     
  10. compres

    Regular

    Joined:
    Jun 16, 2003
    Messages:
    553
    Likes Received:
    3
    Location:
    Germany
    I sure hope they ditch the abominable x86 ISA. Why is this needed if the chip will be incompatible with their CPUs anyway?
     
  11. rpg.314

    Veteran

    Joined:
    Jul 21, 2008
    Messages:
    4,298
    Likes Received:
    0
    Location:
    /
    To me, LRB demonstrates convincingly that you'll have to pry the x86 ISA from Intel's cold, dead fingers. They do have an ARM license though. Bribing ARM into letting them slap a VPU onto a simple in-order ARM core might be the way to go, after all these promises of programmability.
     
  12. rpg.314

    Veteran

    Joined:
    Jul 21, 2008
    Messages:
    4,298
    Likes Received:
    0
    Location:
    /
    @MODS:

    Can you please change the thread title to "Larrabee delayed to 2011?". After all, it is not a fact yet, just Theo's piece.
     
  13. itaru

    Newcomer

    Joined:
    May 27, 2007
    Messages:
    156
    Likes Received:
    15
  14. spacemonkey

    Newcomer

    Joined:
    Jul 16, 2008
    Messages:
    163
    Likes Received:
    0
  15. spacemonkey

    Newcomer

    Joined:
    Jul 16, 2008
    Messages:
    163
    Likes Received:
    0
  16. Nick

    Veteran

    Joined:
    Jan 7, 2003
    Messages:
    1,881
    Likes Received:
    17
    Location:
    Montreal, Quebec
    Raytracing Quake Wars in real-time really looks mighty impressive to me compared to the spinning cubes I've seen so far from other vendors. At least it shows they have some magic sauce that makes their architecture more efficient. The Radeon HD 4890 versus GeForce GTX 285 also proves that it's not all about the TFLOPS.

    Whether the things that give it a massive advantage at raytracing will also help it with classic rasterization is a different question though. And while ATI is doing nothing to combat Amdahl's Law, the rumours surrounding GT300 being a total redesign suggest it still has a chance at stealing Intel's thunder...
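    (Amdahl's Law, for reference: with a fraction p of the work parallelizable across N cores, speedup = 1 / ((1 - p) + p / N), so even at p = 0.95 you top out at 20x no matter how many cores you add. That's the ceiling a bigger pile of ALUs can't fix on its own.)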
     
  17. Voxilla

    Regular

    Joined:
    Jun 23, 2007
    Messages:
    708
    Likes Received:
    279
    I just looked for a representative video and found this:
    http://www.youtube.com/watch?v=mtHDSG2wNho

    It doesn't look any better than Crysis IMHO.
    Apart from the useless reflective floating spheres it would be hard to tell that this is raytracing.
     
    #17 Voxilla, Sep 23, 2009
    Last edited by a moderator: Sep 23, 2009
  18. Nick

    Veteran

    Joined:
    Jan 7, 2003
    Messages:
    1,881
    Likes Received:
    17
    Location:
    Montreal, Quebec
    That's completely irrelevant. Other chips simply can't raytrace such a scene in real-time.

    It indicates that Larrabee is vastly better at adapting to tasks other than the classic rasterization pipeline. But since the rasterization pipeline has also become highly programmable, I expect them to have certain advantages when rendering Crysis too. The only question is whether that translates into an advantage in absolute numbers or not.
     
  19. Grall

    Grall Invisible Member
    Legend

    Joined:
    Apr 14, 2002
    Messages:
    10,801
    Likes Received:
    2,170
    Location:
    La-la land
    I'd say it's completely irrelevant whether other chips can't raytrace such a scene in real-time if Larrabee's raytracing doesn't look one iota better (or run faster) than other chips' classic rasterizing - and looking at this video it DOESN'T.

    Apart from the surprisingly fluid framerate, this realtime rendering completely underwhelms. The lighting is extremely flat, surfaces look very flat and matte even close up, and the water looks incredibly sluggish and unrealistic (kind of what I'd imagine a sea of transparent mercury would look like).

    Perhaps. But if it gives no real-world visual improvements (other than more accurate reflections) then it's a pointless advantage. A decidedly ho-hum-IQ raytraced scene (at not incredibly high framerates) simply doesn't push any buttons for me.

    Geeking out on realtime raytracing is all good and well for some computing nerds. It might even find a decent niche in some professional market segment. But it won't last in the consumer market if it can't compete with traditional rasterizers in traditional rasterizing titles at the same or better cost/framerate as its competitors. Larrabee has yet to demonstrate it can do that, and considering how long ATI and NV have had to develop the traditional rasterizer (and game devs developing software for them), I'd say Intel has its work cut out for it...

    It's very interesting tech, but the more time passes without Larrabee delivering anything substantial, the more you doubt the viability of the entire concept.
     
  20. rpg.314

    Veteran

    Joined:
    Jul 21, 2008
    Messages:
    4,298
    Likes Received:
    0
    Location:
    /
    I don't give a rat's ass about programmability if the chip can't deliver better IQ @ 60 fps than another chip that runs D3Dx.y, whichever way you render it (raytracing, photon mapping, radiosity, rasterization, funny rasterization algorithms.........).

    LRB/GPUs/SwiftShader etc. have to win on better IQ/$/Watt @ 60 fps, PERIOD. Nobody cares about features/programmability if they aren't in D3Dx.y. Unless it delivers on the above metric, it won't sell in enough volume to sustain R&D on it, even if you are Intel. Good luck sustaining it on the HPC market alone.
     