22 nm Larrabee

Discussion in 'Architecture and Products' started by Nick, May 6, 2011.

  1. Nick

    Veteran

    Joined:
    Jan 7, 2003
    Messages:
    1,881
    Likes Received:
    17
    Location:
    Montreal, Quebec
    Hi all,

    Since Intel's 22 nm FinFET process technology will be production ready at about the same time as TSMC's 28 nm process, I was wondering if this means Intel is actually two generations ahead now.

    I think this could give them the opportunity to launch an improved Larrabee product. The inherent inefficiency of such a highly generic architecture at running legacy games could be compensated by the sheer process advantage. Other applications and games could potentially be leaps ahead of those running on existing GPU architectures (e.g. for ray-tracing, to name just one out of thousands).

    For consoles in particular, this could be revolutionary. Consoles need lots of flexibility to last for many years, and the software always has to be rewritten from scratch anyway, so it can make direct use of Larrabee's capabilities (instead of taking detours through restrictive APIs).

    It seems to me that the best way for AMD and NVIDIA to counter this is to create their own fully generic architecture based on a more efficient ISA.

    Thoughts?

    Nicolas
     
  2. Squilliam

    Squilliam Beyond3d isn't defined yet
    Veteran

    Joined:
    Jan 11, 2008
    Messages:
    3,495
    Likes Received:
    113
    Location:
    New Zealand
    Maybe we'll see a rebirth of 'Larrabee in consoles'? :mrgreen: Although I had thought they had abandoned it completely...
     
  3. HellFire_

    Newcomer

    Joined:
    May 15, 2005
    Messages:
    26
    Likes Received:
    0
    This is the latest Knights Ferry tech demo I could find: real-time ray tracing of Wolfenstein running on four Knights Ferry servers, and to be honest it still looks like shit...

    http://www.youtube.com/watch?v=XVZDH15TRro

    Based on that, I don't think Intel offers a viable solution for next-gen consoles. 22nm won't help that much, I think.
     
  4. 3dilettante

    Legend Alpha

    Joined:
    Sep 15, 2003
    Messages:
    8,118
    Likes Received:
    2,860
    Location:
    Well within 3d
    Larrabee in this context would probably be compared in terms of its rasterization rates, not ray-tracing. The ROI for ray-tracing at this point would be an exercise in how many stumbling blocks you can put in the way of a good process.
    Using Larrabee primarily as a software rasterizer would probably get more competitive results, given the workloads it would encounter.

    The FinFETs' benefits are interesting to consider. At the same process node, the 20-30% gain in power efficiency for a low-power device could negate the ~20% inefficiency of being x86 versus some other, less cumbersome ISA.

    GPUs would probably sit in a higher voltage realm, where the benefit is over 18% but probably less than the maximum 50% improvement over 32nm.
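
    To put rough numbers on that (my own illustrative midpoints, not figures from Intel): a ~20% x86 power tax is a ×1.20 multiplier on power for the same work, and a ~25% gain in power efficiency is a ×1/1.25 ≈ ×0.80 multiplier, so the net effect is about 1.20 × 0.80 ≈ 0.96, essentially a wash, with the process paying for the ISA.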

    Density-wise, it would be an improvement. Historically, I would characterize Intel's density figures in this particular segment as not being an advantage, even with a node advantage. Cayman, for example, is much denser than Sandy Bridge.
    Larrabee's density was pretty bad, but this may have been due to a lack of optimization in physical design.
    Without knowing how much Intel would try to optimize, even a 22nm Larrabee would still be left at a marked disadvantage. I would be mildly curious whether it would beat the densest 40nm GPUs.

    As an aside, Intel will have the distinction of having the first 22nm GPU in IB.

    The power advantages of the process would be notable if it were facing off against a similar generic manycore with only a different ISA.
    While the ISA probably contributed a measurable deficit to the power and performance gap, I have stated my suspicions before that it's really not the biggest factor.
    The possible longer maturation period for the novel process may delay the deployment of a chip of Larrabee's size.
     
  5. rpg.314

    Veteran

    Joined:
    Jul 21, 2008
    Messages:
    4,298
    Likes Received:
    0
    Location:
    /
    It is not at all obvious that a 22nm lrb will be a straightforward scale up of 45 nm lrb. They may very well choose to constrain the architecture or ditch x86 for the next rev.
     
  6. 3dilettante

    Legend Alpha

    Joined:
    Sep 15, 2003
    Messages:
    8,118
    Likes Received:
    2,860
    Location:
    Well within 3d
    Larrabee 3 could be very different.
    I'm not sure that, if there is to be a GPU card based on Larrabee, it would depart from Intel's x86-above-all-else mantra, and the returns on a new ASIC taking on established titans may not be too great.

    My question is whether Intel even wants to make a discrete card anymore, and it still seems to be pitting the onboard GPUs in its current and future CPUs against what should have been the introduction of on-die Larrabee(ish?) cores. A lack of consistency and support could lead to a repeat of the original embarrassment.
     
  7. rpg.314

    Veteran

    Joined:
    Jul 21, 2008
    Messages:
    4,298
    Likes Received:
    0
    Location:
    /
    While some kind of x86 presence seems realistic, it might not be fused with the vector cores at instruction stream level.
    The discrete market is obviously declining, but I'd expect a discrete product in the beginning, if only to lower the risk of shoving it onto their shiny CPUs and having both fail.
     
  8. Voxilla

    Regular

    Joined:
    Jun 23, 2007
    Messages:
    708
    Likes Received:
    279
    In my opinion Intel has completely abandoned the idea of producing Larrabee GPUs. If Larrabee were a viable architecture, Intel would have used it in Ivy Bridge, but they didn't.
     
  9. Voxilla

    Regular

    Joined:
    Jun 23, 2007
    Messages:
    708
    Likes Received:
    279
    And personally I would have loved for it to be used in Ivy Bridge, because for us rendering specialists it would be a dream.
     
  10. rpg.314

    Veteran

    Joined:
    Jul 21, 2008
    Messages:
    4,298
    Likes Received:
    0
    Location:
    /
    They did claim a 50+ core part would be out on 22nm.
     
  11. moozoo

    Newcomer

    Joined:
    Jul 23, 2010
    Messages:
    109
    Likes Received:
    1
    >They did claim a 50+ core part would be out on 22nm.
    But it's not a GPU as such, purely a compute accelerator to compete with Nvidia's Teslas. I don't see how that can be commercially viable without a mass-market GPU product line to pay for the development costs.
     
  12. rpg.314

    Veteran

    Joined:
    Jul 21, 2008
    Messages:
    4,298
    Likes Received:
    0
    Location:
    /
    Entirely depends on just how competitive their renderer is.
     
  13. Voxilla

    Regular

    Joined:
    Jun 23, 2007
    Messages:
    708
    Likes Received:
    279
    Also depends on how deep their pockets are, think Itanium.
     
  14. rpg.314

    Veteran

    Joined:
    Jul 21, 2008
    Messages:
    4,298
    Likes Received:
    0
    Location:
    /
    Well, then they would definitely release a discrete part. :)
     
  15. Pressure

    Veteran Regular

    Joined:
    Mar 30, 2004
    Messages:
    1,329
    Likes Received:
    261
    Didn't they basically say that they carved it up as experience for future IGP designs?

    However, a dedicated card that could be used for GPGPU by professionals would be grand. Many professional applications (video editing, photography, etc.) could really use the power.

    Nothing I hate more than waiting for a render to complete.
     
  16. Voxilla

    Regular

    Joined:
    Jun 23, 2007
    Messages:
    708
    Likes Received:
    279
    Yes, but it won't be a consumer product.
     
  17. Farid

    Farid Artist formerly known as Vysez
    Veteran Subscriber

    Joined:
    Mar 22, 2004
    Messages:
    3,844
    Likes Received:
    108
    Location:
    Paris, France
    That Wolfenstein demo is the ugliest graphics-related thing I've seen in years... The last time I saw such an ugly display of misguided graphics technology was, well, Intel's ray-traced Quake 4 demo.

    Seriously, wanting to show that ray-tracing is possible on their architecture is nice, but their obsession with showing only that is quite puzzling to me. What's the point of showing an unrealistically, perfectly reflective car or chandelier when it doesn't look good and, above all, the surrounding environments look worse than many PS2-era games?

    Couldn't they put together a demo that displays modern, good-looking environments and models using all the raster effects we expect from a modern-day DX11/OGL3 card, with, as a plus, some tasteful ray-traced effects where it makes sense to have them? And more importantly, the ray-traced effects have to look better than the current high-resolution cube-mapped reflections. Nobody in the gaming industry cares about physically correct light bounces; graphics rendering is about make-believe end results.

    With Itanium, Intel had tons of markets and partnerships (ISVs and solution vendors) they could leverage to push their chips. For a Larrabee GPU, they'd have to pursue new markets (Graphics workstations, for one).
     
  18. Nick

    Veteran

    Joined:
    Jan 7, 2003
    Messages:
    1,881
    Likes Received:
    17
    Location:
    Montreal, Quebec
    It doesn't make sense to have x86 cores with different features. It looks like they plan to add LRBni-type instructions to AVX, though.

    AVX is specified to support register widths up to 1024 bits. So they could relatively easily execute 1024-bit vector operations on the current 256-bit execution units, in 4 cycles (throughput). The obvious benefit of this is power efficiency. Then all that's left to add is gather/scatter support, and the IGP can be eliminated, leaving a fully generic architecture that is both low latency and high throughput. Larrabee in your CPU socket, without compromises.
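
    As a rough sketch of what that gather support buys (my illustration, using the _mm256_i32gather_ps intrinsic that later arrived with AVX2; the scalar loop is what you're stuck with without it):

    #include <immintrin.h>

    /* Without hardware gather: eight separate scalar loads per vector. */
    static void gather_scalar(const float *table, const int *idx, float *out)
    {
        for (int i = 0; i < 8; ++i)
            out[i] = table[idx[i]];
    }

    /* With hardware gather: a single instruction fetches all eight lanes. */
    static __m256 gather_avx2(const float *table, const int *idx)
    {
        __m256i vidx = _mm256_loadu_si256((const __m256i *)idx);
        return _mm256_i32gather_ps(table, vidx, 4); /* scale = sizeof(float) */
    }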
     
  19. CarstenS

    Veteran Subscriber

    Joined:
    May 31, 2002
    Messages:
    4,796
    Likes Received:
    2,054
    Location:
    Germany
    The way you're putting it makes it sound so easy… :)
     
  20. CouldntResist

    Regular

    Joined:
    Aug 16, 2004
    Messages:
    264
    Likes Received:
    6
    Well, Larrabee without compromises wouldn't be a Larrabee. x86 as a platform for GPGPU was a compromise to begin with.

    And then there's the SMT arrangement. Larrabee's 4-thread round-robin already seemed like a compromise compared to contemporary GPUs' hundreds of threads, scheduled out of order.

    In the picture you painted, Larrabee's original compromise is compromised even more. Now all you have is 2-way Hyper-Threading and a huge OoO engine designed for single-threaded workloads. Hopefully, with smart coding, you can employ its resources to execute multiple loop iterations concurrently, if you dance around the x86 memory model with enough care... A compromise, and not a pretty one.
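
    Here's a rough sketch of the kind of thing that means in practice (my own toy kernel, nothing more): break the single dependency chain into several independent accumulators so the OoO engine can keep multiple iterations in flight, and since nothing is stored across iterations the x86 memory model stays out of the way.

    /* Hypothetical dot product, unrolled by four to expose ILP to the OoO core. */
    static float dot(const float *a, const float *b, int n)
    {
        float s0 = 0.f, s1 = 0.f, s2 = 0.f, s3 = 0.f;
        int i = 0;
        for (; i + 4 <= n; i += 4) {            /* four independent chains in flight */
            s0 += a[i + 0] * b[i + 0];
            s1 += a[i + 1] * b[i + 1];
            s2 += a[i + 2] * b[i + 2];
            s3 += a[i + 3] * b[i + 3];
        }
        for (; i < n; ++i)                      /* remainder */
            s0 += a[i] * b[i];
        return (s0 + s1) + (s2 + s3);
    }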

    I guess SIMD width isn't everything.
     