Intel Larrabee set for release in 2010

Discussion in 'Architecture and Products' started by B3D News, Jun 22, 2007.

  1. BRiT

    BRiT (>• •)>⌐■-■ (⌐■-■)
    Moderator Legend Alpha

    Joined:
    Feb 7, 2002
    Messages:
    20,551
    Likes Received:
    24,483
     I meant in its current incarnation. :wink:
     
  2. WaltC

    Veteran

    Joined:
    Jul 22, 2002
    Messages:
    2,710
    Likes Received:
    8
    Location:
    BelleVue Sanatorium, Billary, NY. Patient privile
     Easy answer, and one that bears constant repetition: the point of such statements has nothing to do with what the state of things will be like in three years; rather, it has to do with how the company is perceived by investors *today*. High-tech companies battle as much for mind share, never forget, as they do for market share.

     As you correctly surmise, talking in glowing, hyped-up terms about technology you are not even close to having finished, let alone close to being able to ship, has zero value for today's consumer. Nor is it especially informative of the actual state of things to come, as many, many such "leaked" product development cycles are discarded before they were "scheduled" to arrive, in deference to the "something else" that actually ships at the appointed time.

     The point of all such predictive chatter is to hype the company today, and whether said technology actually ships is often if not always completely beside the point. While most high-tech companies do this sort of thing from time to time, Intel seems to me to be one of the worst offenders in this regard. Basically, these days when I hear chatter that "Intel says this is what it will be doing in three years," I just let it go in one ear and right out the other...;)
     
  3. Geo

    Geo Mostly Harmless
    Legend

    Joined:
    Apr 22, 2002
    Messages:
    9,116
    Likes Received:
    215
    Location:
    Uffda-land

     Well, duh! Its current incarnation is only 3.5 months old! :razz:
     
  4. Geo

    Geo Mostly Harmless
    Legend

    Joined:
    Apr 22, 2002
    Messages:
    9,116
    Likes Received:
    215
    Location:
    Uffda-land
    While I don't disagree with this, I think it isn't always about investors. If you need major support elsewhere in the chain you need to rally/interest/educate those other sectors too, and that also takes time. Whether it be ISVs, OS API, tools makers, whatever.
     
  5. Hannibal

    Newcomer

    Joined:
    Mar 19, 2007
    Messages:
    16
    Likes Received:
    0
    I'll respond to Arun's points about JIT later, but I wanted to throw this related point out there.

     It seems to me that Arun and others are correct to surmise that Larrabee won't really be competitive with whatever NVIDIA has out at launch when it comes to traditional raster rendering. In fact, I sort of take that as a given, simply because in spite of whatever generalization trajectory NVIDIA is now on, their leading-edge part will always be more specialized than Larrabee.

     The promise of Larrabee as a GPU, it seems to me, is in the real-time ray tracing stuff that Intel is developing, and in how that combines with raster and other techniques to yield the possibility of new types of eye candy (or physics, AI, etc.). At least, Intel spent enough time hyping RTRT at their Research@Intel day that this is clearly the direction they're headed with Larrabee.

     So what I envision as the post-Larrabee horserace won't so much be an Intel Larrabee GPU vs. an NVIDIA GXX GPU duking it out for FPS scores on the exact same game engine, as it will be Larrabee (which is "GPU" only in scare quotes) plus whatever "software" engine someone builds with it vs. a bona fide, raster-rendering GPU from NVIDIA plus the state-of-the-art hardware-accelerated raster engine at the time.

    My ultimate point is this: I'm betting that when people look at the kinds of games you can build with Larrabee vs. the kinds of games you can build with the NVIDIA GXX, they may actually like the Larrabee option.
     
  6. Tim Murray

    Tim Murray the Windom Earle of mobile SOCs
    Veteran

    Joined:
    May 25, 2003
    Messages:
    3,278
    Likes Received:
    66
    Location:
    Mountain View, CA
    I think you might be putting some misplaced faith in Larrabee--it has to win meaningful market share to get developers to write for it (since I'm guessing it's not just going to be Direct3D), but there have to be games for it in order for anyone to buy it as a graphics board.

    Also, it's not completely impractical for a GPU to be used as a raytracer, and that will probably be a lot more true by the timeframe we're talking about for Larrabee (which I've put at absolutely no earlier than Q1 09, more likely Q3 or Q4).

     I'm not trying to pooh-pooh Larrabee, but it's going to have to be so much better than any rasterizer out there to have a chance of catching on at this point.
     
  7. Arun

    Arun Unknown.
    Legend

    Joined:
    Aug 28, 2002
    Messages:
    5,023
    Likes Received:
    302
    Location:
    UK
    Regarding raytracing, I'm really not convinced Larrabee will have any advantage against G100/R800/R900. Last I heard, most of its processing power is still SIMD, and has all the related disadvantages. How is that any different from G80/R600? And worse, why would anyone believe that this will be superior to true next-gen GPUs?

    I don't think Intel's focus on raytracing is because they are confident they will be incredibly good at it. Quite on the contrary, it is because they predict NVIDIA and AMD will be bad at it. But why would anyone believe that? Unless Larrabee has fixed-function units focused on raytracing, that is completely ridiculous.

     There is nothing magical about Larrabee that makes it superior for raytracing to other SIMD processors out there, besides the fact that they *might* have some fine-grained control mechanisms in addition to the SIMD processing power. But what's to prevent NVIDIA and AMD from adding that to their GPUs in due time? I'm not even convinced it matters so much, anyway.

    Then think about something else: raytracing doesn't shade your scene. It replaces one specific and small part of the pipeline, which is rasterization. And that's it. Even if it was MUCH faster at raytracing for some kind of mystical reason, it would still be MUCH slower at shading. So unless your definition of great graphics is chrome (read: full reflection) everywhere, and I'm sure it is for some people, your super-fast raytraced scene is just going to look like shit. Good luck getting momentum with both developers and gamers in that case, except for niche games that nobody will ever play anyway.

     So either that's not Intel's strategy, or they should give up on it immediately, before it is too late and the entire world mocks them for the next 5000 years. Well, I guess they're used to that with Itanium already, but at least they had contractual obligations as an excuse there... Note that I'm only talking about graphics here; HPC is another question entirely, and the prospects there are obviously a lot more interesting, but they depend a lot on Intel's execution, and on NVIDIA's and AMD's as well.

    If Intel does integrate efficient fixed-function units for graphics, things might be very different, but we'll see about that. The initial diagrams did indicate that, after all, although it wasn't clear what they were all about... So while I'm being quite harsh here, do keep in mind that it is all with a big 'IF', and based on the correctness of your assumptions.
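     A toy sketch of the SIMD disadvantage in question (my own illustration with made-up costs, not anything from Intel's disclosures): when the rays in a SIMD packet disagree on a branch, the machine has to execute both sides with some lanes masked off, so a divergent packet pays for every path taken.

```c
#include <assert.h>

/* Toy cost model: a SIMD machine traces rays in packets of `width`
 * lanes. If the lanes disagree on a branch (e.g. which material to
 * shade), the packet executes BOTH sides with inactive lanes masked
 * off, paying cost_a + cost_b instead of one or the other. */
static int packet_cost(int lanes_taking_a, int width,
                       int cost_a, int cost_b)
{
    if (lanes_taking_a == width) return cost_a;   /* fully coherent */
    if (lanes_taking_a == 0)     return cost_b;   /* fully coherent */
    return cost_a + cost_b;                       /* divergent: pay both */
}
```

     Coherent primary rays mostly agree, so they map well onto wide SIMD; bounced and shadow rays scatter, which is what drags utilization down on any wide-SIMD design, Larrabee's included.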
     
  8. nAo

    nAo Nutella Nutellae
    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    4,400
    Likes Received:
    440
    Location:
    San Francisco
     They spent a lot of time hyping RT, but they really didn't tell us why we should get hyped about it. I don't think gamers will switch to Larrabee just because they can get local reflections here and a bit of dynamic ambient occlusion there, given that rasterization can fake both right now.
     Moreover, I wouldn't be surprised if in 2-3 years NVIDIA and AMD GPUs allow us to implement an efficient ray tracer on them, even though I'd like to see some non-uniform rasterization support from them.

     I think you need to give some practical examples here, because I don't see what RT will enable us to do that we can't do right now (or in 2 years).

    Marco
     
  9. nutball

    Veteran Subscriber

    Joined:
    Jan 10, 2003
    Messages:
    2,496
    Likes Received:
    983
    Location:
    en.gb.uk
    I wonder what Microsoft's view on this battle might be? Given their vested interest in owning and controlling The Platform through DirectX I can't see them being happy to sit and watch Intel and NVIDIA duke it out, then siding with the winner. What you seem to be describing is a world in which NVIDIA provides hardware acceleration for MS's API, whilst Intel promotes an entirely different way of programming game engines (potentially bypassing MS's platform entirely).
     
  10. psurge

    Regular

    Joined:
    Feb 6, 2002
    Messages:
    955
    Likes Received:
    52
    Location:
    LA, California
    Hi nutball,

    This is OT - I'm just starting to play with OpenMP a bit. There's one area that's really confusing me atm: how are OpenMP runtime errors communicated to a user program? For instance, say the runtime fails to create the desired number of threads, or a mutex etc... are there any standard ways to hook in an error handler (I'm not seeing any), or does the program just fall over and die?

    So I guess the whole error handling aspect of OpenMP and actually also OpenMPI seems a bit MIA to me... is this just newbness on my part?
     
  11. nutball

    Veteran Subscriber

    Joined:
    Jan 10, 2003
    Messages:
    2,496
    Likes Received:
    983
    Location:
    en.gb.uk
     Basically, the OpenMP standard presumes that the compiler will map the OpenMP directives onto an underlying threading infrastructure, and broadly speaking it hands off all the exception-handling issues to that infrastructure (without really giving you, the application programmer, a chance to intercept exceptions in any standard-compliant way).

     In the environments I've worked in, the OpenMP run-time presumes it's inconceivable that one would run out of resources such as threads or mutexes (which is a 99%-correct assumption), and dies horribly if it happens that you do.
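     To make that concrete, here's a minimal C sketch (the `granted_threads` helper is my own naming, not anything from the OpenMP spec) of about the only portable check the standard gives you: request a team size, then read back how many threads the runtime actually granted inside the parallel region. There is no error handler to hook; if thread creation fails outright, most runtimes simply abort.

```c
#ifdef _OPENMP
#include <omp.h>
#endif

/* Ask for `want` OpenMP threads and report how many the runtime
 * actually granted. OpenMP defines no error callback, so reading
 * omp_get_num_threads() inside the parallel region is the only
 * portable way to notice you were given fewer threads than asked. */
static int granted_threads(int want)
{
#ifdef _OPENMP
    int got = 1;
    omp_set_num_threads(want);
    #pragma omp parallel
    {
        #pragma omp single
        got = omp_get_num_threads();
    }
    return got;
#else
    (void)want;        /* serial build: OpenMP pragmas are ignored */
    return 1;
#endif
}
```

     Compiled without OpenMP support, the #ifdef makes it degrade to a serial answer of 1, which is also roughly how gracefully the standard lets you handle the failure case.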
     
  12. Geo

    Geo Mostly Harmless
    Legend

    Joined:
    Apr 22, 2002
    Messages:
    9,116
    Likes Received:
    215
    Location:
    Uffda-land
     I still want to know what they are going to do about API support. For all that the "as soon as x86 gets competitive, they win because of their huge tail of infrastructure" argument works for them on GPGPU-type apps, it works against them on gaming/graphics apps if they aren't using the DX infrastructure. Have they started talking to MS about integrating what they need?
     
  13. Dave Baumann

    Dave Baumann Gamerscore Wh...
    Moderator Legend

    Joined:
    Jan 29, 2002
    Messages:
    14,090
    Likes Received:
    694
    Location:
    O Canada!
    Why presume that it wouldn't / couldn't use DX?
     
  14. Bob

    Bob
    Regular

    Joined:
    Apr 22, 2004
    Messages:
    424
    Likes Received:
    47
     The argument used above is that Larrabee will make NVIDIA and AMD obsolete because it's x86 and it can do ray tracing. As soon as you go DX, you lose both of these "advantages". However, to be compelling for gaming, Larrabee must support DX-next.
     
  15. Dave Baumann

    Dave Baumann Gamerscore Wh...
    Moderator Legend

    Joined:
    Jan 29, 2002
    Messages:
    14,090
    Likes Received:
    694
    Location:
    O Canada!
     But that's framing it as an either/or situation. Can't you foresee a scenario where it could offer great HPC capabilities, good raytracing capabilities, and passable/acceptable DX rendering?
     
  16. Geo

    Geo Mostly Harmless
    Legend

    Joined:
    Apr 22, 2002
    Messages:
    9,116
    Likes Received:
    215
    Location:
    Uffda-land
     I'm not presuming it can't use DX at all. I asked the question because I don't see any reason to presume it *can*, or that if it does, the DX support will be the competitive advantage they are relying on. Most pointedly, I want to know how they intend ISVs to integrate the ray-tracing stuff on the software side... thru bog-standard x86, thru DX, thru some proprietary API/SDK/etc. that leverages x86, or thru some proprietary software stuff that leverages something proprietary on the hardware side? (Did I miss an option? :smile: )
     
  17. Rys

    Rys Graphics @ AMD
    Moderator Veteran Alpha

    Joined:
    Oct 9, 2003
    Messages:
    4,182
    Likes Received:
    1,579
    Location:
    Beyond3D HQ
    This is entirely key to any headway this architecture makes in the consumer graphics market. It's also pretty clear that we just don't know enough about the architecture to even guess at its performance at this point. A huge amount of detail needs to be fleshed out before we can find a fit for it in the graphics space.
     
  18. Geo

    Geo Mostly Harmless
    Legend

    Joined:
    Apr 22, 2002
    Messages:
    9,116
    Likes Received:
    215
    Location:
    Uffda-land
    Maybe they'll sell first gen as a combo CPU and ray-tracing accelerator. Sort of an integrated PhysX. . .and you still use your Radeon/GeForce for the DX stuff.

     When I first started to write that, I was half playing around. The more I think about it, though, the more it seems to me it might actually be the best transition strategy open to them as they build ISV support. It leverages the enthusiast market's penchant for dropping large sums on niche advantages (at first, anyway), while not having to compete head-to-head on DX performance with the best NV and AMD can provide in the timeframe. It also allows (encourages, even) ISVs to build in such support as an add-on/switch-on for the enthusiasts, which is likely to be a necessary strategy for them as well as they build their market penetration.
     
  19. 3dilettante

    Legend Alpha

    Joined:
    Sep 15, 2003
    Messages:
    8,579
    Likes Received:
    4,799
    Location:
    Well within 3d
    Do you mean using Larrabee as a gamer's CPU?
    Larrabee would run right into Gesher with the current time frame.
    Intel's own product lines would bump heads.

    It's not a new situation, but one that would more likely threaten Larrabee (niche maybe performance leader in certain workloads) than Gesher (broad-market, broadly consistent workload performance on new and old apps).
     
  20. WaltC

    Veteran

    Joined:
    Jul 22, 2002
    Messages:
    2,710
    Likes Received:
    8
    Location:
    BelleVue Sanatorium, Billary, NY. Patient privile
     Right--it isn't "only" about investors, although this kind of claptrap has proven itself very effective "analyst bait" many, many times in the past. Usually, when you see an analyst talking in glowing terms about some future technology that isn't close to being finished or close to production, you know right away that this kind of PR is the source of his so-called "inside information." What's in it for the analyst, of course, is that when people read this sort of stuff, and believe it--which many of them do--then they go out and buy stock based on the "inside information" furnished by the analyst. This has the effect of personally and professionally enriching the analyst's reputation, which is exactly why this sort of thing is done regularly.

     Whether or not the projected product ever ships at the appointed time becomes entirely secondary and amounts to little or nothing in the end, as few will remember who said what about a product three years before it was *unofficially* scheduled to see the light of day.

     I can't say I agree with the "education" idea, though, as I think it's difficult to educate people on something which does not, and may never, exist--although it's certainly true that there's a lot of "educational info" out there on things like UFOs, etc., for the people inclined to think they are real...;)
     

  • About Us

    Beyond3D has been around for over a decade and prides itself on being the best place on the web for in-depth, technically-driven discussion and analysis of 3D graphics hardware. If you love pixels and transistors, you've come to the right place!

    Beyond3D is proudly published by GPU Tools Ltd.