GPGPU and 3D luminaries join 3D graphics heavyweights

Discussion in 'Rendering Technology and APIs' started by B3D News, Jan 7, 2008.

  1. B3D News

    B3D News Beyond3D News
    Regular

    Joined:
    May 18, 2007
    Messages:
    440
    Likes Received:
    1
    A couple of superstars in their own spaces have made newsworthy professional leaps. GPGPU pioneer and attractive hire for both companies we're about to mention, Mike Houston, and long-time (Direct)3D heavyweight, Tom Forsyth, have both taken up key positions at two 3D graphics giants.


    Read the full news item
     
  2. Geo

    Geo Mostly Harmless
    Legend

    Joined:
    Apr 22, 2002
    Messages:
    9,116
    Likes Received:
    215
    Location:
    Uffda-land
    Whee, "The Graphics Adults" are continuing to mosey on into Intel VCG.

    The "street cred" quotient over there is starting to look like critical mass. It's got to help in hiring a full team of devs when you have names like TomF and Matt Pharr around... the mid-level guys they need to hire in numbers to actually make their efforts go are more likely to feel this project is for real, and not just a likely waste (even if well recompensed) of two or three years of their career, with those guys on board.

    Interestingly, TomF's blog includes a short review of Deano's Raytracing vs Rasterization article, and TomF agrees with him. Conjure with that.

    And congrats to Mike & AMD (for landing him), of course!
     
  3. mhouston

    mhouston A little of this and that
    Regular

    Joined:
    Oct 7, 2005
    Messages:
    344
    Likes Received:
    38
    Location:
    Cupertino
    "Luminary", "superstar" eh? I'm not sure I'd go that far, at least in my case, but I hope that I have helped push the fields of parallel computation, GPU architecture, and GPGPU. Don't worry, I'll still roam the forums here and try to answer what questions I can, to the best of my ability.

    Congrats on the Intel job Tom and good luck.
     
  4. Demirug

    Veteran

    Joined:
    Dec 8, 2002
    Messages:
    1,326
    Likes Received:
    69
    Additionally, Tom was one of the last guys in the PC business still working on a software rasterizer (Pixomatic).
     
  5. Sound_Card

    Regular

    Joined:
    Nov 24, 2006
    Messages:
    936
    Likes Received:
    4
    Location:
    San Antonio, TX
    Congratulations to you and Tom.

    Can you comment in any sort of general sense on what your vision is with AMD?
     
  6. 3dilettante

    Legend Alpha

    Joined:
    Sep 15, 2003
    Messages:
    8,579
    Likes Received:
    4,799
    Location:
    Well within 3d
    The link to TomF's blog doesn't seem to work for me.
     
  7. nAo

    nAo Nutella Nutellae
    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    4,400
    Likes Received:
    440
    Location:
    San Francisco
    I thought Pixomatic was a solo effort by Abrash
     
  8. Geo

    Geo Mostly Harmless
    Legend

    Joined:
    Apr 22, 2002
    Messages:
    9,116
    Likes Received:
    215
    Location:
    Uffda-land
    Works fine here.
     
  9. Geo

    Geo Mostly Harmless
    Legend

    Joined:
    Apr 22, 2002
    Messages:
    9,116
    Likes Received:
    215
    Location:
    Uffda-land
    This is why Bill Gates left Harvard early. It's easier to become a big fish in a big pond if you're first a big fish while the pond is small. :smile:
     
  10. SteveHill

    Newcomer

    Joined:
    Nov 11, 2003
    Messages:
    61
    Likes Received:
    0
    It was initially a joint effort with Mike Sartain (see http://www.ddj.com/architect/184405765 etc), with Abrash handling most of the lower level bits IIRC.

    I believe Tom has been contributing to a later iteration.
     
  11. Simon F

    Simon F Tea maker
    Moderator Veteran

    Joined:
    Feb 8, 2002
    Messages:
    4,563
    Likes Received:
    171
    Location:
    In the Island of Sodor, where the steam trains lie
    Tom has got to be one of the few people in the world who can manage 1000wpm when typing :)
     
  12. Geo

    Geo Mostly Harmless
    Legend

    Joined:
    Apr 22, 2002
    Messages:
    9,116
    Likes Received:
    215
    Location:
    Uffda-land
    :lol: Apparently TomF has noticed that we noticed.

    http://home.comcast.net/~tom_forsyth/blog.wiki.html See "Larrabee Decloak".

    I do feel the need to add that while Beyond3D's ninjas are rightly feared far and wide, the reality is that TomF himself publicly disclosed, on a page of his own site, that he was joining Intel to work on Larrabee. We just happened to notice, and being the big mouths we are, of course had to share. :smile:

    This part is particularly interesting:

    This, of course, is where "street cred" comes in. Most people in the graphics world will take TomF at his word for that kind of thing.

    So, I guess we'll see. At any rate, for those of us *not* on the inside, but graphics enthusiasts right down to our little high-tech socks, it is indeed a comfort to see some of the names that have been disclosed in recent months as contributing to the Larrabee project.

    What does the fact that TomF both seems to agree with Deano's analysis of Raytracing vs Rasterization, and assures us that Larrabee won't suck at conventional rendering actually mean? Well, it would seem to mean that Larrabee is about a good deal more than Raytracing performance. I'm sure we'd all like to hear more about that as we manage to tease and needle details bit by bit out of Intel and friends. :razz:
     
  13. nAo

    nAo Nutella Nutellae
    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    4,400
    Likes Received:
    440
    Location:
    San Francisco
    The fact that Intel is still hiring sw rasterization experts makes me wonder what those guys are cooking..
     
  14. 3dilettante

    Legend Alpha

    Joined:
    Sep 15, 2003
    Messages:
    8,579
    Likes Received:
    4,799
    Location:
    Well within 3d
    On the one hand, my personal philosophy is to not expect much from revolutions in the making when their architects are in the "figuring out what to do" phase.

    On the other, my earlier skepticism concerning Larrabee's being inferior to dedicated GPUs rested in part on the assumption that a next generation of an AMD/Nvidia high-end chip wasn't just a reinterpretation of last year's (or just no high-end at all, depending on the manufacturer).

    I still expect growing pains for Larrabee I, though its successor should do better.
    GPGPU still looks set to be marginalized, though consumer graphics seems like a tough nut to port over to Larrabee in a time frame of only a few years.
     
  15. Geo

    Geo Mostly Harmless
    Legend

    Joined:
    Apr 22, 2002
    Messages:
    9,116
    Likes Received:
    215
    Location:
    Uffda-land
    What are you thinking, nAo? That they might do DX10 entirely in software other than texture samplers?

    Well, that's a bit of wisdom to be sure. My point above is I'm a little more comforted today than, say, when we published the Carmean presentation, about who is "trying to do the figuring out".


    That's always been a question for long-time observers of Intel... will they have the stick-to-it-iveness to hang around and make rev 2 and rev 3 better, and in a timely fashion?

    There's a mountain to climb there, no question, and they can't climb it all at once. Everything we know about graphics history of the last 12 years or so tells us that conclusively.
     
  16. Killer-Kris

    Regular

    Joined:
    May 20, 2003
    Messages:
    540
    Likes Received:
    4
    Wouldn't a successor have to begin development long before its predecessor was released? And would they have been able to identify weaknesses in time to fix them?
     
  17. NocturnDragon

    Regular

    Joined:
    Feb 6, 2002
    Messages:
    393
    Likes Received:
    17
    (Bold mine)
    Now the question is... what are they comparing Larrabee to? G80? G92? Future theoretical performance extrapolated from current parts, taking into account Larrabee's ETA?
     
  18. 3dilettante

    Legend Alpha

    Joined:
    Sep 15, 2003
    Messages:
    8,579
    Likes Received:
    4,799
    Location:
    Well within 3d
    Larrabee has a few things going for it that I feel should be sufficient to carry it through a revision or two.

    CPU designers admit that the utility of symmetric multicore drops to near zero after 4 standard cores, outside of certain lucrative but still limited market segments.
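
    That diminishing-returns point is essentially Amdahl's law: the serial fraction of a workload caps the speedup no matter how many cores you add. A quick sketch, where the 90%-parallel fraction is purely an assumed figure for illustration:

    ```python
    # Amdahl's law: speedup(n) = 1 / ((1 - p) + p / n), where p is the
    # parallelizable fraction of the workload and n the core count.
    def amdahl_speedup(p, n):
        return 1.0 / ((1.0 - p) + p / n)

    # With an assumed 90%-parallel workload, quadrupling from 4 to 16
    # cores buys nowhere near a 4x improvement.
    s4 = amdahl_speedup(0.9, 4)    # ~3.08x
    s16 = amdahl_speedup(0.9, 16)  # ~6.40x
    ```

    Even at 90% parallel, the jump from 4 to 16 cores yields barely 2x, which is the arithmetic behind "near zero utility after 4 standard cores" for ordinary code.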

    Intel admits there is a need for different methods to further the usability of massively multicore architectures.
    In that respect, the work put into Larrabee is work Intel needs to do anyway.

    Larrabee's core design may also share the design philosophy of Intel's Silverthorne core: in-order, 2-wide x86 (the vector unit aside). The core itself is simple enough that design costs are small, with the big difference being in the cache and uncore: areas Intel needs to improve anyway.

    I think that means that a fair amount of Larrabee's expense is incremental to things Intel is doing anyway. Better, it's an attempt to get revenue on work that would otherwise not bear fruit for many more years.


    The possible upside?
    Larrabee's very likely to be a very strong competitor for GPGPU. In addition, it will compete in HPC in a number of areas that AMD's Bulldozer was meant to go after (according to early slides showing its prowess in HPC).

    With one design effort, Intel furthers massive multicore design, hurts the revenues of GPGPU, makes some money in HPC and possibly graphics, and sucker-punches AMD's next-gen design effort on two fronts (Three, if we discover Silverthorne is distantly related to it and hurts Bobcat. Via gets nailed too).

    Even if Larrabee fails in consumer graphics, it should be of interest to HPC. Even if it fails there, the other benefits would be enough that Intel could swallow a few weak iterations and still do well as a whole.

    This might be an Itanium-type situation. As poorly as it did early on, the product is now profitable on an operating basis, and it helped kill off several wobbly RISC lines in the high end. POWER and SPARC are essentially the only RISC lines remaining outside embedded and telecom. SPARC isn't growing, and its new products are in a niche (one that Larrabee could also target...).
    IBM is working seriously hard to maintain a leap-frog relationship with a design that is far more intensive on all levels of the system than Intel's.

    The product should be taped out long before the successor's design is frozen.
    They'll have some good ideas from running engineering silicon where some improvements could be made.
    Larrabee II should also benefit from the software and driver snafus that might pop up for the design that first wades into the real world.
     
  19. nAo

    nAo Nutella Nutellae
    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    4,400
    Likes Received:
    440
    Location:
    San Francisco
    IMHO it wouldn't be such a good idea; rasterization per se doesn't map well to CPUs.
    Though I wouldn't be surprised if Larrabee has a hw rasterizer but not a dedicated setup unit, as it can be implemented in software (does Larrabee support double precision math? ...)
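
    For reference, the usual way setup and rasterization get done in software is with half-space edge functions: "setup" computes three edge equations per triangle, and coverage is just a sign test per sample. A minimal Python sketch (the tiny 4x4 grid and all names are illustrative, nothing Larrabee-specific):

    ```python
    # Minimal half-space triangle rasterizer. "Setup" amounts to the three
    # edge functions; a sample is inside the triangle when all three are >= 0
    # (assuming counter-clockwise winding).
    def edge(ax, ay, bx, by, px, py):
        # Signed area of parallelogram (a, b, p); the sign says which side p is on.
        return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

    def rasterize(v0, v1, v2, width, height):
        covered = []
        for y in range(height):
            for x in range(width):
                px, py = x + 0.5, y + 0.5  # sample at pixel centers
                w0 = edge(*v1, *v2, px, py)
                w1 = edge(*v2, *v0, px, py)
                w2 = edge(*v0, *v1, px, py)
                if w0 >= 0 and w1 >= 0 and w2 >= 0:
                    covered.append((x, y))
        return covered

    # A counter-clockwise triangle covering the lower-left half of a 4x4 grid.
    pixels = rasterize((0, 0), (4, 0), (0, 4), 4, 4)  # 10 pixels covered
    ```

    The point of contention is that the inner loop is embarrassingly data-parallel (wide vectors eat it alive) while the per-triangle setup is scalar and branchy, which is one reason a small fixed-function rasterizer next to software setup is a plausible split.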
     
  20. 3dilettante

    Legend Alpha

    Joined:
    Sep 15, 2003
    Messages:
    8,579
    Likes Received:
    4,799
    Location:
    Well within 3d
    Larrabee should support x86, and some slides show it as a system processor, which hints at full support.

    Slides on Larrabee indicate it should be capable of 8-16 DP operations per clock using SSE. I'm not sure why there's a range; perhaps it hadn't been decided when the slides were created, or throughput differs depending on whether the code uses standard SSE or the expanded vector set.
    Aside from that, it was stated to support 2 DP non-SSE ops.
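
    Taking the slide figures at face value, peak DP throughput is just ops/clock x cores x clock. In the sketch below, only the 8-16 ops/clock range comes from the slides; the 16-core count and 2 GHz clock are hypothetical placeholders, not disclosed Larrabee figures:

    ```python
    # Peak DP GFLOPS = ops_per_clock * cores * clock_ghz.
    # Only the 8-16 ops/clock range is from the slides; the core count and
    # clock below are hypothetical placeholders, not disclosed figures.
    def peak_dp_gflops(ops_per_clock, cores, clock_ghz):
        return ops_per_clock * cores * clock_ghz

    low = peak_dp_gflops(8, 16, 2.0)    # 256.0 GFLOPS at the low end
    high = peak_dp_gflops(16, 16, 2.0)  # 512.0 GFLOPS at the high end
    ```

    At those assumed numbers the slides' range alone spans a factor of two, which is perhaps why they hedge.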

    This extra support does point to the greatest internal threat to Larrabee: the everything-and-the-kitchen-sink syndrome that hit the IA64 Merced.

    If McKinley is an indicator, however, Larrabee II should come about after the "we can do anything" phase ends and designers can focus on what it does well. This is where things go from pie-in-the-sky to interesting.
     