Intel discrete graphics chips confirmed

Discussion in 'Beyond3D News' started by Tim Murray, Jan 22, 2007.

  1. Killer-Kris

    Regular

    Joined:
    May 20, 2003
    Messages:
    540
    Likes Received:
    4
    And why can't they?

    I haven't seen anything that would lead me to believe it won't be pretty good at more generic parallel work, at the very least competitive with the likes of Niagara2 and Cell, or the derivatives available at the time.
     
  2. nutball

    Veteran Subscriber

    Joined:
    Jan 10, 2003
    Messages:
    2,154
    Likes Received:
    483
    Location:
    en.gb.uk
From what I've seen so far I have to agree. It looks very much like a solution looking for a problem.
     
  3. Arun

    Arun Unknown.
    Moderator Legend Veteran

    Joined:
    Aug 28, 2002
    Messages:
    5,023
    Likes Received:
    299
    Location:
    UK
    Niagara2, even if it has some extra FP, is still not FP-focused; so I'd suspect Larrabee would be more aimed towards CELL, indeed...
    Using IP as a backup plan, rather than an in-house design project, would certainly help though, imo. Please note that I am unaware of Intel's projects in that area, I'm merely speculating here... :)
     
  4. Ailuros

    Ailuros Epsilon plus three
    Legend Subscriber

    Joined:
    Feb 7, 2002
    Messages:
    9,420
    Likes Received:
    179
    Location:
    Chania
Wasn't it the very same Intel that "wanted" in the past to address the PDA/mobile market with "just" MMX? The 2700G was rather a 180-degree switch compared to the supposed original plans.

Scratch all that... if we know anything, it's that large semiconductor manufacturers are extremely anal about revealing ANY details about future projects. Why the biggest of them all would now, all of a sudden, feed various trash tabloid sites this many details THAT early is clearly beyond me.

I'm not saying or implying anything, but look at the example in the first paragraph; my simple question is: was Intel back then truly dumb enough to think they could compete relying exclusively on an FPU-less CPU, or was the chain of supposed information between the manufacturer and the tabloid sites broken on purpose somewhere?
     
  5. Geo

    Geo Mostly Harmless
    Legend

    Joined:
    Apr 22, 2002
    Messages:
    9,116
    Likes Received:
    213
    Location:
    Uffda-land
    Hmm. Alright, I'll amend that slightly. . . they can't afford to fail unless you assume that AMD will *also* fail with Fusion.
     
  6. Killer-Kris

    Regular

    Joined:
    May 20, 2003
    Messages:
    540
    Likes Received:
    4
And I should probably clarify what I was getting at as well. I think this project can fail as a performance GPU; after all, that's the state Intel has been in for how long now? Continuing to be in that position likely won't hurt their business much, though I think compatibility problems could, so they had better get to work on those drivers.

On the other hand, from the little we know, this design will, if nothing else, hold its own against similar CPU competitors, at which point x86 and Intel's volume will give them a large leg up. Now the billion-dollar question is: can AMD, Nvidia, Sony, IBM, etc. compete in this arena? Then, as massively parallel computing becomes more common, another question comes into play: how does this get exposed to developers? To me the best approach for everyone involved (except potentially Intel) is some sort of shader-like parallel computing API that makes the ISA a non-issue (see the sketch after this post).

So are you looking at AMD's Fusion as CPU/GPU fusion or ILP/TLP fusion? The first will allow them to stay competitive on the desktop; the second will allow them to stay competitive everywhere else.
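
    To make the "shader-like parallel computing API" idea above concrete, here is a minimal, purely speculative sketch in OpenCL-style C (an API in this spirit, which appeared after this thread); the kernel name, arguments, and SAXPY example are hypothetical illustrations, not anything Intel or AMD announced. The point is that the developer writes C-like source and the vendor's driver compiles it for whatever ISA sits underneath, whether x86 mini-cores, a conventional GPU, or something Cell-like.

    ```c
    /* Hypothetical ISA-agnostic data-parallel kernel (OpenCL-style C).
       The runtime/driver decides how work-items map onto SIMD lanes,
       hardware threads, or cores, so the ISA is invisible to the developer. */
    __kernel void saxpy(__global const float *x,
                        __global const float *y,
                        __global float *out,
                        const float a,
                        const unsigned int n)
    {
        /* One work-item per element. */
        size_t i = get_global_id(0);
        if (i < n)
            out[i] = a * x[i] + y[i];
    }
    ```

    The same source would be dispatched through the host API with a global work size of n, leaving vectorization and scheduling entirely to whichever backend the vendor ships.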
     
  7. nutball

    Veteran Subscriber

    Joined:
    Jan 10, 2003
    Messages:
    2,154
    Likes Received:
    483
    Location:
    en.gb.uk
    More from Charlie...

    http://theinquirer.net/default.aspx?article=37572

    Stacked memory? Could be interesting from a bandwidth point of view.

EDIT: Though I guess I'm presuming that "Polaris" and Larrabee are related in some way, which may not be the case, I suppose.
     
    #107 nutball, Feb 11, 2007
    Last edited by a moderator: Feb 11, 2007
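
    For illustration only, a back-of-envelope on why stacking is interesting for bandwidth: peak bandwidth is roughly interface width times transfer rate, and stacking makes very wide interfaces practical. The 512-bit width and 1 GT/s rate below are assumed numbers for the sketch, not anything reported about Polaris or Larrabee.

    ```latex
    % Peak bandwidth \approx width (bytes/transfer) \times transfer rate
    \text{128-bit dual-channel DDR2-800:}\quad 16\,\text{B} \times 800\,\text{MT/s} = 12.8\ \text{GB/s} \\
    \text{512-bit stacked interface at 1 GT/s (assumed):}\quad 64\,\text{B} \times 1000\,\text{MT/s} = 64\ \text{GB/s}
    ```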
  8. 3dilettante

    Legend Alpha

    Joined:
    Sep 15, 2003
    Messages:
    8,122
    Likes Received:
    2,873
    Location:
    Well within 3d
Stacked memory would definitely recreate the premium CPU market (a factor of ten or more over volume desktop pricing, where today's extreme editions only manage something like double or triple) for a few process nodes.

Perhaps that's part of the point; commoditization hurt margins.

Larrabee may have evolved independently of Polaris, but there's no conceptual obstacle to porting some Polaris features over. In my opinion, a multi-threaded Polaris would be more compelling than Larrabee (as far as what little info we have shows).

    Cooling and mounting would be interesting engineering feats, though.

    With respect to Larrabee as a discrete solution or Fusion as socketed CPU+GPU:

How about a third way?

The vector cores in Larrabee get limited exposure as a discrete product, perhaps as some kind of GPGPU product.

    With AMD's Fusion, the adjunct GPU finds use as an integrated laptop solution.

After a while, a cluster of 8 or so mini-cores shows up as adjunct processors on a Core4 chip, and later non-low-end Fusion derivatives have GPU multi-cores that sharply de-emphasize the non-geometry portions of the GPU functionality.

Both Larrabee and in-socket Fusion have bandwidth issues, so cutting out a large portion of a GPU's workload would help. The geometry portion would have the most use on both CPUs, and physics and other fun things can be done on the mini-core portions as well.

    This is also flexible, and may lend itself well to the hybrid software raytracer/GPU approaches that have been discussed.

Nvidia might have an interest in encouraging this outcome, either by lobbying API makers or influencing software makers, since it means it keeps a lot of its mid- and high-end business.
    With luck, the rest of the work not taken by the socketed chip would once again wind up on a separate discrete graphics chip that Nvidia can sell.

    The critical factor is the software side.
    For AMD, I'd think it best to run to Microsoft and the OpenGL ARB and try to get a lock on an interface to the graphics APIs ASAP, before Intel brings its silicon muscle to bear.

    I'd say the best time to start this would be yesterday or sooner, much like AMD managed to get its x86-64 variant blessed by Microsoft before Intel wanted to transition. It's mostly PR and mindshare, but keeping from being declared irrelevant is very important for AMD.

By the time Larrabee makes a showing, it is likely that Intel and Nvidia IGP discounts and a potential Intel GPU/CPU offering will be primed to marginalize whatever gains AMD realizes with Fusion.

Problems: Intel's not known for stellar graphics drivers, and AMD has made public statements of what I'd categorize as puzzlement with regard to what they will do with Fusion's APIs.
     
    #108 3dilettante, Feb 12, 2007
    Last edited by a moderator: Feb 12, 2007
  9. ROG27

    Regular

    Joined:
    Oct 27, 2005
    Messages:
    572
    Likes Received:
    4
    Ugh...I thought console ******s were bad. That guy Charlie from The Inquirer is flapping around in his own sensationalist drivel. He might as well make love to Intel while he writes these "articles."
     