Intel discrete graphics chips confirmed

Discussion in 'Beyond3D News' started by Tim Murray, Jan 22, 2007.

  1. Arun

    Arun Unknown.
    Moderator Legend Veteran

    Joined:
    Aug 28, 2002
    Messages:
    5,023
    Likes Received:
    299
    Location:
    UK
    Errr, WTF?!
    You DO realize that 45nm has been announced by Intel for H2 2007 pretty much forever, and that it has been known for a very long time now that they'd use high-k/metal gates for that node? And you do realize that IBM is also going to have high-k for 45nm in 2008, right?
    I don't even know what to think... That's ridiculous. As for ultra-low-k, I'd suggest you read this: http://www.fabtech.org/content/view/2273/73/


    Uttar
     
  2. Bouncing Zabaglione Bros.

    Legend

    Joined:
    Jun 24, 2003
    Messages:
    6,363
    Likes Received:
    82
    I'm quoting from the article, which is why I said it might be hype, i.e. that it's not that big a deal despite the big press conference. Maybe you should read the article first and comment on it, rather than attacking me?
     
  3. Arun

    Arun Unknown.
    Moderator Legend Veteran

    Joined:
    Aug 28, 2002
    Messages:
    5,023
    Likes Received:
    299
    Location:
    UK
    Am I mistaken in assuming this is NOT a quote nor a paraphrase of the article?
    As for commenting on the article, there are plenty of those around today, and I read a few and wrote a news post about it.


    Uttar
     
  4. Bouncing Zabaglione Bros.

    Legend

    Joined:
    Jun 24, 2003
    Messages:
    6,363
    Likes Received:
    82

    Yes, it appears you are mistaken, but I agree my paraphrase isn't very good. I should have said something like "45nm using this new material". I'd have thought it would have been obvious if you read the article I referenced, as it's all about this new gate process:

    If you want to criticise me, at least read the article I linked to first. The commentary there is significantly different from, and more detailed than, (say) what The Inquirer wrote.
     
    #64 Bouncing Zabaglione Bros., Jan 27, 2007
    Last edited by a moderator: Jan 27, 2007
  5. Arun

    Arun Unknown.
    Moderator Legend Veteran

    Joined:
    Aug 28, 2002
    Messages:
    5,023
    Likes Received:
    299
    Location:
    UK
    Alright then, np. I certainly wouldn't have reacted like that if your original post had specified that; I guess it certainly wasn't obvious to me, heh! :)
    And yes, that article is quite nice for the process side of things. Anandtech's article is the most interesting I've read on Penryn specifically, on the other hand. The following link is interesting wrt low-k too, as I said: http://www.fabtech.org/content/view/2273/73/

    And apparently, IBM reacted to the Intel announcement by declaring that they, too, would be using high-k at 45nm: http://www.eetimes.com/news/semi/showArticle.jhtml?articleID=197001065. Remains to be seen if that materializes though since, unlike Intel, they don't quite have samples yet! But it'll be interesting to watch anyway.


    Uttar
     
  6. Arty

    Arty KEPLER
    Veteran

    Joined:
    Jun 16, 2005
    Messages:
    1,906
    Likes Received:
    55
    New updates and claims from Intel's VCG
    • Late 2008-09 timeframe
    • Multi-core architecture (as many as 16 graphics cores packed into a single die)
    • Probably 32nm (vr-zone)
    • 16x performance of G80
     
    #66 Arty, Feb 7, 2007
    Last edited by a moderator: Feb 7, 2007
  7. 3dilettante

    Legend Alpha

    Joined:
    Sep 15, 2003
    Messages:
    8,122
    Likes Received:
    2,873
    Location:
    Well within 3d
    Let's see, 16x the performance of G80 with up to 16 cores per die. G80 is at 90nm at 1 core per die.
    90->65->45->32 is three full node shrinks, which with perfect scaling would fit 8 G80s in about the same die space.

    Intel apparently hopes to get cores that are 1/2 the size of G80 at 32nm but with twice the (graphics?) performance per-core.

    If Intel's idea of a multicore GPU is in the same vein as SMP CPU, then I wonder what they have up their sleeve for a 2009 chip. The time frame is slim by CPU standards, but who knows how long they've been doing design work prior to this.
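    The back-of-envelope arithmetic above can be sketched out (a rough illustration only, assuming ideal area scaling of (old/new)^2 per node, which no real process achieves):

    ```python
    # Ideal transistor-density scaling across the node shrinks named above.
    # Assumes perfect (old/new)^2 area scaling per step -- an upper bound.
    nodes = [90, 65, 45, 32]  # nm

    density = 1.0
    for old, new in zip(nodes, nodes[1:]):
        density *= (old / new) ** 2

    print(round(density))  # -> 8, i.e. ~8 G80s' worth of transistors in the same area
    ```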
     
  8. kyetech

    Regular

    Joined:
    Sep 10, 2004
    Messages:
    532
    Likes Received:
    0

    That's right, so it's 8x the transistors at 2x the frequency, yielding a ~5 billion transistor package at 1.1-1.2 GHz. Crazy for late 2009; I didn't expect that spec until 2011, like I said in another post.
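    As a rough sanity check on those figures (assuming G80's widely reported ~681M transistor count; the 8x and 2x factors are the ideal-scaling guesses from the posts above):

    ```python
    # Sanity-check the "16x G80" and "5 billion transistor" figures above.
    transistor_scaling = 8     # three ideal node shrinks, per the post above
    clock_scaling = 2          # the claimed per-core performance doubling
    g80_transistors = 681e6    # G80 is roughly 681M transistors

    print(transistor_scaling * clock_scaling)                    # -> 16
    print(f"{g80_transistors * transistor_scaling / 1e9:.1f}B")  # -> 5.4B
    ```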
     
  9. 3dilettante

    Legend Alpha

    Joined:
    Sep 15, 2003
    Messages:
    8,122
    Likes Received:
    2,873
    Location:
    Well within 3d
    We'll have to wait and see.

    Nobody's processes have shown ideal scaling, and the claims are too vague to be sure just what part of G80's performance profile is doubled.

    If the ALU density per mm^2 is about the same as G80's, and clock speed is doubled (the shader cores already being double-clocked in G80), the power output could be pretty high.

    Then there's the issue with getting 16x bandwidth for bandwidth-limited cases.
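    To put "16x bandwidth" in absolute terms (a rough sketch, assuming the 8800 GTX memory configuration of a 384-bit bus with 900 MHz DDR GDDR3):

    ```python
    # What 16x G80's memory bandwidth would mean in absolute numbers.
    bus_bits = 384        # 8800 GTX memory bus width
    effective_mhz = 1800  # 900 MHz GDDR3, double data rate

    g80_gbps = bus_bits / 8 * effective_mhz / 1000  # bytes/transfer * transfers/us
    print(g80_gbps)       # -> 86.4 GB/s, the stock 8800 GTX figure
    print(g80_gbps * 16)  # -> 1382.4 GB/s needed for bandwidth-limited cases
    ```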
     
    #69 3dilettante, Feb 8, 2007
    Last edited by a moderator: Feb 8, 2007
  10. Ailuros

    Ailuros Epsilon plus three
    Legend Subscriber

    Joined:
    Feb 7, 2002
    Messages:
    9,418
    Likes Received:
    178
    Location:
    Chania
    16x the performance of G80 sounds way too ambitious even for my wildest imagination; i.e., I'll believe it when I see it.

    And the timeframe here is a weird "coincidence":


    http://www.tgdaily.com/2007/01/25/ces2007_imagination_sgx/

    Hmmmmmm :roll:
     
    Geo likes this.
  11. Techno+

    Regular

    Joined:
    Sep 22, 2006
    Messages:
    284
    Likes Received:
    4
  12. Geo

    Geo Mostly Harmless
    Legend

    Joined:
    Apr 22, 2002
    Messages:
    9,116
    Likes Received:
    213
    Location:
    Uffda-land
    Not too cheerful about NVIDIA's future, is he? :lol:
     
  13. Killer-Kris

    Regular

    Joined:
    May 20, 2003
    Messages:
    540
    Likes Received:
    4

    I'm a little curious about those vector instructions that Charlie is talking about. Many Intel engineers might describe MMX as "the functionality of a GPU", but were they designed to help with the non-shader portions of the graphics pipeline?


    On the bright side I think if Nick multithreaded his renderer it should absolutely fly on that thing.
     
  14. Carl B

    Carl B Friends call me xbd
    Moderator Legend

    Joined:
    Feb 20, 2005
    Messages:
    6,266
    Likes Received:
    63
    I found the idea of short-pipelined, multithreaded, x86-extensible, in-order cores pretty interesting. One can certainly see how, if it comes to fruition as planned, Intel may achieve a sort of coup de grace in the graphics market. I also can't help but notice the conceptual similarities to Cell, but then again we've known that Intel would be going in this direction for some time now.

    Anyway I think it will be interesting to see what happens - Intel has a big ace in its engineering talent, but their goal is a moving target. I wonder what gets discussed in the war room over at NVidia? Is their posture right now the lead-up to a holding action? Or do they have their own 'master plan' in play behind the scenes that capitalizes on rumored entry into general computing?
     
  15. Techno+

    Regular

    Joined:
    Sep 22, 2006
    Messages:
    284
    Likes Received:
    4
  16. nutball

    Veteran Subscriber

    Joined:
    Jan 10, 2003
    Messages:
    2,147
    Likes Received:
    472
    Location:
    en.gb.uk
    Then what would do the routing?

    :???:
     
  17. nutball

    Veteran Subscriber

    Joined:
    Jan 10, 2003
    Messages:
    2,147
    Likes Received:
    472
    Location:
    en.gb.uk
    Yes, it is intriguing, but as others have said above, it's far from clear what the benefit is in compartmentalising the cores from each other.
     
  18. Techno+

    Regular

    Joined:
    Sep 22, 2006
    Messages:
    284
    Likes Received:
    4
    nutball, I think you didn't understand me; I meant you can make mini-CPU + mini-GPU + router. Happy now? :lol:
     
  19. Arun

    Arun Unknown.
    Moderator Legend Veteran

    Joined:
    Aug 28, 2002
    Messages:
    5,023
    Likes Received:
    299
    Location:
    UK
    Uhm... yawn? Sorry, but if it is exactly as described by The Inq, that's a ridiculously bad design IMO. Four threads per core? That's all they can do to hide latency? Or is that a clever way to say "quad"? Either way, this looks absolutely laughable.

    Either that project isn't aimed at the market we are assuming it's for (I'm sure Pixar would love this kind of chip!), or there is more to it than that. If you get rid of much of the fixed-function functionality, you'll need tons of performance to compensate. And I'm more than just a little bit skeptical at the notion of 4TFlops+ on a x86 chip, with "only" 45nm.

    Now, add dedicated functionality for interpolation, texturing and depth operations, and you might begin having something interesting. I still don't see the point though. I'm very skeptical they could reach the per-transistor efficiency of a GPU of that era even then.

    At best, they'll have an interesting solution for the GPGPU market. Of course, you could argue games in general are moving towards being more "GPGPU-like", but that's just NOT going to be the case yet in 2009, even for AAA titles.

    I'm hoping there is something I'm missing here or, more likely, that The Inq is missing something. This could turn out to be very interesting, but as it is presented there, I'm not sure if I'm supposed to laugh, cry, or just yawn. Is it a cool concept? Probably. Is it really perfectly apt for any market whatsoever as it is described there? Probably not. All IMO, ofc...
     
  20. Killer-Kris

    Regular

    Joined:
    May 20, 2003
    Messages:
    540
    Likes Received:
    4
    I actually find the concept of a short-pipelined x86 processor kind of odd. What is it short-pipelined in comparison to: Coppermine, Conroe, Northwood, Prescott? How will this impact clock speed?

    Overall I'm actually quite unimpressed; it sounds a lot like an x86 Niagara2 to me: in-order, multiple threads per core, 16 cores. Depending on how the clock speed turns out it might be just competitive with Niagara2 performance-wise, and come out several years later. That's not to say it won't have awesome market penetration in the same server market Sun is gunning for; after all, I would expect it to be significantly cheaper than any Sun system, and being x86 helps it a lot there as well.


    Just keep in mind how well Cell performs compared to a real GPU when doing graphics.
     


  • About Us

    Beyond3D has been around for over a decade and prides itself on being the best place on the web for in-depth, technically-driven discussion and analysis of 3D graphics hardware. If you love pixels and transistors, you've come to the right place!

    Beyond3D is proudly published by GPU Tools Ltd.