Intel discrete graphics chips confirmed

Discussion in 'Beyond3D News' started by Tim Murray, Jan 22, 2007.

  1. Tim Murray

    Tim Murray the Windom Earle of mobile SOCs
    Veteran

    Joined:
    May 25, 2003
    Messages:
    3,278
    Likes Received:
    66
    Location:
    Mountain View, CA
    Six weeks ago, job descriptions for Intel's Larrabee Development Group caused widespread speculation (http://www.beyond3d.com/forum/showthread.php?t=36464) that the chip giant would be re-entering the discrete graphics marketplace. These speculations have now been confirmed, with Intel actively recruiting (http://www.intel.com/jobs/careers/visualcomputing/) for its Visual Computing Group, the official name for the Larrabee project. The page notes that the group will first focus on "developing discrete graphics products based on a many-core architecture targeting high-end client platforms," with lower-end products, including graphics products integrated with CPUs, to follow later. Precisely what this means (what exactly is a many-core architecture in this context, and what's the timeframe for the new chips?) is still anyone's guess.

    One thing to keep in mind is that Intel's fab utilization <a href="http://www.fabtech.org/content/view/2371/73/" target="_b3dnew">has decreased</a> over the past year. Intel would certainly like to use its spare capacity for something, rather than waste money by idling fabs. Considering that it is initially targeting the high-end segment (with its historically higher margins), it would thus seem likely that any future GPUs from Intel would be manufactured in-house, just as the i740 was in days of yore. It will also be interesting to see what, if any, advantage its superior process technologies will give them.

    Therefore, after months (if not years!) of rumors, a milestone has been reached. Intel now has, in their own words on their own site, declared, "Look out, NVIDIA and AMD/ATI, here we come!"

    [hello slashdot, welcome to beyond3d (http://www.beyond3d.com)!]
     
  2. Geo

    Geo Mostly Harmless
    Legend

    Joined:
    Apr 22, 2002
    Messages:
    9,116
    Likes Received:
    213
    Location:
    Uffda-land
    Whee! Without trumpets or fanfare, while we're all running around trying to figure out what they're up to, they slap the entire strategy down on the table in one neat little para.

    High-end Discrete --> mobile/server/embedded --> Fusion-like.

    The question in my mind is how "many-core" works into that equation. They're clearly saying it does, but it's not very clear to me exactly how yet.

    They appear to be copy-catting the old ATI/NV playbook: go for the top end first, spend the R&D there, and then leverage the tech downstream.
     
  3. Cypher

    Newcomer

    Joined:
    Jun 28, 2005
    Messages:
    85
    Likes Received:
    1
    The fact that they're targeting high-end platforms is nice to hear, considering how abysmal Intel's graphics chips have been. Also, it'll be interesting to see what a ménage à trois does for the industry (note that I'm a bit new to the real-time graphics scene relative to many of the other guys here, and was oblivious to the graphics industry until after it was basically just NV and ATi on the shelves).
     
  4. Arun

    Arun Unknown.
    Moderator Legend Veteran

    Joined:
    Aug 28, 2002
    Messages:
    5,023
    Likes Received:
    299
    Location:
    UK
    Interestingly, I'm not convinced that strategy is for their entire graphics division; that is, I wonder if this might not be a longer-term project (1H 2009 or something?) and whether Intel plans to "test the waters" with an independent low-end/mid-end project in 1H 2008 or so. Those parts would be derivatives of the Bearlake IGP architecture (which is a refresh of the X3000's architecture), but with a lot more bandwidth and pipelines, obviously! And then, later on, you'd see them rolling out an all-new architecture focused on a broader set of market segments.

    Interesting times ahead anyhow, definitely! I wonder whether Intel would be ready to use their 45nm process for GPUs as early as H1 2008 or so. That could be quite a competitive advantage! In such a timeframe, though, I feel it's more likely they'd go with 65nm to make sure they aren't wasting production capacity at that node.


    Uttar
     
  5. Windfire

    Regular

    Joined:
    Feb 16, 2002
    Messages:
    353
    Likes Received:
    1
    Location:
    Seattle, WA
    My gut reaction is that this puts Nvidia in a potentially dangerous spot. It largely depends on Intel's full vision, of course. At worst, it could put Nvidia in the unenviable position of being aligned with neither Intel nor AMD, competing with both as an outsider as integration becomes more and more important.

    It will be interesting.
     
  6. 3dilettante

    Legend Alpha

    Joined:
    Sep 15, 2003
    Messages:
    8,118
    Likes Received:
    2,860
    Location:
    Well within 3d
    It's also probably bad news for AMD. If current trends are a guide, even with AMD's head start, Intel's going to release a product first.

    That is, unless AMD can manage to start executing well.
     
  7. Bumpyride

    Newcomer

    Joined:
    May 19, 2005
    Messages:
    54
    Likes Received:
    0
    Location:
    MS, USA
    I was thinking the same regarding Nvidia. The 3D industry seems to be integrating and moving towards a general 'high throughput' (as Intel put it) computing model. I find myself wondering if Nvidia doesn't have an ace in the hole with IBM, though. Not that they have a history of collaboration (aside from the PS3), but it seems everyone's partnering up, and that may be beneficial to both IBM and Nvidia.
     
  8. icecold1983

    Banned

    Joined:
    Aug 4, 2006
    Messages:
    649
    Likes Received:
    4
    I'm wondering whether all the money Intel has to spend on R&D and their much smaller manufacturing processes are more important than the experience ATI/NVIDIA have in developing 3D chips.
     
  9. Geo

    Geo Mostly Harmless
    Legend

    Joined:
    Apr 22, 2002
    Messages:
    9,116
    Likes Received:
    213
    Location:
    Uffda-land
    I think it's an inevitable development of hitting the GHz wall. They then moved to number of cores, and even that has limits in how useful it can be if they're all doing CPU work. So what cries out for parallelization? Let me see, seems I've heard something about that before... (a toy sketch at the end of this post shows how cleanly per-pixel work splits up).

    But if you're going to get into graphics for real, you probably have to make the full investment, top-down R&D.
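
    Purely as a toy sketch of that point (nothing to do with Intel's actual design): each pixel's result depends only on its own inputs, so the work spreads across however many cores you have with no synchronization between pixels. The shader and resolution below are made up for illustration.

[code]
# Toy example: a hypothetical per-pixel "shader" evaluated across all CPU cores.
# Each pixel is computed independently, so no locking or ordering is needed.
from multiprocessing import Pool

WIDTH, HEIGHT = 640, 480

def shade(i):
    """Compute a colour for pixel i from its coordinates alone (a made-up gradient)."""
    x, y = i % WIDTH, i // WIDTH
    return (x * 255 // (WIDTH - 1), y * 255 // (HEIGHT - 1), 128)

if __name__ == "__main__":
    with Pool() as pool:                        # one worker process per CPU core
        framebuffer = pool.map(shade, range(WIDTH * HEIGHT))
    print(framebuffer[0], framebuffer[-1])      # (0, 0, 128) (255, 255, 128)
[/code]

    Doubling the core count roughly halves the wall-clock time for a workload like this, which is exactly the scaling story a many-core graphics part would be selling.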
     
  10. Voltron

    Newcomer

    Joined:
    May 25, 2004
    Messages:
    192
    Likes Received:
    3
    Intel may have some process advantages, but at least until GPUs and CPUs completely overlap, they would have to make a massive capacity expansion in order to fab CPUs and GPUs in any sort of volume at leading-edge processes. That is, unless GPUs are a small-volume niche. But if they want mass markets and are going to fab in-house, that means a large capacity expansion. So I'm not so sure they will take that route. Certainly an interesting question.

    Even for Intel, this business will not be an easy one to make money in. I'd say that ATI/AMD is pretty good in the engineering department. Even with relatively large market share, they are not, and have not been, particularly profitable, with the exception of when NV was really down. In fact, NV may now have 50% higher gross margins than ATI/AMD. NVIDIA is a very efficient company.
     
  11. Geo

    Geo Mostly Harmless
    Legend

    Joined:
    Apr 22, 2002
    Messages:
    9,116
    Likes Received:
    213
    Location:
    Uffda-land
    Historically, the joke with Intel is that if they can't make tons of money at something in three years or so, they lose interest and fade away. Or they cannibalize good IP into "just barely good enough" downrange products for years past its "sell by" date (think i740).

    I'm not so sure they can afford to do that this time, though, under either model. This time it might be a whole lot closer to their core business competitiveness.
     
  12. Killer-Kris

    Regular

    Joined:
    May 20, 2003
    Messages:
    540
    Likes Received:
    4
    And I think things are slightly different now too, with GPUs being so programmable. They may be able to use the same chip in multiple applications: in the desktop as a GPU, and in the server as an x86 throughput CPU to stop the likes of Sun from making any more headway.
     
  13. Voltron

    Newcomer

    Joined:
    May 25, 2004
    Messages:
    192
    Likes Received:
    3
    Absolutely, things are changing and Intel is getting religion, it appears. But so did Microsoft with the web, starting with MSN, and with search over half a decade late.
     
  14. 3dilettante

    Legend Alpha

    Joined:
    Sep 15, 2003
    Messages:
    8,118
    Likes Received:
    2,860
    Location:
    Well within 3d
    Don't see an x86 GPU ever happening.

    What this does do is pretty much screw AMD.
    All Intel needs to do is maintain its lead in CPU performance (which looks to be good for the entire life span of K8L), maintain its process lead (which has always been the case and is likely to happen again), maintain its lead in number of cores (AMD has ceded this), use its GPU line to marginalize Fusion (it doesn't need to be massively better, just enough to kill AMD's margins), and allow Nvidia to hammer at the graphics portions in the discrete and high end (given the string of false starts, Nvidia will be positioned to match the ATI portion of AMD for quite some time).

    There's probably not a huge winner in this, but I can think of one loser.
     
  15. Killer-Kris

    Regular

    Joined:
    May 20, 2003
    Messages:
    540
    Likes Received:
    4
    You haven't been looking at the Intel job postings then :) Head over to

    http://www.intel.com/jobs/jobsearch/index_adv.htm

    and look up job # 523837

    "TL x86 Validation - Larrabee"

    At which point you probably also want to look at job # 522501

    "Graphics System Architect"

     
    #15 Killer-Kris, Jan 22, 2007
    Last edited by a moderator: Jan 22, 2007
  16. DeanoC

    DeanoC Trust me, I'm a renderer person!
    Veteran Subscriber

    Joined:
    Feb 6, 2003
    Messages:
    1,469
    Likes Received:
    185
    Location:
    Viking lands
    Well, if it's anything like the i740, it won't be on the cutting-edge process; it will be the one behind...

    Intel used to claim that the i740 cost them almost nothing (I believe they once said $5 for parts and power etc., and that they could afford to ship a card with a game), as it was using factories they had no other use for, so it was effectively free.

    No idea if that's true, but it's worth noting that extrapolating a multiple-chip (though not on THEIR cutting-edge process) strategy actually makes a lot of sense for Intel, unlike a lot of other companies...
     
  17. 3dilettante

    Legend Alpha

    Joined:
    Sep 15, 2003
    Messages:
    8,118
    Likes Received:
    2,860
    Location:
    Well within 3d
    There would need to be a group of people who would validate that the GPU design works properly within an x86 system. Just because some people work with x86 stuff doesn't mean they're going to make the GPU run it.

    I'd need a little more than the fact that two job positions are in the same town.
     
  18. Killer-Kris

    Regular

    Joined:
    May 20, 2003
    Messages:
    540
    Likes Received:
    4
    You're right, there is not a whole lot of verifiable/credible public information stating it executes x86 but I certainly wouldn't count it out.
     
  19. Voltron

    Newcomer

    Joined:
    May 25, 2004
    Messages:
    192
    Likes Received:
    3
    Does x86 even really matter in terms of the future of operating systems and programming? For one thing, doesn't DX 10 couple the GPU to the operating system?

    Might there not also be virtualization techniques that the parallel processing of GPUs is particularly good at handling?

    I am technically limited, but there are some trends that make me wonder and perhaps there are people on these boards who have some insights. Or maybe Charlie from The Inq can come by and say some words.
     
  20. iwod

    Newcomer

    Joined:
    Jun 3, 2004
    Messages:
    179
    Likes Received:
    1
    How hard is it to scale a GPU? That is the question.

    I don't think it's as simple as snapping a few units together and calling it a done job, but it should be much easier than designing something from the ground up, right?

    X3000 was a new start for graphics at Intel, and the upcoming G30/G35 will be another update to it. So Intel isn't exactly a newcomer to graphics.

    So let's say the Intel discrete GFX is basically a G35 with 10 times the shaders, pipes, etc. That should give it some respectable performance, right? (I'm not saying it will have 10 times the performance, but respectable; see the rough numbers sketched at the end of this post.)

    And Intel already has lots of people doing the drivers (although they're pretty poor right now), a lot of games are programmed to the lowest common denominator (Intel integrated graphics), and Intel always has a class-above fab tech and huge capacity. Does this sound too good to be true?
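
    As a rough back-of-envelope sketch of that "10x the shaders" idea (all numbers below are made up for illustration, not actual G35 or Larrabee specs): peak ALU throughput scales linearly with unit count, but memory bandwidth, caches/ROPs and the driver stack have to keep pace before it shows up as game performance.

[code]
# Hypothetical numbers only -- illustrating "take an IGP and multiply the shader count".
IGP_SHADERS   = 8      # assumed unified shader units in the baseline IGP
IGP_CLOCK_MHZ = 500    # assumed shader clock
FLOPS_PER_ALU = 2      # count a multiply-add as 2 flops per clock

def gflops(shaders, clock_mhz):
    """Peak ALU throughput in GFLOPS for a given unit count and clock."""
    return shaders * clock_mhz * 1e6 * FLOPS_PER_ALU / 1e9

base   = gflops(IGP_SHADERS, IGP_CLOCK_MHZ)        # 8.0 GFLOPS
scaled = gflops(IGP_SHADERS * 10, IGP_CLOCK_MHZ)   # 80.0 GFLOPS

print(f"baseline IGP: {base:.1f} GFLOPS")
print(f"10x shaders : {scaled:.1f} GFLOPS")
# The ALUs are the easy part; a discrete card also needs roughly 10x the memory
# bandwidth and a driver that can actually feed all those units.
[/code]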
     