NV to leave IBM?

Discussion in 'Architecture and Products' started by Tim Murray, Apr 20, 2004.

  1. Nappe1

    Nappe1 lp0 On Fire!
    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    1,532
    Likes Received:
    11
    Location:
    South east finland
    FUD YOU ALL! :lol:

    umh... it's the Inq. Since when has it been reliable in any way beyond offering so many alternatives that one of them is bound to be correct?

    so, why are we even having this conversation?

    besides, if this turns out to be correct, do you really believe that nVidia or IBM would admit it? (nVidia needs the people who want to believe NV40 is faster than R420, especially if it leaves IBM; that's its only way to try to jump the queue at TSMC. And IBM won't announce on a conference call that it lost nVidia as a partner; more likely it will shout about signing someone smaller in its place.)

    about half a year ago, someone asked me in a private message how I saw the situation between NV40 and R420... Back then I said I believed the companies were starting to hit a process-development ceiling with respect to chip complexity. I initially thought it would happen in the NV50 / R500 generation, but it seems the companies are already at that point. ATI apparently isn't implementing PS 3.0 on R420 and has "only 180 million transistors", and the news about yield issues at IBM doesn't sound very promising for nVidia's 220-million-transistor baby. But IMO, the problems are only starting. Imagine a situation where neither company could get its usual 50% yearly increase in transistor count. How do you keep up the hype that seems to keep over 50% of the business running?

    I don't post often anymore (not much to say, really), but hopefully that makes it quality over quantity.
     
  2. RussSchultz

    RussSchultz Professional Malcontent
    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    2,855
    Likes Received:
    55
    Location:
    HTTP 404
    Micron makes RAM chips, full of highly repetitive cells. Of course they do manual layout for the cells--it's critical that each RAM cell is as small as possible.

    Everybody else (excepting Intel, from what I hear) uses standard cells and automated layout for the majority of their chips. You cannot make a chip with millions of gates and do manual layout. It just isn't possible.

    Size and the cell library are probably the two biggest factors in yield--assuming similar standards of engineering between two projects.
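    The die-size effect on yield mentioned above is often illustrated with the classic Poisson yield model, Y = exp(-A * D0). A minimal sketch, using made-up die areas and a made-up defect density (real foundry numbers are proprietary):

```python
import math

def poisson_yield(die_area_cm2: float, defect_density: float) -> float:
    """Classic Poisson yield model: Y = exp(-A * D0).

    die_area_cm2   -- die area A in cm^2
    defect_density -- D0, fatal defects per cm^2 (process-dependent)
    """
    return math.exp(-die_area_cm2 * defect_density)

# Illustrative numbers only: a 0.5 cm^2 die vs. a 3 cm^2 die
# on a hypothetical process with 0.5 fatal defects per cm^2.
small = poisson_yield(0.5, 0.5)
large = poisson_yield(3.0, 0.5)
print(f"small die yield: {small:.1%}")  # ~77.9%
print(f"large die yield: {large:.1%}")  # ~22.3%
```

    The exponential makes the point: yield falls off sharply with die area, which is why a 220-million-transistor chip is so much harder to produce economically than a smaller one on the same process.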
     
  3. ZenThought

    Newcomer

    Joined:
    Apr 12, 2004
    Messages:
    23
    Likes Received:
    0
    Your lack of semiconductor industry knowledge is apparent.

    The industry has a well-established roadmap (tools, methods, etc.) out to
    perhaps 15nm. 90nm is at production level (Intel and IBM are
    well ahead; the rest of the industry is a little behind). 65nm is emerging from
    the prototyping stage to production around 2005.

    There is no roadblock to 1 billion transistors by 2008-2009 (assuming
    2x scaling every 2 years).

    There is no yield issue with respect to NV40. There are some yield issues
    with respect to 90nm production at IBM. In fact, nVidia says NV40
    should yield much better than NV30 due to the learning curve and better
    methodology.
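    The "1 billion transistors by 2008-2009" figure follows from simple compound doubling. A quick sketch, taking NV40's roughly 220 million transistors in 2004 as an assumed starting point:

```python
def project_transistors(start_count: int, start_year: int, end_year: int,
                        doubling_period_years: float = 2.0) -> float:
    """Project a transistor count assuming 2x scaling every doubling period."""
    doublings = (end_year - start_year) / doubling_period_years
    return start_count * 2 ** doublings

# Assumed baseline: NV40 at ~220M transistors in 2004.
for year in (2006, 2008, 2009):
    count = project_transistors(220_000_000, 2004, year)
    print(f"{year}: ~{count / 1e9:.2f}B transistors")
```

    With that baseline, the projection crosses 1 billion between 2008 (~0.88B) and 2009 (~1.24B), consistent with the claim above.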
     
  4. {Sniping}Waste

    Regular

    Joined:
    Jan 13, 2003
    Messages:
    833
    Likes Received:
    29
    Location:
    Garland TX
    Yes, it is possible. All this info comes from Micron engineers. My friend who works for Micron in the IC layout area has been doing layout for many years, going back to the days of TI's Minuteman program. My father did the same thing in the Minuteman program and, with a handful of others, wrote the book on how a fab is built and run, which is still used today. Semiconductor manufacturing is my area, and the info I give is FACT.

    As for cell size, smaller is better, but not all the time. Leaving room can help in many areas, but even with that, a hand layout is still much smaller than one done by auto-routing. Auto-routing the whole IC makes it about 4 to 5 times the size. :shock:
    One trick my friend uses is laying out the IC linearly. It saves room and allows mods later with little work.
     
  5. binmaze

    Newcomer

    Joined:
    Feb 12, 2003
    Messages:
    88
    Likes Received:
    0
    When NV30 was finally released, nVidia said that the main cause of the delay was the FP16+FP32 setup rather than the 130nm process.
    And some time after, TSMC said that there should never again be such a case as NV30, from any company. They seemed not at all pleased.

    Thus, I think it was due to the design itself rather than the process.
     
  6. KimB

    Legend

    Joined:
    May 28, 2002
    Messages:
    12,928
    Likes Received:
    230
    Location:
    Seattle, WA
    No, they didn't.

    The NV30 was originally designed to operate on a low-k .13 micron process. That process was not available in time, and nVidia had to switch processes. This meant that nVidia had to do a lot of redesigning.
     
  7. binmaze

    Newcomer

    Joined:
    Feb 12, 2003
    Messages:
    88
    Likes Received:
    0
    I clearly remember that they said that, though I cannot provide the link, since too much time has passed.

    Edit) It was a kind of interview, IIRC.
    And it happened almost right at the release. At that time, few knew the FP16+FP32 design could be this problematic.
    People thought, "Well, a complex chip, maybe," but mainly they thought nvidia was evading the blame for the mistake of choosing the 130nm process prematurely.
     
  8. Stryyder

    Regular

    Joined:
    Apr 9, 2004
    Messages:
    334
    Likes Received:
    0
    I didn't think NV30 was designed for low-k. nVidia did put money into the project with TSMC to help develop low-k, but I thought it was for a future chip design.
     
  9. RussSchultz

    RussSchultz Professional Malcontent
    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    2,855
    Likes Received:
    55
    Location:
    HTTP 404
    Hopefully you don't think it's rude when I say you sound like a middle-school kid who's picked up some vernacular and likes to talk smack on boards.

    Because, ummm, you do.
     
  10. kemosabe

    Veteran

    Joined:
    Jun 19, 2003
    Messages:
    1,001
    Likes Received:
    16
    Location:
    Montreal, Canada
  11. {Sniping}Waste

    Regular

    Joined:
    Jan 13, 2003
    Messages:
    833
    Likes Received:
    29
    Location:
    Garland TX
    Some might say the same about you.

    I'm 27 and have a year of semiconductor training, plus friends in the industry at Micron and TI, and a family member at HP on the Itanium project.
     
  12. KimB

    Legend

    Joined:
    May 28, 2002
    Messages:
    12,928
    Likes Received:
    230
    Location:
    Seattle, WA
    Find it and I'll believe you. Right now, I don't. If you can remember a name or specific phrase, just use google, but I think you'll find that you misread or misremembered it.
     
  13. MfA

    MfA
    Legend

    Joined:
    Feb 6, 2002
    Messages:
    7,610
    Likes Received:
    825
    Sniping, not too many here would say the same about him.

    It doesn't matter if you have a good point ... the way you are trying to make it makes it impossible for anyone to take you seriously.
     
  14. hoom

    Veteran

    Joined:
    Sep 23, 2003
    Messages:
    3,264
    Likes Received:
    813
    Have they said this?
    It would seem to be further evidence of bad design/engineering on NV30 rather than process issues (at least the way I read it).
     
  15. {Sniping}Waste

    Regular

    Joined:
    Jan 13, 2003
    Messages:
    833
    Likes Received:
    29
    Location:
    Garland TX
    Fine then, I'll keep the insider info to myself.
    I know the problems with strained silicon (more than just germanium and silicon), but I'll keep that to myself too.
     
  16. RussSchultz

    RussSchultz Professional Malcontent
    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    2,855
    Likes Received:
    55
    Location:
    HTTP 404
    Ooooh! Burn!

    If only I worked longer in the semiconductor business, or was older than you, or had more friends in the industry at companies like NVIDIA, Cirrus, TI, ARM, Synopsys, or even the company I work at.

    Then it wouldn't matter that my brother was only a roadie for Clint Black.

    Oh wait, I have, I am, and I do. And he was.

    Not that half of that is important.
     
  17. binmaze

    Newcomer

    Joined:
    Feb 12, 2003
    Messages:
    88
    Likes Received:
    0
    I couldn't find the very article I was referring to, but here is a close one:
    Link
     
  18. asicnewbie

    Newcomer

    Joined:
    Jun 29, 2002
    Messages:
    116
    Likes Received:
    3
    C'mon guys, let's try to keep it civil. There are few people in the semiconductor industry who can claim "I know it all." I'm certainly not one of them, and I very seriously doubt anyone on this board is. BUT... and this is important: many posters *do* work either in the industry or in a closely related one, and *CAN* make meaningful technical commentary from time to time.

    {Sniping}Waste, I believe your assertion that 'full-custom layout' is possible with modern (multi-million gate) designs. But the percentage of IC designs utilizing that methodology is very small. As Russ pointed out, the industry as a whole relies on the more conventional (and far less labor-intensive) automated place & route.

    Your post sounded as if full-custom layout was commonplace, from the very simple digital ICs all the way to the very complex (like the NV40/R420.) It isn't. I won't comment on analog and mixed-signal ICs, as these parts tend to have a good portion of custom-logic. ('Design-reuse' strategies for the analog-world aren't yet as practical as digital-design reuse.) But for digital ASICs, there are plenty of reasons full-custom design is such a rarity.

    First of all, it's much more labor-intensive than traditional standard-cell (gate-level) design. 2nd, it's rarely necessary; if a design team discovers their standard-cell layout can't reach timing-closure at 180nm, their first choice is to retarget to 150nm (or 130nm) -- not switch over to full-custom! And 3rd, full-custom is risky -- it assumes complete trust in the foundry's process characterization data and physical design kit. Some foundries don't offer detailed PDKs, and discourage the practice altogether.

    As for Micron and Intel, both companies sell ICs which require specialized IC-design practices. High-performance (GHz-clock) CPUs deal in operating frequencies (>2GHz) well beyond any 0.13/90nm standard-cell library, existing or planned. (And the exotic nature of the clock-distribution network is a first-order factor in layout/floorplanning considerations.) Conventional (discrete-component) DRAM can't be fabbed on ordinary 'logic' process lines, so in a sense, DRAM is already special. Well, perhaps it could be, if the designer traded density in exchange for reducing the additional masking steps. And finally, Micron and Intel fab their designs in-house, where they have *complete* control over the manufacturing process.

    Zenthought, as far as I can tell, the 'roadmap' you speak of (ITRS?) defines process parameters, materials, etc. -- but it doesn't address the 'human design' problem -- i.e., how does a design-team use 2.5X more gates in a logical fashion? I suppose that's something each individual design-team and its management must figure out.

    But more importantly, the recent trend has shown that ever-shrinking processes are breaking old tools due to poor correlation between the tool's circuit model and reality. For a long time, you could just take the same circuit-timing engine, reload it with new data for 0.65u, 0.35u, 0.25u, 0.18u, etc., and expect it to crank out reliable information. Then designers took their 180nm design tools, loaded 130nm libraries, compiled, and expected them to simply work. They didn't... I guess the analogy is the 1st-year physics student applying Newtonian mechanics (F=ma) at sub-atomic (quantum) scales and wondering why his formulas don't correlate with observation. This has been addressed over time, but it highlights the lag between the 'bleeding edge' of process nodes and their general usability (at the IC designer's console).

    For example, Synopsys and Cadence market "90nm ready" logic-synthesis tools; special features like "dual Vt library support" (to better address gate-leakage) and "crosstalk avoidance" (for signal-integrity.) (Alright, I admit some of those are marketing-buzzwords.) The new features are specifically to address 90nm/130nm design issues (ones the old tools didn't adequately predict.) The new features aren't just productivity boosters; they are baseline features to bring the synthesis-tools to a *usable state* at 90nm. Will there be further problems at 65nm? Who knows ...
     
  19. KimB

    Legend

    Joined:
    May 28, 2002
    Messages:
    12,928
    Likes Received:
    230
    Location:
    Seattle, WA
    1. I really don't see nVidia publicly stating something that would damage their relationship with TSMC. They depend on TSMC pretty heavily (but you'll note that it was around that time that they started to move to IBM).
    2. He didn't rule out other factors (i.e. fab problems at TSMC).
    3. He did not say anything about 16-bit FP.

    I still believe that what happened was when nVidia first decided on the process for the NV30, TSMC had an optimistic outlook for the possibilities of their low-k .13 micron process. As the NV30 neared launch, it became apparent that this process would not be available on time. So, lots of extra work had to be done to get the NV30 operational on a normal .13 micron process, which reduced the performance of the final product.
     


  • About Us

    Beyond3D has been around for over a decade and prides itself on being the best place on the web for in-depth, technically-driven discussion and analysis of 3D graphics hardware. If you love pixels and transistors, you've come to the right place!

    Beyond3D is proudly published by GPU Tools Ltd.