NV to leave IBM?

FUD YOU ALL! :LOL:

umh... it's the Inq; since when has it been reliable in any other way than by giving so many choices that one of them must be correct?

so, why are we having this conversation?

besides, if this turns out to be correct, do you really believe that nVidia or IBM would admit it? (nVidia needs those who want to believe NV40 will be faster than R420, even if it leaves IBM. It needs them, because that's the only way to try to jump the queue at TSMC. And IBM won't be stating on a conference call that they lost nVidia as a partner, but rather shouting about getting someone smaller in its place.)

about half a year ago, someone asked me in a private message how I see the situation between NV40 and R420... Back then I stated that I believe companies are starting to hit a process-development ceiling when it comes to chip complexity. I initially thought it would happen in the NV50 / R500 generation, but it seems the companies are already at that point. ATI seems not to be implementing PS 3.0 on R420, having "only 180 million transistors", and the news about yield issues at IBM does not sound very promising for nVidia's 220 million transistor baby. But IMO, the problems are only starting here. Imagine a situation where neither company gets its usual 50% yearly increase in transistor count. How do you keep up the hype, which seems to be keeping over 50% of the business up and running?

I don't post often anymore (not much to say, really), but hopefully it's more quality over quantity then.
 
{Sniping}Waste said:
I'll say it again. The IC layout can make good yields or bad yields. The IC layout is one of the biggest keys in yield. The big companies that jumped onto auto-routing are paying for it now. Intel is a good example. Micron does not allow any auto-routing, and its yields are in the 90% range because of it. It also allows small changes without having to re-route the whole IC. For the most part, Micron ICs work on the first stepping for production. The problem is mostly Nvidia and partly IBM. If you have a poor IC layout, then the best fab will still give you yield problems.
Micron makes RAM chips, full of highly repetitive cells. Of course they do manual layout for the cells--it's critical that each RAM cell is as small as possible.

Everybody else (excepting Intel, from what I hear) uses standard cells and automated layout for the majority of their chips. You cannot make a chip with millions of gates and do manual layout. It just isn't possible.

Size and the cell library are probably the two biggest factors in yield--assuming similar standards of engineering between two projects.
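For what it's worth, the usual back-of-the-envelope way to see why die size dominates yield is the classic Poisson defect model, Y = exp(-D0 * A). A minimal sketch (the defect density here is made up purely for illustration, not a real figure for any fab mentioned in this thread):

```python
import math

def poisson_yield(die_area_cm2: float, defect_density: float) -> float:
    """Classic Poisson yield model: Y = exp(-D0 * A).

    die_area_cm2:   die area in cm^2
    defect_density: random defects per cm^2 (D0)
    """
    return math.exp(-defect_density * die_area_cm2)

# Hypothetical D0 of 0.5 defects/cm^2, chosen only to show the trend:
print(f"1 cm^2 die: {poisson_yield(1.0, 0.5):.1%}")  # ~60.7%
print(f"2 cm^2 die: {poisson_yield(2.0, 0.5):.1%}")  # ~36.8%
```

Doubling the die area doesn't halve the yield, it cuts it much harder, which is exactly why a 220M-transistor part is a riskier proposition than a 180M one on the same process.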
 
Nappe1 said:
FUD YOU ALL! :LOL:

umh... it's the Inq; since when has it been reliable in any other way than by giving so many choices that one of them must be correct?

so, why are we having this conversation?

besides, if this turns out to be correct, do you really believe that nVidia or IBM would admit it? (nVidia needs those who want to believe NV40 will be faster than R420, even if it leaves IBM. It needs them, because that's the only way to try to jump the queue at TSMC. And IBM won't be stating on a conference call that they lost nVidia as a partner, but rather shouting about getting someone smaller in its place.)

about half a year ago, someone asked me in a private message how I see the situation between NV40 and R420... Back then I stated that I believe companies are starting to hit a process-development ceiling when it comes to chip complexity. I initially thought it would happen in the NV50 / R500 generation, but it seems the companies are already at that point. ATI seems not to be implementing PS 3.0 on R420, having "only 180 million transistors", and the news about yield issues at IBM does not sound very promising for nVidia's 220 million transistor baby. But IMO, the problems are only starting here. Imagine a situation where neither company gets its usual 50% yearly increase in transistor count. How do you keep up the hype, which seems to be keeping over 50% of the business up and running?

I don't post often anymore (not much to say, really), but hopefully it's more quality over quantity then.

Your lack of semiconductor industry knowledge is apparent.

The industry has a well-established roadmap (tools, methods, etc.) out to perhaps 15nm. 90nm is at the production level (Intel and IBM are well ahead; the rest of the industry is a little behind). 65nm is emerging from the prototyping stage to production around 2005.
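As a rough sanity check on those node numbers: ideal density scales with the square of the feature-size ratio, so a full-node shrink gives roughly 2x. A quick sketch (idealized scaling only, ignoring real-world layout overhead):

```python
def density_scaling(old_nm: float, new_nm: float) -> float:
    """Ideal area-density gain from a feature-size shrink (scales with the square)."""
    return (old_nm / new_nm) ** 2

print(f"130nm -> 90nm: {density_scaling(130, 90):.2f}x density")  # ~2.09x
print(f"90nm  -> 65nm: {density_scaling(90, 65):.2f}x density")   # ~1.92x
```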

There is no roadblock to 1 billion transistors by 2008-2009 (assuming 2x scaling every 2 years).
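The 2x-every-2-years figure is easy to check with a quick sketch. Taking NV40's ~220 million transistors in 2004 as the baseline (my assumption for illustration, not something zenthought stated):

```python
def projected_transistors(base_count: float, base_year: int, year: int,
                          doubling_period: float = 2.0) -> float:
    """Project transistor count assuming 2x scaling every `doubling_period` years."""
    return base_count * 2 ** ((year - base_year) / doubling_period)

# Assumed baseline: ~220M transistors in 2004.
for year in (2006, 2008, 2009):
    count = projected_transistors(220e6, 2004, year)
    print(f"{year}: ~{count / 1e6:.0f}M transistors")
# 2006: ~440M, 2008: ~880M, 2009: ~1244M -- crossing 1 billion right around 2008-2009.
```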

There is no yield issue with respect to NV40. There are some yield issues with respect to 90nm production at IBM. In fact, Nvidia says NV40 should yield much better than NV30 due to the learning curve and better methodology.
 
RussSchultz said:
{Sniping}Waste said:
I'll say it again. The IC layout can make good yields or bad yields. The IC layout is one of the biggest keys in yield. The big companies that jumped onto auto-routing are paying for it now. Intel is a good example. Micron does not allow any auto-routing, and its yields are in the 90% range because of it. It also allows small changes without having to re-route the whole IC. For the most part, Micron ICs work on the first stepping for production. The problem is mostly Nvidia and partly IBM. If you have a poor IC layout, then the best fab will still give you yield problems.
Micron makes RAM chips, full of highly repetitive cells. Of course they do manual layout for the cells--it's critical that each RAM cell is as small as possible.

Everybody else (excepting Intel, from what I hear) uses standard cells and automated layout for the majority of their chips. You cannot make a chip with millions of gates and do manual layout. It just isn't possible.

Size and the cell library are probably the two biggest factors in yield--assuming similar standards of engineering between two projects.
Yes, it is possible. All this info comes from Micron engineers. The friend who works for Micron in the IC layout area has been doing layout for many years, going back to the days of the TI Minuteman program. My father did the same thing in the Minuteman program and, with a handful of others, wrote the book on how a fab is built and run, which is still used today. Semiconductor manufacturing is my area, and the info I give is FACT.

As for cell size, smaller is better, but not all the time. Leaving room can help in many areas, and even with that, it's still much smaller than one done by auto-routing. By auto-routing the whole IC, it will be about 4 to 5 times the size. :oops:
One trick my friend uses is to lay out the IC linearly. It saves room and allows mods later with little work.
 
When NV30 was finally released, nvidia said that the main cause of the delay was the FP16+FP32 setup, rather than the 130nm process.
And some time later, TSMC said there should be no more cases like NV30 in the future, from any company. They seemed not at all pleased.

Thus, I think it was due to the design itself, rather than the process.
 
binmaze said:
When NV30 was finally released, nvidia said that the main cause of the delay was the FP16+FP32 setup, rather than the 130nm process.
No, they didn't.

The NV30 was originally designed to operate on a low-k .13 micron process. That process was not available in time, and nVidia had to switch processes. This meant that nVidia had to do a lot of redesigning.
 
I clearly remember that they said that, though I cannot provide a link since too much time has passed.

Edit) It was a kind of interview, IIRC.
And it happened very near the release. At that time, few knew the FP16+FP32 design could be this problematic.
People thought, "Well, it's a complex chip, maybe," but mainly they thought nvidia was evading blame for the mistake of choosing the 130nm process prematurely.
 
Chalnoth said:
binmaze said:
When NV30 was finally released, nvidia said that the main cause of the delay was the FP16+FP32 setup, rather than the 130nm process.
No, they didn't.

The NV30 was originally designed to operate on a low-k .13 micron process. That process was not available in time, and nVidia had to switch processes. This meant that nVidia had to do a lot of redesigning.

I didn't think NV30 was designed for low-k. Nvidia did put money into the project with TSMC to help develop low-k, but I thought it was for a future chip design.
 
{Sniping}Waste said:
Yes, it is possible. All this info comes from Micron engineers. The friend who works for Micron in the IC layout area has been doing layout for many years, going back to the days of the TI Minuteman program. My father did the same thing in the Minuteman program and, with a handful of others, wrote the book on how a fab is built and run, which is still used today. Semiconductor manufacturing is my area, and the info I give is FACT.

As for cell size, smaller is better, but not all the time. Leaving room can help in many areas, and even with that, it's still much smaller than one done by auto-routing. By auto-routing the whole IC, it will be about 4 to 5 times the size. :oops:
One trick my friend uses is to lay out the IC linearly. It saves room and allows mods later with little work.
Hopefully you don't think it's rude when I say you sound like a middle school kid who's picked up some vernacular and likes to talk smack on boards.

Because, ummm, you do.
 
RussSchultz said:
{Sniping}Waste said:
Yes, it is possible. All this info comes from Micron engineers. The friend who works for Micron in the IC layout area has been doing layout for many years, going back to the days of the TI Minuteman program. My father did the same thing in the Minuteman program and, with a handful of others, wrote the book on how a fab is built and run, which is still used today. Semiconductor manufacturing is my area, and the info I give is FACT.

As for cell size, smaller is better, but not all the time. Leaving room can help in many areas, and even with that, it's still much smaller than one done by auto-routing. By auto-routing the whole IC, it will be about 4 to 5 times the size. :oops:
One trick my friend uses is to lay out the IC linearly. It saves room and allows mods later with little work.
Hopefully you don't think it's rude when I say you sound like a middle school kid who's picked up some vernacular and likes to talk smack on boards.

Because, ummm, you do.

Some might say the same to you.

I'm 27 and have a year of semiconductor training, plus friends in the industry at companies like Micron and TI, and a family member at HP on the Itanium project.
 
binmaze said:
I clearly remember that they said that. Though I cannot provide the link, since too much time has passed.
Find it and I'll believe you. Right now, I don't. If you can remember a name or specific phrase, just use google, but I think you'll find that you misread or misremembered it.
 
Sniping, not too many would say the same about him here.

It doesn't matter if you have a good point ... the way you are trying to make it makes it impossible for anyone to take you seriously.
 
Nvidia says NV40 should yield much better than NV30 due to the learning curve and better methodology.
Have they said this?
It would seem to be further evidence of bad design/engineering on NV30 rather than process issues (at least the way I read it).
 
MfA said:
Sniping, not too many would say the same about him here.

It doesn't matter if you have a good point ... the way you are trying to make it makes it impossible for anyone to take you seriously.
Fine then, I'll keep the insider info to myself.
I know the problems with strained silicon (more than just germanium and silicon) but will keep that to myself.
 
{Sniping}Waste said:
I'm 27 and have a year of semiconductor training, plus friends in the industry at companies like Micron and TI, and a family member at HP on the Itanium project.
Ooooh! Burn!

If only I worked longer in the semiconductor business, or was older than you, or had more friends in the industry at companies like NVIDIA, Cirrus, TI, ARM, Synopsys, or even the company I work at.

Then it wouldn't matter that my brother was only a roadie for Clint Black.

Oh wait, I have, I am, and I do. And he was.

Not that half of that is important.
 
Chalnoth said:
binmaze said:
I clearly remember that they said that, though I cannot provide a link since too much time has passed.
Find it and I'll believe you. Right now, I don't. If you can remember a name or specific phrase, just use google, but I think you'll find that you misread or misremembered it.
I couldn't find the very article I was referring to. But here is a close one:
And contrary to the rumor mill that the 0.13 micron manufacturing process delayed taping out the GeForceFX, Kirk blamed implementing these 32 processing units:

One of the reasons that NV30 took us so long is that everything top to bottom is 128-bit floating-point. So for the first time it's possible to make the pictures in hardware that you would make in software, because you don't trash your precision anywhere in the pipe.

Link
 
C'mon guys, let's try to keep it civil. There are few people in the semiconductor industry who can claim "I know it all." I'm certainly not one of them, and I very seriously doubt anyone on this board is. BUT... and this is important, many posters *do* work either in the industry or a tertiary related industry, and *CAN* make meaningful technical commentary from time to time.

{Sniping}Waste, I believe your assertion that "full-custom layout is possible with modern (multi-million-gate) designs." But the percentage of IC designs utilizing that methodology is very small. As Russ pointed out, the industry as a whole relies on more conventional (and less labor-intensive) automated place & route.

Your post sounded as if full-custom layout were commonplace, from the very simple digital ICs all the way to the very complex (like the NV40/R420). It isn't. I won't comment on analog and mixed-signal ICs, as these parts tend to have a good portion of custom logic. ('Design-reuse' strategies for the analog world aren't yet as practical as digital design reuse.) But for digital ASICs, there are plenty of reasons full-custom design is such a rarity.

First, it's much more labor-intensive than traditional standard-cell (gate-level) design. Second, it's rarely necessary; if a design team discovers their standard-cell layout can't reach timing closure at 180nm, their first choice is to retarget to 150nm (or 130nm) -- not to switch over to full-custom! And third, full-custom is risky -- it assumes complete trust in the foundry's process characterization data and physical design kit. Some foundries don't offer detailed PDKs and discourage the practice altogether.

As for Micron and Intel, both companies sell ICs which require specialized IC-design practices. High-performance (GHz-clock) CPUs deal in operating frequencies (>2GHz) well beyond any 0.13/90nm standard-cell library, existing or planned. (And the exotic nature of the clock-distribution network is a first-order factor in layout/floorplanning considerations.) Conventional (discrete-component) DRAM can't be fabbed on ordinary 'logic' process lines, so in a sense, DRAM is already special. Well, perhaps it could be, if the designer traded density in exchange for reducing the additional masking steps. And finally, Micron and Intel fab their designs in-house, where they have *complete* control over the manufacturing process.

Zenthought, as far as I can tell, the 'roadmap' you speak of (ITRS?) defines process parameters, materials, etc. -- but it doesn't address the 'human design' problem -- i.e., how does a design team use 2.5X more gates in a logical fashion? I suppose that's something each individual design team and its management must figure out.

But more importantly, the recent trend has shown that ever-shrinking processes are breaking old tools due to poor correlation between the tool's circuit model and reality. For a long time, you could just take the same circuit-timing engine, reload it with new data for 0.65u, 0.35u, 0.25u, 0.18u, etc., and expect it to crank out reliable information. Then designers took their 180nm design tools, loaded 130nm libraries, compiled, and expected them to simply work. They didn't... I guess the analogy is like the first-year physics student applying Newtonian mechanics (F=ma) to sub-atomic (quantum) quantities and wondering why his formulas don't correlate with observation. This has been addressed over time, but it highlights the lag between the 'bleeding edge' of process nodes and their general usability (at the IC designer's console).

For example, Synopsys and Cadence market "90nm-ready" logic-synthesis tools, with special features like "dual-Vt library support" (to better address gate leakage) and "crosstalk avoidance" (for signal integrity). (Alright, I admit some of those are marketing buzzwords.) The new features specifically address 90nm/130nm design issues (ones the old tools didn't adequately predict). They aren't just productivity boosters; they are baseline features to bring the synthesis tools to a *usable state* at 90nm. Will there be further problems at 65nm? Who knows...
 
binmaze said:
I couldn't find the very article I was referring to. But here is a close one:
And contrary to the rumor mill that the 0.13 micron manufacturing process delayed taping out the GeForceFX, Kirk blamed implementing these 32 processing units:

One of the reasons that NV30 took us so long is that everything top to bottom is 128-bit floating-point. So for the first time it's possible to make the pictures in hardware that you would make in software, because you don't trash your precision anywhere in the pipe.

Link
1. I really don't see nVidia publicly stating something that would damage their relationship with TSMC. They depend on TSMC pretty heavily (but you'll note that it was around that time that they started to move to IBM).
2. He didn't rule out other factors (e.g., fab problems at TSMC).
3. He did not say anything about 16-bit FP.

I still believe that what happened was when nVidia first decided on the process for the NV30, TSMC had an optimistic outlook for the possibilities of their low-k .13 micron process. As the NV30 neared launch, it became apparent that this process would not be available on time. So, lots of extra work had to be done to get the NV30 operational on a normal .13 micron process, which reduced the performance of the final product.
 