IBM to produce Nvidia chips

Glonk said:
tazdevl said:
Intel will never farm out excess fab capacity.
They did for a while up until a few months ago...

But if they stopped, it's probably because they saw too many disadvantages/problems with it.
So maybe they're never gonna farm out excess fab capacity again.


Uttar
 
One of Intel's key competitive advantages is its manufacturing capability. It's the basis of its strategy: continue to innovate and price competitors out of the market through the implementation of more advanced manufacturing technologies.

There's a great book out on Intel's strategy, I'll post the title when I can track it down.

Farming out production capacity to a competitor (and ATI is a competitor) will never happen (yes, I'll say never)... especially on an advanced process.

To the best of my knowledge, Intel has not farmed out production capacity from its fabs. The only thing I can see changing that is if they're strapped for cash or if there are significant management changes in the organization.

Glonk, assuming they did farm it out, who did they farm it out to and was it an advanced process? Name please.

Back on topic... NV40 = IBM from what I've heard, NV35 = TSMC.
 
Brimstone said:
I think it's nice to see a good fab in America getting some more work. I doubt that Nvidia's motivation has anything to do with a sense of nationalism, which would be a terrible reason anyway. The decision is more a reflection of the high-quality, productive workforce the IBM plant can offer Nvidia.

I just want to comment that I don't care whether it is in Canada or Europe, but it is nice when companies in locations that treat their workers decently/well actually get work. (Plus they tend to destroy the environment less.)
 
tazdevl said:
One of Intel's key competitive advantages is its manufacturing capability. It's the basis of its strategy: continue to innovate and price competitors out of the market through the implementation of more advanced manufacturing technologies.

There's a great book out on Intel's strategy, I'll post the title when I can track it down.

Farming out production capacity to a competitor (and ATI is a competitor) will never happen (yes, I'll say never)... especially on an advanced process.

To the best of my knowledge, Intel has not farmed out production capacity from its fabs. The only thing I can see changing that is if they're strapped for cash or if there are significant management changes in the organization.

Glonk, assuming they did farm it out, who did they farm it out to and was it an advanced process? Name please.

Back on topic... NV40 = IBM from what I've heard, NV35 = TSMC.

Intel did manufacture PA-RISC and Alpha cores. Of course, strategic decisions and lawsuits played no small part in those situations.

Also, using a slightly older Intel process might still be more advantageous than being a test subject for TSMC's advanced process.
 
Glonk said:
tazdevl said:
Glonk, assuming they did farm it out, who did they farm it out to and was it an advanced process? Name please.
http://www.eetimes.com/story/OEG20010924S0067

They focused on communications chips.

Hmm, I'm not understanding that article. Reading the first few paragraphs, it sounded like they were outsourcing their design work and would use other foundries. But then they also talk about outsourcing their manufacturing. I'm confused.
 
Glonk wrote:

> http://www.eetimes.com/story/OEG20010924S0067
>
> They [Intel] focused on communications chips.

I have to agree with Deflection. It sounds like Intel is forming an ASIC-services business. Intel develops (or licenses/buys) IP-blocks, then offers them to customers. The customer gets a 'one-stop shop' for all their comm ASIC development needs (radio-frequency blocks, tuner blocks, volatile and non-volatile memory blocks, etc.), while Intel manages the integration/layout/verification of the customer's design.

I can think of at least one other US foundry company doing the same. LSI comes to mind. LSI owns and operates its own chip foundries, yet LSI outsources (some) chip production to other foundries (LSI uses TSMC). Quite simply, LSI cannot compete with TSMC on a price/wafer basis. Someone at LSI made a strategic decision to move their ASIC business to a 'fabless company' model.
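
To make the price/wafer point concrete, here's a back-of-the-envelope sketch in Python of how wafer price and yield combine into cost per good die. All numbers are hypothetical, purely illustrative -- not real LSI or TSMC pricing.

def cost_per_good_die(wafer_price, gross_dice_per_wafer, yield_fraction):
    # Effective cost of each sellable die once yield losses are counted.
    return wafer_price / (gross_dice_per_wafer * yield_fraction)

# Same chip at two foundries: the cheaper wafer wins
# even at a somewhat lower yield.
print("own fabs:     $%.2f per good die" % cost_per_good_die(4000.0, 300, 0.70))  # ~$19.05
print("merchant fab: $%.2f per good die" % cost_per_good_die(3000.0, 300, 0.65))  # ~$15.38

If the merchant foundry's wafers are cheap enough, the in-house fab loses even with a yield advantage, which is presumably the math behind LSI's move.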

As far as Intel selling its own excess wafer manufacturing capacity to third-parties, which is the issue at hand, IMHO, this seems unlikely. I look to IBM's track-record as an ASIC-foundry. Based on what my coworkers told me, IBM was historically a 'difficult company to work with.' To sum up IBM's attitude, one coworker parodied "We are leaders in process technology, and to be blessed by our superior knowledge, you must do everything our way, and we'll complete YOUR work on OUR schedule." As a technology leader, IBM's core business was its own microprocessor products, and everything else (foundry-business) was of secondary importance. Every dealing an IBM-foundry customer had with IBM hammered home this underlying reality.

I scrutinize IBM's track record, because as companies, IBM and Intel are similar enough to warrant comparison. (Both make CPUs, both research/develop/implement their leading edge process-foundry technologies, and both could potentially sell foundry-service on the side.)

http://www.eetimes.com/story/OEG20020604S0026
http://www.eetimes.com/story/OEG20020624S0042

These two EETimes articles talk about IBM's refocus on its ASIC-foundry business -- not much detail, but my interpretation is that IBM will fashion its ASIC business around offering high-end capability to a select 'elite.' It has already done so for Nintendo, Xilinx, and NVidia (and a few others). Average/low-volume customers (and ASIC newbies) need not apply!
 
Sure looks to me, though, like we have another management problem within nVidia--which is probably inflamed by the fact that nVidia has neither the time nor the money to go back to the drawing board for nv30 (this is all just IMO, so take it with a grain of salt...;))

Right now the thrust within the company seems to be "damn the torpedoes, full speed ahead," in the sense that nVidia is clearly staking its continued success in the market on the IBM FAB being able to do what the combined resources of nVidia and TSMC were unable to do with nv30: get good yields at .13 microns.

The question at this stage seems to be whether nVidia's nv30 + designs are doable at .13 microns as a commercially viable, commercially competitive product. Prior to the IBM announcement the answer seemed a qualified "no," and the IBM announcement seems to me to do little more than add fuel to the speculation that there is just something *generally* "impossible" (to borrow the nVidia CEO's own words) about nv30 + architectures not only at .15 microns, but also at .13 microns.

In point of fact, we have gobs of objective proof that nVidia has had a lot of trouble with nv30 and .13 microns at TSMC. We have nothing but the sheerest speculation to indicate that ATi is encountering similar difficulties with TSMC's .13-micron process. (For one thing, there is simply no need for ATi to ship a .13-micron R400 as long as the .15-micron R350 remains the leader in its class by a wide margin--a margin not to exclude great yields making for widespread product availability.)

Since much of the difficulty in manufacturing a chip at any given process size stems directly from the architecture and circuitry of the chip itself, I think attempts to paint TSMC's .13-micron process as "universally flawed" have been far too general and broad. What I'm saying is that if ATi does have difficulty with R400 at .13 microns, there is no reason to expect those difficulties to be of the same type or scope as the ones nVidia has encountered with nv30. As the architectures are different, so also will the problems be different. It seems we are painting the respective companies with the same wide brush--which is probably a mistake, in that we are making too many generalizations.
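
To see why the same process can yield very differently for different chips, here is a minimal sketch in Python of the classic first-order (Poisson) defect-density yield model. All numbers are hypothetical, purely for illustration -- not actual TSMC or IBM figures.

import math

def poisson_yield(die_area_cm2, defect_density_per_cm2):
    # Classic first-order yield model: Y = exp(-A * D0).
    # A bigger or more aggressively circuited die is hit harder
    # by the very same defect density.
    return math.exp(-die_area_cm2 * defect_density_per_cm2)

d0 = 0.5  # hypothetical defects per cm^2 on a maturing process
print("1.0 cm^2 die: %.0f%% yield" % (100 * poisson_yield(1.0, d0)))  # ~61%
print("2.0 cm^2 die: %.0f%% yield" % (100 * poisson_yield(2.0, d0)))  # ~37%

Same process, same defect density, very different yields--which is why one company's troubles on a process say little about another's.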

For whatever internal reasons, it looks as if nVidia is blaming the FAB and not the general engineering principles behind nv30 and its successors. And so they've switched FABs (instead of designs, is the tacit implication). This can only push nVidia's nv30 + proof of concept further out into the future, with no guarantee of success, it seems to me. The possibility that bothers me is that nVidia may discover (and at that point it will be quite too late) that what is "impossible" at .15 microns may not only be "impossible" at .13 microns too, but just generally "impossible" all the way around. It may very well be that budgetary and time constraints prevent nVidia from an nv30 redesign, and that all that's left is the attempt to find a FAB able to manufacture their upcoming chips to spec with satisfactory yields. nVidia may have already "blown its wad" on nv30 + R&D, so to speak.

(Again, this just my own idling conjecture...;))
 
I would think that depends on whether Nvidia had been working with IBM behind closed doors before the announcement of the partnership. If this is only a hope on Nvidia's part that IBM can overcome the problems still remaining, then Nvidia may have more problems ahead. If IBM will manufacture NV35 chips, it really sounds like Nvidia already knows they can do it, at good enough yields and so on. Otherwise, it sounds like IBM will be working with Nvidia on the NV40 design.
 
It seems to me that the NV35 will *not* be manufactured by IBM.
But it also seems to me that nVidia will have to do a refresh of the NV35 in order to beat the R390. And that would obviously be manufactured by IBM.

I'd guess that would be available around early Q4 2003, maybe slightly later. I'd be surprised if they'd wait for the NV40 to use IBM.


Uttar
 
Uttar said:

It seems to me that the NV35 will *not* be manufactured by IBM.
It might be ported to IBM in the future (just for practice with IBM's fabs), but the first cards (coming in May and June) will use chips from TSMC.

But it also seems to me that nVidia will have to do a refresh of the NV35 in order to beat the R390.
WTF is R390? :) ATI is having problems with 0.13 right now... so it is really hard to tell when the 0.13 version of R350 (R9900Pro) will come to market. They still haven't even launched cards based on RV350... There are some suspicions that R9600/Pro might end up just like R9500/Pro--a version of R350 cut down via software and the PCB. That would be good for us (it'll be faster than RV350, I suppose), but very bad for ATI's financials.

And, according to the latest rumors, R400 has been pushed back to 2004, while NV40 is still on track for Christmas 2003 (being made by the NV2x team, BTW).

But those are just rumors :) They could be wrong...
 
There were some reports of delays over at X-Bit Labs, and there was one other source, but I can't remember where right now.
 
DaveBaumann said:
Well, I guess XBit haven't seen boards yet. Doesn't mean everyone hasn't though.

Nah, just the vast majority.
If they can't deliver, it's their fault.

I really wonder what the source of the problems with the NV30 was. .13u? I doubt the move to .13u by itself would cause that much of a delay.
 
IBM are attacking TSMC and UMC with a nasty pricing war. Seems like winning the Nvidia contract has emboldened them a lot!

Here.
 