ATI to be TSMC's largest customer

Joe DeFuria said:
http://www.digitimes.com/NewsShow/Article.asp?datePublish=2003/08/27&pages=A4&seq=57

This quarter ATI is expected to replace Nvidia to become TSMC’s biggest customer, sources said.

Interestingly, the blurb also notes that ATI has qualified UMC's 0.13u process...

That is interesting (although they say verified - "qualified" would probably indicate wafer output of some kind) because aren't UMC already producing fairly complex ASICs at the 0.13u copper + low-k tier?

MuFu.
 
zurich said:
Not surprising really since NVIDIA has signed on with IBM.

Sort of like how ATi is projected to double its UMC orders...? I would tend to think that nVidia's getting hit rather hard in the system OEM segments in the mid-to-high range. I've never seen nv35U's hot, dual-backplane, large-heatsink reference designs capture the enthusiasm of the system OEM markets (Dell, etc.). ATi's had a big advantage there for quite some time. Additionally, they've been hit by yield problems for nv35 that, according to nVidia in reports I've read recently, won't be satisfactorily addressed until September (assuming those projections prove out). So I'd say that the lack of a solid OEM-attractive reference design, plus yield problems, has resulted in a drop-off in orders for nVidia's chips in the past year.

I also find it distasteful, if not misleading, how articles like this lump all of the discrete graphics chip markets together in one great big pot.
 
nVidia's been spreading between TSMC and IBM, and most recently they seem to be adding UMC. I'm wondering if splitting between all the different foundries will end up being overall better or worse for them. One can take certain advantages from each, but then one is also splitting one's design efforts between different procedures... <shrugs>
 
If you've got the fabs knowing you can switch, you have a better hand when it comes to the negotiating table.
 
RussSchultz said:
If you've got the fabs knowing you can switch, you have a better hand when it comes to the negotiating table.

A GPU design is pretty process-specific; it takes hard work to port from, for instance, IBM to UMC. That is why nVidia has a one-foundry-per-core policy.
 
A GPU design is pretty process-specific; it takes hard work to port from, for instance, IBM to UMC. That is why nVidia has a one-foundry-per-core policy.

I was under the impression that, until possibly recently, this wasn't the case. I know CPUs use a lot of custom work, but I was under the impression that, for the most part, GPUs have little to none.
 
Saem said:
A GPU design is pretty process-specific; it takes hard work to port from, for instance, IBM to UMC. That is why nVidia has a one-foundry-per-core policy.

I was under the impression that, until possibly recently, this wasn't the case. I know CPUs use a lot of custom work, but I was under the impression that, for the most part, GPUs have little to none.

All I/O parts and DACs are analog logic; they have to be process-specific. The GPU core itself is pretty easy to port, but you will lose the advantage you get from hand-tweaked speed paths.
 
It's not generally as hard as you're describing. You certainly don't have to start from scratch.

IBM, for example, has/had been offering free mask sets to get companies to try out their fabs. It _generally_ is an exercise in resynthesizing the chip and laying it out. Those aren't trivial tasks, but should be doable within a couple of man-months.
 
RussSchultz said:
It's not generally as hard as you're describing. You certainly don't have to start from scratch.

I did not mean to imply that it was that difficult; it is very doable. But it is still something that you want to do as little as possible.

IBM, for example, has/had been offering free mask sets to get companies to try out their fabs. It _generally_ is an exercise in resynthesizing the chip and laying it out. Those aren't trivial tasks, but should be doable within a couple of man-months.

Only a couple of man-months, are you sure? The analog parts are a bit more than just resynthesis; can 10 people really handle this job in a week?
 
Analog is a different story. Many times companies purchase the analog IP, because it's such mojo. These companies will have it ported already.

Or, the in-house analog engineer will run sims to make sure the characteristics of the process don't affect the design too much.
 
zurich said:
Not surprising really since NVIDIA has signed on with IBM.

It is a bit surprising, considering ATI also has UMC... and UMC fabs high-quantity parts like the RV280. Both nVidia and ATI have alternative fabs, and I'm willing to bet UMC has a higher volume of ATI's chip business than IBM has of nVidia's. (Don't know for sure, though.)

cthellis said:
nVidia's been spreading between TSMC and IBM, and most recently they seem to be adding UMC.

AFAIK, nVidia's only UMC business is through their recently acquired MediaQ.
 
IBM, for example, has/had been offering free mask sets to get companies to try out their fabs

If I'm not mistaken, many of the IDM-foundries (Samsung, IBM, LSI, etc.) offer free or greatly discounted mask sets. It's difficult to directly compare pricing schedules, since the service-oriented foundries generally deliver known good dies (tested/packaged), whereas TSMC/UMC deliver processed wafers. (There are exceptions to this rule.) While TSMC/UMC make you pay for the (expensive) mask set up front, their per-wafer pricing works out cheaper than the other arrangement (known good die).
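The tradeoff is basically an amortization question: the up-front mask set only pays for itself past some wafer volume. A toy back-of-the-envelope sketch, with made-up numbers (the mask-set price, per-wafer prices, and the break-even volume below are all illustrative assumptions, not real foundry pricing):

```python
# Hypothetical comparison: pure-play foundry (pay the mask set up front,
# cheaper processed wafers) vs. a service-oriented foundry whose
# testing/packaging is folded into a higher effective per-wafer price.
# All dollar figures are illustrative assumptions only.

def total_cost_pure_play(wafers, mask_set=700_000, per_wafer=3_000):
    """Up-front mask set plus a per-wafer charge (TSMC/UMC style)."""
    return mask_set + wafers * per_wafer

def total_cost_known_good_die(wafers, per_wafer_equiv=5_000):
    """No mask charge; known-good-die service priced per wafer's worth of die."""
    return wafers * per_wafer_equiv

# Find the first wafer volume where paying for the mask set up front wins.
break_even = next(w for w in range(1, 10_000)
                  if total_cost_pure_play(w) < total_cost_known_good_die(w))
print(break_even)  # prints 351 with these made-up numbers
```

With these assumed numbers the crossover is a few hundred wafers, which is why the pure-play arrangement tends to suit high-volume parts, while low-volume prototyping favors the discounted or free mask sets.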

That is interesting (although they say verified - "qualified" would probably indicate wafer output of some kind) because aren't UMC already producing fairly complex ASICs at the 0.13u copper + low-k tier?

In the context of the article, I'm *GUESSING* ATI ran a couple of prototype circuits through their internal design-flow, did a tape-out, then tested the resulting (engineering) silicon. It's basically a routine exercise to reconcile process characteristics of library/model versus the real-world silicon. Once upon a time, people only ever did this if they had some analog or mixed-signal circuitry in their designs. Nowadays, with those pesky "deep submicron effects", fabless companies are qualifying new processes for straight digital-logic designs.

Now I'm not sure how the service-based foundries (the ones that deliver 'known good die') operate, but I'm guessing they internally manage these details for the customer. The same work needs to be done regardless of the business contract, but the foundry hides this complexity from the customer, and generally just quotes a longer lead time from tapeout to production silicon. Wouldn't surprise me if elite foundry customers (like NVidia) get 'the inside track' on whatever goes on at the foundry.

This article http://www.eetimes.com/story/OEG20030807S0012 implies UMC lost some ground to TSMC at the 0.13u node, due to missteps related to low-k.
 
> FWIW, one of my production friends recently has said that IBM is very competitively priced.

> I'd presume in at least .18u and .13u. Dunno about the others.

I think there is a price-war on the more mature (0.18u and above) process nodes. While TSMC has maintained high utilization rates through the downturn, everyone else (UMC, etc.) suffered markedly lower utilization rates.

Basically, IBM and the other folks are fighting over table scraps that TSMC has dropped. Since TSMC's capacity is near saturation, I would have expected the 'other guys' to receive the spillover business by *default* (since TSMC is full). Yet, even with all their wafer pricing discounts -- UMC is 20% cheaper versus TSMC, Chartered and SMIC are up to 50% cheaper for 0.18u 200mm -- their marketshare has shrunk over the past year. Puzzling... << oops, IBM's marketshare grew dramatically from 2001 to 2002. >>

"IBM's foundry challenge"
http://www.reed-electronics.com/eb-...e&articleid=CA312953&pubdate=8/1/2003


I've heard the same thing, but people are worried that this is only a 'temporary' thing. Historically, IBM has been one of the most expensive fabs, due to their reputation for leading-edge technology. (And partly because IBM's foundry pricing reflects the profitability of IBM's internal ASIC/IDM divisions.) Until IBM has enough customers to reach some fab utilization quota, they'll aggressively push 'customer attraction', which includes greatly discounted wafer pricing. But how long will that last? They have surplus capacity due to the economic downturn, and when the economy picks up, that capacity could dry up very quickly.
 