Entropy said:
Graphics ASICs are excellent examples of chips that benefit greatly from higher transistor density, so it seems a given that ATI and nVidia will use finer lithographic methods almost as soon as they are available. They may not be in the absolute forefront - their chips aren't expensive enough to warrant the real bleeding edge - but close enough.
As Entropy pointed out, TSMC isn't known as a 'technology' leader; their historical strength has been cost and process reliability (low defect rate, very good agreement between predicted wafer variation and actual measured wafer variation). Having said that, TSMC has now reached the point where they can potentially become a 'victim' of their own success:
1) TSMC is far from the 'cheapest' (based on delivery of finished, but untested, wafers). TSMC's nearest competitor in terms of business position, fab capacity, and technology - UMC - is roughly 20% cheaper across the board on standard digital-logic processes. SMIC and Chartered are up to 50% cheaper on the 'mature' technology nodes (0.18u, 200mm wafers).
2) TSMC's leading-edge production process (0.13u) is overbooked. They were overbooked back in the golden days of 2000-2001, and now they're overbooked again. UMC is nearly saturated as well.
3) Broadcom and LSI Logic are both large fabless companies that are migrating from TSMC to SMIC, presumably due to more competitive pricing at SMIC. So at least some companies are willing to gamble on the cheaper fabs for potentially lower die costs.
As a whole, discrete-component (as in retail-bought add-in boards) graphics ASICs are high-volume commodity ICs. They thrive on cost-optimized, not performance-optimized, foundry processes. ATI/NVidia probably care foremost about raw cost per wafer, and only afterwards worry about how to squeeze the last MHz out of their chosen fab.
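To make the 'cost per wafer' point concrete, here's a rough back-of-the-envelope sketch of how wafer price, die size, and yield turn into cost per good die. Every number in it (wafer prices, die area, yield) is an invented placeholder for illustration, not an actual TSMC/UMC/SMIC figure, and the dies-per-wafer estimate is just the common area-minus-edge-loss approximation.

```python
# Back-of-the-envelope foundry die economics.
# All numbers are made-up placeholders, NOT real foundry figures.

import math

def gross_dies_per_wafer(wafer_diameter_mm, die_area_mm2):
    """Approximate candidate dies on a round wafer using the common
    area/edge-loss estimate (ignores scribe lines and reticle limits)."""
    r = wafer_diameter_mm / 2.0
    return int(math.pi * r**2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2.0 * die_area_mm2))

def cost_per_good_die(wafer_cost, wafer_diameter_mm, die_area_mm2, yield_frac):
    """Wafer cost divided by the expected number of yielding dies."""
    gross = gross_dies_per_wafer(wafer_diameter_mm, die_area_mm2)
    return wafer_cost / (gross * yield_frac)

# Hypothetical baseline: 200mm wafers, a ~200 mm^2 graphics die, 70% yield.
baseline = cost_per_good_die(wafer_cost=3000, wafer_diameter_mm=200,
                             die_area_mm2=200, yield_frac=0.70)

# The same design shrunk to a finer node: die area drops ~25%,
# but the leading-edge wafer costs more per unit.
shrunk = cost_per_good_die(wafer_cost=3600, wafer_diameter_mm=200,
                           die_area_mm2=150, yield_frac=0.70)

# A competing foundry quoting ~20% less per wafer on the same node.
cheaper_fab = cost_per_good_die(wafer_cost=2400, wafer_diameter_mm=200,
                                die_area_mm2=200, yield_frac=0.70)

print(f"baseline    : ${baseline:.2f} per good die")
print(f"after shrink: ${shrunk:.2f} per good die")
print(f"cheaper fab : ${cheaper_fab:.2f} per good die")
```

With these placeholder inputs, both routes cut the per-die cost by a similar margin, which is why a high-volume commodity part can be just as happy chasing cheaper wafers as chasing the newest node.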
Yet, regarding 1): NVidia has fabbed some 'high-end' parts at IBM, but I can't imagine IBM is able or willing to make a long-term commitment to discounted wafers for NVidia. And why would they? In a capacity crunch, would you rather fab G5 PowerPCs that sell for $500, or NVidia VPUs that sell to NVidia for $50? That's why fabless companies try to avoid large IDMs like IBM: you're a small-fry customer (compared to the IDM's internal customers) with no real leverage.
And, regarding 2): ATI (or rather, ArtX) did tape out the Nintendo Gamecube's GPU at NEC. (But methinks the customer's politics - Nintendo's - and the design's large on-die 1T-SRAM played a key role in that decision.) Based on what my coworkers have told me, the Japanese IDM foundries do not offer competitive wafer pricing (unless you're a very special customer like Nintendo, Sony, etc.). They do offer better system-on-a-chip integration (more IP, packaging/test assistance), but in business terms, all IDMs share IBM's conflict of interest (external customers vs. internal customers).
I'll let someone else guess/speculate/predict ATI/NVidia's future wheeling and dealing at 90nm. Despite possessing some knowledge of the semiconductor industry, I have enough common sense to realize that 1) the billion-dollar GPU market doesn't obey 'general trends' 100% of the time, 2) skimming snippets of eetimes.com articles doesn't make me any more qualified to comment on ATI/NVidia than another guy who reads ign.com all day, and 3) other people on this board work with ATI/NVidia, and they are in a much better position to comment on this.