NVIDIA to Use TSMC's 0.11 Micron Technology

rainz

Veteran
Santa Clara, CA and Hsin-Chu, Taiwan – February 24, 2004 – NVIDIA Corporation (Nasdaq: NVDA) today confirmed that it will be one of the first semiconductor companies to manufacture select upcoming graphics processing units (GPUs) on Taiwan Semiconductor Manufacturing Company's (TSMC's) (TAIEX: 2330, NYSE: TSMC) 0.11 µm (micron) process technology. NVIDIA will combine TSMC's 0.11 micron process with its own innovative engineering designs to deliver high performance and low power consumption in a graphics processor.

TSMC’s 0.11 micron process technology is fundamentally a photolithographic shrink of its industry-leading 0.13 micron process. The process will be available in both high-performance and general-purpose versions using FSG-based dielectrics. Though actual results are design-dependent, TSMC’s 0.11 micron high-performance process also includes transistor enhancements that improve speed and reduce power consumption relative to its 0.13 micron FSG-based technology.

TSMC began 0.11 micron high-performance technology development in 2002 and product-qualified the process in December 2003. Design rules, design guidelines, and SPICE and SRAM models have been developed, and third-party compilers are expected to be available in March. Yields have already reached production-worthy levels, and the low-voltage version has already ramped into volume production. The 0.11 micron general-purpose technology is expected to enter risk production in the first quarter of next year.

Source: NVIDIA

(From Guru3D)
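
For scale, here is a quick back-of-the-envelope sketch of what a pure photolithographic shrink from 0.13 micron to 0.11 micron means for die size, assuming an ideal linear shrink (real shrinks rarely scale every structure perfectly, so the numbers are illustrative only):

```python
# Ideal linear-shrink arithmetic; real processes shrink less uniformly.
old_node, new_node = 0.13, 0.11        # feature sizes in microns

linear = new_node / old_node           # ~0.846x per side
area = linear ** 2                     # ~0.716x of the old die area

print(f"linear shrink:    {linear:.3f}x per side")
print(f"die area:         {area:.3f}x (about {100 * (1 - area):.0f}% smaller)")
print(f"gross dice/wafer: roughly {1 / area:.2f}x (ignoring edge effects)")
```

That works out to roughly a 28% smaller die, which is where most of the cost appeal of an optical shrink comes from.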

 
Confirming that they will be one of the first in a two-horse race doesn't really say much at all.
 
{Sniping}Waste said:
Nvidia has a problem with its IC layout team. It just plain SUCKS and they are a bunch of lazy auto-routers.

I don't think you have a clue what you are talking about.
 
Truthfully, in .13 and .11, everybody is a bunch of "lazy autorouters". Doing it by hand is plain nuts.

Or so I've been told.
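
For anyone wondering what an "autorouter" actually does: at its core, routing a net is a pathfinding problem on the chip's routing grid. Below is a minimal Lee-style maze router in Python, purely illustrative (the function and grid here are invented for the sketch; production routers also juggle multiple metal layers, timing, and design rules):

```python
from collections import deque

def maze_route(grid, start, goal):
    """Lee-style maze router: BFS for a shortest rectilinear path
    from start to goal, stepping around blocked cells (True)."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}               # visited set + backtrace pointers
    frontier = deque([start])
    while frontier:
        cell = frontier.popleft()
        if cell == goal:               # retrace the wavefront to the source
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nxt in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and not grid[nr][nc] and nxt not in prev:
                prev[nxt] = cell
                frontier.append(nxt)
    return None                        # net is unroutable on this grid

# Route one net across a 4x4 grid with a two-cell obstacle.
blocked = [[False] * 4 for _ in range(4)]
blocked[1][1] = blocked[1][2] = True
print(maze_route(blocked, (0, 0), (3, 3)))
```

The hard part, and presumably where the "lazy" jab is aimed, is everything layered on top: net ordering, rip-up and re-route, congestion, and timing-driven cost functions.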
 
RussSchultz said:
Truthfully, in .13 and .11, everybody is a bunch of "lazy autorouters". Doing it by hand is plain nuts.

Or so I've been told.

Granted, new GPUs are much more complex than the Pentium 4, but IIRC, the P4 had its layout done by hand (at least for the major functional units)... not sure if Intel kept doing this for Prescott, though.
 
lost said:
RussSchultz said:
Truthfully, in .13 and .11, everybody is a bunch of "lazy autorouters". Doing it by hand is plain nuts.

Or so I've been told.

Granted, new GPUs are much more complex than the Pentium 4, but IIRC, the P4 had its layout done by hand (at least for the major functional units)... not sure if Intel kept doing this for Prescott, though.

Well, if they didn't, it would explain the reasonably unimpressive performance of the chip.
 
Granted, new GPUs are much more complex than the Pentium 4, but IIRC, the P4 had its layout done by hand (at least for the major functional units)... not sure if Intel kept doing this for Prescott, though.

Judging engineering complexity solely by transistor count is a major fallacy IMHO, as, amongst other things, GPUs operate at much longer cycle times and employ massive functional redundancy. Their control flow is also much less complex. Another factor is the vastly larger development resources that go into designing Intel's processors.
 
lost said:
Granted, new GPUs are much more complex than the Pentium 4

Complex, how???

A chip that's designed to run a mish-mash of old and new instructions at multi-GHz frequencies surely beats out what is to a large extent nothing but a cut-and-paste jobbie in the complexity department.

If a new GPU really were more complex, we'd have to wait YEARS between new hardware generations.
 
This conversation again? They are very different.

As with last time, I would say that the chief engineering challenge with an x86 CPU is correctness - it's got to be absolutely right in every respect.

x86 decoding, etc. is reasonably complicated but fundamentally it's just a good old state machine. I'd guess it's not that much of the die.
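
To make the "good old state machine" point concrete, here is a toy byte-stream decoder with that control shape. The encoding rule is completely invented for illustration and has nothing to do with real x86:

```python
# Toy variable-length instruction decoder; the encoding rule is
# invented for illustration and bears no relation to real x86.
def decode(stream):
    insns, state, cur = [], "OPCODE", None
    for byte in stream:
        if state == "OPCODE":
            cur = {"opcode": byte, "operands": []}
            # Invented rule: high bit set means one operand byte follows.
            state = "OPERAND" if byte & 0x80 else "DONE"
        elif state == "OPERAND":
            cur["operands"].append(byte)
            state = "DONE"
        if state == "DONE":
            insns.append(cur)
            state = "OPCODE"
    return insns

print(decode([0x01, 0x8A, 0x42, 0x03]))
# -> [{'opcode': 1, 'operands': []},
#     {'opcode': 138, 'operands': [66]},
#     {'opcode': 3, 'operands': []}]
```

Real x86 decode adds prefixes, ModRM/SIB bytes, and so on, but the control structure is the same flavor: a state machine walking a byte stream.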
 
IIRC, the Prescott FPU was laid out using automated tools (developed in-house at Intel) - dunno about the rest of the processor, but I would expect automated tools to have been used extensively for, say, register files, SRAMs, and clock net generation, at least.

As for transistor count as measure of complexity, that has been discussed here several times before; it's not a very good metric - most people would say that a Prescott or an R350 is more complex than a 512Mbit DRAM, despite the latter having easily 4 times as many transistors. Better measures of complexity would be transistor count divided by regularity (where regularity is a measure of how much repeating identical subcircuits there are in the circuit; the DRAM would have hundreds of times higher regularity than the Prescott, with GPUs falling somewhere between), man-hours that went into the design, or HDL code lines (for designs written in HDLs).
 
I really don't see why people should be doing layouts by hand. The calculations on what is and is not possible are quantum in nature, and thus very challenging to do properly. The skill lies in developing the appropriate approximations (doing the full calculation is simply impossible when it comes to complex quantum systems), and in developing the best algorithms in automatic routing given the constraints laid by those calculations on what can and cannot be done.

Granted, there are always flaws in the best of algorithms, which may require people to modify designs by hand, but the basic layout should never be done by hand these days.
 
Wasn't NVIDIA's transition to .13 micron for its current-generation GPUs one of the major reasons why they were so slow to roll out cards based on them?

I seem to recall reading that a lot of the delay was blamed on their use of this, at the time, unproven process... Are they gambling again this time around?
 
Runner said:
Wasn't NVIDIA's transition to .13 micron for its current-generation GPUs one of the major reasons why they were so slow to roll out cards based on them?

I seem to recall reading that a lot of the delay was blamed on their use of this, at the time, unproven process... Are they gambling again this time around?

The reason for the slowness was that nVidia developed for the Black Diamond low-k process, not the ordinary 0.13 micron process.

When they were forced to switch to the ordinary process they were further delayed by TSMC revising its design libraries and rules for the process.
 
radar1200gs said:
The reason for the slowness was that nVidia developed for the Black Diamond low-k process, not the ordinary 0.13 micron process.

When they were forced to switch to the ordinary process they were further delayed by TSMC revising its design libraries and rules for the process.

Ok, but are they gambling again this time around? :) Or is .11 mature enough?
 