The Process Race: Economic POV?

Arun

In the GPU industry, you can see the economics of the process race at work: AMD and NVIDIA follow quite different timetables, with the latter justifying its timetable through economics rather than technical failures. Some of the factors at play include:
- Better perf/watt on newer process nodes, an advantage in the laptop/handheld markets.
- Potentially lower sustained yields, due to less time and third-party experience on the process.
- Always lower yields at first.
- More transistors per wafer (an advantage).
- Higher price per wafer.
- Higher risk.
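As a rough sketch of how these factors trade off, here's a back-of-the-envelope cost-per-good-die model. All numbers below are hypothetical and purely for illustration: more dies per wafer pulls the cost down, while a pricier wafer and higher early defect densities pull it up.

```python
import math

def dies_per_wafer(wafer_diam_mm, die_area_mm2):
    # Standard approximation: gross dies minus an edge-loss term.
    r = wafer_diam_mm / 2
    return int(math.pi * r**2 / die_area_mm2
               - math.pi * wafer_diam_mm / math.sqrt(2 * die_area_mm2))

def cost_per_good_die(wafer_cost, wafer_diam_mm, die_area_mm2, defects_per_mm2):
    # Simple Poisson yield model: Y = exp(-D * A).
    die_yield = math.exp(-defects_per_mm2 * die_area_mm2)
    good_dies = dies_per_wafer(wafer_diam_mm, die_area_mm2) * die_yield
    return wafer_cost / good_dies

# Hypothetical comparison: a 100 mm^2 design on a mature 65nm process
# vs. the same design shrunk to ~55 mm^2 on a new 45nm process with a
# pricier wafer and a worse early defect density.
mature  = cost_per_good_die(3000, 300, 100, 0.002)
leading = cost_per_good_die(4500, 300, 55, 0.005)
print(f"65nm: ${mature:.2f}/die, 45nm: ${leading:.2f}/die")
```

Depending on the assumed wafer price and defect density, either node can come out ahead, which is exactly why the decision is an economic one rather than a purely technical one.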

But these are not the only companies taking very different approaches to the process race. Some examples, excluding Intel/AMD:
- Texas Instruments is demonstrating 45nm chips with baseband integration today, but the new OMAP35xx family remains on 65nm.
- Qualcomm has functional 45nm chips today, and they were the first to tape-out on TSMC's 45LP process. These chips are digital-only.
- Broadcom is asserting that they won't switch to 45nm soon, because 65nm is a 'very, very good' process node and 45nm won't be cost effective enough for some time. Broadcom integrates plenty of analogue and RF on most of its chips.
- Atheros, which claims to have (by a huge margin and with many industry experts believing it) the lowest-power single-chip WiFi solution in the industry, is still on 130nm. The majority of the area and power probably come from the RF.
- Icera, which is a UK baseband startup with lots of funding, only got 65nm samples back from the fab recently. Icera's chips are digital-only, but they also use a lot of full-custom logic, which takes longer to design.
- CSR, which manufactures single-chip Bluetooth (and WiFi) solutions, will only start mass production on 90nm in late 2008. Economics once again, in part due to the amount of RF that doesn't scale.

Certainly one tendency there is that companies with low levels of analogue/RF integration tend to lead the process race, and that makes a lot of sense. However, even the likes of Texas Instruments aren't being *that* aggressive in terms of process technology this time around (OMAP35xx chips that'll start sampling in 2H08 are still on 65nm).

One reason for Broadcom and TI's slower pace of adoption may be the lack of public information about the power efficiency of TSMC's 45LP process. Historically, they've nearly always disclosed that data in their PRs or on their website - but this time they haven't, and neither has UMC. If that implies the power efficiency gains are lower than usual, leading the process race becomes less of a necessity in the handheld market.

However, it still seems to me that it's a good idea to lead the process race when you're digital-only like Qualcomm or TI, and this leads me to what I was wondering about: does anyone here have a good idea of why you wouldn't want to do so when not integrating analogue/RF? Especially in the handheld market where power efficiency matters and design cycles are so long that process yields at the start of mass production are nearly always good.

The only reason I can think of is my 'lower sustained yields' point, which I've heard from a few sources including a semi-old NVIDIA presentation by Chris Malachowsky. I'm not sure I understand all the dynamics there though, and I'd be curious if anyone has more knowledge than I do there. And does anyone have an idea of other factors that might come into play?
 
I was under the impression that TI has pretty much given up on in-house digital process development, though perhaps it will continue to try to scale analog products.

I'm only vaguely aware of the analog side of the equation, but I've seen commentary on the slower progress in developing on-chip analog components.

It's already been commented before that analog doesn't scale as easily as digital down to finer geometry, and I've read (but cannot verify) that one big complaint is that process variation can hurt the yield for analog devices even more than digital.
 
It's already been commented before that analog doesn't scale as easily as digital down to finer geometry, and I've read (but cannot verify) that one big complaint is that process variation can hurt the yield for analog devices even more than digital.

Analog devices replicate/amplify signals, and you don't want signals to be distorted in the process. So in this case you want consistent transistor properties. On the other hand, you only need to distinguish between on and off in a digital circuit.
 
I was under the impression that TI has pretty much given up on in-house digital process development, though perhaps it will continue to try to scale analog products.
Yeah, I was still under the impression they would be very aggressive in terms of process technology at the foundries though, but apparently that won't be the case (at least for OMAP).

I'm only vaguely aware of the analog side of the equation, but I've seen commentary on the slower progress in developing on-chip analog components.
A very easy way to consider that factor is to look at Chipidea's IP: besides USB, they don't really have anything on 45nm yet despite quite a few companies having taped-out digital chips in the process. And RF is even worse obviously...

and I've read (but cannot verify) that one big complaint is that process variation can hurt the yield for analog devices even more than digital.
bearmoo said:
Analog devices replicate/amplify signals, and you don't want signals to be distorted in the process. So in this case you want consistent transistor properties. On the other hand, you only need to distinguish between on and off in a digital circuit.
That makes sense. I wonder if replicating the analogue on different sides of the die (for redundancy) would fix the yield problem. Of course, that's far from the cheapest solution... Do you have any idea if this kind of variability improves over the process' lifetime?
 
That makes sense. I wonder if replicating the analogue on different sides of the die (for redundancy) would fix the yield problem. Of course, that's far from the cheapest solution... Do you have any idea if this kind of variability improves over the process' lifetime?
Analog circuits are probably big, not sure you'd want to spend that much area on replication...
Also, the variability problem isn't just about uniformity across the wafer or die: even a mismatch between two devices right next to each other can be a problem.
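The classic way to quantify that nearby-device mismatch is Pelgrom's model, where the standard deviation of the threshold-voltage difference between two identically drawn transistors scales inversely with the square root of gate area. A minimal sketch (the matching coefficient below is a made-up value; real ones are process-specific):

```python
import math

def sigma_dvt_mV(a_vt_mV_um, w_um, l_um):
    # Pelgrom's model: sigma(delta-Vt) = A_vt / sqrt(W * L).
    # a_vt_mV_um is the process matching coefficient in mV*um
    # (hypothetical here); w_um and l_um are gate width/length in um.
    return a_vt_mV_um / math.sqrt(w_um * l_um)

# Shrinking both W and L by 2x doubles the mismatch:
big   = sigma_dvt_mV(4.0, 2.0, 0.5)   # 4 / sqrt(1.0)  = 4 mV
small = sigma_dvt_mV(4.0, 1.0, 0.25)  # 4 / sqrt(0.25) = 8 mV
print(big, small)
```

So even with perfectly uniform processing, smaller devices match worse, which is one reason analog designers resist minimum-size transistors on new nodes.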
 
Yeah, I was still under the impression they would be very aggressive in terms of process technology at the foundries though, but apparently that won't be the case (at least for OMAP).

TI has had a long tradition of reporting actual channel length instead of drawn length. This makes it look like TI has been at the forefront, pushing the lithographic limit for chip production, when in fact they have always (at least for the last 15 years) lagged behind.

Cheers
 
However, it still seems to me that it's a good idea to lead the process race when you're digital-only like Qualcomm or TI, and this leads me to what I was wondering about: does anyone here have a good idea of why you wouldn't want to do so when not integrating analogue/RF? Especially in the handheld market where power efficiency matters and design cycles are so long that process yields at the start of mass production are nearly always good.

The only reason I can think of is my 'lower sustained yields' point, which I've heard from a few sources including a semi-old NVIDIA presentation by Chris Malachowsky. I'm not sure I understand all the dynamics there though, and I'd be curious if anyone has more knowledge than I do there. And does anyone have an idea of other factors that might come into play?

From my analog design experience, most of the area goes to passive devices (resistors, capacitors), which usually do not scale well with miniaturization. Sometimes you win some area but lose heavily to (process) variation, which is a nightmare to design for. As for transistors, the good thing is that power efficiency goes up, but on the other hand they produce more noise, and their actual characteristics are quite unpredictable. On top of that, at smaller geometries more and more effects appear that the analog designer has to take care of, which means you need some good analog designers (who are a rare species, I reckon).
 
Analog doesn't scale well, if at all. The only reason to migrate to smaller process geometry in mixed signal chips is to avoid multiple die and get analog and digital on the same die. Well, that and to keep in the 'sweet spot' of the fab processes so your wafer costs are as low as they can be.

If you need a 1W speaker amp, the transistors, caps, and resistors need to be the right size to handle that power, and a minimum sized gate just isn't going to do it.

In other words, if you look at a company like TI or Maxim, you're going to find almost all of their analog chips at .18u or .25u (or even .35u). Nobody's looking to go to 45nm to improve their analog. It's 'worse' due to noise, and the area doesn't go down at all (which means the cost goes up).
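A toy arithmetic example of that cost argument (all numbers invented for illustration): if digital area halves at the new node but analog area stays put, and the new process costs more per mm^2, the analog portion of the die gets strictly more expensive.

```python
def die_cost(digital_mm2, analog_mm2, cost_per_mm2):
    # Naive model: die cost is total area times cost per mm^2.
    return (digital_mm2 + analog_mm2) * cost_per_mm2

# Hypothetical mature node: 40 mm^2 digital + 20 mm^2 analog at $0.10/mm^2.
old = die_cost(40, 20, 0.10)
# Hypothetical new node: digital halves to 20 mm^2, analog stays at 20 mm^2,
# but silicon costs $0.16/mm^2.
new = die_cost(20, 20, 0.16)

print(f"old: ${old:.2f}, new: ${new:.2f}")
# The analog block alone goes from 20 * 0.10 = $2.00 to 20 * 0.16 = $3.20,
# so the shrink's savings on the digital side are eaten by the analog side.
```

The more analog-heavy the chip, the worse this gets, which matches why the companies with lots of RF on-die (Broadcom, Atheros, CSR) are the ones lagging the process race.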
 