Fablite + outsourcing + big foundries == trouble?

... is it "payback" time?

"We've been running [manufacturing capacity] maxed-out for three years. We are now running in a fab-tight mode," said Penn "The chip makers are building fabs after the demand, and this changes the rules. The industry just hasn't realized yet."

Penn also described the coming years as "pay-back time" as he expects leading foundries such as Taiwan Semiconductor Manufacturing Co. Ltd. (TSMC) to use their strong positions to raise the cost of wafers. Some observers believe previously pure-play foundries may even be tempted to enter the market place and sell direct to leading OEMs.
http://www.eetimes.com/electronics-news/4211515/Welcome-to-the-fab-tight-era-says-Penn
 
It's certainly possible to an extent. Companies like ATI and Nvidia have benefited from the fact that the prices TSMC could set had an upper boundary: prices had to stay low enough that sourcing excess production at TSMC was more advantageous than building another fab.

However, if we get into a situation where companies are de-emphasizing building fabs for their own products, and thus not building fabs for projected demand, then more of the pricing power moves to players such as TSMC. In other words, TSMC will have more power to dictate prices, because more players will be dependent on it rather than it being dependent on winning excess chip manufacturing from companies that fab the majority of their own chips.

So as more companies go fabless/fab lite/fab tight, TSMC gains more and more power to dictate wafer prices.

And considering TSMC has in the past made noises about being unable to recoup its investment at one wafer size and node before customers push it to move to the next... there is obviously a desire on their end to push up prices and/or slow the move to new wafer sizes and nodes.

Regards,
SB
 
Evolution is cleverer than you are. And so is Morris Chang. (okay, maybe not one of the better variants of the quote :p)

Penn is certainly very good at having controversial opinions. I don't really agree with him about leading-edge logic though; the economics of having your own fab just don't make sense unless there's no competition in the foundry business or you have an excess of cash. There was a risk that competition would slow down and TSMC would have more control over wafer prices, but thanks to Global Foundries I do not expect that to happen. Certainly some companies might have gone too far and wasted quite a bit of potential in the process - the Crolles group could be an example of that. But overall I think most companies have been quite pragmatic about this.

As for Sony/Nagasaki, I don't know the exact terms of the original agreement. I assume it forced (or at least encouraged) them to use that fab somewhat - so they couldn't become truly fabless via independent foundries even if they wanted to. That means their decision won't tell us much about fabless/fab-lite trends. A more extreme example in Japan is Fujitsu, which used to have its own fabs and even act as a foundry (e.g. the 65nm VIA Nano) but is now using TSMC exclusively at 40nm and below iirc.
 
Something is happening at Toshiba indeed:

An unexpected report surfaced that Japan’s Toshiba Corp. will be outsourcing an undisclosed percentage of its leading-edge logic production to South Korea’s Samsung Electronics Co. Ltd.

The news came from Reuters, which cited the Nikkei business daily as its source.

The reported deal is a surprise: Samsung and Toshiba are bitter rivals in the NAND flash and logic markets.

[...]
http://www.eetasia.com/ART_8800630067_480200_NT_614163f3.HTM

Also, the analyst might have a point about foundries jacking their prices way up, especially if only a few uber-mega foundries exist as everyone else goes fablite/fabless.
Is the fablite/fabless model sound for many companies or is it a "hey look at our last earning reports... imagine we cross the "semiconductor spending" section out... look how big Excel told me the profit would be if I set that field to 0!" ?
 
For most companies, not being fabless is fundamentally unsound, and for now that set of companies only looks set to grow.

One way to look at a new fab is as a multibillion-dollar up-front cost that comes with an obligatory amount of capacity that must be consumed to pay it off.
Every node transition effectively doubles the capacity you need to consume, either with more product or with larger chips that require more design work.

Not many companies double their sales every two years. Not many markets grow exponentially.
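
As a rough back-of-the-envelope sketch of that math (every number below is hypothetical, chosen only to illustrate the shape of the problem, not real fab data):

```python
# Toy model of fab economics; all figures are invented for illustration.

FAB_COST = 5_000_000_000        # up-front cost of a leading-edge fab, USD (assumed)
FAB_LIFETIME_YEARS = 4          # years before the next node arrives (assumed)
MAX_WAFERS_PER_YEAR = 500_000   # nominal wafer capacity per year (assumed)

def capital_cost_per_wafer(utilization: float) -> float:
    """Amortized capital cost per wafer at a given utilization (0.0 to 1.0)."""
    wafers_built = MAX_WAFERS_PER_YEAR * FAB_LIFETIME_YEARS * utilization
    return FAB_COST / wafers_built

for u in (1.00, 0.75, 0.50, 0.25):
    print(f"{u:.0%} utilization: ~${capital_cost_per_wafer(u):,.0f} of capital per wafer")

# Each node transition roughly doubles transistor density, so keeping the same
# fab full means selling roughly twice the transistors every generation,
# either as more chips or as bigger chips needing more design work.
demand_multiplier = 1.0
for node in range(1, 5):
    demand_multiplier *= 2
    print(f"node transition {node}: ~{demand_multiplier:.0f}x the transistor demand needed to stay full")
```

With those made-up numbers, the capital cost per wafer quadruples as utilization drops from 100% to 25%, which is the basic trap for anyone whose demand doesn't keep pace with capacity.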

AMD learned the hard way what happens when you can't generate enough internal demand for your fab throughput.

It's only going to get worse.
Even finer geometries cost even more to develop, and the cost of building fabs looks set to keep climbing.
If Intel and TSMC have their way, there will be a seriously expensive shift to 450mm wafers, which only a handful of companies will be able to afford.

How many companies see their markets or ASPs growing exponentially?
Those that don't can't afford a fab.

It would take a gigantic price increase to make a foundry uncompetitive with a small producer paying for a fab it could never fully utilize, even before you count the fact that the fab has only a few years to pay itself off before the next transition.

If that happens, it would be cheaper to close up shop or to refuse to go to the next node and find a way to survive on older processes.
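
To put that "gigantic price increase" in concrete (and entirely made-up) numbers, here is a toy comparison of buying wafers from a fully loaded foundry at a markup versus carrying a fab you can only half fill. The cost figures are assumptions for illustration only:

```python
# Toy comparison: foundry wafers at a markup vs. an underutilized in-house fab.
# Every number is an assumption for illustration, not real pricing.

FAB_COST = 5_000_000_000         # capital cost of a fab, USD (assumed)
FAB_LIFETIME_YEARS = 4           # useful leading-edge life (assumed)
MAX_WAFERS_PER_YEAR = 500_000    # nominal capacity (assumed)
VARIABLE_COST_PER_WAFER = 3_000  # materials, labor, etc. per wafer (assumed)

def own_fab_cost_per_wafer(wafers_needed_per_year: float) -> float:
    """Effective per-wafer cost when you carry the whole fab yourself."""
    capital = FAB_COST / (wafers_needed_per_year * FAB_LIFETIME_YEARS)
    return capital + VARIABLE_COST_PER_WAFER

def foundry_price_per_wafer(markup: float) -> float:
    """Price a fully utilized foundry could charge at a given markup over cost."""
    capital = FAB_COST / (MAX_WAFERS_PER_YEAR * FAB_LIFETIME_YEARS)
    return (capital + VARIABLE_COST_PER_WAFER) * (1 + markup)

need = 250_000  # a small producer that can only fill half a fab (assumed)
print(f"own half-empty fab: ~${own_fab_cost_per_wafer(need):,.0f} per wafer")
for markup in (0.2, 0.5, 1.0):
    print(f"foundry at {markup:.0%} markup: ~${foundry_price_per_wafer(markup):,.0f} per wafer")
```

Under these invented numbers the foundry stays cheaper until its markup climbs into the 40-50% range, which gives a feel for how large the price increase would have to be before closing up shop or staying on older processes started to look attractive.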
 
For chipmakers like AMD, going fabless is still a dangerous bet, but one they HAD to take. With price increases at the foundries (TSMC and Global Foundries might treat them less and less nicely too) + a slower pace of process innovation at the foundries (versus what Intel produces) + the growing distance between the people who design the circuits and the ones who work on the manufacturing process = challenging Intel in the CPU business gets tougher and tougher. That's something ARM must be concerned about too, as process technology lets Intel make its x86 baggage less and less dangerous for smartphone batteries.
 
AMD looks to be setting its sights on being a minority player in x86, and has adjusted its methodology to match. Nothing in AMD's posture for current or future products shows any of the bluster that Sanders or even Ruiz had about winning anything beyond what scraps Intel feels are not worth fighting over.

One of the big "features" mentioned for the Bulldozer scheduler is that it does not use "exotic" circuit techniques or rely heavily on custom design.
Right off the bat, that indicates AMD has left a measurable amount of performance on the table, especially against an Intel that can continue to optimize its circuit designs.
Bobcat is a synthesizable design that is meant to be applied to foundry processes, which are performance laggards as a rule.

One former AMD employee claimed the design philosophy that drove K7 and K8 was discarded before AMD dropped its fabs. It went from a modestly sized and expert team that drove circuit customization and process tweaks hard to a larger pool of engineers that went for more automated layout, less aggressive circuits, and synthesizable designs. I don't have the insider knowledge to confirm the truth of it, but the products coming out appear to match the claims.
A lot of the old hands from K7 and K8 are gone, and the level of design success we've seen so far is the apparent measure of what replaced them.
 
Seems to be mostly but not completely correct. They appear to be bullish (and reasonably aggressive, within their constraints of course) on their Fusion parts.
 
I've seen this described third-hand in various forums, but this may be a decent description.
http://www.eetimes.com/electronics-...-Restricted-design-rules-challenge-DFM-s-role

My understanding is that restricted design rules place limits on the orientation, spacing, and pitch of silicon features, in order to make them more regular and avoid some problematic corner cases.
This regularity in turn allows lithography equipment to reliably print the tiny features at advanced nodes.
The cost is a penalty to area and delay, though this can apparently be kept pretty small.
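
As a purely hypothetical illustration of what "restricted" means in practice (the orientation, pitch, and width values below are invented, not any foundry's actual rules), a restricted rule set boils layout checks down to something like this:

```python
# Hypothetical sketch of restricted design rules as simple geometric checks.
# The specific values are invented for illustration only.

ALLOWED_ORIENTATION = "horizontal"  # e.g. all gates drawn in one orientation (assumed)
FIXED_PITCH_NM = 90                 # features must sit on a fixed pitch grid (assumed)
ALLOWED_WIDTHS_NM = {45, 90}        # only a small menu of feature widths (assumed)

def check_feature(orientation: str, position_nm: int, width_nm: int) -> list[str]:
    """Return the list of rule violations for one drawn feature."""
    violations = []
    if orientation != ALLOWED_ORIENTATION:
        violations.append("wrong orientation")
    if position_nm % FIXED_PITCH_NM != 0:
        violations.append("off the pitch grid")
    if width_nm not in ALLOWED_WIDTHS_NM:
        violations.append("width not in the allowed set")
    return violations

# A feature that sits on the regular grid passes; an arbitrary one does not.
print(check_feature("horizontal", 270, 45))  # -> []
print(check_feature("vertical", 275, 60))    # -> all three violations
```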

Tools need to be changed to take this into account, so there may be some additional engineering work that needs to be done.
On the other hand, making designs work without those restrictions can be very demanding as well.
 