The effect of 450mm wafers on die size, yield and chip design

anexanhume

With 450mm wafers expected within the next half decade (and Intel already showing off a 450mm wafer), I've started to think about what effect this will have on the industry.

Obviously, it will eventually make chips cheaper than 300mm production once the huge initial cost of development and capacity build-out tapers off.

My questions center on what effect it will have on chip design and yield. First, will GPU and CPU makers be willing to spend more silicon for more performance, now at less of an impact to yield and cost? Will more die space further enable the trend we've seen of bigger-but-slower designs to meet performance targets? I'm also curious whether chip makers will push harder into near-threshold voltage (NTV) operation, since they'll have more silicon to make up for the speed decrease.
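For context on that trade-off, the standard first-order way to reason about die size versus yield is a Poisson defect model, Y = exp(-A * D0). A minimal sketch in Python (the defect density D0 here is a made-up illustrative number, not any real process figure):

```python
import math

def poisson_yield(die_area_cm2, d0_per_cm2):
    """First-order Poisson yield model: Y = exp(-A * D0)."""
    return math.exp(-die_area_cm2 * d0_per_cm2)

D0 = 0.25  # defects per cm^2 -- purely illustrative; real values are proprietary

for area in (1.0, 2.0, 4.0):  # die areas in cm^2
    print(f"{area:.1f} cm^2 die -> yield ~ {poisson_yield(area, D0):.0%}")
```

Doubling die area compounds the yield hit (78% -> 61% -> 37% at this D0), so cheaper silicon per wafer doesn't translate linearly into cheap big dies.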

Second, should we expect any profound change to yields? Will having more chips per wafer (finer granularity) smooth out yield fluctuations, or is that not even a concern? And will a larger wafer introduce uniformity problems that could hurt yield?
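On the fluctuation part, a toy simulation under an independent-defect assumption (the die counts and per-die yield below are made-up numbers) shows the per-wafer yield spread shrinking roughly as 1/sqrt(n); real defects cluster spatially, so the smoothing would be weaker in practice:

```python
import random, statistics

def per_wafer_yields(dies_per_wafer, die_yield, wafers=5000):
    """Simulate each wafer's yield fraction, assuming independent die failures."""
    return [sum(random.random() < die_yield for _ in range(dies_per_wafer)) / dies_per_wafer
            for _ in range(wafers)]

for n in (150, 340):  # illustrative gross die counts for 300mm vs 450mm
    y = per_wafer_yields(n, die_yield=0.8)
    print(f"{n} dies/wafer: mean yield {statistics.mean(y):.3f}, stdev {statistics.stdev(y):.4f}")
```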

My interest in the second question is mainly a driver of my interest in the first. My ultimate question is whether cheaper transistors will spark any kind of renaissance in design, or whether this is seen merely as a way to increase margins.
 
I don't expect any profound advantage for anyone outside of the fabs themselves. They may choose to pass some of the savings on to customers.
 
With EUV being a no-show and new processes not delivering much of a decrease in price per transistor, if anything I expect die sizes to shrink somewhat to accommodate costs.
 
AMD's 65nm transition was both a node transition and a jump to 300mm wafers.
Intra-wafer variation is more difficult to manage over a broader area: there are mechanical and chemical steps that have to maintain the same tolerances and concentrations over more area.

I've seen some blame laid at the feet of SOI and/or AMD's simultaneous node and wafer transition for why its gate oxide failed to scale significantly from 90nm to 65nm, and why its SRAM density was notably worse than Intel's. The reason for the oxide and SRAM cell bloat was reliability in the face of variation.

The thing about 450mm is that without it, Moore's Law will be harder to maintain. This expensive transition may be just enough to keep up with where everyone thought we were going; shrinks alone are obviously not going to do it.
 
It seems to me that increasing wafer size is more about improving throughput than yield: the same number of machines producing more good dies increases efficiency and lowers cost. Increased throughput is also necessary because techniques like double patterning lengthen manufacturing time, so part of the wafer-size increase is just compensating for that.
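A quick back-of-envelope on that compensation effect (the 30% double-patterning time penalty is a made-up illustrative figure, not a real process number):

```python
# How much of the area gain survives a longer per-wafer process time?
area_ratio = (450 / 300) ** 2   # wafer area scales with diameter squared -> 2.25x
time_penalty = 1.30             # hypothetical process-time increase from double patterning
print(f"area ratio: {area_ratio:.2f}x, net die throughput: {area_ratio / time_penalty:.2f}x")
```

So a 2.25x area gain could shrink to roughly 1.7x effective throughput once longer patterning steps are factored in.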
 
Also, about centralization: as I recall, Intel said that with 450mm wafers, seven fabs could supply the entire world market's worth of chips. The fewer fabs they need, the better; fabs are getting to the point where they cost more than the economies of some countries.
 
If I'm not mistaken, the defect rate tends to increase as you get closer to the edge of the wafer. I don't remember exactly why, but I think some manufacturing steps reach optimal efficiency at the center of the wafer and minimal efficiency at the edge.

I would imagine that increasing the radius of wafers by 50% will only make this problem worse and, all other things being equal (which in reality they won't be), would make yield worse, where yield is the percentage of "good" dies. On the other hand, it more than doubles the total number of dies produced, so that's a problem I imagine foundries and IDMs can live with.
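The "more than doubles" claim checks out against the common gross-die approximation (the 100 mm² die size below is just an illustrative choice):

```python
import math

def gross_dies(wafer_diameter_mm, die_area_mm2):
    """Common gross-die approximation: pi*d^2/(4*S) - pi*d/sqrt(2*S),
    where the second term accounts for partial dies lost at the edge."""
    d, s = wafer_diameter_mm, die_area_mm2
    return math.pi * d**2 / (4 * s) - math.pi * d / math.sqrt(2 * s)

die = 100  # mm^2 -- an illustrative mid-size die
d300, d450 = gross_dies(300, die), gross_dies(450, die)
print(f"300mm: ~{d300:.0f} dies, 450mm: ~{d450:.0f} dies, ratio: {d450 / d300:.2f}x")
```

The ratio comes out slightly above the 2.25x area ratio because the edge-loss term grows with d while the area term grows with d², so the edge is proportionally less costly on a larger wafer.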
 