Yield, packaging & testing cost for GPUs

MyEyeSpy

Newcomer
Greetings fellow technocrats!
When it comes to GPU chip packaging & testing at the high end (R600, G80, G71), what sort of money are we talking about?

http://aceshardware.com/forums/read_post.jsp?id=120082274&forumid=1
The SiS648 is an 839 pin chip. When paired with the 371 pin SiS963 southbridge the chipset had a list price in the low $20's when it first came out; of that $20 to $25, their royalty to Intel was about $4. The process of packaging and testing those two fairly high pin count chips can't be terribly different from packaging and testing a CPU.

Is the cost for packaging linear with complexity or does it work like wafer yield?

Speaking of wafer yield, how do Nvidia's yields look on G80, considering the redundancy offered by the 8800 GTS? I know it's a big "hush hush" trade secret, but what do you think Nvidia & ATI pay for their wafers, in very rough numbers?
 
Is the cost for packaging linear with complexity or does it work like wafer yield?

It doesn't work like yield. And it's not linear with complexity either. Package cost can vary wildly based on the number of balls, the thermal properties, the number of substrate layers, and volume. A custom package will obviously be more expensive than an off-the-shelf part, but I doubt you'll find off-the-shelf packages for a 200W chip. (I once tried to look for one on the Amkor website.)

Years ago, we tried to squeeze every chip we could make into a TQ144 package: everybody was using it and it was dirt cheap (< $0.50). Going one step up to a TQ168 (I believe) would drastically increase the cost of the package. A heat spreader can double the cost of an off-the-shelf package. BGAs have a substrate (a little PCB, basically) that can have 8 or more layers. Etc.

I find it very hard to believe that a GPU package costs just 'a couple of bucks', if that means $2, but who knows? A CPU package has volumes that are an order of magnitude higher, fewer pins and lower power characteristics, so it has to cost less.

In your post on Ace, you refer to a yield calculator, but I only found their payware model. Any links?
 
ICknowledge gross die & net die calculator

http://www.icknowledge.com/misc_technology/miscelaneous_technology.html

Around mid page. Happy hunting ;)
Gross and Net Die Calculator - The free version calculates whole dies per wafer based on wafer size, die size and edge exclusion, and calculates net dies using a Murphy, Seeds, exponential or Poisson model. Left click to view, right click and "Save target as..." to download, 234 KB. Requires Excel 2000 or higher.
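For anyone who doesn't want to download the spreadsheet, here is a minimal Python sketch of what such a calculator does. The die-per-wafer figure is the simple area-ratio estimate (it ignores partial dies lost along the rim, so it is optimistic), and the defect density D0 = 0.5 defects/cm² is purely my assumption, picked because it reproduces the 14.2% / 29.2% figures that come up later in this thread; the spreadsheet's exact formulas may differ:

```python
import math

def gross_die(wafer_diam_mm, die_mm, scribe_mm=2.0, edge_excl_mm=3.0):
    """Optimistic area-ratio estimate: usable wafer area divided by
    (die + scribe) area. Ignores partial dies lost along the rim."""
    usable_r = (wafer_diam_mm - 2 * edge_excl_mm) / 2
    pitch_area = (die_mm + scribe_mm) ** 2        # square die assumed
    return int(math.pi * usable_r ** 2 / pitch_area)

def yield_murphy(area_cm2, d0):
    """Murphy's model: Y = ((1 - e^(-A*D)) / (A*D))^2"""
    ad = area_cm2 * d0
    return ((1 - math.exp(-ad)) / ad) ** 2

def yield_exponential(area_cm2, d0):
    """The 'exponential' model: Y = 1 / (1 + A*D)"""
    return 1 / (1 + area_cm2 * d0)

# G80-ish inputs from this thread; D0 = 0.5 defects/cm^2 is a guess.
dies = gross_die(300, 22)          # -> 117 gross dies
area_cm2 = 2.2 * 2.2               # 484 mm^2 = 4.84 cm^2
print(dies,
      round(yield_murphy(area_cm2, 0.5), 3),       # -> 0.142
      round(yield_exponential(area_cm2, 0.5), 3))  # -> 0.292
```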
 
8800 die cost

The 8800 die is roughly 484 mm² (can't find W/H, so assuming 22x22) and made on 90nm tech. As the exact cost of wafer processing is indeed "hush hush" and a trade secret, we can't tell the exact cost, but we can speculate. I think it is fair to say that with Nvidia's volumes in mind, they get quite a good price.

I have found all kinds of price references for 300mm wafers at 90nm, but the most recurring one is $3k. Assuming this is somewhat accurate (I believe Nvidia pays quite a bit less), let's assume $3k. With a die area of 484 mm² (22x22?) & 2mm spacing (anyone got any better references?), we can fit 117 dies on each wafer. The obvious: IT IS A BIG DIE. ICknowledge's calc gives me 14.2% with Murphy's model & 29.2% with the exponential one.

I believe they have some sort of effective redundancy & indirect redundancy in the form of the 8800 GTS. The 8800 GTS has 96 stream processors (vs 128 in the GTX), which translates to 75% of the GTX's streamers. The GTS has 20 ROPs vs 24 on the GTX, which translates to 83%. The 320- vs 384-bit memory interface must also provide some form of redundancy. Basically, we see a lot of redundancy; a whole lot of areas can be struck by the devil without rendering a die useless.

Yield. What can be expected? I am by NO means an expert, but I seriously doubt the yield is as bad as those numbers suggest. From the people I have talked to, CPU yield is somewhere between 75-90%. I seriously doubt G80 is anywhere up there; GPUs are "quite" (hehe) complex & freakin' huge! My guess, with no references whatsoever, is that the yield is somewhere between 45 & 55%.

117 dies / wafer
$3k cost per wafer
45% (52 dies) to 55% (64 dies) yield.
Very roughly, that translates to $57.70-46.90 per die.
Should the yield be 75%, the cost per die is a bit over $34.
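Spelled out as a quick sketch (same $3k wafer and 117 gross dies as above):

```python
wafer_cost, gross = 3000.0, 117
for y in (0.45, 0.55, 0.75):
    good = int(gross * y)                            # whole good dies only
    print(f"{y:.0%}: {good} dies -> ${wafer_cost / good:.2f} per die")
# 45%: 52 dies -> $57.69 per die
# 55%: 64 dies -> $46.88 per die
# 75%: 87 dies -> $34.48 per die
```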

Packaging. What can it cost? No idea. The reference I gave earlier, the two-part chipset that cost in the low $20s including the ~$4 royalty, tells me it can't be expensive. And to some degree you can compare them, because a chipset must feed a GPU with yum yum from the CPU. Help? $5-7?

As for volume, anyone have any numbers for how many 8800s have been sold/produced?
 
Then the spacing was not completely out of line if only 1 die differed :LOL:
TY, haven't seen a G80 wafer before.

Anyone got any idea what the yield could be? I'm fairly confident it is between 30% & 90%, with a guess of 45-55%, but it is an uneducated guess. Think it could be higher? 70-85% with all the redundancy?
 
8800 die = $17-28?

Seems $3k for 90nm was not too low, but rather too high?

http://biz.yahoo.com/prnews/070426/cnth021.html?.v=8

In SMIC's quarterly report they state that their ASP (average selling price) was $904. They do not state whether it is for 200 or 300mm wafers, but let's assume it is 200mm. Revenue was split as follows:

0.09um 14.4%
0.13um 38.1%
0.15um/0.18um 37.0%
0.25um 0.7%
0.35um 9.8%

Again, for argument's sake, let's assume 100% of it was 0.15/0.18. According to numerous PDFs from TI, each generation increases cost by 20% (seems low?) when mature.
It depends on whether you consider 0.15/0.18 one or two techs, and if two, whether we start at 0.18. Let's do so, for argument's sake. From 0.18 to 0.15 to 0.13 to 0.09:

From $1,000 to $1,200 to $1,440 to $1,728. Remember, we assumed the ASP was ONLY on 0.15/0.18, we counted these as two techs and based it on 0.18, and if you are a big customer you get a lower price than ASP. Seems Nvidia most likely pays less than $1,500 per wafer? :oops:

Half of what I estimated and calculated above. Can it be?
If so, with 45-55% yields, 8800 dies cost $28.85-23.44, and if yield is as high as 75% (unlikely, but what do I know?), they pay about $17.
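That chain of assumptions, spelled out as a sketch; note that the $1k starting point, the 20% per-node bump and the $1,500 wafer are the guesses discussed above, not known figures:

```python
cost = 1000.0                       # SMIC's $904 ASP rounded up, all on 0.18um
for node in ("0.15um", "0.13um", "0.09um"):
    cost *= 1.20                    # TI's ~20% cost increase per node
    print(f"{node}: ${cost:.0f} per wafer")
# -> $1200, $1440, $1728

wafer = 1500.0                      # big-customer-below-ASP guess
for y in (0.45, 0.55, 0.75):
    print(f"{y:.0%} yield: ${wafer / int(117 * y):.2f} per 8800 die")
# -> $28.85, $23.44, $17.24
```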
The R600 & 8800 GTS suddenly don't seem so cheap. I know we pay for the PCB, RAM, manufacturing etc., plus middlemen & R&D, but with margins in the 50%s for ATI & Nvidia, the high end must be a VERY small part of their earnings. Anyone got numbers on exactly how small?
 
I don't think SMIC is a good example to base your assumptions on, MyEyeSpy, because they exclusively compete on price. For example, UMC is always cheaper than TSMC, but their process is of lower quality (performance, yields, etc.), and that's likely even more true for SMIC. Also, ASPs are always given in 200mm-equivalent wafers, while the 118 chips/wafer figure is based on a 300mm process. Assuming a similar cost per mm², which is most likely not completely accurate, a 300mm wafer might cost up to 2.25x more, since wafer area scales with the square of the diameter: (300/200)² = 2.25.

If I remember correctly, NVIDIA told us outright that they were getting about 80 usable dies (for GTS and GTX, so including redundancy) per wafer. Someone else told me that this is more likely the maximum number of placeable dies, excluding yield, because of spacing and space at the edges. But I would tend to believe 80 is, in fact, the correct number: why would they actually place 118 dies on the wafer if most of them are always unusable?!

NVIDIA said in the Q1 2007 conference call that they sold ~400K G80s and had revenue of ~$50M for the chip, which means that their ASPs are $125 USD (average of GTS and GTX). Even assuming a wafer cost of $4000 (which is a tad high but not impossible, it's 90GT and not 90G after all) and packaging+testing costs of $10+/chip, that gives us ~50% margins. Not too shabby, and still above the corporate average. A much more optimistic estimation of $3000 wafers and $7.5/chip packaging+testing would result in 65% gross margins.
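Arun's margin math as a quick sketch; the 80 usable dies and both wafer/packaging cost pairs are taken straight from the post above:

```python
asp = 50_000_000 / 400_000              # ~$125 per G80 from the Q1'07 call
for wafer, pkg_test in ((4000.0, 10.0), (3000.0, 7.5)):
    chip_cost = wafer / 80 + pkg_test   # ~80 usable dies per wafer
    print(f"${wafer:.0f} wafer, ${pkg_test}/chip pkg+test -> "
          f"{(asp - chip_cost) / asp:.0%} gross margin")
# -> 52% and 64%, roughly the ~50% / 65% quoted above
```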

It is certainly worth noting that a unique characteristic of the G80, compared to previous GPUs, is that there is nearly no single point that can die and leave the chip unusable. ALUs, TMUs, ROPs, memory controllers, etc.: everything can be disabled if it's defective. Smaller units such as triangle setup, rasterization, attribute fetching, etc. cannot easily be disabled without making the chip useless, however, so either that stuff is duplicated or it can cause the chip to fail. There obviously are other parts that are potential 'weakest links' and that we don't know about or that I'm not thinking of, though.
 
Indeed a fatal blunder; there we see what happens when you have had too much fun and post too late & too quickly :oops:

Many thanks Arun, you gave me an answer to nearly every question I had :D
 
Testing can be quite expensive, depending. And I guess the cost of burning in the actual paths used (rerouting around bad units) is included in packaging.

Other than that, the single most important consideration for packaging cost is pin density, with the absolute number of pins a good second. And pin count is a good second in determining cost at just about every stage, because pin areas are huge and it's expensive to wire them all up correctly. If you want a very large number of pins, you have to increase the total die area and think of things to fill it with.
 
I sat in on a TSMC info session basically meant to push people to upgrade from 90nm to 65nm.

According to them, the speed increase is about 15% in general and it costs about 15% more. The yield expectation given for most normal die sizes is ~75%.

From the street, nV gets about 67~72 fully functional dies per wafer. The wafer cost is about $3k (after volume discounts, technology development agreements, etc.). So, the per-die cost is about ~$44. They use ASE for FP-BGA packaging; I have heard figures like $16~$25 per chip.
Testing cost is folded into the total packaging cost, I believe.

So, roughly the cost of the chip will then be $44 + $23 = $67

This is in line with what nV mentioned before: that the packaging cost is quite significant, at about one third of the cost of the chip.

The other significant portion of the total cost of a graphics card is the memory chips. GDDR3 costs $4~$10 per chip depending on size, speed and volume. GDDR4 is a bit pricier.

Each 256-bit card needs 8 of them:
$8 * 8 = $64
and in the 2900's case (16 chips):
$8 * 16 = $128

As a result, it is no secret why mainstream cards stay at 128-bit external memory width.
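Putting gunblade's figures together, a rough chip+memory bill-of-materials sketch, using midpoints of the quoted ranges (PCB, cooler, VRM etc. come on top of this):

```python
die = 3000.0 / 70        # ~$43, using the 67~72 good dies per wafer figure
pkg_test = 23.0          # midpoint of the quoted $16~25 packaging+testing
gddr3 = 8.0              # midpoint of the quoted $4~10 per memory chip

for name, mem_chips in (("256-bit card", 8), ("512-bit 2900", 16)):
    chip = die + pkg_test
    print(f"{name}: chip ${chip:.0f} + memory ${gddr3 * mem_chips:.0f} "
          f"= ${chip + gddr3 * mem_chips:.0f}")
# -> 256-bit: $66 + $64 = $130;  512-bit: $66 + $128 = $194
```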
 
Very constructive & insightful post, gunblade :D
Seeing as you have some insight regarding packaging, do you know what range Intel & AMD are in for the X2 & C2D? With or without IHS. Do you think the FX and Xeon differ greatly from these?

What about the PCB? I always thought it was a rather large part of the cost. The PCB for the 8800 GTX has 12 layers, I believe; can't find a reference for the GTS.
 
Cost of IHS

I have been told that using an IHS can double the packaging cost, but surely that can't be true in a case such as this one; it must be more on the level of a CPU package's cost?
Isn't an IHS more of a fixed cost than one that scales (except for material cost with size)? So, in numbers and not %, what does it cost to use an IHS? It can't be all that much, as the PS3 uses one for all its chips?

Would have edited & added this if I could :rolleyes:
 