Five months ago, I made a Monte Carlo simulation model to estimate the die cost of a GPU based on redundancy and defect density. I then proceeded to use this information to argue the financial viability of a chip like GF100, on the SemiAccurate forums, no less.
It turned out that some actor over there didn't appreciate the results. He also didn't like insinuations that Jen-Hsun Huang stole his girlfriend. (Replying in kind is fair game after being called a troll for no good reason, IMHO.)
I've since converted the model from Perl to HTML/JavaScript and made it more user-friendly.
You can find the code here: https://raw.github.com/gist/1519381/df04661a01b622132e103418cba705226456ded6/yield.html
To use it, just save that file locally to your drive and open it in your browser. Select your GPU of interest, change some numbers, and press the 'Go' link to simulate. It only takes a few seconds.
I used a GF100 die shot and MS Paint to estimate some areas. I didn't do that for the RV* chips and just guessed. I'd appreciate it if others could fill in the numbers, or at least provide links to die shots.
The program's operation is simple: it first divides the chip into buckets, each with a logic function and a defect tolerance. Then it drops defects randomly on the wafer; each defect increases the fault count of one bucket on the wafer. After that, it gathers the results: a bucket is marked defective if its number of defects exceeds its fault tolerance. In the final step, it checks, for each die, whether the die can still serve as a valid product.
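In simplified JavaScript, one wafer iteration looks roughly like this. This is only a sketch under my own assumptions about the data layout (an array of dies, each holding buckets with an area, a fault tolerance, and a fault counter), not the exact code from yield.html:

// One wafer iteration: drop numDefects defects, each landing in one bucket,
// then mark a bucket defective once its fault count exceeds its tolerance.
function simulateWafer(dies, numDefects) {
  for (var d = 0; d < numDefects; d++) {
    var die = dies[Math.floor(Math.random() * dies.length)];
    var bucket = pickBucketByArea(die.buckets);
    bucket.faults++;
  }
  dies.forEach(function (die) {
    die.buckets.forEach(function (b) {
      b.defective = b.faults > b.tolerance;
    });
  });
}

// Pick a bucket at random, weighted by area: a bigger block is a bigger
// target for a defect.
function pickBucketByArea(buckets) {
  var total = 0;
  buckets.forEach(function (b) { total += b.area; });
  var r = Math.random() * total;
  for (var i = 0; i < buckets.length; i++) {
    r -= buckets[i].area;
    if (r <= 0) return buckets[i];
  }
  return buckets[buckets.length - 1];
}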
E.g. for a GF100, it will categorize a die as a GTX470 if it has 2 or fewer defective shader cores and 1 or fewer defective memory controllers.
It does this for 500 wafers (by default) and then averages the results to get the final yield.
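As a rough sketch of that last step, again under my own assumed data layout and using only the GTX470 rule from the example above (the real model has a fuller product table per GPU):

// GTX470 example only: the die qualifies if at most 2 shader cores and at
// most 1 memory controller are defective.
function binsAsGTX470(die) {
  var badCores = die.buckets.filter(function (b) {
    return b.type === 'shaderCore' && b.defective;
  }).length;
  var badMCs = die.buckets.filter(function (b) {
    return b.type === 'memoryController' && b.defective;
  }).length;
  return badCores <= 2 && badMCs <= 1;
}

// Run many wafers (500 by default) through simulateWafer() above and average
// the per-wafer results to get the final yield estimate.
function estimateYield(numWafers, makeWafer, defectsPerWafer) {
  var total = 0;
  for (var w = 0; w < numWafers; w++) {
    var dies = makeWafer();                  // fresh wafer, fault counts at 0
    simulateWafer(dies, defectsPerWafer);
    total += dies.filter(binsAsGTX470).length / dies.length;
  }
  return total / numWafers;
}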
Bug reports, comments, and feedback are welcome!