Yes, no, yes, no, maybe yes.
Well, to answer your question a little more precisely: yield can be estimated beforehand, especially if you're using standard cell logic, based on average yield, die size, process, etc.
Wafers have X number of particulates per square inch, for example. As the process size goes down, the particulate size that's fatal to a die goes down too (smaller defects become killers). But on the other hand, the smaller-process fabs have higher particulate standards. Anyway, you can statistically model the number of failures based on that environmental information (which is available) and the die size.
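To give you a feel for what that statistical model looks like, here's a minimal sketch using the classic Poisson yield model (yield = exp(-area × defect density)). The die area and defect density numbers below are made up for illustration, not real fab data:

```python
import math

def poisson_yield(die_area_cm2: float, defect_density_per_cm2: float) -> float:
    """Expected fraction of good dies, assuming fatal defects are
    Poisson-distributed across the wafer."""
    return math.exp(-die_area_cm2 * defect_density_per_cm2)

# Hypothetical example: a 0.5 cm^2 die on a line with 0.8 fatal defects/cm^2.
print(f"Estimated yield: {poisson_yield(0.5, 0.8):.1%}")  # ~67%
```

You can see from the exponent why die size matters so much: double the die area and yield drops off fast, which is exactly why the fab's quoted defect density plus your die size gets you a decent estimate up front.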
It could be process variation problems (we had 80% yield on one wafer and ~0% on the next on one project, until the fab worked out their problems).
GENERALLY, chip design doesn't play too much into yield unless you screwed up timing analysis, or you're depending on something the process can't guarantee. The timing models the fabs give you should represent the worst-case 'corners' of the process, so your part should function even at their worst case.

Analog stuff is different, since that's usually all custom 'logic', and it's not really logic at all but black magic voodoo (which my company apparently does relatively well, compared to others).

One small addendum: companies aiming for high-performance chips may push the limits of the fab, and there design becomes more important. We aim for low power, so timing isn't so much an issue for us.
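To make the worst-case-corner point concrete, here's a toy static-timing-style check. All the delay numbers and the derating factor are invented for illustration; real corner data comes from the fab's timing libraries:

```python
# Toy check of one path against a slow process corner (hypothetical numbers).
CLOCK_PERIOD_NS = 10.0        # target clock period
TYPICAL_PATH_DELAY_NS = 7.5   # sum of gate delays at the typical corner
SLOW_CORNER_DERATE = 1.25     # fab-supplied worst-case multiplier (made up)

worst_case_delay = TYPICAL_PATH_DELAY_NS * SLOW_CORNER_DERATE
slack = CLOCK_PERIOD_NS - worst_case_delay

# Negative slack at the worst-case corner means parts from slow wafers
# will fail, i.e. the design itself starts eating into yield.
print(f"Worst-case slack: {slack:.2f} ns -> {'PASS' if slack >= 0 else 'FAIL'}")
```

If you sign off timing at the slow corner like this, silicon that lands anywhere inside the process spread should still meet speed, which is why design usually isn't the yield limiter.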
If TSMC is only getting 10-20% yield on all their .13u stuff, I guess that's a good starting spot. However, I haven't heard anything that poor; if average yield were that bad, my company wouldn't even consider the process, and when .13u comes up in design meetings, nobody says "OMFG! Yield is so bad, we can't use that!"