What do you mean by tolerances? There are blocks in a chip that have a lot of timing margin and others (most) that don't. But it doesn't really matter: shrinks almost always just work.
(It's not clear what tolerances have to do with a chip being pad limited.)
All sorts of tolerances. Timing, interference, capacitance, leakage, etc.
If a chip is pad limited, you could move circuits further apart and not lose much, which would make the chip more tolerant to changes in relative component size, as well as more tolerant to electrical interference. If you don't care much about the clock speed, then you also have much larger timing tolerances, obviously.
The time of pure optical shrinks of everything is long gone. Analog blocks either don't shrink, or you don't want them to, in order to reduce risk or schedule impact. So you need to do some rework on your chip floor plan anyway. Pretty much everything after the initial netlist is different. Reusing existing wires or cell placement is a non-starter.
Wasn't aware of that. I figured that at least some components could be more-or-less copied over, as long as the tolerances on those components were relatively loose. But if today's chip layout tools are advanced enough, then this may not even be of any benefit.
Not really an issue for a shrink: cell timings are usually slightly better and, more importantly, the timing constraints have already been fully debugged. So you get a smoother flow.
Hmmm. I would have expected that the differences in relative size make it so that if you want to get the most out of a die-shrunk architecture, you have to rework the timing to the new process. Naturally when you do a die shrink, timing in general gets easier, so you should have no problem running at the same or even a slightly higher clock speed. But there is likely a fair amount of work in getting the die-shrunk chip to operate at its optimum.
In terms of total farm time, probably 95% is spent on digital simulations. It's still the place that's responsible for most of the bugs. Process technology is irrelevant here.
Even if there's a bug that looks process related, it's pretty much always due to analog designer mistakes rather than the process itself. Say someone designs a DAC in 65nm and it works, by pure luck, in a corner that was never simulated. Then he ports it to 55nm, and this time it fails in that corner. Is that a case of "they're having problems with 55nm technology" or just a testing hole?
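As a purely illustrative sketch of that kind of testing hole (the corner list and the "simulated" subset below are made up, not taken from any real flow):

```python
# Toy enumeration of PVT (process/voltage/temperature) corners, showing how a
# simulation plan can leave a "testing hole". All names and values are hypothetical.
from itertools import product

processes = ["ss", "tt", "ff"]      # slow, typical, fast silicon
voltages = [1.08, 1.20, 1.32]       # nominal 1.2 V +/- 10% (assumed)
temperatures = [-40, 25, 125]       # degrees C

all_corners = set(product(processes, voltages, temperatures))

# Suppose only this subset was actually simulated before tape-out (hypothetical).
simulated = {(p, v, t) for p, v, t in all_corners if v == 1.20 or p == "tt"}

holes = sorted(all_corners - simulated)
print(f"{len(holes)} of {len(all_corners)} corners were never simulated, e.g.:")
for corner in holes[:5]:
    print("  ", corner)
```

A part that happens to work in one of those never-simulated corners on the old process can fail in the same corner on the new one without the process being at fault.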
Well, I was assuming that any chip design is going to go through a huge gauntlet of software tests before it reaches production. This *should* leave only model inaccuracies and layout bugs that could cause problems once a design has been sent off to the fab.
Only for a really immature process, in which case the fab will send updated Spice decks and libraries with revised characteristics (somehow always slower than initially thought.) Annoying, but hardly ever a concern.
But isn't that precisely the situation we're talking about? I mean, with 3D graphics hardware, aren't they working with the fab during the initial ramp of a brand-new process, in order to get their spiffy new chips out on the smaller process as fast as possible? Sure, this shouldn't be an issue for any part that starts production once a process is already mature, but I have to imagine that companies like ATI and nVidia, which try to get onto the smaller process ASAP, would have to deal with these issues quite frequently.
Not in the way I think you meant it. The best estimator is the number of unique (non-repetitive) placed gates. Larger chip. More gates. More effort, but largely linear.
Whether or not the issue is linear depends on what you're talking about. Detecting bugs in the layout should be more-or-less linear, provided that they've effectively compartmentalized the layout of a complex chip. The non-linear aspect comes in more with respect to the tolerances on all of the various components of the chip. That is to say, every time the fab prints a circuit onto the silicon, there is going to be some amount of variance in the resulting geometry. If you have a small, simple chip, then you can tolerate a relatively high defect rate and still get lots of good chips. If your chip is large and complex, on the other hand, then you need a vastly lower defect rate. So you might design a large, complex chip with much wider tolerances than you would a smaller, simpler one.
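One rough way to put numbers on the die-size part of this is the classic Poisson yield model, yield = exp(-D*A); the defect density and die areas below are made-up values purely for illustration:

```python
# Sketch of the Poisson yield model: yield = exp(-D * A), where D is defect
# density (defects/cm^2) and A is die area (cm^2). Numbers are illustrative only.
from math import exp

def poisson_yield(defect_density_per_cm2: float, die_area_cm2: float) -> float:
    """Fraction of dies expected to be defect-free."""
    return exp(-defect_density_per_cm2 * die_area_cm2)

defect_density = 0.5  # defects per cm^2 (assumed)

for name, area in [("small die", 0.5), ("large die", 3.0)]:
    print(f"{name}: {area} cm^2 -> yield ~ {poisson_yield(defect_density, area):.1%}")
```

With those assumed numbers the small die yields around 78% and the large one around 22%, which is why a big chip effectively demands a much lower defect rate.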
Where this applies to what I was talking about above is in detecting issues with the software model of the process, which, as you mention, would only happen with a very immature process. Each error in the model would have some probability of becoming apparent in a given design. The more complex your chip is, the larger the space of possible layouts you explore, and thus the more likely you are to hit an error in the model of the process.
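A back-of-the-envelope way to state that argument: if each distinct layout/usage pattern has some small independent probability p of tripping over a model error, the chance of hitting at least one grows quickly with complexity. Both p and the pattern counts below are arbitrary, illustrative numbers:

```python
# P(expose at least one model error) = 1 - (1 - p)^N, assuming each of N distinct
# patterns independently exposes an error with probability p. Values are made up.
p = 1e-4  # per-pattern probability of exposing a model error (assumed)

for n_patterns in (1_000, 10_000, 100_000):
    p_hit = 1 - (1 - p) ** n_patterns
    print(f"{n_patterns:>7} distinct patterns -> P(expose >= 1 model error) ~ {p_hit:.1%}")
```

Under those assumptions the probability climbs from roughly 10% to well over 99% as the design grows, which is the sense in which a bigger, more complex chip is more likely to stumble on a flaw in an immature process model.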