Process technology woes (general)

Very interesting. Made me think about an asynchronous design paper I saw recently. We will likely need to move to asynchronous designs in order to go much below 0.09um.
 
If this paper is relevant to nVidia's problems then it might be more of a question of getting the design right rather than problems with TSMC--means ATI won't necessarily have an easier time when they tackle 0.13 microns
 
antlers4 said:
it might be more of a question of getting the design right rather than problems with TSMC--means ATI won't necessarily have an easier time when they tackle 0.13 microns

Or...it might mean that if ATI has better designers, they'll have an easier time of it. ;)
 
So maybe the usual problem of making sure that your clock tree is in sync over the chip has been inverted. Now you have to make sure that the clocks of different parts are phase shifted (to distribute the "clock spikes"), and of course you have to add logic to compensate for the phase differences.

An asynchronous design could still have most gates switching at the same time. Statistically speaking it might not be so often, but to be sure it never happens you'd need to run different parts at fixed phase differences.
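
Just to put rough numbers on that, here's a toy model (purely illustrative, everything in it is made up): treat each clocked block as drawing a short triangular current pulse at its clock edge, then compare the peak of the summed current when every block fires together against the case where the block clocks sit at fixed phase offsets spread across the cycle.

```python
# Toy model: peak supply current when block clocks are in phase vs. staggered.
# Purely illustrative -- block count, pulse shape and all numbers are made up.
import numpy as np

N_BLOCKS = 8        # hypothetical number of clocked blocks on the chip
PERIOD = 1.0        # one clock cycle (normalized)
PULSE_WIDTH = 0.1   # each block draws current for 10% of the cycle
SAMPLES = 1000

t = np.linspace(0.0, PERIOD, SAMPLES, endpoint=False)

def block_current(phase):
    """Unit-peak triangular current pulse starting at the block's clock edge."""
    dt = (t - phase) % PERIOD
    ramp = 1.0 - np.abs(dt - PULSE_WIDTH / 2) / (PULSE_WIDTH / 2)
    return np.clip(ramp, 0.0, 1.0)

# Every block clocked in phase: all the pulses land on top of each other.
in_phase = sum(block_current(0.0) for _ in range(N_BLOCKS))

# Blocks run at fixed phase offsets spread evenly across the cycle.
staggered = sum(block_current(i * PERIOD / N_BLOCKS) for i in range(N_BLOCKS))

print(f"peak summed current, in phase : {in_phase.max():.2f} x single-block peak")
print(f"peak summed current, staggered: {staggered.max():.2f} x single-block peak")
```

In phase the pulses stack to roughly N_BLOCKS times the single-block draw; spread far enough apart they barely overlap, which is the "distribute the clock spikes" idea, at the cost of the extra retiming logic between the phase-shifted domains.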
 
Funny, at work a coworker and I were mulling over our current chip and speculating about this exact topic and the solution of running the different pieces out of phase to prevent severe Vdd droop and brownout. Neither one of us had read this article.
 
antlers4 said:
If this paper is relevant to nVidia's problems then it might be more of a question of getting the design right rather than problems with TSMC--means ATI won't necessarily have an easier time when they tackle 0.13 microns

This is what I have been trying to say for a while. Look at the Athlon T-bred. I believe AMD has some smart engineers, but it still took a redesign to get it to scale in clock speed. A fairly major shift from 6 to 7 layers and some rearrangement. The shift to .13 micron is more than just smaller gates and process.

While on that subject, if nVidia had to wait for TSMC to get 130 nm machines online, that would imply TSMC has a fairly limited set of them. Is it possible that nVidia could essentially tie up those machines with orders, keeping an ATI chip from getting to the production line? Or force TSMC to retool, thus slowing down production?
 
DadUM said:
This is what I have been trying to say for a while. Look at the Athlon T-bred. I believe AMD has some smart engineers, but it still took a redesign to get it to scale in clock speed. A fairly major shift from 6 to 7 layers and some rearrangement. The shift to .13 micron is more than just smaller gates and process.

While on that subject, if nVidia had to wait for TSMC to get 130 nm machines online, that would imply TSMC has a fairly limited set of them. Is it possible that nVidia could essentially tie up those machines with orders, keeping an ATI chip from getting to the production line? Or force TSMC to retool, thus slowing down production?

Only if TSMC wants to lose business to UMC.
 
Or perhaps ATI simply has more time to move to .13 and perfect their designs, better or not... they have a lot of good parts on the market right now; do you think they're just sitting idle?

No... you can be sure that ATI is working out their .13 move. They just don't have the do-or-die issue with .13 that Nvidia does, since Nvidia doesn't have parts (in the majority of markets) for this generation...
 
IIRC, one of the changes AMD made to the T-bred to make it run faster was to add a load of small capacitors into the die itself - increasing both the area and the number of metal layers in the process.
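
Back-of-the-envelope on why those on-die caps help (toy numbers, just Q = C*V bookkeeping, nothing specific to AMD's actual implementation): the capacitors supply the charge for a switching burst locally, so for a given burst the droop shrinks in proportion to the capacitance you add.

```python
# Back-of-the-envelope: supply droop vs. on-die decoupling capacitance.
# dV = I * dt / C: charge drawn during a switching burst divided by the local
# capacitance that has to supply it. All numbers below are made up.

I_BURST = 20.0      # A -- hypothetical peak switching current of the core
T_BURST = 100e-12   # s -- hypothetical burst duration, roughly a clock edge
VDD = 1.5           # V -- nominal supply

for decap_nF in (10.0, 20.0, 40.0):   # hypothetical on-die decap values
    c = decap_nF * 1e-9
    droop = I_BURST * T_BURST / c      # dV = Q / C with Q = I * dt
    print(f"{decap_nF:4.0f} nF on-die decap -> droop ~ {droop * 1000:5.1f} mV "
          f"({droop / VDD:.1%} of Vdd)")
```

Doubling the capacitance halves the droop for the same burst, which is the payoff for spending die area and extra metal on capacitors.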

Fully asynchronous logic appears to be about an order of magnitude harder to design than standard synchronous logic, but deliberately running clocks out of phase is a technique we are already beginning to see - IIRC, Intel claimed to be able to squeeze about 10% extra clock speed out of the Willamette P4 by running clocks to different parts of the chip slightly out of phase.
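
For a feel of how less droop turns into clock headroom, here's a sketch using the generic alpha-power delay model (gate delay roughly proportional to Vdd / (Vdd - Vt)^alpha) with invented parameters, not anything Intel has published about Willamette:

```python
# Sketch: clock headroom gained by reducing worst-case supply droop.
# Uses the generic alpha-power delay model, t_d proportional to
# Vdd / (Vdd - Vt)**ALPHA. Parameter values are invented for illustration.

VDD_NOM = 1.5   # V -- nominal supply
VT = 0.4        # V -- hypothetical threshold voltage
ALPHA = 1.3     # velocity-saturation exponent, typically between 1 and 2

def gate_delay(vdd):
    """Relative gate delay at supply voltage vdd (arbitrary units)."""
    return vdd / (vdd - VT) ** ALPHA

# Worst-case droop before and after staggering the block clocks (made up).
for droop_mV in (150.0, 75.0):
    vdd_worst = VDD_NOM - droop_mV / 1000.0
    slowdown = gate_delay(vdd_worst) / gate_delay(VDD_NOM)
    # The clock must be slow enough for the drooped worst case, so the usable
    # frequency scales with 1 / slowdown.
    print(f"droop {droop_mV:4.0f} mV -> worst-case paths {slowdown:.3f}x slower, "
          f"usable clock ~ {1.0 / slowdown:.1%} of the droop-free figure")
```

With these invented numbers, halving the droop buys back a few percent of clock, the same ballpark as the ~10% figure, though the real gain depends entirely on the process and on how bad the droop was to begin with.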
 