Black Diamond advantages over FSG?

Hey,

I've been trying to find some info on this, but Google mostly gave me lame press releases that describe the problems the companies are facing and say the technology allows for "high performance"...

So, what I'd like to know is how much of an advantage Black Diamond has over FSG (the GFFX is manufactured on TSMC's FSG-based 0.13-micron process, AFAIK).

The only interesting thing I found is:
http://www.chipcenter.com/asic/products_200-299/prod207.html

TSMC plans to offer both the FSG and the CVD-produced Black Diamond low-k (2.9) insulator with its 0.13-micron all copper process. According to the company, the Black diamond dielectric will yield a 22% improvement in RC delay over FSG in the 0.13-micron process. TSMC has run pilot lots with both dielectrics at 0.13 micron on 300mm wafers.

"22% improvement in RC delay" - does that result in a 22% clock rate increase if both process are as mature? Or is it less than that?


Thanks for reading,


Uttar
 
RC delay is interconnect delay and does not count delay within the gates themselves. From '22% improved' RC delay, I would estimate the actual clock speed increase to be on the order of 5% to 15%, depending on how interconnect-limited the circuit design in question is.
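A rough back-of-the-envelope way to see where that 5-15% range comes from (my own sketch, not anything from TSMC's numbers): model the cycle time as gate delay plus interconnect delay, and shrink only the interconnect portion by 22%.

```python
# Back-of-the-envelope model: cycle time = gate delay + interconnect (RC) delay.
# Only the RC portion improves, here by the quoted 22%.
def clock_speedup(interconnect_fraction, rc_improvement=0.22):
    # Normalize the old cycle time to 1.0 and shrink only the RC share of it.
    new_period = 1.0 - rc_improvement * interconnect_fraction
    return 1.0 / new_period - 1.0

# Sweep some plausible interconnect shares of the critical path.
for f in (0.25, 0.40, 0.60):
    print(f"interconnect fraction {f:.0%}: ~{clock_speedup(f):.1%} higher clock")
```

With the interconnect share of the critical path anywhere between roughly 25% and 60%, this lands in the 5-15% range; you only get the full 22% if the critical path is 100% wire.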
 
arjan de lumens said:
RC delay is interconnect delay and does not count delay within the gates themselves. From '22% improved' RC delay, I would estimate the actual clock speed increase to be on the order of 5% to 15%, depending on how interconnect-limited the circuit design in question is.

Hmm, I see. Interesting.
But considering that RC delay would be improved by 22%, couldn't nVidia take advantage of that by making the design more interconnect-limited? So that the performance advantage, if you redesign with that in mind, is about 20%?


Uttar
 
In the normal chip design flow, gate delay is fairly easy to control, as excessive gate delays can always be fixed by just fixing the HDL code for the affected circuits. Interconnect delays are determined at a much later stage in the flow (place & route) and are as such harder to control accurately - the only thing you can really do is guess which signals have the worst interconnect delays and isolate the parts with the large delays, like the 'drive' stages in the Pentium4 pipeline do. While you can always 'optimize' a design to be 100% interconnect-limited by putting a single excessively long wire somewhere in it, it rarely makes any sense to do so intentionally (although with careless design, it is easy to do accidentally, in which case you usually fail to reach the design's clock speed target).
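To illustrate the 'drive stage' trick with made-up numbers (nothing here comes from a real design, and the model crudely charges the same gate delay to every cycle): splitting a long wire across extra pipeline cycles shortens the per-cycle critical path at the cost of latency.

```python
def max_clock_mhz(gate_ns, wire_ns, drive_stages=0):
    # Per-cycle critical path: the gate delay plus the portion of the long
    # wire that must be traversed in one cycle after `drive_stages`
    # registers are inserted along it, splitting it into equal segments.
    period_ns = gate_ns + wire_ns / (drive_stages + 1)
    return 1000.0 / period_ns

print(max_clock_mhz(2.0, 2.0))                  # no split: 4 ns cycle -> 250 MHz
print(max_clock_mhz(2.0, 2.0, drive_stages=1))  # wire halved: 3 ns -> ~333 MHz
```

The extra cycle of latency is the price you pay; for a deeply pipelined design that trade is often worth it, which is exactly what the Pentium4's drive stages do.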

Given the high clock speeds of R300 and NV30, I suspect that substantial effort has already gone into isolating/buffering the worst interconnects - if this is so, low-k dielectrics will have a rather small effect (~10% or less) on the clock speeds of these designs.
 
arjan de lumens said:
Given the high clock speeds of R300 and NV30, I suspect that substantial effort has already gone into isolating/buffering the worst interconnects - if this is so, low-k dielectrics will have a rather small effect (~10% or less) on the clock speeds of these designs.

It seems like the benefit may be higher if the design was made for low-k dielectrics in the first place. If that is the case for the NV30, then it may be suffering from some imbalance in its delays (or, perhaps, rebalancing those delays is what caused a decent part of its delay).
 