Chartered and Microsoft Sign 65-nm Manufacturing Agreement for Xbox 360 CPU

Shifty Geezer said:
May I ask: if 65nm is more costly than 90nm, what causes the price of 65nm to drop, and who uses it until then? I thought costs reduce as processes develop through use, so if no-one used 65nm its price would stay high (?). If 65nm costs more, is it only cutting-edge products that need the smaller process for speed/heat reasons that use it?

People talk of the price decreasing over time, but it's not as if Sony builds a 65nm fab that produces chips at $100 each, leaves it unused for a year, and returns to find it producing chips at $50 each! So what exactly are the changes that happen to decrease costs?

The most likely culprit is economies of scale: eventually lots of chips will use 65nm, and that brings the price down. Many appliances use microchips, not only the consoles and devices we discuss here, so that helps too. Mobile phones come to mind, for example, and that is a huge market hungry for more powerful portable chips that run faster but also cooler.
 
Shifty Geezer said:
So what exactly are the changes that happen to decrease costs?

Since the chips are smaller, more of them fit onto the wafer, and each individual die is less likely to be struck by a defect, so per-die yields improve... :idea: Smaller chips can also run at a lower voltage, which in turn lowers the temperature and as a result puts less pressure on the components that keep the whole thing from running too hot... (A rough sketch of the wafer arithmetic is at the end of this post.)

EDIT: Apologies if the question was misunderstood, Shifty, I didn't read the rest of the thread. :oops: I still assume it should answer the question of why companies opt for smaller processes. It's the variable costs that add up over time, and when you're producing X million units a month, moving to a smaller process can pay off quite quickly.
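To put rough numbers behind that, here is a small Python sketch using the classic dies-per-wafer approximation and a Poisson yield model. The wafer size, die areas, and defect density are invented purely for illustration, and it optimistically assumes the 65nm line has already reached the same defect density as 90nm:

```python
import math

WAFER_DIAMETER_MM = 300.0
DEFECT_DENSITY_PER_CM2 = 0.5  # assumed defects per cm^2, same for both nodes

def dies_per_wafer(die_area_mm2):
    """Classic approximation: wafer area over die area,
    minus a correction for partial dies at the wafer edge."""
    radius = WAFER_DIAMETER_MM / 2
    wafer_area = math.pi * radius ** 2
    edge_loss = math.pi * WAFER_DIAMETER_MM / math.sqrt(2 * die_area_mm2)
    return int(wafer_area / die_area_mm2 - edge_loss)

def poisson_yield(die_area_mm2):
    """Poisson yield model: probability a die catches zero defects."""
    return math.exp(-DEFECT_DENSITY_PER_CM2 * die_area_mm2 / 100.0)

# Hypothetical die sizes: a 180 mm^2 design at 90nm shrinking to ~100 mm^2.
for label, area in [("90nm", 180.0), ("65nm", 100.0)]:
    n, y = dies_per_wafer(area), poisson_yield(area)
    print(f"{label}: {n} dies/wafer, {y:.0%} yield -> {n * y:.0f} good dies")
```

Both effects compound: more candidate dies per wafer, and a higher fraction of them defect-free.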
 
Phil said:
Since the chips are smaller, more of them fit onto the wafer, and each individual die is less likely to be struck by a defect...
Perhaps a better question is: "What improves yields over time, if not practice?" As I understand it, the new process fits more chips per wafer, but these start with lower yields, and the cost of the chips is higher. To get better yields and bring the cost below that of the larger process, aren't lots of costly chips going to have to be made first? In the context of Sony using 65nm in PS3: if they don't use the 65nm process because costs are higher to begin with, how will the process ever mature enough to realize the benefits over 90nm?
 
Even without volume production, the cost of a manufacturing process decreases with time because it is continually being optimized for better yields using pilot and sample production.
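That maturing shows up as a falling defect density, which is what eventually flips the cost comparison. A toy model of the crossover, with entirely invented wafer costs, die sizes, and defect densities (real figures are confidential and fab-specific):

```python
import math

# All numbers below are assumptions for illustration only.
WAFER_COST = {"90nm": 3000.0, "65nm": 5000.0}   # $ per processed wafer
DIE_AREA_MM2 = {"90nm": 180.0, "65nm": 100.0}   # same design, shrunk
DIES_PER_WAFER = {"90nm": 300, "65nm": 540}     # rough, from the smaller die

def cost_per_good_die(process, defect_density_per_cm2):
    area_cm2 = DIE_AREA_MM2[process] / 100.0
    yield_frac = math.exp(-defect_density_per_cm2 * area_cm2)  # Poisson model
    return WAFER_COST[process] / (DIES_PER_WAFER[process] * yield_frac)

# The new process starts with a high defect density that falls as the fab
# "learns" from pilot runs, while the mature 90nm line stays put at 0.4.
for d65 in [3.0, 2.0, 1.0, 0.5]:
    c90 = cost_per_good_die("90nm", 0.4)
    c65 = cost_per_good_die("65nm", d65)
    print(f"65nm at {d65} defects/cm^2: 90nm=${c90:.2f}, 65nm=${c65:.2f} per good die")
```

With these made-up numbers, 65nm starts out several times more expensive per good die and ends up cheaper once its defect density approaches maturity, which is exactly the crossover being discussed.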
 
NANOTEC said:
I think it's reasonable. Notice I said before the end of next year, which is the end of 2007. Since the eDRAM is manufactured by NEC and on a separate die, it doesn't have to use the same 65nm process as the mother die in 2007. NEC plans on using 55nm soon after 65nm instead of waiting for 45nm. Who knows, they might even combine the mother and daughter dies into a single die at 65nm. Is that possible at 65nm?

By no means am I a foundry-tech expert, but I'm guessing NEC's e-DRAM would have to be 'ported' to Chartered/IBM's process -- a design activity NEC obviously WON'T assist with. My coworker attended an NEC ASIC presentation (an evaluation for an upcoming design kickoff). NEC's representative surprised everyone in the room by admitting their e-DRAM cell size was *larger* than TSMC's 1T-SRAM, but their e-DRAM had a somewhat faster cycle time (i.e. clock frequency). Offhand, I don't recall TSMC's 1T-SRAM frequency at 90nm -- probably in the 400MHz range. (Someone correct me if I'm wrong.)

Perhaps the overall performance improvement in moving from 90nm -> 65nm is enough to cover the spread...

The other major obstacle is the additional mask steps required for e-DRAM fabrication. Well, it's not so much an obstacle as an added expense (with a potential impact on yield).
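As a back-of-the-envelope illustration of that yield impact: if each mask step independently succeeds with some probability, line yield compounds per step, so a handful of extra e-DRAM steps shaves a few percent off. The step counts and per-step yield below are assumptions, not NEC or Chartered figures:

```python
# Assumed values for illustration; real mask counts and step yields vary.
PER_STEP_YIELD = 0.99     # success probability per mask/process step
BASE_STEPS = 30           # mask steps for a plain logic process
EDRAM_EXTRA_STEPS = 4     # additional steps for embedded DRAM

logic_yield = PER_STEP_YIELD ** BASE_STEPS
edram_yield = PER_STEP_YIELD ** (BASE_STEPS + EDRAM_EXTRA_STEPS)

print(f"logic-only line yield: {logic_yield:.1%}")
print(f"with e-DRAM steps:     {edram_yield:.1%}")
print(f"relative yield hit:    {1 - edram_yield / logic_yield:.2%}")
```

So the extra steps cost both mask-set money and a small multiplicative yield penalty on every wafer, which is why it's better described as an added expense than a hard obstacle.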
 