Another one... really long. This one is from IBM; sorry, I don't have the link, as I saved these pages long ago.
It will have an impact on the future of computing and gaming, btw.
The end of the road for Moore's Law?
By Eric J. Lerner
As the steady pace of improvements in integrated circuits begins to slow, IBM researchers are exploring ways to keep computer performance advancing.
--------------------------------------------------------------------------------
The computer industry is facing a period of fundamental change. For more than two decades, computer designers have relied on a remarkable manifestation of progress known as Moore's Law: the reliable tendency for the number of transistors on a chip to double every 18 to 24 months. The doubling occurs in part because the transistors can be made smaller. As they shrink, the transistors and hence the chips themselves become faster, which translates into faster computers. But industry experts agree that by around 2005 -- possibly sooner -- the trend defined by Moore's Law will start to slow. What is behind that slowdown and what will happen when the main engine driving improvements in computer performance is no longer available are crucial questions for our information society.
It is true, of course, that similar predictions in the past have proved overly pessimistic. In general, such forecasts were based on the idea that there were practical limits to continued progress. But it has long been known that, at some point, theoretical or fundamental physical limits would retard and ultimately put an end to further miniaturization. "Even if they can be made smaller," notes Randy Isaac, vice president for systems, technology and science research at IBM's Thomas J. Watson Research Center, "at some point, smaller transistors may not perform faster; in fact, their performance could even be worse, if they worked at all." The remarkable progress achieved by the semiconductor industry has now brought these fundamental limits into clear view. While the search continues for novel structures and materials to extend conventional technology as long as possible, other techniques may play a greater role in continuing the improvements in computer performance.
Present-day logic and memory chips are based primarily on CMOS (complementary metal-oxide semiconductor) transistor technology. So-called scaling laws, first derived by IBM Fellow Robert Dennard and colleagues in 1972, provide guidelines for reducing the dimensions and voltage levels of an existing CMOS transistor in a coordinated way to produce a smaller, faster one. The three key scaling factors are the thickness of the insulator between the gate and the underlying silicon; the channel length (which is related to the minimum feature size); and the power-supply voltage, which, when applied to the gate, turns the transistor on.
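To make those scaling rules concrete, the sketch below works through one constant-field scaling step of the kind Dennard's guidelines describe. The baseline dimensions, the scaling factor kappa and the derived ratios are illustrative assumptions, not figures given in the article.

```python
# Illustrative sketch of classical constant-field (Dennard) scaling.
# The baseline device values and the scaling factor kappa are assumptions
# chosen for illustration; they are not taken from the article.

def dennard_scale(oxide_nm, channel_nm, supply_v, kappa=1.4):
    """Scale a CMOS device by a factor kappa under constant-field rules:
    dimensions and voltage shrink by 1/kappa, so the electric field stays
    roughly constant, gate delay improves by about 1/kappa, and the number
    of transistors per unit area grows by about kappa squared."""
    return {
        "oxide_thickness_nm": oxide_nm / kappa,
        "channel_length_nm": channel_nm / kappa,
        "supply_voltage_v": supply_v / kappa,
        "relative_delay": 1 / kappa,       # faster switching
        "relative_density": kappa ** 2,    # more transistors per area
    }

# Example: one scaling step from a hypothetical turn-of-the-century device.
print(dennard_scale(oxide_nm=2.5, channel_nm=130.0, supply_v=1.5))
```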
The insulator, which is composed of silicon dioxide, is 2 to 3 nanometers (billionths of a meter) thick in present-day chips. Scaling requires that it become thinner, but it cannot be much less than about 1.5 nanometers thick. If the silicon dioxide layer is made any thinner, electrons can "tunnel" through it, creating currents that prevent the device from working properly. The limit on insulator thickness in turn limits the other dimensions of the transistor, as well as how fast it can switch. According to the latest CMOS technology "roadmap," published in November 1999 by the Semiconductor Industry Association, device designers would like to begin using a 1.5 nanometer insulator as early as 2001, although it may not be reached until 2004.
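The exponential sensitivity behind that roughly 1.5-nanometer floor can be illustrated with a rough tunneling estimate. The barrier height, effective mass and thicknesses in the sketch below are assumed textbook-style values for silicon dioxide, not numbers taken from the article.

```python
import math

# Rough WKB-style estimate of tunneling probability through a rectangular
# barrier, T ~ exp(-2*k*d) with k = sqrt(2*m*phi)/hbar. The barrier height
# (~3.1 eV) and effective mass (~0.4 m_e) are assumed illustrative values
# for SiO2; the point is only the exponential dependence on thickness.

HBAR = 1.054571e-34   # J*s
M_E = 9.109383e-31    # kg
EV = 1.602177e-19     # J

def tunneling_probability(thickness_nm, barrier_ev=3.1, m_eff=0.4):
    k = math.sqrt(2 * m_eff * M_E * barrier_ev * EV) / HBAR   # 1/m
    return math.exp(-2 * k * thickness_nm * 1e-9)

for d in (3.0, 2.0, 1.5, 1.0):
    print(f"{d:.1f} nm oxide -> tunneling probability ~ {tunneling_probability(d):.2e}")
```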
The channel length is defined as the distance between the source and drain, the two points between which a current flows during the operation of a transistor. The shorter the distance, the less time the transistor takes to switch, but various kinds of so-called short-channel effects suggest that the channel length cannot be much shorter than 25 nanometers, a size that could be attained by 2010 at the current rate of scaling.
The power-supply voltage level is perhaps an even more critical value, points out Tak Ning, an IBM Fellow and a leading researcher in CMOS technology. "The electronic nature of silicon requires a minimum level of about 1 volt. Appreciably less than that, and the transistor cannot produce enough current to switch on and off properly. At present, the voltage level is between 1.2 and 1.5 volts, which leaves room for about one more round of reduction, possibly by 2004."
These limits mean that future transistors cannot keep getting smaller and faster according to the time-tested scaling laws. Although modifications to the materials and structure of the transistor might keep progress going for some years -- as might cooling transistors to, say, -40 degrees C -- such tricks could be exhausted by as early as 2010 if progress continues at the current pace. When that day arrives, performance gains at the transistor level could slow to a crawl.
What will the industry do then? Can computer performance continue to improve rapidly even without the help of faster transistors? Will entirely new technologies supplant silicon transistors? No one knows the answers. But many IBM researchers and developers are working to extend the reign of Moore's Law while preparing for the day when it no longer holds.
MOORE BY OTHER MEANS
One of the key measures of chip performance is the clock rate, the heartbeat of a chip. Although it is related to transistor speed, it is not wholly dependent on it. "More efficient circuit design might keep clock rates advancing into the 2010s," says Isaac. "In the past, the pace of change for transistors was so rapid we didn't have time to figure out how to use the devices optimally. Once we do, we can improve clock rates even when transistors don't get faster."
Nevertheless, because actual performance of tasks will continue to improve even after clock rates plateau -- perhaps around 5 to 10 gigahertz -- clock rate could lose relevance as a yardstick of computer speed. "We may," says Ning, "have to substitute other types of measures, such as overall system performance" -- that is, the rate at which a computer performs operations. "Performance will always drive the computer industry, but we'll be achieving that performance in different ways."
One likely strategy in the coming decades is to concentrate on improving performance at higher levels of the system than the individual transistors. "We're far from optimizing how the different parts of a computer -- the processor, the memory and input/output system -- function together," comments Bijan Davari, IBM Fellow and vice president of semiconductor research and development. "And as progress at the device level slows down, a lot of the slack can be picked up by system integration."
In today's computers, logic functions and memory (except for a small amount of high-speed cache) are on separate chips, and those chips are sometimes on different boards. Communication across such distances creates delays and bottlenecks, as processes wait for data to arrive. Already, computer makers are working to move separate functions onto the same chip as the microprocessor. IBM chips that combine DRAM (dynamic random-access memory) and high-performance microprocessors will soon hit the market. Other functions will follow suit, perhaps until the entire computer is on a single chip or -- as design requirements dictate -- on several chips within the same package. "These system integration steps," says Davari, "might ultimately increase computer performance by as much as five times, even if device performance remains the same." Such advances would allow a system-level improvement to continue well into the 2010s.
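A rough way to see why moving memory closer to the processor pays off is a simple stall-cycle model of the sort used in computer architecture texts. Every number in the sketch below is hypothetical, chosen only to illustrate the effect; the 5x figure quoted above is Davari's, not a result of this model.

```python
# Back-of-the-envelope model: effective cycles per instruction equals the
# base CPI plus cycles stalled waiting for off-chip data. All numbers are
# hypothetical illustrations, not figures from the article.

def effective_cpi(base_cpi, mem_accesses_per_instr, miss_rate, miss_penalty_cycles):
    return base_cpi + mem_accesses_per_instr * miss_rate * miss_penalty_cycles

off_chip = effective_cpi(1.0, 0.3, 0.02, 200)   # DRAM on a separate chip
on_chip  = effective_cpi(1.0, 0.3, 0.02, 40)    # embedded DRAM, shorter round trip

print(f"off-chip CPI {off_chip:.2f}, on-chip CPI {on_chip:.2f}, "
      f"~{off_chip / on_chip:.1f}x faster at the same clock rate")
```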
Another way to boost system performance without increasing clock rate is through parallel processing. Today's supercomputers, like the IBM RS/6000® SP®, already achieve their high speeds (thousands of times greater than a single processor) by linking together dozens, hundreds or even thousands of microprocessors. With dropping processor prices, the multiprocessor "could become practical even for the PC market," says Ning.
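A standard way to reason about how far such parallelism can go is Amdahl's law, which caps the speedup by the serial fraction of the work. The workload fractions and processor counts below are hypothetical, not taken from the article.

```python
# Classic Amdahl's-law sketch of parallel speedup: the serial fraction of a
# workload limits how much adding processors can help. The 95% parallel
# fraction used here is a hypothetical example.

def amdahl_speedup(parallel_fraction, n_processors):
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_processors)

for n in (8, 64, 1024):
    print(f"{n:5d} processors, 95% parallel work -> speedup {amdahl_speedup(0.95, n):.1f}x")
```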
There will certainly be obstacles to multiprocessing. For one thing, the fastest processors are power-hungry, a problem that the addition of cooling systems will compound. While today's microprocessors may burn only about 20 watts, tomorrow's faster processors may burn 100. "The optimal solution," suggests Ning, "might be to use slower chips but more of them to get the most computer power per watt."
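Ning's point about power follows from the rough rule that dynamic CMOS power scales with switched capacitance times voltage squared times frequency, and that slower clocks usually permit lower supply voltages. The sketch below uses purely illustrative numbers to show why several slower chips can beat one fast chip on performance per watt.

```python
# Rough illustration of the "more, slower chips" argument: dynamic CMOS
# power ~ C * V^2 * f, and a lower clock usually allows a lower voltage.
# All values below are hypothetical, chosen only to illustrate the trend.

def dynamic_power(cap_farads, voltage, freq_hz):
    return cap_farads * voltage ** 2 * freq_hz

C = 1e-9  # effective switched capacitance (arbitrary illustrative value)

one_fast  = dynamic_power(C, 1.5, 2e9)        # one chip at 2 GHz, 1.5 V
two_slow  = 2 * dynamic_power(C, 1.1, 1e9)    # two chips at 1 GHz, 1.1 V

print(f"one fast chip: {one_fast:.2f} W; two slow chips: {two_slow:.2f} W "
      f"for roughly the same aggregate throughput")
```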
These approaches to improved performance underlie IBM's recent announcement of a five-year plan to build a computer, known as Blue Gene, that will be capable of performing 1 quadrillion floating-point operations per second (see "Cover story"). Among the keys to achieving Blue Gene's performance objectives are system integration and design innovations, in particular the use of streamlined chip architecture and on-chip DRAM.
BEYOND SILICON
At some point, however, perhaps around 2020, parallelism and other approaches to boosting computer power will themselves no longer be able to improve performance. When that ultimate limit of silicon technology is reached, one can only wonder, What lies beyond? That is not an idle question, since 20 years is a relatively short time for ideas to move from an exploratory research stage to widespread application. If there is to be a successor to silicon, chances are that it has at least been glimpsed.
Opinion on a successor is divided. Some researchers reject the possibility that an alternative to CMOS technology will emerge even in the long run. "There is no replacement for CMOS as we know it," declares Ning. "The basic building blocks for future computers will be digital CMOS." But other scientists, including Dennard, say they believe something new will probably come along, as it always has in the history of technology.
If a new technology does arise, it will have to overcome formidable obstacles, emphasizes Thomas Theis, director of physical sciences at IBM's Thomas J. Watson Research Center. "A replacement for silicon CMOS will be possible only once silicon technology truly starts to run out of steam," he says. "Anything that replaces it will have to be dramatically better in nearly every respect." That's because of the enormous investment the industry has already made in the infrastructure of silicon. "If history is any guide," Theis says, "a successor to CMOS will probably be developed to meet the needs of some niche market that doesn't yet exist, with customers that do not yet know they need such a thing."
Although there is no clear follow-on to CMOS, research efforts in the industry have taken on "a new sense of urgency," according to John E. Kelly III, general manager of IBM's Microelectronics Division. "With 15- to 20-year lead times between laboratory work and production, we have to get serious about new ideas," he asserts. Indeed, the Semiconductor Industry Association is sponsoring research initiatives to extend CMOS technology and to go beyond it.
For its part, IBM is pursuing at least three main lines of long-term research aimed at producing a successor technology. The first approach, closest to existing circuits, is based on novel materials and devices. One such device, called a Mott transition field-effect transistor, or MTFET, has been proposed by Dennis Newns at Watson and is being explored by a small team at the lab. It is based not on silicon but on compounds called perovskites. These materials have a wide range of unusual properties, such as high-temperature superconductivity and ferroelectricity, which contribute to their usefulness in novel device applications. The Mott transition, first studied by the Nobel physicist Sir Nevill Mott in the 1960s, is the special way in which some materials, such as perovskites, transform from insulators to metallic conductors as their electron density -- the number of electrons per atom -- is raised. That increase can be brought about in a variety of ways, and Newns has come up with a new one.
His idea is to use a certain kind of perovskite as the channel in an FET. In the normal, "off" state, the channel of an FET is nonconducting. Newns proposes to make the channel conducting -- that is, to make it undergo a Mott transition -- by applying a gate voltage, which is exactly how a typical silicon FET operates. If Newns's ideas prove correct, and if still another kind of perovskite is used as a gate insulator, it may be possible to create perovskite-based transistors that could be both substantially smaller and faster than ordinary CMOS transistors.
So far, though, only single examples of the MTFET have been produced. Many challenges must be met before integrated circuits based on this technology could be built at all, let alone surpass silicon chips in performance. Yet the preliminary results have been encouraging, and the underlying physical arguments in favor of pursuing such a program are sound.
LONGER-TERM THINKING
A second, more exotic approach being pursued at IBM is molecular electronics -- the use of individual molecules to act as switches in a computer. The idea originated with a 1974 proposal by Ari Aviram of Watson and Mark Ratner of Northwestern University to use a certain kind of molecule to pass electric current primarily in one direction, a function performed in electronics by a diode. A team at the University of Alabama succeeded in doing so in 1997. Today, perhaps the best-researched approach to molecular electronics is carbon nanotubes -- cylinders of hexagonally arranged carbon atoms 1 to 2 nanometers wide. Nanotubes can be semiconductors or metals depending on their detailed atomic structure.
In 1998, Phaedon Avouris and his colleagues at Watson demonstrated that a field-effect transistor could be built out of a 1.4-nanometer-wide nanotube bridging a gap between two gold electrodes over a silicon substrate. Among other advantages of nanotubes, Avouris points out, "they can carry huge current densities -- a billion amperes per square centimeter -- without breaking down as normal metals do." That makes them ideal materials for interconnections as well as transistors.
But, like MTFETs, nanotubes must clear many hurdles before they can be used for practical devices. "For one thing," says Avouris, "we don't yet know how to make nanotubes with identical structures. And for commercial applications, we need a different way of assembling the devices from the way we have used in the laboratory." The method Avouris is exploring: self-assembly. His group is also participating in a DARPA-funded effort to use self-assembly to fabricate molecular logic gates based on molecules even smaller than nanotubes.
Joe Jasinski, manager of nanoelectronics and optical science at Watson, explains the rationale behind the pursuit of self-assembly. "If you want to build just one patio, you can take the time to lay bricks one by one," he says. "But if you want to build a trillion identical patios, you need to get the bricks to somehow do the job themselves, to self-assemble into patios. We don't know how to get bricks to self-assemble, because there are no natural forces between bricks that are strong enough compared to the size and weight of the bricks to have an effect. The situation is different for nanometer scale objects." Certain molecules, for example, can self-assemble into quite complex arrays, governed just by the physical forces acting between them. In a recent step forward, IBM scientists Cherie Kagan, David Mitzi and Christos Dimitrakopoulos induced a mixture of organic and inorganic compounds to crystallize from a solution and self-assemble into alternating organic and inorganic layers that can be patterned to form the channels of thin-film FETs.
A group led by Jim Gimzewski at IBM's Zurich Research Laboratory is attempting another approach to molecular computation, one based on machines instead of electronics. "We are attempting to follow physicist Richard Feynman's program of working from the bottom up, using chemistry to design molecules that can do calculations," he says. Such molecules would act as nanomachines. Gimzewski and his colleagues are working on a nanomachine in which electrons would be transported from one electrode to another through molecular wires connected to a tiny molecular ring containing a still smaller molecular rotor. By applying an appropriate voltage, electrons could be directed through the wires and ring to the rotor, which would then rotate before releasing the electron to another electrode, thereby acting as a kind of switch. Although the work is still at a very preliminary stage -- the molecular rotor system exists, but a fully functional device has not yet been built -- Gimzewski has high expectations. "We see this as a potential technology for general-purpose computers," he says.
To link up tiny molecular nanomachines, the Zurich team is developing ways to use atomic-force-microscope (AFM) technology to create extremely thin wires. An AFM senses the atomic contours of a surface with a tiny silicon tip attached to a delicate cantilever, which is kept at a constant distance from the surface when scanned over it. This same cantilever can be used as a stencil, allowing extremely fine lines to be directly deposited through arrays of tiny apertures cut into the cantilever. While a single AFM cantilever might work slowly, other AFM-based projects at Zurich have shown that large arrays of cantilevers can operate in parallel, enabling rapid processing.
Another long-term prospect is quantum computing, which would take advantage of new types of calculations that exploit the esoteric effects of quantum mechanics. It has been shown that such effects could speed up the solutions to some problems -- for example, factoring large numbers and searching an unordered database -- by many orders of magnitude. Quantum computers could also be used to simulate complex quantum-mechanical systems and provide insight into the workings of quantum mechanics itself. In the past few years, simple models of such computers have been shown to operate in the laboratory (see IBM Research, Number 4, 1998, "A quantum leap for computing").
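For the database-search example, the scale of the speedup can be made concrete with the standard asymptotic query counts: on the order of N classical lookups versus roughly the square root of N quantum queries for Grover's algorithm. These are general results from the quantum computing literature, not figures given in the article.

```python
import math

# Scale of the quantum speedup for searching an unordered database:
# a classical search needs on the order of N lookups, while Grover's
# algorithm needs on the order of sqrt(N) queries. These are standard
# asymptotic estimates, not figures from the article.

for n in (10**6, 10**9, 10**12):
    classical = n
    quantum = int(math.sqrt(n))
    print(f"N = {n:>15,}: ~{classical:,} classical lookups vs ~{quantum:,} quantum queries")
```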
Still more radical concepts may come along. "After all," Davari points out, "we know that brains don't operate on the same principles as a digital computer, and yet they do a fantastic job. Even a mosquito brain, which is much smaller than a pinhead, can control navigation, vision, smell, mating, feeding and other functions." While the principles on which the human brain operates are far from understood, it's possible that, as research gradually reveals the underlying mechanisms, fundamentally new ideas for computing could emerge.
So where are computers going after Moore's Law? For probably another decade past 2010, the industry will be trying to push performance through system integration, and better exploitation of parallel processing and other architectural innovations. Beyond that, either the industry will reach a period of maturity and slow growth in performance, or revolutionary new ideas will move out of the laboratory and into production.