As early as the 1930s, researchers had begun to investigate the electrical properties of materials such as silicon and germanium. Such materials, dubbed “semiconductors,” were neither good conductors of electricity (such as copper) nor good insulators (such as rubber). In 1939, one researcher, William Shockley, wrote in his notebook: “It has today occurred to me that an amplifier using semiconductors rather than vacuum [tubes] is in principle possible.” In other words, if the conductivity of a semiconductor could be made to vary in a controlled way, it could serve as an electronic “valve” in the same way that a vacuum tube can be used to amplify a current or to serve as an electronic switch.
The needs of the ensuing wartime years made it evident that a solid-state electronic device would bring many advantages over the vacuum tube: compactness, lower power usage, and higher reliability. Increasingly complex electronic equipment, ranging from military fire control systems to the first digital computers, further underscored the inadequacy of the vacuum tube.
In 1947, William Shockley, along with John Bardeen and Walter Brattain, invented the transistor, a solid-state electronic device that could replace the vacuum tube for most low-power applications, including the binary switching that is at the heart of the electronic digital computer. But as the computer industry strove to pack more processing power into a manageable volume, the transistor itself began to appear bulky.
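To make the idea of binary switching concrete, the sketch below models an idealized transistor as a simple on/off switch and wires two of them into a NAND gate, a universal building block of digital logic. This is purely illustrative (the function names are our own, and real transistors are analog devices), not a description of any particular circuit:

```python
def transistor(gate: bool) -> bool:
    """Idealized transistor as a switch: conducts when the gate input is high."""
    return gate

def nand(a: bool, b: bool) -> bool:
    # Two idealized transistors in series connect the output to ground
    # only when both conduct; otherwise the output stays high.
    return not (transistor(a) and transistor(b))

# Every other logic function (NOT, AND, OR, ...) can be composed from NAND,
# which is why a reliable binary switch is enough to build a computer.
for a in (False, True):
    for b in (False, True):
        print(f"NAND({int(a)}, {int(b)}) = {int(nand(a, b))}")
```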
Starting in 1958, two researchers, Jack Kilby of Texas Instruments and Robert Noyce of Fairchild Semiconductor, independently arrived at the next stage of electronic miniaturization: the integrated circuit (IC). The basic idea of the IC is to make semiconductor resistors, capacitors, and diodes, combine them with transistors, and assemble them into complete, compact solid-state circuits. Kilby did this by embedding the components on a single piece of germanium called a substrate. However, this method required the painstaking and expensive hand-soldering of the tiny gold wires connecting the components. Noyce soon came up with a superior method: using a lithographic process, he was able to print the pattern of wires for the circuit onto a board containing a silicon substrate. The components could then be easily connected to the circuit. Thus was born the ubiquitous PCB (printed circuit board). This technology would make the minicomputer (a machine that was roughly refrigerator-sized rather than room-sized) possible during the 1960s and 1970s. PCBs were not only far more reliable than hand-soldered connections; a failed board could also be easily “swapped out” for a replacement, simplifying maintenance.
From IC to Chip
The next step to the truly integrated circuit was to form the individual devices onto a single ceramic substrate (much smaller than the printed circuit board) and encapsulate them in a protective polymer coating. The device then functioned as a single unit, with input and output leads to connect it to a larger circuit. However, the speed of this “hybrid IC” is limited by the relatively large distance between components. The modern IC that we now call the “computer chip” is a monolithic IC. Here the devices, rather than being attached to the silicon substrate, are formed by altering the substrate itself with tiny amounts of impurities (a process called “doping”). This creates regions with an excess of electrons (n-type, for negative) or a deficit (p-type, for positive). The junction between a p and an n region functions as a diode. More complex arrangements of p and n regions form transistors. Layers of transistors and other devices can be formed on top of one another, resulting in a highly compact integrated circuit. Today this is generally done using optical lithography techniques, although as the separation between components approaches 100 nm (nanometers, or billionths of a meter), the process becomes limited by the wavelength of the light used.
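As a rough illustration of the diode behavior of the p-n junction described above, the sketch below evaluates the standard Shockley ideal diode equation, I = I_s(e^(V/nV_T) − 1). The parameter values used (saturation current, ideality factor) are assumptions chosen for the example, not figures from the text:

```python
import math

def diode_current(v: float,
                  i_s: float = 1e-12,     # saturation current (A); assumed value
                  n: float = 1.0,         # ideality factor; assumed value
                  v_t: float = 0.02585):  # thermal voltage kT/q at ~300 K (V)
    """Current through an ideal p-n junction at bias v (V), per the
    Shockley diode equation I = i_s * (exp(v / (n * v_t)) - 1)."""
    return i_s * (math.exp(v / (n * v_t)) - 1.0)

# Forward bias conducts strongly; reverse bias blocks almost entirely:
# the one-way "valve" behavior that makes the junction a diode.
for v in (-0.5, 0.0, 0.3, 0.6):
    print(f"V = {v:+.1f} V  ->  I = {diode_current(v):.3e} A")
```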
In computers, the IC chip is used for two primary functions: logic (the processor) and memory. The microprocessors of the 1970s were measured in thousands of transistor equivalents, while chips such as the Pentium and Athlon, marketed by the late 1990s, are measured in tens of millions of transistors (see microprocessor). Meanwhile, memory chips have increased in capacity from the 4K and 16K common around 1980 to 256 MB and more. In what became known as “Moore’s law,” Gordon Moore observed that the number of transistors per chip has doubled roughly every 18 months.
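As a quick back-of-the-envelope check of that doubling rate, the sketch below asks how long it should take to grow from chips measured in thousands of transistors to chips measured in tens of millions, assuming one doubling every 18 months. The round starting and ending counts are our own illustrative assumptions:

```python
import math

# Growth from ~1,000 transistors (early 1970s) to ~10,000,000
# (late 1990s): a factor of 10,000. (Round illustrative figures.)
growth_factor = 10_000_000 / 1_000

doublings = math.log2(growth_factor)   # about 13.3 doublings
years = doublings * 18 / 12            # 18 months per doubling

print(f"{doublings:.1f} doublings -> about {years:.0f} years")
# Roughly 20 years, consistent with the span from the early 1970s
# to the 1990s described above.
```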
Future Technologies
Although Moore’s law has proven to be surprisingly resilient, new technologies will be required to maintain the pace of progress.
In January 2007, Intel and IBM separately announced processes for making transistors using compounds of the exotic metal hafnium. It turns out that hafnium-based materials are much better than the traditional silicon dioxide at preventing power leakage (and the resulting inefficiency) through insulating layers that are only about five atoms thick. Hafnium-based transistors can also be packed more closely together and/or run at a higher speed.
Another approach is to find new ways to connect the transistors so they can be placed closer together, allowing signals to travel more quickly and thus providing faster operation. Hewlett-Packard (HP) is developing a way to place the connections on layers above the transistors themselves, thus reducing the space between components. The scheme uses two layers of conducting material separated by a layer of insulating material that can be made to conduct when a current is applied to it. Although promising, the approach faces difficulties in making the wires (only about 100 atoms thick) reliable enough for applications such as computer memory or microprocessors.
Ultimately, direct fabrication at the atomic level (see nanotechnology) will allow for the maximum density and efficiency of computer chips.