
Sunday, 20 October 2013

artificial intelligence

The development of the modern digital computer following World War II led naturally to the consideration of the ultimate capabilities of what were soon dubbed “thinking machines” or “giant brains.” The ability to perform calculations flawlessly and at superhuman speeds led some observers to believe that it was only a matter of time before the intelligence of computers would surpass human levels. This belief would be reinforced over the years by the development of computer programs that could play chess with increasing skill, culminating in the match victory of IBM’s Deep Blue over world champion Garry Kasparov in 1997. (See chess and computers.)

However, the quest for artificial intelligence would face a number of enduring challenges, the first of which is a lack of agreement on the meaning of the term intelligence, particularly in relation to such seemingly different entities as humans and machines. While chess skill is considered a sign of intelligence in humans, the game is deterministic in that optimum moves can be calculated systematically, limited only by the processing capacity of the computer. Human chess masters use a combination of pattern recognition, general principles, and selective calculation to come up with their moves. In what sense could a chess-playing computer that mechanically evaluates millions of positions be said to “think” in the way humans do? Similarly, computers can be provided with sets of rules that can be used to manipulate virtual building blocks, carry on conversations, and even write poetry. While all these activities can be perceived by a human observer as being intelligent and even creative, nothing can truly be known about what the computer itself might be experiencing.

In 1950, computer pioneer Alan M. Turing suggested a more productive approach to evaluating claims of artificial intelligence in what became known as the Turing test (see Turing, Alan). Basically, the test involves having a human interact with an “entity” under conditions where he or she does not know whether the entity is a computer or another human being. If the human observer, after engaging in teletyped “conversation,” cannot reliably determine the identity of the other party, the computer can be said to have passed the Turing test. The idea behind this approach is that rather than attempting to precisely and exhaustively define intelligence, we can engage human experience and intuition about what intelligent behavior is like. If a computer can successfully imitate such behavior, then it at least becomes problematic to say that it is not intelligent.

Computer programs have been able to pass the Turing test to a limited extent. For example, a program called ELIZA written by Joseph Weizenbaum can carry out what appears to be a responsive conversation on themes chosen by the interlocutor. It does so by rephrasing statements or providing generalizations in the way that a nondirective psychotherapist might. But while ELIZA and similar programs have sometimes been able to fool human interlocutors, in-depth probing has always managed to uncover the mechanical nature of the responses.
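To make the rephrasing idea concrete, here is a minimal Python sketch in the spirit of ELIZA. The patterns and canned replies are invented for illustration and are not Weizenbaum’s actual script; a real keyword-matching program would also swap pronouns and rank keywords.

    import random
    import re

    # Invented pattern/reply pairs; real ELIZA scripts were far larger and
    # also transformed pronouns ("my" -> "your"), which this sketch skips.
    RULES = [
        (r"i need (.*)", ["Why do you need {0}?", "Would it really help to get {0}?"]),
        (r"i am (.*)", ["How long have you been {0}?", "Why do you say you are {0}?"]),
        (r"my (.*)", ["Tell me more about your {0}."]),
        (r"(.*)", ["Please go on.", "How does that make you feel?"]),
    ]

    def respond(statement):
        # Rephrase the first pattern that matches the statement.
        text = statement.lower().strip(".!?")
        for pattern, replies in RULES:
            match = re.match(pattern, text)
            if match:
                return random.choice(replies).format(*match.groups())

    print(respond("I am worried about my exams"))
    # e.g. "How long have you been worried about my exams?"

Even this toy version shows why in-depth probing exposes the mechanism: the program simply echoes text it does not understand.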

Although passing the Turing test could be considered evidence for intelligence, the question of whether a computer might have consciousness (or awareness of self) in the sense that humans experience it might be impossible to answer. In practice, researchers have had to confine themselves to producing (or simulating) intelligent behavior, and they have had considerable success in a variety of areas.

Top-Down Approaches
The broad question of a strategy for developing artificial intelligence crystallized at a conference held in 1956 at Dartmouth College. Four researchers can be said to be founders of the field: Marvin Minsky (founder of the AI Laboratory at MIT), John McCarthy (at MIT and later, Stanford), and Herbert Simon and Allen Newell (developers of a mathematical problem-solving program called Logic Theorist at the Rand Corporation, who later founded the AI Laboratory at Carnegie Mellon University). The 1950s and 1960s were a time of rapid gains and high optimism about the future of AI (see Minsky, Marvin and McCarthy, John).

Most early attempts at AI involved trying to specify rules that, together with properly organized data, can enable the machine to draw logical conclusions. In a production system the machine has information about “states” (situations) plus rules for moving from one state to another and, ultimately, to the “goal state.” A properly implemented production system can not only solve problems, it can also give an explanation of its reasoning in the form of a chain of rules that were applied.
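A production system of this kind can be sketched in a few lines of Python. The toy arithmetic problem and rule names below are invented for illustration; the point is that the solver returns the chain of rules it applied, which doubles as an explanation of its reasoning.

    from collections import deque

    def solve(start, goal, rules):
        # Breadth-first search over states reachable by applying rules.
        frontier = deque([(start, [])])
        seen = {start}
        while frontier:
            state, chain = frontier.popleft()
            if state == goal:
                return chain          # the explanation: rules applied in order
            for name, apply_rule in rules:
                new_state = apply_rule(state)
                if new_state is not None and new_state not in seen:
                    seen.add(new_state)
                    frontier.append((new_state, chain + [name]))
        return None

    # Toy problem: reach 10 starting from 2 using "double" and "add one" rules.
    rules = [
        ("double", lambda n: n * 2 if n * 2 <= 10 else None),
        ("add one", lambda n: n + 1 if n + 1 <= 10 else None),
    ]
    print(solve(2, 10, rules))   # -> ['double', 'add one', 'double']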

The program SHRDLU, developed by Terry Winograd at MIT, demonstrated that within a simplified “microworld” of geometric shapes a program can solve problems and learn new facts about the world. Marvin Minsky later developed a more generalized approach called “frames” to provide the computer with an organized database of knowledge about the world comparable to that which a human child assimilates through daily life. Thus, a program with the appropriate frames can act as though it understands a story about two people in a restaurant because it “knows” basic facts such as that people go to a restaurant to eat, the meal is cooked for them, someone pays for the meal, and so on.
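The restaurant example suggests how a frame might be represented in code. The slot names below are invented placeholders rather than Minsky’s notation; the key point is that details missing from a story are filled in from the frame’s defaults.

    # A frame as a set of slots with common-sense default values.
    RESTAURANT_FRAME = {
        "purpose": "eat a meal",
        "roles": ["customer", "cook", "server"],
        "typical_sequence": ["enter", "order", "eat", "pay", "leave"],
        "who_cooks": "cook",
        "who_pays": "customer",
    }

    def answer(frame, story, slot):
        # Prefer facts stated in the story; otherwise fall back on the frame.
        return story.get(slot, frame.get(slot))

    story = {"customer": "Alice", "meal": "soup"}       # the story never says who pays
    print(answer(RESTAURANT_FRAME, story, "who_pays"))  # -> 'customer'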

While promising, the frames approach seemed to founder because of the sheer number of facts and relationships needed for a comprehensive understanding of the world. During the 1970s and 1980s, however, expert systems were developed that could carry out complex tasks such as determining the appropriate treatment for infections (MYCIN) and analyzing molecules (DENDRAL). Expert systems combined rules of inference with specialized databases of facts and relationships. Expert systems have thus been able to encapsulate the knowledge of human experts and make it available in the field (see expert systems and knowledge representation).
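A rule-based expert system can be caricatured as a forward-chaining loop over if-then rules and a base of facts. The rules below are invented stand-ins, not MYCIN’s actual knowledge base, but they show how conclusions and an explanation chain accumulate.

    # Each rule: (set of required facts, conclusion to add).  Invented examples.
    RULES = [
        ({"fever", "stiff_neck"}, "suspect_meningitis"),
        ({"suspect_meningitis"}, "recommend_lumbar_puncture"),
    ]

    def infer(facts, rules):
        # Fire any rule whose conditions are all present until nothing changes,
        # recording which rule produced each new conclusion.
        facts = set(facts)
        explanation = []
        changed = True
        while changed:
            changed = False
            for conditions, conclusion in rules:
                if conditions <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    explanation.append(f"{sorted(conditions)} -> {conclusion}")
                    changed = True
        return facts, explanation

    facts, why = infer({"fever", "stiff_neck"}, RULES)
    print(why)   # the chain of rules behind each conclusion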


The most elaborate version of the frames approach has been a project called Cyc (short for “encyclopedia”), developed by Douglas Lenat. This project is now in its third decade and has codified millions of assertions about the world, grouping them into semantic networks that represent dozens of broad areas of human knowledge. If successful, the Cyc database could be applied in many different domains, including applications such as the automatic analysis and summarization of news stories.
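A semantic network of assertions can be modeled, very roughly, as subject-relation-object triples with queries that follow links transitively. The assertions below are illustrative only and bear no relation to Cyc’s actual content or its representation language.

    # Assertions as (subject, relation, object) triples.
    ASSERTIONS = [
        ("Rover", "isa", "Dog"),
        ("Dog", "isa", "Mammal"),
        ("Mammal", "isa", "Animal"),
        ("Dog", "can", "bark"),
    ]

    def isa(thing, category):
        # Follow "isa" links transitively through the network.
        if thing == category:
            return True
        return any(isa(obj, category)
                   for subj, rel, obj in ASSERTIONS
                   if subj == thing and rel == "isa")

    print(isa("Rover", "Animal"))   # True: Rover -> Dog -> Mammal -> Animal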

Bottom-Up Approaches
Several “bottom-up” approaches to AI were developed in an attempt to create machines that could learn in a more humanlike way. The one that has gained the most practical success is the neural network, which attempts to emulate the operation of the neurons in the human brain. Researchers believe that in the human brain perceptions or the acquisition of knowledge leads to the reinforcement of particular neurons and neural paths, improving the brain’s ability to perform tasks. In the artificial neural network a large number of independent processors attempt to perform a task. Those that succeed are reinforced or “weighted,” while those that fail may be negatively weighted. This leads to a gradual improvement in the overall ability of the system to perform a task such as sorting numbers or recognizing patterns (see neural network).
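The reinforcement-by-weighting idea can be illustrated with a single artificial neuron (a perceptron). The task, learning rate, and epoch count below are arbitrary choices for the sketch; real neural networks use many interconnected units and more sophisticated training procedures.

    import random

    def train_perceptron(samples, epochs=25, rate=0.1):
        # samples: list of (inputs, target) pairs with target 0 or 1.
        n = len(samples[0][0])
        weights = [random.uniform(-0.5, 0.5) for _ in range(n)]
        bias = 0.0
        for _ in range(epochs):
            for inputs, target in samples:
                output = 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0
                error = target - output               # +1, 0, or -1
                # Reinforce connections that should have fired, weaken the rest.
                weights = [w + rate * error * x for w, x in zip(weights, inputs)]
                bias += rate * error
        return weights, bias

    # Toy pattern-recognition task: learn the logical AND of two inputs.
    data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    weights, bias = train_perceptron(data)
    print([1 if sum(w * x for w, x in zip(weights, inp)) + bias > 0 else 0
           for inp, _ in data])                       # typically [0, 0, 0, 1]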

Since the 1950s, some researchers have suggested that computer programs or robots be designed to interact with their environment and learn from it in the way that human infants do. Rodney Brooks and Cynthia Breazeal at MIT have created robots with a layered architecture that includes motor, sensory, representational, and decision-making elements. Each level reacts to its inputs and sends information to the next higher level. The robot Cog and its descendant Kismet often behaved in unexpected ways, generating complex responses that are emergent rather than specifically programmed.
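A layered architecture of this general kind might be sketched as a stack of functions, each reacting to its inputs and passing a summary upward. The layer names follow the description above, but the specific behaviors are invented for illustration and are not taken from Cog or Kismet.

    def motor_layer(sensors):
        # Lowest level: a reflex, e.g. stop when an obstacle is very close.
        return {"reflex": "stop" if sensors["distance"] < 0.2 else "none"}

    def sensory_layer(sensors, below):
        return {"obstacle_near": sensors["distance"] < 1.0, **below}

    def representational_layer(sensors, below):
        return {"free_path": not below["obstacle_near"], **below}

    def decision_layer(sensors, below):
        return "explore" if below["free_path"] else "turn"

    def robot_step(sensors):
        # Each layer's output feeds the next higher layer.
        out = motor_layer(sensors)
        out = sensory_layer(sensors, out)
        out = representational_layer(sensors, out)
        return decision_layer(sensors, out)

    print(robot_step({"distance": 0.5}))   # -> 'turn'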

The approach characterized as “artificial life” adds a genetic component in which the successful components pass on program code “genes” to their offspring. Thus, the power of evolution through natural selection is simulated, leading to the emergence of more effective systems (see artificial life and genetic algorithms).
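A bare-bones genetic algorithm captures the gene-passing idea. The bit-string “genes” and the fitness function below (counting 1 bits) are placeholders for whatever structures and task a real artificial-life system would evolve.

    import random

    def evolve(length=10, population=20, generations=30, mutation=0.05):
        fitness = sum                                   # toy fitness: count of 1 bits
        pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(population)]
        for _ in range(generations):
            pop.sort(key=fitness, reverse=True)
            parents = pop[: population // 2]            # selection: the fittest survive
            children = []
            while len(children) < population:
                a, b = random.sample(parents, 2)
                cut = random.randrange(1, length)
                child = a[:cut] + b[cut:]               # crossover: genes from both parents
                child = [1 - g if random.random() < mutation else g for g in child]
                children.append(child)
            pop = children
        return max(pop, key=fitness)

    print(evolve())   # tends toward the all-ones string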


In general the top-down approaches have been more successful in performing specialized tasks, but the bottom-up approaches may have greater general application and have encouraged cross-fertilization between the fields of artificial intelligence, cognitive psychology, and research into human brain function.

Application Areas

While powerful artificial intelligence is not yet ubiquitous in everyday computing, AI principles are being successfully used in a number of application areas. These areas, which are all covered separately in this book, include

•  devising ways of capturing and representing knowledge, making it accessible to systems for diagnosis and analysis in fields such as medicine and chemistry (see knowledge representation and expert systems)

•  creating systems that can converse in ordinary language for querying databases, responding to customer service calls, or other routine interactions (see natural language processing)

•  enabling robots to not only see but also “understand” objects in a scene and their relationships (see computer vision and robotics)

•  improving systems for voice and face recognition, as well as sophisticated data mining and analysis (see speech recognition and synthesis, biometrics, and data mining)

•  developing software that can operate autonomously, carrying out assignments such as searching for and evaluating competing offerings of merchandise (see software agent)

Prospects

The field of AI has been characterized by successive waves of interest in various approaches, and ambitious projects have often failed. However, expert systems and, to a lesser extent, neural networks have become the basis for viable products. Robotics and computer vision offer a significant potential payoff in industrial and military applications. The creation of software agents to help users navigate the complexity of the Internet is now of great commercial interest. The growth of AI has turned out to follow a steeper and more complex path than originally anticipated. One view suggests steady progress. Another, shared by science fiction


