Cognitive science is the study of mental processes such as reasoning, memory, and perception. It is necessarily interdisciplinary, drawing on fields such as psychology, linguistics, and neurology. The computer is important to cognitive science because it offers a potential nonhuman model of a thinking entity. Attempts at artificial intelligence over the past 50 years have used the insights of cognitive science to help devise artificial means of reasoning and perception. At the same time, models created by computer scientists (such as the neural network and Marvin Minsky’s idea of “multiple intelligent agents”) have in turn been applied to the study of human cognition (see Minsky, Marvin Lee and neural network).
Since the late 19th century, technological metaphors have been used to describe the human mind. The neurons and synapses of the brain were compared to the multitude of switches in a telephone company central office. The invention of the digital computer seemed to offer an even more compelling correspondence between neurons, with their electrochemical states, and the binary states of vacuum tubes or transistors. It is only a small further step to assert that human mental processes can in principle be reduced to computation, albeit a very complex tapestry of it. Various schools of popular psychology and personal improvement have offered simplistic images of a human mind suffering from “bad programming” that can be debugged or manipulated through various processes. The simulation of some forms of reasoning and language construction by AI programs certainly suggests that there are fruitful analogies between human and machine cognition, but the construction of a detailed model applicable to both human and artificial intelligence seemed almost as distant in the science-fictional year of 2001 as it was when Alan Turing and other AI pioneers first considered such questions in the early 1950s (see Turing, Alan Mathison).
Symbolists and Connectionists
Unlike standard computer memory cells, neurons can have hundreds of potential connections (and thus states). If a human being is a computer, it must be to a considerable extent an analog computer, with input in the form of levels of various chemicals and electrical impulses. Yet in the mid-1970s, Allen Newell and Herbert Simon suggested that the “output” of human mental experience can be effectively mapped as relationships between symbols (words, images, and so forth) that correspond to physical states (this is called the Physical Symbol System Hypothesis). If so, then such a symbol system would be “computable” in the Church-Turing sense (see computability and complexity). Working from the computer end, AI researchers have created a variety of programs that seem to “understand” restricted universes of discourse, such as a table with variously shaped blocks upon it or “story frames” based upon common human activities such as eating in a restaurant. Thus, symbol manipulators can at least appear to be intelligent.
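The symbolist view can be made concrete with a small sketch (purely illustrative; the facts, rule, and names below are assumptions, not taken from any particular AI system): a blocks world is represented as a set of symbolic facts of the form (block, support), and “reasoning” is nothing more than rule-governed manipulation of those symbols.

```python
# Illustrative sketch of a "blocks world" as a physical symbol system.
# The scene is a set of symbolic facts; a rule rewrites those facts.

def clear(state, b):
    """True if nothing is stacked on b (the table is always clear)."""
    return b == "table" or not any(s == b for (_, s) in state)

def move(state, block, dest):
    """Move block onto dest when both are clear; otherwise do nothing."""
    if not (clear(state, block) and clear(state, dest)):
        return state
    # Replace the old (block, support) fact with (block, dest).
    return {(b, s) for (b, s) in state if b != block} | {(block, dest)}

# A sits on the table; B sits on A.
scene = {("A", "table"), ("B", "A")}
scene = move(scene, "B", "table")  # legal: nothing is on top of B
```

The program never “sees” a block; it only rewrites symbols under rules, which is exactly the kind of activity the Physical Symbol System Hypothesis takes to be sufficient for intelligent action.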
The “connectionists,” however, argue that it is not symbolic representations that are significant, but the structure within the mind that generates them. By designing neural networks (or distributed processor networks), the connectionists have been able to create systems that produce apparently intelligent behavior (such as pattern recognition) without any reference to symbolic representation.
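The connectionist alternative can be sketched just as briefly (again illustrative; the single-unit perceptron and its toy training data are assumptions chosen for brevity): a numeric unit learns the logical-AND pattern purely by adjusting weights, with no symbolic rules anywhere in the program.

```python
# Illustrative connectionist sketch: a one-unit perceptron learns a
# pattern (logical AND) by nudging numeric weights toward fewer errors.

def train(samples, epochs=20, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out            # no rules, just error correction
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train(data)

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
```

The learned behavior lives entirely in the numeric weights; nowhere is the concept “AND” represented symbolically, which is the connectionists’ point.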
Critiques have also come from philosophers. Hubert Dreyfus has pointed out that computers lack the body, senses, and social milieu that shape human thought. That machines can generate symbolic representations according to programmed rules does not make them truly intelligent, at least not in the way intelligence is experienced by human beings. John Searle responded to the famous Turing test (which holds that if a human being cannot distinguish a computer’s conversation from a human’s, the computer is arguably intelligent). Searle’s “Chinese Room” imagines a room in which an English-speaking person who knows no Chinese is equipped with a program that lets him manipulate Chinese words in such a way that a Chinese observer would think he knows Chinese. Similarly, Searle argues, the computer might act “intelligently,” but it does not really understand what it is doing.
Advances in cognitive science will both influence and depend on developments in brain research (especially the connection between physical states and cognition) and in artificial intelligence.