Tuesday, 22 October 2013

Chess and Computers

With simple rules but endless permutations, chess has fascinated millions of players for hundreds of years. When mechanical automatons became fashionable in the 18th century, onlookers were intrigued by “the Turk,” a chess-playing automaton. While the Turk was eventually shown to be a hoax (a human player was hidden inside), the development of the electronic digital computer in the mid-20th century provided the opportunity to create a true automatic chess player.

In 1950 Claude Shannon outlined the two basic strategies that would be used by future chess-playing programs. The “brute force” strategy would examine the possible moves for the computer chess player, the possible replies of the opponent to each move, the possible next moves by the computer, and so on for as many half moves or “plies” as possible. The moves would be evaluated by a “minimax” algorithm that would find the move that best improves the computer’s position despite the opponent’s best play.
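
The idea behind minimax is easy to sketch in code. The fragment below is a toy illustration only, not any real program’s code: it runs minimax over a small hand-built game tree whose leaves hold static scores, rather than generating and evaluating actual chess positions.

```python
# Toy minimax on an abstract game tree: interior nodes are lists of child
# subtrees, leaves are static scores from the computer's point of view.
def minimax(node, maximizing):
    if isinstance(node, (int, float)):        # leaf: static evaluation
        return node
    child_scores = [minimax(child, not maximizing) for child in node]
    # Each side is assumed to choose the reply that is best for itself.
    return max(child_scores) if maximizing else min(child_scores)

# Two plies: three candidate moves for the computer, each answered by the
# opponent's best (i.e., worst-for-the-computer) reply.
tree = [[3, 5], [2, 9], [0, 7]]
print(minimax(tree, maximizing=True))         # prints 3
```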

The fundamental problem with the brute-force approach is the “combinatorial explosion”: Looking ahead just three moves (six plies) would involve evaluating more than 700,000,000 positions. This was impractical given the limited computing power available in the 1950s. Shannon realized this and decided that a successful chess program would have to incorporate principles of chess strategy that would enable it to quickly recognize and discard moves that did not show a likelihood of gaining material or improving the position (such as by increasing control of center squares). As a result of this “pruning” approach, only the more promising initial moves would be searched further, but those moves could be analyzed much more deeply.
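
A quick back-of-the-envelope calculation shows both the explosion and the payoff from pruning. The figures below assume an average of roughly 30 legal moves per position, the rule-of-thumb figure behind the 700,000,000 estimate above; the choice of 7 “plausible” moves for the pruned search is purely illustrative.

```python
# Exhaustive search: roughly 30 legal moves per position, six plies deep.
BRANCHING = 30
print(f"{BRANCHING ** 6:,}")    # 729,000,000 positions for three full moves

# Knowledge-based pruning: keep only ~7 "plausible" moves per position.
PLAUSIBLE = 7
print(f"{PLAUSIBLE ** 6:,}")    # 117,649 positions for the same six plies...

# ...or, for comparable effort, look much further ahead instead:
print(f"{PLAUSIBLE ** 10:,}")   # 282,475,249 positions for ten plies,
                                # still fewer than the exhaustive six-ply search
```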

The challenge of the pruning approach is the need to identify the principles of good play and codify them in such a way that the program can use them reliably. Progress was slow at first: programs of the 1950s and 1960s could scarcely challenge an experienced amateur human player, let alone a master. A typical program would play a mixture of reasonable moves, odd-looking but justifiable moves, and moves that showed the chess version of “nearsightedness.” By the 1970s, however, computing power was rapidly increasing, and a new generation of programs such as Chess 4.0 from Northwestern University abandoned most pruning techniques in favor of brute-force searches that could now extend further ahead. In practice, each programmer chose a particular balance between brute force and pruning-selection techniques. An ever-deeper search could be combined with evaluation of particularly important positional features (such as the possibility of creating a “passed pawn” that could be promoted to a queen), as sketched below.
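
The kind of static evaluation that such programs combine with deep search can be sketched as a weighted sum of material and positional terms. The feature names and weights below are illustrative assumptions only, not the values used by Chess 4.0 or any other real program.

```python
def evaluate(material, center_squares, passed_pawns):
    """Static score of a position from the side-to-move's point of view."""
    score = material                  # pawn = 1, knight/bishop = 3, rook = 5, queen = 9
    score += 0.1 * center_squares     # small bonus per center square controlled
    score += 0.5 * passed_pawns       # a passed pawn may later be promoted to a queen
    return score

# Example: up one pawn, controlling two center squares, with one passed pawn.
print(evaluate(material=1, center_squares=2, passed_pawns=1))   # prints 1.7
```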

By the end of the 1970s, International Master David Levy was still beating the best chess programs of the time (defeating Chess 4.7 in 1978). A decade later, however, Levy was defeated in 1989 by Deep Thought, a program that ran on a specially designed computer that could examine hundreds of millions of positions per move. That same year World Champion Garry Kasparov decisively defeated the machine. In 1996, however, the successor program Deep Blue (sponsored by IBM) shocked the chess world by beating Kasparov in the first game of their match. Kasparov went on to win the match, but the following year an updated version of Deep Blue defeated Kasparov 3½–2½. A computer had arguably become the strongest chess player in the world. As a practical matter, the match brought IBM invaluable publicity as a world leader in supercomputing.

Chess and AI

The earliest computer chess theorists such as Claude Shannon and Alan Turing saw the game as one potential way to demonstrate true machine intelligence. Ironically, by the time computers had truly mastered chess, the artificial intelligence (AI) community had concluded that mastering the game was largely irrelevant to its goals. AI pioneers Herbert Simon and John McCarthy have referred to chess as “the Drosophila of AI.” By this they mean that, like the ubiquitous fruit flies in genetics research, chess became an easy way to measure computer prowess. But what was it measuring? The dominant brute-force approach was more a measure of computing power than of the application of such AI techniques as pattern recognition. (There is, however, still some interest in writing chess programs that “think” more like a human player.) In recent years there has been some interest in programming computers to play the Asian board game Go, where positional and structural elements play a greater role than in chess. However, even the latest generation of Go programs seems to rely more on statistical approaches than on deep conceptual analysis.
