July 29, 1997
To Test a Powerful Computer, Play an Ancient Game
By GEORGE JOHNSON
Deep Blue's recent trouncing of Garry Kasparov sent shock waves through the Western world. In much of the Orient, however, the news that a computer had beaten a chess champion was likely to have been met with a yawn.
Credit: Linda Rosier for The New York Times
While there are avid chess players in Japan, China, Korea and throughout the East, far more popular is the deceptively simple game of Go, in which black and white pieces called stones are used to form intricate, interlocking patterns that sprawl across the board. So subtle and beautiful is this ancient game that, to hear aficionados describe it, Go is to chess what Asian martial arts like aikido are to a boxing match.
And, Go fans proudly note, a computer has not come close to mastering what remains a uniquely human game.
Over the last decade, inspired in part by a $1.4 million prize offered by a Taiwanese organization for a computer program that can beat a champion human player, designers have been coming up with better and better Go-playing machines. Later this year, about $25,000 in prizes will be given to the best programs in two annual international contests in Japan and the United States.
As impressive as the winners of these tournaments have been, they can still be defeated by even an amateur player with perhaps a year's experience.
Deep Blue defeated the world chess champion by leveraging a moderate amount of chess knowledge with a huge amount of blind, high-speed searching power.
But this roughshod approach is powerless against the intricacies of Go, leaving computers at a distinct disadvantage. "Brute-force searching is completely and utterly worthless for Go," said David Fotland, a computer engineer for Hewlett-Packard, who is the author of one of the strongest programs, called The Many Faces of Go. "You have to make a program play smart like a person."
Seeing Subtle Patterns
In this hypothetical game, the corner positions are very similar, but the results very different: the black stones, lower right, are "dead," about to be hemmed in; those in the upper left are "alive." Recognizing this is relatively easy for people, relatively hard for computers.
To play a decent game of Go, a computer must be endowed with the ability to recognize subtle, complex patterns and to draw on the kind of intuitive knowledge that is the hallmark of human intelligence.
"It may be a hundred years before a computer beats humans at Go -- maybe even longer," said Dr. Piet Hut, an astrophysicist at the Institute for Advanced Study in Princeton, N.J., and a fan of the game. "If a reasonably intelligent person learned to play Go, in a few months he could beat all existing computer programs. You don't have to be a Kasparov."
When or if a computer defeats a human Go champion, it will be a sign that artificial intelligence is truly beginning to become as good as the real thing.
"Go is the highest intellectual game," said Dr. Chen Zhixing, a retired chemistry professor at Zhongshan University, in Guangzhou, China.
Dr. Chen has spent the last six years perfecting Handtalk, the winner of several recent international competitions. In Go, he said, the mind is dazzled by the beauty of the patterns unfolding on the board, and a sequence of moves can be as mesmerizing as a melody. The trick is to get a computer to compose and understand this visual music.
On its surface, Go seems simple compared with chess. A chess match begins with two facing armies of 16 pieces, ranking from pawn to king, on a 64-square board. Each of the six kinds of pieces is allowed to move only in certain ways -- bishops diagonally; knights in L-shaped paths.
In Go there are few such complications. All of a player's stones are identical. A game begins with a blank 19-by-19 grid (sometimes smaller ones are used), and the two contestants take turns placing their stones (black for one side, white for the other) on any of the unoccupied intersections.
A player can capture a group of an opponent's stones by surrounding it and then removing the cluster from the board. The object of the game is to build complex fence-like structures enclosing as much territory as possible.
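For readers who like to see the rules in code, here is a minimal sketch of the capture rule -- an illustration only, not drawn from any of the programs described in this article. It assumes the board is stored as a simple table mapping grid coordinates to 'B' or 'W', with empty points simply absent, and it ignores refinements like the suicide rule.

    SIZE = 19

    def neighbors(point):
        """Orthogonally adjacent points that lie on the board."""
        r, c = point
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < SIZE and 0 <= nc < SIZE:
                yield (nr, nc)

    def group_and_liberties(board, point):
        """Flood-fill the connected group containing `point` and collect
        its liberties (the empty points touching the group)."""
        color = board[point]
        group, liberties, frontier = {point}, set(), [point]
        while frontier:
            p = frontier.pop()
            for n in neighbors(p):
                if n not in board:
                    liberties.add(n)              # an empty neighbor is a liberty
                elif board[n] == color and n not in group:
                    group.add(n)
                    frontier.append(n)
        return group, liberties

    def captured_by(board, move, color):
        """Opposing groups left with no liberties after `color` plays `move`."""
        board = dict(board)
        board[move] = color
        enemy = 'W' if color == 'B' else 'B'
        dead = set()
        for n in neighbors(move):
            if board.get(n) == enemy:
                group, liberties = group_and_liberties(board, n)
                if not liberties:                 # completely surrounded: captured
                    dead |= group
        return dead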
Too Many Possibilities
Even after 30 moves, the number of possible continuations is overwhelming, with 331 "legal" moves for black. A computer might consider all of them and their myriad continuations; an adept human player would choose from far fewer "reasonable" choices.
"In chess you start with everything you have on the board," said Tim Klinger, a graduate student in computer science at New York University who is studying computer Go. "In Go you start from nothing and build."
Stone by stone, you try to construct enclaves, engulfing those of your opponent, who is all the time trying to engulf your own. Adding to the complications, there are usually several skirmishes going on simultaneously in different corners of the board. If chess is like a medieval battle, it is sometimes said, Go is more like a world war. And it can be maddeningly difficult to determine who is ahead.
"In chess, if a player loses even a single pawn at world champion level, it can decide the game maybe 99 percent of the time," said Dr. Hans Berliner, a computer scientist at Carnegie-Mellon University in Pittsburgh who is an expert on computer chess. "In Go, you keep hearing people say that you can lose a life-and-death battle along the edge of the board, but that is far from deciding the outcome. You can go on to other battles. It's a very different kind of game."
From the point of view of a computer, the difference could not be more profound. Because of the tight constraints on how chess pieces can be moved, a player is faced with an average of only about 35 legal moves to consider at each turn. Computer programs like Deep Blue analyze these moves, considering the opponent's possible countermoves, and then the countermoves to the countermoves. In computer chess terminology, each single move by one side is called a ply, so a move and its response make two plies. The fastest chess programs look ahead 14 or 16 plies -- seven or eight moves by each side.
The result is a densely proliferating tree of possibilities, with the branches and twigs representing all the different ways the game could unfold. Looking ahead just 14 plies (14 individual chess moves, seven by each side) requires examining 35 to the 14th power (more than a billion trillion) leaves representing all the various outcomes.
As the computer tries to look deeper, the number of possibilities explodes. Programmers have learned clever ways to "prune" the tree, so that all but a fraction of the paths can be discarded without plumbing them all the way to the bottom. Even so, a chess-playing computer looking ahead 14 plies might consider as many as 50 or 60 billion scenarios each time its turn comes around.
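The pruning the article alludes to is usually done with a technique called alpha-beta pruning. The sketch below shows the idea in its barest form; the three game-specific helpers -- legal_moves, apply_move and evaluate -- are hypothetical placeholders that a real program would have to supply.

    def alphabeta(position, depth, alpha, beta, maximizing,
                  legal_moves, apply_move, evaluate):
        """Search `depth` plies ahead, skipping branches that cannot
        affect the final choice (alpha-beta pruning)."""
        if depth == 0:
            return evaluate(position)              # heuristic score at a leaf
        if maximizing:
            best = float('-inf')
            for move in legal_moves(position):
                child = apply_move(position, move)
                best = max(best, alphabeta(child, depth - 1, alpha, beta,
                                           False, legal_moves, apply_move, evaluate))
                alpha = max(alpha, best)
                if alpha >= beta:                   # the opponent would never allow
                    break                           # this line, so prune the rest
            return best
        best = float('inf')
        for move in legal_moves(position):
            child = apply_move(position, move)
            best = min(best, alphabeta(child, depth - 1, alpha, beta,
                                       True, legal_moves, apply_move, evaluate))
            beta = min(beta, best)
            if alpha >= beta:
                break
        return best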
As bad as that sounds, in Go the situation is drastically worse. The tree of possible moves is so broad and dense that not even the fastest computer can negotiate it. The first player can put a stone in any of 361 places; the opponent can respond by placing a stone on any of 360 places, and so on. As the game continues, there are steadily fewer possible places to play. But, on average, a player is faced with about 200 possible moves, compared with just 35 in chess.
As a computer scientist would put it, the branching factor is much higher for Go than for chess. In chess, the approximate number of possible continuations after only four plies is typically 35 times 35 times 35 times 35, or 1,500,625. For Go, the number is 200 times 200 times 200 times 200, or 1,600,000,000 -- and far more toward the beginning of a game. Search one full move (two plies) deeper and the numbers rapidly diverge: about 1.8 billion possible outcomes for chess and 64 trillion for Go.
Looking ahead 14 plies in Go creates a search tree not with a mere 35 to the 14th power leaves, as for chess, but with more than 200 to the 14th power. Pruning techniques cut this down to about ten thousand trillion possibilities to consider. Still, a Go computer as fast as Deep Blue (which analyzed some 200 million chess positions per second) would take about a year and a half to mull over a single move.
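Skeptical readers can check that arithmetic in a few lines; the figures below are simply the ones quoted in this article.

    chess_leaves = 35 ** 14              # about 4.1e21: "more than a billion trillion"
    go_leaves = 200 ** 14                # about 1.6e32

    pruned_go = 1e16                     # "about ten thousand trillion" after pruning
    positions_per_second = 200e6         # Deep Blue's reported speed
    seconds_per_move = pruned_go / positions_per_second
    years_per_move = seconds_per_move / (365 * 24 * 3600)
    print(round(years_per_move, 1))      # prints 1.6 -- about a year and a half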
Where Do They Stand?
In this actual tournament game, after 63 moves, most computer Go programs would have difficulty answering these questions: Is group "a" dead or alive? What about group "b" and the single stone marked "c"?
(Answers appear at the end of the article.)
Even worse, performing so laborious a search would give the computer no significant advantage over its human opponent. After sifting through the myriad possibilities, a chess-playing computer tries to choose the move that will leave it in the strongest position. It determines this by using fairly simple formulas called evaluation functions. Each piece can be assigned a number indicating its rank (pawns are worth 1, knights and bishops 3, rooks 5, queens 9). This figure can be multiplied by another number indicating the strength of the piece's position on the board. Other formulas quantify concepts like "king safety," or how well protected that piece is. These rules, called heuristics, are hardly infallible, but they give the computer a rough sense of the state of the game and a basis on which to make its decisions.
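A toy version of such an evaluation function might look like the sketch below. The piece values are the ones given above; the positional multipliers and king-safety bonuses are made-up placeholders, not the formulas of Deep Blue or any other real program.

    # Material values from the article; the king is priceless, so it is
    # handled by the game's rules rather than by a number here.
    PIECE_VALUE = {'P': 1, 'N': 3, 'B': 3, 'R': 5, 'Q': 9, 'K': 0}

    def evaluate(board, positional_weight, king_safety):
        """board: table mapping square -> (color, piece letter).
        positional_weight: table mapping square -> multiplier, e.g. 1.1
        for a strong central square.  king_safety: table mapping color
        -> a bonus for how well protected that side's king is."""
        score = 0.0
        for square, (color, piece) in board.items():
            value = PIECE_VALUE[piece] * positional_weight.get(square, 1.0)
            score += value if color == 'white' else -value
        return score + king_safety.get('white', 0.0) - king_safety.get('black', 0.0)

A positive score means white stands better, a negative score favors black; the search routine then picks the move leading to the best reachable score.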
Go does not succumb to such simple analysis. There is no single piece, like a king, whose loss decides the game. Even counting the amount of territory each player has captured is not very revealing. With the placement of a single stone, a seeming underdog might surround the grand structure his opponent has been assiduously building and turn it -- jujitsu-like -- into his own. "You're stringing all these stones together, and if you don't watch out the whole collection becomes dinner for your opponent," Klinger said.
Expert Go players evaluate the state of the board by using their skills at pattern recognition, and these are very hard to capture in an algorithm. After years of experience, they can look at a complex configuration and sense whether it is "alive," meaning that it is constructed in such a way that it cannot be captured, or "dead," so that no amount of reinforcement can save it. Learning to sense life and death is crucial. A player does not want to waste stones attacking a group that is invulnerable, or defending one that is doomed. Sometimes there are fairly obvious clues: if a group of stones contains two configurations called eyes, it can fend off any attempt to capture it. But often the difference between life and death is difficult to perceive, hinging on a single stone.
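To see why two eyes matter, consider a deliberately crude test in code. It uses the same board convention as the earlier capture sketch and simply looks for empty points whose neighbors all belong to one group; real programs must also weed out "false" eyes and many other complications that this toy version ignores.

    SIZE = 19

    def neighbors(point):
        r, c = point
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < SIZE and 0 <= nc < SIZE:
                yield (nr, nc)

    def eye_points(board, group):
        """Empty points whose on-board neighbors all belong to `group`,
        a set of coordinates occupied by a single color."""
        candidates = set()
        for p in group:
            for n in neighbors(p):
                if n not in board:
                    candidates.add(n)
        return {p for p in candidates
                if all(n in group for n in neighbors(p))}

    def provisionally_alive(board, group):
        """Crude life test: a group with two separate eyes cannot be filled in."""
        return len(eye_points(board, group)) >= 2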
Go masters can also sense whether several unconnected stones might be slowly joined to form a group, or whether two smaller groups might be combined into a larger, stronger whole.
To get a computer to do this kind of analysis, programmers must confront fundamental problems in artificial intelligence. Fotland armed his program, The Many Faces of Go, with basic concepts like territory and connectivity (whether several stones are in adjacent positions). It can also recognize some 1,100 different patterns, each of which sets off a sequence of suggested moves, and it has access to about 200 higher-level strategic notions like "attack a weak group" or "expand into a potential territory" or "if behind, make unreasonable invasions that you don't expect to work." Like Deep Blue, the program draws on a library of standard openings and other commonly used plays. With all this knowledge, it considers only about 5 or 10 of the roughly 200 possible moves available to it in a typical turn.
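The flavor of pattern-triggered move suggestion can be conveyed with a toy sketch like the one below. The single pattern in it is invented for illustration; neither it nor the data structures are taken from The Many Faces of Go.

    # Each pattern is a tiny grid of '.', 'B', 'W' (or '?' for "don't care"),
    # paired with the coordinates of a suggested reply relative to the
    # pattern's top-left corner.
    PATTERNS = [
        ((".B",
          "W."), (0, 0)),    # made-up example: fill the open corner of a
    ]                        # 2x2 diagonal contact between the two colors

    def matches(board, pattern, top_left):
        """Does `pattern` fit the board (dict of (row, col) -> color) here?"""
        r0, c0 = top_left
        for dr, row in enumerate(pattern):
            for dc, want in enumerate(row):
                have = board.get((r0 + dr, c0 + dc), '.')
                if want != '?' and want != have:
                    return False
        return True

    def suggested_moves(board, size=19):
        """Scan the whole board and collect replies proposed by any pattern."""
        suggestions = set()
        for pattern, (dr, dc) in PATTERNS:
            rows, cols = len(pattern), len(pattern[0])
            for r in range(size - rows + 1):
                for c in range(size - cols + 1):
                    if matches(board, pattern, (r, c)):
                        move = (r + dr, c + dc)
                        if move not in board:      # only suggest empty points
                            suggestions.add(move)
        return suggestions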
But programming this kind of knowledge is extremely difficult. "People are so good at dealing with fuzzy concepts," said David Mechner, a doctoral student in neural science at New York University who is a top-ranked amateur Go player. But how do you tell a computer that several stones might end up being connected, but not necessarily? Mechner and Klinger are studying these kinds of problems and fine-tuning an algorithm for recognizing life and death. They hope to soon join the handful of programmers competing to make the best Go program.
The winner of the FOST Cup, sponsored by the Japanese Fusion of Science and Technology organization and held in Nagoya next month as part of the International Joint Conference on Artificial Intelligence, will get about $17,000. The contest for the $7,000 Ing Cup, sponsored by the Ing Chang-ki Wei-Chi Educational Foundation in Taipei, will be held in November in the San Francisco Bay Area. (The winner will have the opportunity to challenge three young Go players for additional prizes).
But winning the $1.4 million prize promised by the Ing foundation to a program that beats a human champion may be an impossible dream. The offer expires in the year 2000. Go programmers are hoping it will be extended for another century or two.
Answers to questions about the third game: An experienced Go player would say the group of white stones marked "a" is dead, because it is vulnerable to being encircled as more black stones are played.
The block labeled "b" is alive, with the potential to protect itself from encirclement.
The white stone labeled "c" is dead.
Related Sites
- David Fotland's The Many Faces of Go
- Information on Handtalk from the American Go Association
- Information on books about Go
Copyright 1997 The New York Times Company