
Computer Go

This article is about the study of Go (board game) in artificial intelligence. For the computer programming language called Go, see Go (programming language). Not to be confused with Go software.

Computer Go is the field of artificial intelligence (AI) dedicated to creating a computer program that plays Go, a traditional board game. Recent developments in Monte Carlo tree search and machine learning have brought the best programs to high dan level on the small 9x9 board. In 2009, the first such programs appeared which could reach and hold low dan-level ranks on the KGS Go Server on the 19x19 board as well.

1 Performance

Go has long been considered a difficult challenge in the field of AI and is considerably more difficult to solve than chess. Mathematician I. J. Good wrote in 1965:[1]

Go on a computer? – In order to programme a computer to play a reasonable game of Go, rather than merely a legal game – it is necessary to formalise the principles of good strategy, or to design a learning programme. The principles are more qualitative and mysterious than in chess, and depend more on judgment. So I think it will be even more difficult to programme a computer to play a reasonable game of Go than of chess.

The first Go program was written by Albert Zobrist in 1968 as part of his thesis on pattern recognition. It introduced an influence function to estimate territory and Zobrist hashing to detect ko.

In April 1981 Jonathan K Millen published an article in Byte discussing Wally, a Go program with a 15x15 board that fit within the KIM-1 microcomputer's 1K RAM.[2] Bruce F. Webster published an article in the magazine in November 1984 discussing a Go program he had written for the Apple Macintosh, including the MacFORTH source.[3]

In 1998, very strong players were able to beat computer programs while giving handicaps of 25–30 stones, an enormous handicap that few human players would ever take. There was a case in the 1994 World Computer Go Championship where the winning program, Go Intellect, lost all 3 games against youth players while receiving a 15-stone handicap.[4] In general, players who understood and exploited a program's weaknesses could win with much larger handicaps than typical players.[5]

1.1 Recent results

In 2008, thanks to an efficient message-passing parallelization, MoGo won one game (out of three) against Cătălin Ţăranu, 5th dan pro, in 9x9 with standard time settings (30 minutes per side). MoGo was running on a cluster provided by "Bull" (32 nodes with 8 cores per node, 3 GHz); the machine was down during one of the lost games. The results of this event were approved by the French Federation of Go. MoGo also played a 19x19 game against Cătălin Ţăranu and lost in spite of a 9-stone handicap. However, MoGo was in a good position during most of the game, and lost due to a bad choice in a ko situation at the end. The machine used for this event (the IAGO challenge, organized by the company "Recitsproque") is a good one, but far from the top level in industry.

On August 7, 2008, the computer program MoGo, running on 25 nodes (800 cores, 4 CPUs per node, each core running at 4.7 GHz to produce 15 teraflops)[6] of the Huygens cluster in Amsterdam, beat professional Go player Myungwan Kim (8p) while receiving a nine-stone handicap on the 19x19 board on the KGS Go Server. MoGo won by 1.5 points. Mr. Kim used around 13 minutes of time while MoGo took around 55; however, he felt that using more time would not have helped him win. In after-game commentary, Kim estimated the playing strength of this machine as being in the range of 2–3 amateur dan.[7] MyungWan and MoGo played a total of 4 games of varying handicaps and time limits, each side winning two games. The game records are accessible on KGS, where MoGo played as MogoTitan. In a rematch on September 20, Kim won two games giving MoGo nine stones.[8]

On August 26, 2008, MoGo beat an amateur 6d while receiving five stones of handicap, this time running on 200 cores of the Huygens cluster.[9]

On September 4, 2008, the program Crazy Stone, running on an 8-core personal computer, won against 30-year-old professional Kaori Aoba (4p), receiving a handicap of eight stones. The time control was 30 seconds per move. White resigned after 185 moves. The game was played during the FIT2008 conference in Japan.[10]
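The influence function idea introduced by Zobrist can be illustrated with a short sketch: each stone radiates influence that decays with distance, and the sign of the summed field suggests which side controls each point. The linear decay rule, radius, and function names below are illustrative assumptions for this article, not Zobrist's original formulation:

```python
# Minimal sketch of an influence function for territory estimation.
# The linear decay rule and radius are illustrative assumptions.

def influence_map(board_size, black_stones, white_stones, radius=4):
    """Return a grid where positive values lean black, negative lean white."""
    grid = [[0.0] * board_size for _ in range(board_size)]
    for stones, sign in ((black_stones, +1.0), (white_stones, -1.0)):
        for (sx, sy) in stones:
            for x in range(board_size):
                for y in range(board_size):
                    d = abs(x - sx) + abs(y - sy)  # Manhattan distance
                    if d <= radius:
                        grid[x][y] += sign * (radius + 1 - d)
    return grid

def estimate_territory(grid):
    """Count the points leaning toward each side."""
    black = sum(1 for row in grid for v in row if v > 0)
    white = sum(1 for row in grid for v in row if v < 0)
    return black, white
```

On a 9x9 board with one black and one white stone placed symmetrically, the estimate splits evenly; real programs refine this crude picture with connectivity and life-and-death analysis, as discussed later in the article.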


In February 2009, MoGo won two 19x19 games against professional Go players in the Taiwan Open 2009. With a 7-stone handicap the program defeated Zhou Junxun (9p), and with a 6-stone handicap it defeated Li-Chen Chien (1p).[11]

On February 14, 2009, Many Faces of Go, running on a 32-core Xeon cluster provided by Microsoft, won against James Kerwin (1p) while receiving a handicap of seven stones. The game was played during the 2009 AAAS general meeting in Chicago.[12]

On August 7, 2009, Many Faces of Go (version 12) resigned against Myungwan Kim (8p) in a 7-stone handicap game.[13] Many Faces was playing on a 32-node system provided by Microsoft. The "Man vs. Machine" event was part of the 2009 US Go Congress, which was held in Washington DC from August 1 to August 9.[14]

On August 21 and 22, 2009, Zhou Junxun (9p) beat Many Faces of Go, MoGo, and Zen in full-board 7-stone games, beat MoGo in an even 9×9 game, and won one and lost one even 9×9 game against Fuego.[15]

On July 20, 2010, MoGoTW won an even 9×9 game as white against Zhou Junxun (9p).[16]

On July 20, 2010, at the 2010 IEEE World Congress on Computational Intelligence in Barcelona, Spain, the computer program Zen (written by Yamato of Japan) played professional 4 dan Ping-Chiang Chou of Taiwan in 19x19 Go. Zen received a 6-stone handicap. Each side had 45 minutes. Zen won the game.[17]

On July 28, 2010, at the 2010 European Go Congress in Finland, the computer program MogoTW played European professional 5 dan Catalin Taranu in 19x19 Go. MogoTW received a 7-stone handicap and won. MogoTW is a joint project between the MoGo team and a Taiwanese team.[18]

In December 2010, the computer program Zen reached the rank of 4 dan on the KGS server. Zen was written by Japanese programmer Yoji Ojima.[19]

In June 2011, the computer program Zen19D reached the rank of 5 dan on the KGS server, playing games of 15 seconds per move. The account which reached that rank uses a cluster version of Zen running on a 26-core machine.[20]

In June 2011, the computer program Zen19S achieved a rank of 4 dan on the KGS Go Server. Zen19S plays at 20 minutes main time and then 30 seconds per move. In June 2011, Zen19S played 518 games. A player can download the games of Zen19S from the KGS server and study them to find the program's weaknesses and try to exploit them.[19]

Zen matches against Ohashi Hirofumi and Takemiya Masaki were announced in February 2012.[21] On March 17, 2012, Zen beat Takemiya (9p) at 5 stones by eleven points, followed by a stunning twenty-point win at a 4-stone handicap. Takemiya remarked, "I had no idea that computer go had come this far."

In March 2012, the computer program Zen19D reached the rank of 6 dan on the KGS Go Server, playing games of 15 seconds per move. The account which reached that rank uses a cluster version of Zen running on a 28-core machine.[20] The Zen version which achieved that rank is a 9.2d10.

In March 2013, Crazy Stone beat Yoshio Ishida in a 19×19 game with four handicap stones.[22]

On June 5, 2013, the computer program Zen defeated Takuto Ooomote with a 3-stone handicap. Takuto Ooomote is a 9 dan on the Tygem server. The 19×19 game used Japanese rules with a time setting of 60 minutes plus 30 seconds byoyomi. They played at the 27th Annual Conference of The Japanese Society for Artificial Intelligence.[23]

2 Obstacles to high-level performance

For a long time it was a widely held opinion that computer Go posed a problem fundamentally different from computer chess, insofar as it was believed that methods relying on fast global search, combined with relatively little domain knowledge compared to human experts, would not be effective for Go. Therefore, a large part of the computer Go development effort during this period was focused on ways of representing human-like expert knowledge and combining it with local search to answer questions of a tactical nature. The result was programs that handled many situations well but had very pronounced weaknesses compared to their overall handling of the game. Also, these classical programs gained almost nothing from increases in available computing power per se, and progress in the field was generally slow.

A few researchers grasped the potential of probabilistic methods and predicted that they would come to dominate computer game-playing,[24] but many others considered a strong Go-playing program something that could be achieved only in the far future, as a result of fundamental advances in general artificial intelligence technology. Even writing a program capable of automatically determining the winner of a finished game was seen as no trivial matter.

The advent of programs based on Monte Carlo search starting in 2006 changed this situation in many ways, with the first 9-dan professional Go players being defeated in 2013 by multicore computers, albeit with a 4-stone handicap.

2.1 Size of board

The large board (19×19, 361 intersections) is often noted as one of the primary reasons why a strong program is hard to create. The large board size is a problem to the extent that it prevents an alpha-beta searcher without significant search extensions or pruning heuristics from achieving deep look-ahead.

So far, the largest game of Go completely solved has been played on a 5×5 board. It was achieved in 2002, with black winning by 25 points (the entire board), by a computer program called MIGOS (MIni GO Solver).[25]

2.2 Most moves are possible

Continuing the comparison to chess, Go moves are not as limited by the rules of the game. For the first move in chess, the player has twenty choices. Go players begin with a choice of 55 distinct legal moves, accounting for symmetry. This number rises quickly as symmetry is broken, and soon almost all of the 361 points of the board must be evaluated. Some are much more popular than others, some are almost never played, but all are possible.

2.3 Additive nature of the game

As a chess game progresses (as do many other games, such as checkers, draughts, and backgammon), pieces disappear from the board, simplifying the game. Each new Go move, on the contrary, adds new complexities and possibilities to the situation, at least until an area becomes developed to the point of being 'settled'.

On the other hand, it is argued that as chess enters certain endgames, databases must be employed by computers to deal with the added complexities. Without an agreed definition of "complexity", this issue remains in dispute. "Over the years, much has been written about the weakness of computers in the endgame — of how they were so short-sighted with respect to the creation of passed pawns, or unwilling to centralize their king when it was the only logical thing to do." [from the section on "Computer Chess" by Graham Burgess, in the Mammoth Book of Chess, Carroll & Graf 1997]

2.4 Techniques in chess that cannot be applied to Go

The general weakness of computer Go programs compared with computer chess programs has served to generate research into many new programming techniques. The techniques that proved most effective in computer chess have generally shown themselves to be mediocre at Go.

2.5 Evaluation function

While a simple material-counting evaluation is not sufficient for decent play in chess, it is often the backbone of a chess evaluation function when combined with more subtle considerations like isolated/doubled pawns, rooks on open files (columns), pawns in the center of the board, and so on. These rules can be formalized easily, providing a reasonably good evaluation function that can run quickly.

These types of positional evaluation rules cannot efficiently be applied to Go. The value of a Go position depends on a complex analysis to determine whether or not a group is alive, which stones can be connected to one another, and heuristics around the extent to which a strong position has influence, or the extent to which a weak position can be attacked.

More than one move can be regarded as best depending on which strategy is used. In order to choose a move, the computer must evaluate different possible outcomes and decide which is best. This is difficult due to the delicate trade-offs present in Go. For example, it may be possible to capture some enemy stones at the cost of strengthening the opponent's stones elsewhere. Whether this is a good trade or not can be a difficult decision, even for human players. The computational complexity also shows here, as a move might not be immediately important, but after many moves could become highly important as other areas of the board take shape.

2.6 Combinatorial problems

Sometimes it is mentioned in this context that various difficult combinatorial problems (in fact, any NP-hard problem) can be converted to Go-like problems on a sufficiently large board; however, the same is true for other abstract board games, including chess and minesweeper, when suitably generalized to a board of arbitrary size. NP-complete problems do not tend in their general case to be easier for unaided humans than for suitably programmed computers: it is doubtful that unaided humans would be able to compete successfully against computers in solving, for example, instances of the subset sum problem. Hence, the idea that we can convert some NP-complete problems into Go problems does not help in explaining the present human superiority in Go.

2.7 Endgame

Given that the endgame contains fewer possible moves than the opening (fuseki) or middle game, one could suppose that it is easier to play, and thus that computers should be able to tackle it easily. In chess, computer programs perform worse in endgames because the ideas are long-term, unless the number of pieces is reduced to an extent that allows taking advantage of solved endgame tablebases.

The application of surreal numbers to the endgame in Go, a general game analysis pioneered by John H. Conway, has been further developed by Elwyn R. Berlekamp and

David Wolfe and outlined in their book, Mathematical Go (ISBN 978-1-56881-032-4). While not of general utility in most playing circumstances, it greatly aids the analysis of certain classes of positions.

Nonetheless, although elaborate study has been conducted, Go endgames have been proven to be PSPACE-hard. There are many reasons why they are so hard:

• Even if a computer can play each local endgame area flawlessly, we cannot conclude that its play would be flawless in regard to the entire board. Additional areas of consideration in endgames include sente and gote relationships, prioritization of different local endgames, territory counting and estimation, and so on.

• The endgame may involve many other aspects of Go, including 'life and death', which are also known to be NP-hard.[26][27]

• Each of the local endgame areas may affect one another. In other words, they are dynamic in nature although visually isolated. This makes it much more difficult for computers to deal with. This nature leads to some very complex situations like Triple Ko, Quadruple Ko, Molasses Ko and Moonshine Life.

Thus, it is very unlikely that it will be possible to program a reasonably fast algorithm for playing the Go endgame flawlessly, let alone the whole Go game.[28]

2.8 Why humans are (still) better at Go

Go has features that might be easier for humans than for computers.[29] The pieces never move about (as they do in chess), nor change state (as they do in Reversi). Some have speculated that these features make it easy for humans to "read" (predict possible variations of) long sequences of moves, while being irrelevant to a computer program, but no rigorous cognitive neuroscientific evidence currently exists to back this hypothesis.

2.9 Why computers are (still) better at Go

In those rare Go positions known as "ishi-no-shita", in which stones are repeatedly captured and re-played on the same points, humans have reading problems because the length of the looping sequences can sometimes be too large for human memory, while they are easy for computers.

2.10 Order of play

Current Monte-Carlo-based Go engines can have difficulty solving problems when the order of moves is important.[30]

3 Tactical search

One of the main concerns for a Go player is which groups of stones can be kept alive and which can be captured. This general class of problems is known as life and death. The most direct strategy for calculating life and death is to perform a tree search on the moves which potentially affect the stones in question, and then to record the status of the stones at the end of the main line of play.

However, within time and memory constraints, it is not generally possible to determine with complete accuracy which moves could affect the 'life' of a group of stones. This implies that some heuristic must be applied to select which moves to consider. The net effect is that for any given program, there is a trade-off between playing speed and life-and-death reading ability.

With Benson's algorithm, it is possible to determine the chains which are unconditionally alive and therefore would not need to be checked in the future for safety.

4 State representation

An issue that all Go programs must tackle is how to represent the current state of the game. For programs that use extensive searching techniques, this representation needs to be copied and/or modified for each new hypothetical move considered. This need places the additional constraint that the representation should either be small enough to be copied quickly or flexible enough that a move can be made and undone easily.

The most direct way of representing a board is as a one- or two-dimensional array, where elements in the array represent points on the board and can take on a value corresponding to a white stone, a black stone, or an empty intersection. Additional data is needed to store how many stones have been captured, whose turn it is, and which intersections are illegal due to the ko rule.

Most programs, however, use more than just the raw board information to evaluate positions. Data such as which stones are connected in strings, which strings are associated with each other, which groups of stones are at risk of capture, and which groups of stones are effectively dead are necessary to make an accurate evaluation of the position. While this information can be extracted from just the stone positions, much of it can be computed more quickly if it is updated on an incremental, per-move basis. This incremental updating requires more information to be stored as the state of the board, which in turn can make copying the board take longer. This kind of trade-off is indicative of the problems involved in making fast computer Go programs.

An alternative method is to have a single board and make and take back moves so as to minimize the demands on computer memory and have the results of the evaluation

of the board stored. This avoids having to copy the information over and over again.

5 System design

5.1 New approaches to problems

Historically, GOFAI (Good Old-Fashioned AI) techniques have been used to approach the problem of Go AI. More recently, neural networks are being looked at as an alternative approach. One example of a program which uses neural networks is WinHonte.[31]

These approaches attempt to mitigate the problems of the game of Go having a high branching factor and numerous other difficulties.

Computer Go research results are being applied to other similar fields such as cognitive science, pattern recognition and machine learning.[32] Combinatorial Game Theory, a branch of applied mathematics, is a topic relevant to computer Go.[32]

5.2 Design philosophies

The only choice a program needs to make is where to place its next stone. However, this decision is made difficult by the wide range of impacts a single stone can have across the entire board, and the complex interactions various stones' groups can have with each other. Various architectures have arisen for handling this problem. The most popular use:

• some form of tree search,
• the application of Monte Carlo methods,
• the application of pattern matching,
• the creation of knowledge-based systems, and
• the use of machine learning.

Few programs use only one of these techniques exclusively; most combine portions of each into one synthetic system.

5.2.1 Minimax tree search

One traditional AI technique for creating game-playing software is to use a minimax tree search. This involves playing out all hypothetical moves on the board up to a certain point, then using an evaluation function to estimate the value of that position for the current player. The move which leads to the best hypothetical board is selected, and the process is repeated each turn. While tree searches have been very effective in computer chess, they have seen less success in computer Go programs. This is partly because it has traditionally been difficult to create an effective evaluation function for a Go board, and partly because the large number of possible moves each side can make leads to a high branching factor. This makes the technique very computationally expensive. Because of this, many programs which use search trees extensively can only play on the smaller 9×9 board, rather than full 19×19 ones.

There are several techniques which can greatly improve the performance of search trees in terms of both speed and memory. Pruning techniques such as alpha–beta pruning, principal variation search, and MTD-f can reduce the effective branching factor without loss of strength. In tactical areas such as life and death, Go is particularly amenable to caching techniques such as transposition tables. These can reduce the amount of repeated effort, especially when combined with an iterative deepening approach. In order to quickly store a full-sized Go board in a transposition table, a hashing technique for mathematically summarizing the board is generally necessary. Zobrist hashing is very popular in Go programs because it has low collision rates, and can be iteratively updated at each move with just two XORs, rather than being calculated from scratch. Even using these performance-enhancing techniques, full tree searches on a full-sized board are still prohibitively slow. Searches can be sped up by using large amounts of domain-specific pruning techniques, such as not considering moves where your opponent is already strong, and selective extensions like always considering moves next to groups of stones which are about to be captured. However, both of these options introduce a significant risk of not considering a vital move which would have changed the course of the game.

Results of computer competitions show that pattern-matching techniques for choosing a handful of appropriate moves, combined with fast localized tactical searches (explained above), were once sufficient to produce a competitive program. For example, GNU Go was competitive until 2008.

5.2.2 Knowledge-based systems

Novices often learn a lot from the game records of old games played by master players. There is a strong hypothesis that acquiring Go knowledge is a key to making a strong computer Go program. For example, Tim Kinger and David Mechner argue that "it is our belief that with better tools for representing and maintaining Go knowledge, it will be possible to develop stronger Go programs." They propose two ways: recognizing common configurations of stones and their positions, and concentrating on local battles. "... Go programs are still lacking in both quality and quantity of knowledge."[33]

After implementation, the use of expert knowledge has proved very effective in programming Go software.
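The minimax procedure described in section 5.2.1 can be sketched in a game-independent way. This is a minimal illustration, not code from any Go program; the toy tree and the `evaluate` and `children` callbacks are invented for the example:

```python
def minimax(node, maximizing, evaluate, children):
    """Return the minimax value of `node`.

    `children(node)` yields successor positions (empty at the search horizon);
    `evaluate(node)` scores a horizon position for the maximizing player.
    """
    kids = children(node)
    if not kids:
        return evaluate(node)
    values = [minimax(kid, not maximizing, evaluate, children) for kid in kids]
    return max(values) if maximizing else min(values)

# Toy two-ply tree: the maximizer prefers "B" (worst case 3) over "C" (worst case 2).
tree = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"]}
leaf_values = {"D": 3, "E": 5, "F": 2, "G": 9}
best = minimax("A", True, lambda n: leaf_values[n], lambda n: tree.get(n, []))
# best == 3
```

In chess the `evaluate` step is cheap and reliable; in Go, as the article explains, it is neither, and the `children` list is far longer, which is why plain minimax fares poorly on the 19×19 board.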
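The two-XOR incremental update that makes Zobrist hashing attractive for transposition tables and ko detection can be sketched as follows. This is a minimal illustration under assumed conventions (a flat 361-point board array and invented function names), not code from any particular Go program:

```python
import random

BOARD_POINTS = 19 * 19
EMPTY, BLACK, WHITE = 0, 1, 2

# One random 64-bit key per (point, colour); empty points contribute nothing.
random.seed(42)  # fixed seed so the sketch is reproducible
ZOBRIST_KEYS = [[random.getrandbits(64) for _ in range(3)]
                for _ in range(BOARD_POINTS)]

def full_hash(board):
    """Hash computed from scratch; `board` is a list of 361 colour values."""
    h = 0
    for point, colour in enumerate(board):
        if colour != EMPTY:
            h ^= ZOBRIST_KEYS[point][colour]
    return h

def update_hash(h, point, old_colour, new_colour):
    """Incremental update: at most two XORs per changed point."""
    if old_colour != EMPTY:
        h ^= ZOBRIST_KEYS[point][old_colour]  # XOR out the old stone
    if new_colour != EMPTY:
        h ^= ZOBRIST_KEYS[point][new_colour]  # XOR in the new stone
    return h
```

One common use is superko detection: the program keeps a set of hashes of all previous positions and rejects a move whose resulting hash is already in the set.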

Hundreds of guidelines and rules of thumb for strong play have been formulated by both high-level amateurs and professionals. The programmer's task is to take these heuristics, formalize them into computer code, and utilize pattern matching and pattern recognition algorithms to recognize when these rules apply. It is also important to have a system for determining what to do in the event that two conflicting guidelines are applicable.

Most of the relatively successful results come from programmers' individual skills at Go and their personal conjectures about Go, not from formal mathematical assertions; they are trying to make the computer mimic the way they play Go. "Most competitive programs have required 5–15 person-years of effort, and contain 50–100 modules dealing with different aspects of the game."[34]

This method was until recently the most successful technique in generating competitive Go programs on a full-sized board. Some examples of programs which have relied heavily on expert knowledge are Handtalk (later known as Goemate), The Many Faces of Go, Go Intellect, and Go++, each of which has at some point been considered the world's best Go program.

Nevertheless, adding knowledge of Go sometimes weakens the program, because some superficial knowledge might bring mistakes: "the best programs usually play good, master level moves. However, as every games player knows, just one bad move can ruin a good game. Program performance over a full game can be much lower than master level."[34]

5.2.3 Monte-Carlo methods

Main article: Monte-Carlo tree search

One major alternative to using hand-coded knowledge and searches is the use of Monte Carlo methods. This is done by generating a list of potential moves and, for each move, playing out thousands of games at random on the resulting board. The move which leads to the best set of random games for the current player is chosen as the best move. The advantage of this technique is that it requires very little domain knowledge or expert input, the trade-off being increased memory and processor requirements. However, because the moves used for evaluation are generated at random, it is possible that a move which would be excellent except for one specific opponent response would be mistakenly evaluated as a good move. The result is programs which are strong in an overall strategic sense, but weak tactically. This problem can be mitigated by adding some domain knowledge to the move generation and a greater level of search depth on top of the random evolution. Some programs which use Monte-Carlo techniques are Fuego, The Many Faces of Go v12, Leela, MoGo, Crazy Stone, MyGoFriend, and Zen.

In 2006, a new search technique, upper confidence bounds applied to trees (UCT), was developed and applied to many 9x9 Monte-Carlo Go programs with excellent results. UCT uses the results of the playouts collected so far to guide the search along the more successful lines of play, while still allowing alternative lines to be explored. The UCT technique, along with many other optimizations for playing on the larger 19x19 board, has led MoGo to become one of the strongest research programs. Successful early applications of UCT methods to 19x19 Go include MoGo, Crazy Stone, and Mango. MoGo won the 2007 Computer Olympiad and won one (out of three) blitz games against Guo Juan, 5th Dan Pro, in the much less complex 9x9 Go. The Many Faces of Go won the 2008 Computer Olympiad after adding UCT search to its traditional knowledge-based engine.

5.2.4 Machine learning

While knowledge-based systems have been very effective at Go, their skill level is closely linked to the knowledge of their programmers and associated domain experts. One way to break this limitation is to use machine learning techniques to allow the software to automatically generate rules, patterns, and/or rule conflict resolution strategies.

This is generally done by allowing a neural network or genetic algorithm either to review a large database of professional games, or to play many games against itself or other people or programs. These algorithms are then able to utilize this data as a means of improving their performance. Notable programs using neural nets are NeuroGo and WinHonte.

Machine learning techniques can also be used in a less ambitious context to tune specific parameters of programs which rely mainly on other techniques. For example, Crazy Stone learns move generation patterns from several hundred sample games, using a generalization of the Elo rating system.[35]

6 Computer programs

See also: Go software

• AYA by Hiroshi Yamashita

• Crazy Stone by Rémi Coulom (sold as Saikyo no Igo in Japan)

• DolBaram by Lim Jaebum

• Fuego, an open source Monte Carlo program

• Goban, Macintosh OS X Go program by Sen:te (requires free Goban Extensions)

• GNU Go, an open source classical Go program

• Go++ by Michael Reiss (sold as Strongest Go or Tuyoi Igo in Japan)

• Go Intellect by Ken Chen

• Handtalk/Goemate, developed in China by Zhixing Chen (sold as Shudan Taikyoku in Japan)

• Haruka by Ryuichi Kawa (sold as Saikouhou in Japan)

• Indigo by Bruno Bouzy

• Katsunari by Shin-ichi Sei

• KCC Igo, from North Korea (sold as Silver Star or Ginsei Igo in Japan)

• Leela, the first Monte Carlo program for sale to the public

• The Many Faces of Go by David Fotland (sold as AI Igo in Japan)

• MyGoFriend by Frank Karger

• MoGo by Sylvain Gelly; parallel version http://www.lri.fr/~teytaud/mogo.html by many people.

• Pachi, an open source Monte Carlo program by Petr Baudiš; online version Peepo by Jonathan Chetwynd, with maps and comments as you play

• Smart Go by Anders Kierulf, inventor of the Smart Game Format

• Zen by Yoji Ojima aka Yamato (sold as Tencho no Igo in Japan); parallel version by Hideki Kato.

7 Competitions among computer Go programs

Several annual competitions take place between Go com-

7.1 History

The first computer Go competition was sponsored by Acornsoft, and the first regular ones by USENIX. They ran from 1984 to 1988. These competitions introduced Nemesis, the first competitive Go program from Bruce Wilcox, and G2.5 by David Fotland, which would later evolve into Cosmos and The Many Faces of Go.

One of the early drivers of computer Go research was the Ing Prize, a relatively large money award sponsored by Taiwanese banker Ing Chang-ki, offered annually between 1985 and 2000 at the World Computer Go Congress (or Ing Cup). The winner of this tournament was allowed to challenge young players at a handicap in a short match. If the computer won the match, the prize was awarded and a new prize announced: a larger prize for beating the players at a lesser handicap. The series of Ing prizes was set to expire either 1) in the year 2000 or 2) when a program could beat a 1-dan professional at no handicap for 40,000,000 NT dollars. The last winner was Handtalk in 1997, claiming 250,000 NT dollars for winning an 11-stone handicap match against three 11–13 year old amateur 2–6 dans. At the time the prize expired in 2000, the unclaimed prize was 400,000 NT dollars for winning a 9-stone handicap match.[36]

Many other large regional Go tournaments ("congresses") had an attached computer Go event. The European Go Congress has sponsored a computer tournament since 1987, and the USENIX event evolved into the US/North American Computer Go Championship, held annually from 1988 to 2000 at the US Go Congress.

Japan started sponsoring computer Go competitions in 1995. The FOST Cup was held annually from 1995 to 1999 in Tokyo. That tournament was supplanted by the Gifu Challenge, which was held annually from 2003 to 2006 in Ogaki, Gifu. The UEC Cup has been held annually since 2007.

7.2 Rule formalization problems in computer-computer games

When two computers play a game of Go against each other, the ideal is to treat the game in a manner identical to two humans playing, while avoiding any intervention from actual humans. However, this can be difficult during end game scoring. The main problem is that Go playing software, which usually communicates using the stan-
puter programs, the most prominent being the Go events dardized Go Text Protocol (GTP), will not always agree
at the Computer Olympiad. Regular, less formal, com- with respect to the alive or dead status of stones.
petitions between programs occur on the KGS Go Server While there is no general way for two different programs
(monthly) and the Computer Go Server (continuous). to “talk it out” and resolve the conflict, this problem
Prominent go-playing programs include Crazy Stone, is avoided for the most part by using Chinese, Tromp-
Zen, Aya, Mogo, The Many Faces of Go, pachi and Taylor, or AGA rules in which continued play (without
Fuego, all listed above; and Taiwanese-authored cold- penalty) is required until there is no more disagreement
milk, Dutch-authored Steenvreter, and Korean-authored on the status of any stones on the board. In practice, such
DolBaram. as on the KGS Go Server, the server can mediate a dis-
8 9 REFERENCES

pute by sending a special GTP command to the two client 9 References


programs indicating they should continue placing stones
until there is no question about the status of any particular [1] http://www.chilton-computing.org.uk/acl/literature/
group (all dead stones have been captured). The CGOS reports/p019.htm
Go Server usually sees programs resign before a game has
even reached the scoring phase, but nevertheless supports [2] Millen, Jonathan K (April 1981). “Programming the
Game of Go”. Byte. p. 102. Retrieved 18 October 2013.
a modified version of Tromp-Taylor rules requiring a full
play out. [3] Webster, Bruce (November 1984). “A Go Board for the
It should be noted that these rule sets mean that a program Macintosh”. Byte. p. 125. Retrieved 23 October 2013.
which was in a winning position at the end of the game un- [4] Program versus Human Performance
der Japanese rules (when both players have passed) could
lose because of poor play in the resolution phase, but this [5] See for instance http://www.intgofed.org/history/
is not a common occurrence and is considered a normal computer_go_dec2005.pdf Archived May 28, 2008 at
part of the game under all of the area rule sets. the Wayback Machine

The main drawback to the above system is that some [6] http://www.nwo.nl/nwohome.nsf/pages/NWOA_
rule sets (such as the traditional Japanese rules) penalize 7HHBNS
the players for making these extra moves, precluding the
[7] Computer Beats Pro at U.S. Go Congress http://www.
use of additional playout for two computers. Neverthe- usgo.org/index.php?%23_id=4602
less, most modern Go Programs support Japanese rules
against humans and are competent in both play and scor- [8] September 21, 2008; Volume 9, #49 SPECIAL EDI-
ing (Fuego, Many Faces of Go, SmartGo, etc.). TION!
Historically, another method for resolving this problem [9] Sensei’s Library: MoGo
was to have an expert human judge the final board. How-
ever, this introduces subjectivity into the results and the [10] Crazy Stone defeated 4-dan professional player with a
risk that the expert would miss something the program handicap of 8 stones.
saw. [11] “French software and Dutch national Supercomputer
Huygens establish a new world record in Go”. The Nether-
lands Organization for Scientific Research (NWO). 25
February 2009. Retrieved 2009-03-06.
7.3 Testing
[12] Many Faces of Go defeated 1-dan professional player with
Many programs are available that allow computer Go en- a handicap of 7 stones.
gines to play against each other and they almost always
[13] AGA News
communicate via the Go Text Protocol (GTP).
GoGUI and its addon gogui-twogtp can be used to play [14] 2009 US Go Congress
two engines against each other on a single computer [15] 2009 IEEE International Conference on Fuzzy Systems
system.[37] SmartGo and Many Faces of Go also provide
this feature. [16] Computers vs Humans in Barcelona (WCCI 2010)

To play as wide a variety of opponents as possible, the [17] Computer program Zen with 6 stone handicap beat pro-
KGS Go Server allows Go engine vs. Go engine play as fessional 4 dan Ping-Chiang Chou of Taiwan
well as Go engine vs. human in both ranked and unranked
[18] Computer program MogoTW with 7 stone handicap beat
matches. CGOS is a dedicated computer vs. computer
European professional 5 dan Catalin Taranu
Go server.
[19] Sensei’s Library KGS Bot Ratings

[20] http://www.gokgs.com/gameArchives.jsp?user=
8 See also +Zen19D

[21] http://www.lifein19x19.com/forum/viewtopic.php?f=
18&t=5572
• Computer chess
[22] " ".
• Computer Othello MSN Sankei News. Retrieved 27 March 2013.

[23] Wedd, Nick. “Human-Computer Go Challenges”. Com-


• Computer shogi puter Go Information. Retrieved 27 June 2013.

[24] Game Tree Searching with Dynamic Stochastic Control


• Go Text Protocol pp. 194–195
[25] 5×5 Go is solved by MIni GO Solver

[26] On page 11: “Crasmaru shows that it is NP-complete to determine the status of certain restricted forms of life-and-death problems in Go.” (See the following reference.) Erik D. Demaine, Robert A. Hearn (2008-04-22). “Playing Games with Algorithms: Algorithmic Combinatorial Game Theory”. arXiv:cs/0106019.

[27] Marcel Crasmaru (1999). “On the complexity of Tsume-Go”. Lecture Notes in Computer Science (London, UK: Springer-Verlag) 1558: 222–231. doi:10.1007/3-540-48957-6_15. ISBN 978-3-540-65766-8.

[28] See Computer Go Programming pages at Sensei’s Library

[29] Raiko, Tapani: “The Go-Playing Program Called Go81”, section 1.2

[30] Example of weak play of a computer program

[31] WinHonte 2.01

[32] Müller, Martin. Computer Go, Artificial Intelligence 134 (2002): p. 150

[33] Müller, Martin. Computer Go, Artificial Intelligence 134 (2002): p. 151

[34] Müller, Martin. Computer Go, Artificial Intelligence 134 (2002): p. 148

[35] Computing Elo Ratings of Move Patterns in the Game of Go

[36] World Computer Go Championships

[37] Using GoGUI to play Go computers against each other

10 Further reading

• Co-Evolving a Go-Playing Neural Network, written by Alex Lubberts & Risto Miikkulainen, 2001

• Computer Game Playing: Theory and Practice, edited by M.A. Brauner (The Ellis Horwood Series in Artificial Intelligence), Halstead Press, 1983. A collection of computer Go articles. The American Go Journal, vol. 18, No. 4, page 6. [ISSN 0148-0243]

• A Machine-Learning Approach to Computer Go, Jeffrey Bagdis, 2007.

• Minimalism in Ubiquitous Interface Design, Wren, C. and Reynolds, C. (2004), Personal and Ubiquitous Computing, 8(5), pages 370–374. Video of a computer Go vision system in operation shows interaction and users exploring joseki and fuseki.

• Monte-Carlo Go, presented by Markus Enzenberger, Computer Go Seminar, University of Alberta, April 2004

• Monte-Carlo Go, written by B. Bouzy and B. Helmstetter, from Scientific Literature Digital Library

• Static analysis of life and death in the game of Go, written by Ken Chen & Zhixing Chen, 20 February 1999

• Article describing the techniques underlying MoGo

11 External links

• Mick’s Computer Go Page

• Extensive list of computer Go events

• All Systems Go by David A. Mechner (1998), discusses the game where professional Go player Janice Kim won a game against the program Handtalk after giving a 25-stone handicap.

• Kinger, Tim and Mechner, David. An Architecture for Computer Go (1996)

• Computer Go and Computer Go Programming pages at Sensei’s Library

• Computer Go bibliography

• Another Computer Go Bibliography

• Computer Go mailing list

• Published articles about computer Go on Ideosphere gives a current estimate of whether a Go program will be the best player in the world

• Information on the Go Text Protocol, commonly used for interfacing Go playing engines with graphical clients and internet servers

• The Computer Go Room on the K Go Server (KGS) for online discussion and running “bots”

• Two Representative Computer Go Games, an article about two computer Go games played in 1999, one with two computer players, and the other a 29-stone handicap human-computer game

• What A Way to Go describes work at Microsoft Research on building a computer Go player.

• Cracking Go, by Feng-hsiung Hsu, IEEE Spectrum magazine, October 2007, argues why it should be possible to build a Go machine stronger than any human player
12 Text and image sources, contributors, and licenses

12.1 Text

• Computer Go Source: http://en.wikipedia.org/wiki/Computer%20Go?oldid=634952682

12.2 Content license

• Creative Commons Attribution-Share Alike 3.0