Abstract
We describe two Go-playing programs, Olga and Oleg, developed with a Monte-Carlo approach that is simpler than Bruegmann's (1993). Our method is based on Abramson (1990). We performed experiments to assess ideas on (1) progressive pruning, (2) the all-moves-as-first heuristic, (3) temperature, (4) simulated annealing, and (5) depth-two tree search within the Monte-Carlo framework. Progressive pruning and the all-moves-as-first heuristic are good speed-up enhancements that do not deteriorate the level of the program too much. Moreover, using a constant temperature is an adequate and simple heuristic that is about as good as simulated annealing. The depth-two heuristic gives deceptive results at the moment. The results of our Monte-Carlo programs against knowledge-based programs on 9x9 boards are promising. Finally, the ever-increasing power of computers leads us to think that Monte-Carlo approaches are worth considering for computer Go in the future.
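To make the progressive-pruning idea concrete, the following is a minimal sketch (not the authors' implementation): each candidate move accumulates random-playout outcomes, and a move is pruned once its statistical upper bound falls below the best move's lower bound, so simulation effort concentrates on the remaining candidates. The playout itself is abstracted as a caller-supplied function; the names and the `batch` and `sigma_mult` parameters are illustrative assumptions, not taken from the paper.

```python
import math
import random
import statistics

def progressive_pruning(moves, simulate, batch=100, rounds=10, sigma_mult=1.0):
    """Pick the move with the best mean playout outcome, progressively
    discarding moves that are statistically unlikely to be best."""
    results = {m: [] for m in moves}   # playout outcomes per move
    alive = set(moves)                 # moves not yet pruned
    for _ in range(rounds):
        # Run a batch of playouts for every surviving move.
        for m in alive:
            results[m].extend(simulate(m) for _ in range(batch))
        # Mean and standard error of the mean for each surviving move.
        means = {m: statistics.mean(results[m]) for m in alive}
        errs = {m: statistics.stdev(results[m]) / math.sqrt(len(results[m]))
                for m in alive}
        # Prune any move whose upper bound is below the best lower bound.
        best_lower = max(means[m] - sigma_mult * errs[m] for m in alive)
        alive = {m for m in alive
                 if means[m] + sigma_mult * errs[m] >= best_lower}
        if len(alive) == 1:
            break
    return max(alive, key=lambda m: statistics.mean(results[m]))

# Toy demonstration: three "moves" whose playouts are noisy draws around
# different true values; the best move should survive the pruning.
random.seed(1)
true_value = {"a": 0.0, "b": 0.5, "c": 1.0}
best = progressive_pruning(list(true_value),
                           lambda m: random.gauss(true_value[m], 1.0))
```

In a real Go program the outcomes would be final scores of random games rather than Gaussian draws; the statistical pruning logic is unchanged.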
References
Abramson, B. (1990). Expected-outcome: a general model of static evaluation. IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 12, No. 2, pp. 182–193.
Billings, D., Davidson, A., Schaeffer, J., and Szafron, D. (2002). The challenge of poker. Artificial Intelligence, Vol. 134, pp. 201–240.
Bouzy, B. (2002). Indigo home page. http://www.math-info.univ-paris5.fr/bouzy/INDIGO.html.
Bouzy, B. (2003). The move decision process of Indigo. ICGA Journal, Vol. 26, No. 1, pp. 14–27.
Bouzy, B. and Cazenave, T. (2001). Computer Go: an AI oriented survey. Artificial Intelligence, Vol. 132, pp. 39–103.
Bruegmann, B. (1993). Monte Carlo Go. ftp://www.joy.ne.jp/welcome/igs/Go/computer/mcgo.tex.Z.
Bump, D. (2003). Gnugo home page. http://www.gnu.org/software/gnugo/devel.html.
Chen, K. and Chen, Z. (1999). Static analysis of life and death in the game of Go. Information Sciences, Vol. 121, Nos. 1–2, pp. 113–134.
Fishman, G.S. (1996). Monte Carlo: Concepts, Algorithms, and Applications. Springer-Verlag, Berlin, Germany.
Fotland, D. (2002). Static Eye in “The Many Faces of Go”. ICGA Journal, Vol. 25, No. 4, pp. 203–210.
Junghanns, A. (1998). Are there Practical Alternatives to Alpha-Beta? ICCA Journal, Vol. 21, No. 1, pp. 14–32.
Kaminski, P. (2003). Vegos home page. http://www.ideanest.com/vegos/.
Kirkpatrick, S., Gelatt, C.D., and Vecchi, M.P. (1983). Optimization by Simulated Annealing. Science, Vol. 220, No. 4598, pp. 671–680.
Rivest, R. (1988). Game-tree searching by min-max approximation. Artificial Intelligence, Vol. 34, No. 1, pp. 77–96.
Sheppard, B. (2002). World-championship-caliber Scrabble. Artificial Intelligence, Vol. 134, Nos. 1–2, pp. 241–275.
Tesauro, G. (2002). Programming backgammon using self-teaching neural nets. Artificial Intelligence, Vol. 134, Nos. 1–2, pp. 181–199.
Copyright information
© 2004 IFIP International Federation for Information Processing
Cite this chapter
Bouzy, B., Helmstetter, B. (2004). Monte-Carlo Go Developments. In: Van Den Herik, H.J., Iida, H., Heinz, E.A. (eds) Advances in Computer Games. IFIP — The International Federation for Information Processing, vol 135. Springer, Boston, MA. https://doi.org/10.1007/978-0-387-35706-5_11
Publisher Name: Springer, Boston, MA
Print ISBN: 978-1-4757-4424-8
Online ISBN: 978-0-387-35706-5