Abstract
Games are considered important benchmarks for artificial intelligence research. Modern strategic board games can typically be played by three or more people, which makes them suitable test beds for investigating multi-player strategic decision making. Monte-Carlo Tree Search (MCTS) is a recently published family of algorithms that has achieved successful results in classical two-player, perfect-information games such as Go. In this paper we apply MCTS to the multi-player, non-deterministic board game Settlers of Catan. We implemented an agent that is able to play against computer-controlled and human players. We show that MCTS can be adapted successfully to multi-agent environments, and present two approaches for providing the agent with a limited amount of domain knowledge. Our results show that the agent has considerable playing strength when compared to a game implementation with existing heuristics. We may therefore conclude that MCTS is a suitable tool for building a strong Settlers of Catan player.
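To illustrate the core selection rule behind MCTS, the following is a minimal sketch of UCT-style child selection (not the authors' implementation; the action names and the dictionary representation of tree nodes are hypothetical, for illustration only):

```python
import math

def uct_value(wins, visits, parent_visits, c=1.4142):
    """UCT score: mean reward (exploitation) plus an exploration bonus."""
    if visits == 0:
        return float("inf")  # unvisited children are always tried first
    return wins / visits + c * math.sqrt(math.log(parent_visits) / visits)

def select_child(children, parent_visits, c=1.4142):
    """MCTS selection step: pick the child maximising the UCT score."""
    return max(children,
               key=lambda ch: uct_value(ch["wins"], ch["visits"],
                                        parent_visits, c))

# Hypothetical statistics for three candidate moves in a Catan-like position.
children = [
    {"name": "build_road",       "wins": 6, "visits": 10},
    {"name": "build_settlement", "wins": 4, "visits": 5},
    {"name": "trade",            "wins": 0, "visits": 0},
]
best = select_child(children, parent_visits=15)
```

With these numbers the unvisited "trade" node is selected first; among the visited children, the less-explored "build_settlement" outscores "build_road" despite fewer simulations, showing how the exploration term balances the search.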
© 2010 Springer-Verlag Berlin Heidelberg
Cite this paper
Szita, I., Chaslot, G., Spronck, P. (2010). Monte-Carlo Tree Search in Settlers of Catan. In: van den Herik, H.J., Spronck, P. (eds) Advances in Computer Games. ACG 2009. Lecture Notes in Computer Science, vol 6048. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-12993-3_3
Print ISBN: 978-3-642-12992-6
Online ISBN: 978-3-642-12993-3