Abstract
Game 2048 is a stochastic single-player game. The strongest computer players for Game 2048 have been based on N-tuple networks trained by reinforcement learning; players built on neural networks have so far performed poorly by comparison. In our previous work, we showed that supervised learning can produce better policy-network players. In this study, we investigate neural-network players for Game 2048 further in two respects. First, we focus on the components (i.e., layers) of the networks and achieve better performance in a comparable setting. Second, we change the input and/or output of the networks to improve performance further. The best neural-network player achieved an average score of 215,803 without any search technique, which is comparable to N-tuple-network players.
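For context on the network designs the abstract refers to, a common way to present a 2048 board to a neural network is a one-hot encoding over tile exponents (an empty cell plus exponents 1–15 give 16 channels per cell), with a softmax output over the four moves. The sketch below is illustrative only, not the paper's architecture; the layer sizes and random weights are our own assumptions.

```python
import numpy as np

def encode_board(board):
    """One-hot encode a 4x4 board of tile exponents (0 = empty cell,
    1 = tile 2, 2 = tile 4, ...) into a flat 4*4*16 = 256 vector,
    a common input format for 2048 neural networks."""
    onehot = np.zeros((4, 4, 16), dtype=np.float32)
    for r in range(4):
        for c in range(4):
            onehot[r, c, board[r][c]] = 1.0
    return onehot.reshape(-1)

def policy_forward(x, w1, b1, w2, b2):
    """Tiny two-layer policy: one hidden ReLU layer, then a softmax
    over the four moves (up, right, down, left). Weights are random
    here; a real player would train them."""
    h = np.maximum(0.0, x @ w1 + b1)
    logits = h @ w2 + b2
    z = np.exp(logits - logits.max())
    return z / z.sum()

rng = np.random.default_rng(0)
w1, b1 = rng.normal(scale=0.1, size=(256, 64)), np.zeros(64)
w2, b2 = rng.normal(scale=0.1, size=(64, 4)), np.zeros(4)

board = [[1, 1, 0, 0],   # exponents: tiles 2, 2, empty, empty
         [0, 2, 0, 0],   # tile 4
         [0, 0, 3, 0],   # tile 8
         [0, 0, 0, 0]]
probs = policy_forward(encode_board(board), w1, b1, w2, b2)
```

A greedy player would then pick `probs.argmax()` among the legal moves; the paper's contribution concerns what goes between (and into/out of) these layers.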
Notes
1. All the experiments were conducted on a single PC with an Intel Core i3-8100 CPU, 16 GB of memory, and a GeForce GTX 1080 Ti GPU (11 GB of GPU memory).
2. The terms beforestate and afterstate are from Szubert and Jaśkowski [12].
3. But the factor was less than 3.6.
4. The playing time of player Policy AS was about 5,000 times that of the N-tuple-network players.
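The beforestate/afterstate distinction in Note 2 follows from the rules of 2048: a move first slides and merges tiles deterministically (yielding the afterstate), and only then does a random tile spawn (yielding the next beforestate). A minimal sketch of the deterministic slide-and-merge for a single row moving left, per the game rules; the function name is our own:

```python
def merge_row_left(row):
    """Slide a row of 2048 tile values to the left and merge each pair of
    equal neighbours at most once, as the game rules require.
    Returns (new_row, score_gained)."""
    tiles = [v for v in row if v != 0]   # slide: drop empty cells
    out, score, i = [], 0, 0
    while i < len(tiles):
        if i + 1 < len(tiles) and tiles[i] == tiles[i + 1]:
            out.append(tiles[i] * 2)     # merge the pair; score the new tile
            score += tiles[i] * 2
            i += 2                       # a merged tile cannot merge again
        else:
            out.append(tiles[i])
            i += 1
    return out + [0] * (len(row) - len(out)), score
```

For example, `merge_row_left([2, 2, 4, 0])` gives `([4, 4, 0, 0], 4)`: the afterstate row before any random tile appears. An afterstate-value player evaluates boards at exactly this point.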
References
Cirulli, G.: 2048 (2014). http://gabrielecirulli.github.io/2048/
David, O.E., Netanyahu, N.S., Wolf, L.: DeepChess: end-to-end deep neural network for automatic learning in chess. In: Villa, A.E.P., Masulli, P., Pons Rivero, A.J. (eds.) ICANN 2016, Part II. LNCS, vol. 9887, pp. 88–96. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-44781-0_11
Fujita, R., Matsuzaki, K.: Improving 2048 player with supervised learning. In: Proceedings of 6th International Symposium on Frontier Technology, pp. 353–357 (2017)
Guei, H., Wei, T., Huang, J.B., Wu, I.C.: An early attempt at applying deep reinforcement learning to the game 2048. In: Workshop on Neural Networks in Games (2016)
Jaśkowski, W.: Mastering 2048 with delayed temporal coherence learning, multi-stage weight promotion, redundant encoding and carousel shaping. IEEE Trans. Comput. Intell. AI Games 10(1), 3–14 (2018)
Kondo, N., Matsuzaki, K.: Playing game 2048 with deep convolutional neural networks trained by supervised learning. J. Inf. Process. 27, 340–347 (2019)
Lai, M.: Giraffe: Using Deep Reinforcement Learning to Play Chess. Master’s thesis, Imperial College London (2015). arXiv:1509.01549v1
Matsuzaki, K.: Developing a 2048 player with backward temporal coherence learning and restart. In: Winands, M.H.M., van den Herik, H.J., Kosters, W.A. (eds.) ACG 2017. LNCS, vol. 10664, pp. 176–187. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-71649-7_15
Samir, M.: An attempt at applying Deep RL on the board game 2048 (2017). https://github.com/Mostafa-Samir/2048-RL-DRQN
Silver, D., et al.: Mastering chess and shogi by self-play with a general reinforcement learning algorithm. arXiv:1712.01815 (2017)
Silver, D., et al.: Mastering the game of Go without human knowledge. Nature 550, 354–359 (2017)
Szubert, M., Jaśkowski, W.: Temporal difference learning of N-tuple networks for the game 2048. In: 2014 IEEE Conference on Computational Intelligence and Games, pp. 1–8 (2014)
tjwei: A deep learning AI for 2048 (2016). https://github.com/tjwei/2048-NN
Virdee, N.: Trained a convolutional neural network to play 2048 using deep-reinforcement learning (2018). https://github.com/navjindervirdee/2048-deep-reinforcement-learning
Yeh, K.-H., Wu, I.-C., Hsueh, C.-H., Chang, C.-C., Liang, C.-C., Chiang, H.: Multi-stage temporal difference learning for 2048-like games. IEEE Trans. Comput. Intell. AI Games 9(4), 369–380 (2016)
Wiese, G.: 2048 reinforcement learning (2018). https://github.com/georgwiese/2048-rl
Xiao, R.: nneonneo/2048-ai (2015). https://github.com/nneonneo/2048-ai
Acknowledgment
The training data acg17 and nneo used in this study were generated with the support of the IACP cluster in Kochi University of Technology.
Copyright information
© 2020 Springer Nature Switzerland AG
About this paper
Cite this paper
Matsuzaki, K. (2020). A Further Investigation of Neural Network Players for Game 2048. In: Cazenave, T., van den Herik, J., Saffidine, A., Wu, IC. (eds) Advances in Computer Games. ACG 2019. Lecture Notes in Computer Science(), vol 12516. Springer, Cham. https://doi.org/10.1007/978-3-030-65883-0_5
Print ISBN: 978-3-030-65882-3
Online ISBN: 978-3-030-65883-0