
A Further Investigation of Neural Network Players for Game 2048

  • Conference paper
  • In: Advances in Computer Games (ACG 2019)
  • Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 12516)

Abstract

Game 2048 is a stochastic single-player game. Strong computer players for Game 2048 have been based on N-tuple networks trained by reinforcement learning. Some computer players have also been developed with neural networks, but their performance was poor. In our previous work, we showed that better policy-network players can be developed by supervised learning. In this study, we further investigate neural-network players for Game 2048 in two respects. First, we focus on the components (i.e., layers) of the networks and achieve better performance in a similar setting. Second, we change the input and/or output of the networks for better performance. The best neural-network player achieved an average score of 215,803 without search techniques, which is comparable to N-tuple-network players.
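To make the setting concrete, a policy-network player maps a board position to move probabilities; a common input representation for 2048 networks is a one-hot encoding of each cell's tile exponent into binary planes. The following is a minimal sketch of such an encoding under our own assumptions (the function and constant names are ours, not from the paper):

```python
# Hypothetical sketch of a one-hot board encoding for a 2048 network input.
# Each cell holds the exponent e of its tile 2**e (0 for an empty cell),
# and each exponent value gets its own binary 4x4 plane.

NUM_CHANNELS = 16  # exponents 0 (empty) through 15 (tile 32768)

def encode_board(board):
    """Encode a 4x4 board of tile exponents as NUM_CHANNELS binary 4x4 planes.

    board[r][c] is the exponent of the tile at row r, column c.
    Returns a list of NUM_CHANNELS planes, each a 4x4 list of 0/1.
    """
    planes = [[[0] * 4 for _ in range(4)] for _ in range(NUM_CHANNELS)]
    for r in range(4):
        for c in range(4):
            planes[board[r][c]][r][c] = 1  # exactly one plane fires per cell
    return planes

board = [
    [1, 0, 0, 0],  # a 2-tile in the corner
    [0, 2, 0, 0],  # a 4-tile
    [0, 0, 0, 0],
    [0, 0, 0, 0],
]
planes = encode_board(board)
print(planes[1][0][0], planes[2][1][1], planes[0][3][3])  # 1 1 1
```

A network head over these planes would then output a distribution over the four moves (up, down, left, right); the exact architecture is the subject of the paper itself.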


Notes

  1. All the experiments were conducted on a single PC with an Intel Core i3-8100 CPU, 16 GB memory, and a GeForce GTX 1080 Ti GPU (11 GB GPU memory).

  2. The terms beforestate and afterstate are from Szubert and Jaśkowski [12].

  3. But the factor was less than 3.6.

  4. The playing time of player Policy AS was about 5,000 times as long as that of the N-tuple-network players.
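The beforestate/afterstate distinction in note 2 can be illustrated concretely: the afterstate is the deterministic board obtained by sliding and merging tiles, before the random tile spawn produces the next beforestate. The following sketch (our own illustration, not code from the paper) computes the afterstate of a single row under a left move, with tiles stored as exponents:

```python
# Hypothetical illustration of the afterstate notion (Szubert and Jaśkowski):
# sliding and merging is deterministic; the random tile spawn that follows
# is what makes the game stochastic. Tiles are stored as exponents, so a
# merge of two 2**e tiles yields one 2**(e+1) tile and reward 2**(e+1).

def slide_row_left(row):
    """Return (afterstate_row, reward) for one row after a left move."""
    tiles = [e for e in row if e != 0]  # drop empty cells
    out, reward, i = [], 0, 0
    while i < len(tiles):
        if i + 1 < len(tiles) and tiles[i] == tiles[i + 1]:
            out.append(tiles[i] + 1)        # merge equal neighbours
            reward += 2 ** (tiles[i] + 1)   # score gained by the merge
            i += 2
        else:
            out.append(tiles[i])
            i += 1
    return out + [0] * (len(row) - len(out)), reward

print(slide_row_left([1, 1, 2, 0]))  # ([2, 2, 0, 0], 4)
```

A player evaluating afterstates (as the paper's Policy AS naming suggests) scores these deterministic results, whereas a beforestate player must also account for the random spawn.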

References

  1. Cirulli, G.: 2048 (2014). http://gabrielecirulli.github.io/2048/

  2. David, O.E., Netanyahu, N.S., Wolf, L.: DeepChess: end-to-end deep neural network for automatic learning in chess. In: Villa, A.E.P., Masulli, P., Pons Rivero, A.J. (eds.) ICANN 2016, Part II. LNCS, vol. 9887, pp. 88–96. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-44781-0_11

  3. Fujita, R., Matsuzaki, K.: Improving 2048 player with supervised learning. In: Proceedings of the 6th International Symposium on Frontier Technology, pp. 353–357 (2017)

  4. Guei, H., Wei, T., Huang, J.B., Wu, I.C.: An early attempt at applying deep reinforcement learning to the game 2048. In: Workshop on Neural Networks in Games (2016)

  5. Jaśkowski, W.: Mastering 2048 with delayed temporal coherence learning, multi-stage weight promotion, redundant encoding and carousel shaping. IEEE Trans. Comput. Intell. AI Games 10(1), 3–14 (2018)

  6. Kondo, N., Matsuzaki, K.: Playing game 2048 with deep convolutional neural networks trained by supervised learning. J. Inf. Process. 27, 340–347 (2019)

  7. Lai, M.: Giraffe: using deep reinforcement learning to play chess. Master's thesis, Imperial College London (2015). arXiv:1509.01549v1

  8. Matsuzaki, K.: Developing a 2048 player with backward temporal coherence learning and restart. In: Winands, M.H.M., van den Herik, H.J., Kosters, W.A. (eds.) ACG 2017. LNCS, vol. 10664, pp. 176–187. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-71649-7_15

  9. Samir, M.: An attempt at applying deep RL on the board game 2048 (2017). https://github.com/Mostafa-Samir/2048-RL-DRQN

  10. Silver, D., et al.: Mastering chess and shogi by self-play with a general reinforcement learning algorithm. arXiv:1712.01815 (2017)

  11. Silver, D., et al.: Mastering the game of Go without human knowledge. Nature 550, 354–359 (2017)

  12. Szubert, M., Jaśkowski, W.: Temporal difference learning of N-tuple networks for the game 2048. In: 2014 IEEE Conference on Computational Intelligence and Games, pp. 1–8 (2014)

  13. tjwei: A deep learning AI for 2048 (2016). https://github.com/tjwei/2048-NN

  14. Virdee, N.: Trained a convolutional neural network to play 2048 using deep reinforcement learning (2018). https://github.com/navjindervirdee/2048-deep-reinforcement-learning

  15. Yeh, K.-H., Wu, I.-C., Hsueh, C.-H., Chang, C.-C., Liang, C.-C., Chiang, H.: Multi-stage temporal difference learning for 2048-like games. IEEE Trans. Comput. Intell. AI Games 9(4), 369–380 (2016)

  16. Wiese, G.: 2048 reinforcement learning (2018). https://github.com/georgwiese/2048-rl

  17. Xiao, R.: nneonneo/2048-ai (2015). https://github.com/nneonneo/2048-ai


Acknowledgment

The training data acg17 and nneo used in this study were generated with the support of the IACP cluster in Kochi University of Technology.

Author information

Correspondence to Kiminori Matsuzaki.


Copyright information

© 2020 Springer Nature Switzerland AG

About this paper


Cite this paper

Matsuzaki, K. (2020). A Further Investigation of Neural Network Players for Game 2048. In: Cazenave, T., van den Herik, J., Saffidine, A., Wu, I.-C. (eds.) Advances in Computer Games. ACG 2019. Lecture Notes in Computer Science, vol. 12516. Springer, Cham. https://doi.org/10.1007/978-3-030-65883-0_5

Download citation

  • DOI: https://doi.org/10.1007/978-3-030-65883-0_5

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-65882-3

  • Online ISBN: 978-3-030-65883-0

  • eBook Packages: Computer Science, Computer Science (R0)
