CAN AI RESIGN AT THE RIGHT TIME IN A GAME OF SHOGI?

Authors

  • Shize Pan, School of Information Science, Japan Advanced Institute of Science and Technology
  • Hiroyuki Iida, School of Information Science, Japan Advanced Institute of Science and Technology

DOI:

https://doi.org/10.46754/jmsi.2024.12.007

Keywords:

Shogi AI, Game refinement theory, Gravity in mind, Motion in mind

Abstract

This study introduces an innovative resignation mechanism to enhance shogi AI’s decision-making by identifying the optimal moment to resign. Through 300 self-play games per skill level, data on game length and branching factor were collected and analysed using game refinement theory and motion in mind techniques. The resignation threshold, defined as the maximum score of the losing player’s advantageous position plus a small value, prompts the AI to resign when further alteration of the game’s outcome is improbable. Results indicate that implementing this mechanism significantly reduces game length compared with an AI without it, bringing AI performance closer to human-level proficiency. Specifically, the average game length at skill level 20 decreased from 165.66 to 110 moves, while at skill levels 15, 10, and 5, the lengths decreased from 141.18, 133.97, and 121.17 to 112, 116, and 121 moves, respectively. Notably, gameplay speed, measured as the number of moves per unit time, also increased significantly after applying the resignation mechanism. Before its application, speed decreased as AI ability increased; after its application, speed increased with AI ability, underscoring the mechanism’s effectiveness in accelerating gameplay. The primary objective of this research is to enhance shogi AI’s decision-making capabilities, thus improving overall performance. By integrating the resignation mechanism, reliable data can be obtained for comparison with human players, contributing to advancements in game theory. In conclusion, introducing a resignation mechanism in shogi AI leads to smarter decision-making and more efficient gameplay. The findings of this study highlight the potential for improving AI performance in various board games and offer valuable insights into both AI decision-making processes and human gameplay strategies.
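
For illustration, the threshold rule described above could be sketched as follows in Python; the class name, the centipawn-style score convention, and the margin value are assumptions made here for clarity, not the authors' implementation.

    # Minimal sketch of the resignation rule described in the abstract.
    # Assumptions: scores are centipawn-like integers, positive values favour
    # this player, and `margin` stands in for the "small value" in the threshold.
    class ResignationMonitor:
        def __init__(self, margin: int = 100):
            self.margin = margin        # the "small value" added to the threshold
            self.best_score = 0         # highest advantageous score held so far

        def record(self, score: int) -> None:
            # Track the maximum score reached while this player held an advantage.
            if score > self.best_score:
                self.best_score = score

        def should_resign(self, score: int) -> bool:
            # Resign once the current deficit exceeds the best advantageous score
            # plus the margin, i.e. a reversal of the outcome is improbable.
            threshold = self.best_score + self.margin
            return score < -threshold

In a self-play loop, record would be called with the player's evaluation after each move and should_resign checked before moving; once it returns True, the game ends immediately, which is how the mechanism shortens the recorded game length.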

Published

31-12-2024