[1] GRANTER S R, BECK A H, PAPKE D J. AlphaGo, deep learning, and the future of the human microscopist[J]. Archives of Pathology & Laboratory Medicine, 2017, 141(5): 619-621.
[2] SILVER D, HUANG A, MADDISON C J, et al. Mastering the game of Go with deep neural networks and tree search[J]. Nature, 2016, 529(7587): 484-489.
[3] HUANG C M, GUO J H, SU K L. Based on short motion paths and artificial intelligence method for Chinese chess game[J]. Journal of Robotics, Networking and Artificial Life, 2017, 4(2): 154-157.
[4] WANG F, HOU X, DUAN Z, et al. The perceptual differences between experienced Chinese chess players and novices: evidence from eye movement[J]. Acta Psychologica Sinica, 2016, 48(5): 457.
[5] TAO J, WU G, PAN X. Design and improvement of the pruning algorithm of the Chinese chess in the computer games[J]. The Journal of Engineering, 2020(13): 426-428.
[6] CHEN J C, TSENG W J, WU I C, et al. Comparison training for computer Chinese chess[J]. IEEE Transactions on Games, 2020, 12(2): 169-176.
[7] HE W, ZHAO W, JIANG Y. Application of Q-learning and RBF network in Chinese chess game system[J]. IOP Conference Series: Materials Science and Engineering, 2019, 677(2): 022101.
[8] SILVER D, SCHRITTWIESER J, SIMONYAN K, et al. Mastering the game of Go without human knowledge[J]. Nature, 2017, 550(7676): 354-359.
[9] GOLDWASER A, THIELSCHER M. Deep reinforcement learning for general game playing[C]//Proceedings of the AAAI Conference on Artificial Intelligence, 2020: 1701-1708.
[10] SOEJIMA Y, KISHIMOTO A, WATANABE O. Evaluating root parallelization in Go[J]. IEEE Transactions on Computational Intelligence and AI in Games, 2011, 2(4): 278-287.
[11] LIU S, CAO J, WANG Y, et al. Self-play reinforcement learning with comprehensive critic in computer games[J]. Neurocomputing, 2021, 449(18): 207-213.
[12] FAN S, ZHANG S, LIU J, et al. Power converter circuit design automation using parallel Monte Carlo tree search[J]. ACM Transactions on Design Automation of Electronic Systems, 2023, 28(2): 17-33.
[13] MILEWICZ R M, POULDING S. Scalable parallel model checking via Monte-Carlo tree search[J]. ACM SIGSOFT Software Engineering Notes, 2018, 42(4): 1-5.
[14] BROWNE C B, POWLEY E, WHITEHOUSE D, et al. A survey of Monte Carlo tree search methods[J]. IEEE Transactions on Computational Intelligence and AI in Games, 2012, 4(1): 1-43.
[15] ZHANG J, SUN X, ZHANG D, et al. Fittest survival: an enhancement mechanism for Monte Carlo tree search[J]. International Journal of Bio-Inspired Computation, 2021, 18(2): 122-130.
[16] BORY P. Deep new: the shifting narratives of artificial intelligence from Deep Blue to AlphaGo[J]. Convergence: The International Journal of Research into New Media Technologies, 2019, 25(4): 627-642.
[17] COTARELO A, GARCÍA-DÍAZ V, NÚÑEZ-VALDEZ E R, et al. Improving Monte Carlo tree search with artificial neural networks without heuristics[J]. Applied Sciences, 2021, 11(5): 2056.
[18] MIRSOLEIMANI S A, HERIK J V D, PLAAT A, et al. Pipeline pattern for parallel MCTS[C]//Proceedings of the 10th International Conference on Agents and Artificial Intelligence, 2018: 614-621.
[19] STEINMETZ E S, GINI M. More trees or larger trees: parallelizing Monte Carlo tree search[J]. IEEE Transactions on Games, 2021, 13(3): 315-320.
[20] MIRSOLEIMANI S A, HERIK J V D, PLAAT A, et al. A lock-free algorithm for parallel MCTS[C]//Proceedings of the 10th International Conference on Agents and Artificial Intelligence, 2018: 589-598.
[21] GONZALO-CRISTÓBAL V, NÚÑEZ-VALDEZ E R, GARCÍA-DÍAZ V, et al. Monte Carlo tree search as a tool for self-learning and teaching people to play complete information board games[J]. Electronics, 2021, 10(21): 2609.
[22] SILVER D, HUBERT T, SCHRITTWIESER J, et al. A general reinforcement learning algorithm that masters chess, shogi, and Go through self-play[J]. Science, 2018, 362(6419): 1140-1144.
[23] MIRSOLEIMANI S A, HERIK J V D, PLAAT A, et al. An analysis of virtual loss in parallel MCTS[C]//Proceedings of the 9th International Conference on Agents and Artificial Intelligence, 2017: 648-652.
[24] ZHU X, LUO Y, LIU A, et al. A deep reinforcement learning-based resource management game in vehicular edge computing[J]. IEEE Transactions on Intelligent Transportation Systems, 2022, 23(3): 2422-2433.
[25] WANG L, ZHAO Y, JINNAI Y, et al. Neural architecture search using deep neural networks and Monte Carlo tree search[C]//Proceedings of the AAAI Conference on Artificial Intelligence, 2020: 9983-9991.
[26] LIU P S, ZHOU J Z, LV J C. Exploring the first-move balance point of Go-Moku based on reinforcement learning and Monte Carlo tree search[J]. Knowledge-Based Systems, 2023, 261(15): 110207.
[27] DONG P, LIU H C, LEI X. Monte Carlo tree search based non-coplanar trajectory design for station parameter optimized radiation therapy (SPORT)[J]. Physics in Medicine and Biology, 2018, 63(13): 135014.
[28] WANG Y F, WEI Y L, HUANG X L, et al. Robot navigation with predictive capabilities using graph learning and Monte Carlo tree search[J]. Proceedings of the Institution of Mechanical Engineers, 2023, 237(5): 805-814.