Monte Carlo *-minimax search

  • Authors:
  • Marc Lanctot; Abdallah Saffidine; Joel Veness; Christopher Archibald; Mark H. M. Winands

  • Affiliations:
  • Department of Knowledge Engineering, Maastricht University, Netherlands; LAMSADE, Université Paris-Dauphine, France; Department of Computing Science, University of Alberta, Canada; Department of Computing Science, University of Alberta, Canada; Department of Knowledge Engineering, Maastricht University, Netherlands

  • Venue:
  • IJCAI'13: Proceedings of the Twenty-Third International Joint Conference on Artificial Intelligence
  • Year:
  • 2013

Abstract

This paper introduces Monte Carlo *-Minimax Search (MCMS), a Monte Carlo search algorithm for turn-based, stochastic, two-player, zero-sum games of perfect information. The algorithm is designed for the class of densely stochastic games; that is, games where one would rarely expect to sample the same successor state multiple times at any particular chance node. Our approach combines sparse sampling techniques from MDP planning with classic pruning techniques developed for adversarial expectimax planning. We compare and contrast our algorithm with the traditional *-Minimax approaches, as well as MCTS enhanced with double progressive widening, on four games: Pig, EinStein Würfelt Nicht!, Can't Stop, and Ra. Our results show that MCMS can be competitive with enhanced MCTS variants in some domains, while consistently outperforming the equivalent classic approaches given the same amount of thinking time.
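The key ingredient described in the abstract, sampling only a fixed number of successors at each chance node, can be illustrated with a short sketch. The Python code below is not the authors' MCMS implementation (which additionally applies *-Minimax-style pruning); it is a minimal sparse-sampling expectimax backbone, written against a hypothetical `Game` interface (`is_terminal`, `value`, `player`, `legal_moves`, `sample_successor`) and an invented toy `DiceRace` game, none of which appear in the paper.

```python
import random


def sparse_expectimax(game, state, depth, num_samples, rng=random):
    """Estimate the minimax value of `state` from the max player's view.

    depth       -- remaining search depth in plies
    num_samples -- chance-node sample width: successors are sampled rather
                   than enumerated, which is what keeps the search tractable
                   in densely stochastic games.
    """
    if depth == 0 or game.is_terminal(state):
        return game.value(state)  # terminal or heuristic evaluation

    maximizing = game.player(state) == 0  # player 0 maximizes, player 1 minimizes
    best = None
    for move in game.legal_moves(state):
        # Approximate the expectation over chance outcomes with a sparse sample.
        total = 0.0
        for _ in range(num_samples):
            successor = game.sample_successor(state, move, rng)
            total += sparse_expectimax(game, successor, depth - 1, num_samples, rng)
        estimate = total / num_samples

        if best is None:
            best = estimate
        elif maximizing:
            best = max(best, estimate)
        else:
            best = min(best, estimate)
    return best


class DiceRace:
    """Toy two-player race, purely illustrative (not one of the paper's domains).

    On each turn the player to move either rolls a die (a roll of 1 adds
    nothing) or takes a safe gain of 2 points; the first player to reach
    `target` points wins.
    """

    def __init__(self, target=10):
        self.target = target

    def initial_state(self):
        return (0, 0, 0)  # (score of player 0, score of player 1, player to move)

    def is_terminal(self, state):
        return state[0] >= self.target or state[1] >= self.target

    def value(self, state):
        if state[0] >= self.target:
            return 1.0
        if state[1] >= self.target:
            return -1.0
        # Heuristic for non-terminal cutoffs: normalized score difference.
        return (state[0] - state[1]) / self.target

    def player(self, state):
        return state[2]

    def legal_moves(self, state):
        return ["roll", "safe"]

    def sample_successor(self, state, move, rng):
        if move == "roll":
            roll = rng.randint(1, 6)
            gain = 0 if roll == 1 else roll  # a 1 busts and adds nothing
        else:
            gain = 2  # deterministic safe move
        s0, s1, p = state
        return (s0 + gain, s1, 1) if p == 0 else (s0, s1 + gain, 0)


if __name__ == "__main__":
    game = DiceRace(target=10)
    root_value = sparse_expectimax(game, game.initial_state(), depth=4, num_samples=8)
    print("estimated root value:", root_value)
```

In this sketch the sample width `num_samples` plays the role of the sparse-sampling parameter from MDP planning: the cost per chance node is fixed regardless of how many distinct outcomes the node has, at the price of a noisy estimate of the expectation.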