Minimax search and reinforcement learning for Adversarial Tetris

  • Authors:
  • Maria Rovatsou; Michail G. Lagoudakis

  • Affiliations:
  • Intelligent Systems Laboratory, Department of Electronic and Computer Engineering, Technical University of Crete, Chania, Crete, Greece

  • Venue:
  • SETN'10: Proceedings of the 6th Hellenic Conference on Artificial Intelligence: Theories, Models and Applications
  • Year:
  • 2010

Abstract

Game playing has always been considered an intellectual activity requiring a good level of intelligence. This paper focuses on Adversarial Tetris, a variation of the well-known Tetris game, introduced at the 3rd International Reinforcement Learning Competition in 2009. In Adversarial Tetris, the player's mission of completing as many lines as possible is actively hindered by an unknown adversary who selects the falling tetrominoes in ways that make the game harder for the player. In addition, there are boards of different sizes, and learning ability is tested over a variety of boards and adversaries. This paper describes the design and implementation of an agent capable of learning to improve its strategy against any adversary and any board size. The agent employs MiniMax search enhanced with Alpha-Beta pruning for looking ahead within the game tree and a variation of the Least-Squares Temporal Difference Learning (LSTD) algorithm for learning an appropriate state evaluation function over a small set of features. The learned strategies exhibit good performance over a wide range of boards and adversaries.
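
As a rough illustration of the approach summarized in the abstract, the Python sketch below pairs a depth-limited MiniMax search with Alpha-Beta pruning (player maximizing, adversary minimizing) with a linear state evaluation whose weights could be fit by a batch LSTD solve. The game interface (is_terminal, player_moves, adversary_moves, apply, features), the discount factor, and the sample format are assumptions made for illustration only; they are not the authors' implementation or feature set.

    import math
    import numpy as np

    def alphabeta(game, state, depth, alpha, beta, maximizing, weights):
        """Depth-limited MiniMax with Alpha-Beta pruning.

        `game` is a hypothetical interface exposing is_terminal(state),
        player_moves(state), adversary_moves(state), apply(state, move),
        and features(state); it stands in for the Adversarial Tetris simulator.
        """
        if depth == 0 or game.is_terminal(state):
            # Leaf evaluation: linear combination of features and learned weights.
            return float(np.dot(weights, game.features(state)))
        if maximizing:
            # Player's turn: choose where to place the current tetromino.
            value = -math.inf
            for move in game.player_moves(state):
                value = max(value, alphabeta(game, game.apply(state, move),
                                             depth - 1, alpha, beta, False, weights))
                alpha = max(alpha, value)
                if alpha >= beta:
                    break  # beta cutoff: the adversary will avoid this branch
            return value
        # Adversary's turn: choose the next tetromino to hinder the player.
        value = math.inf
        for move in game.adversary_moves(state):
            value = min(value, alphabeta(game, game.apply(state, move),
                                         depth - 1, alpha, beta, True, weights))
            beta = min(beta, value)
            if alpha >= beta:
                break  # alpha cutoff: the player will avoid this branch
        return value

    def lstd_weights(samples, gamma=0.95):
        """Textbook batch LSTD: solve A w = b, where
        A = sum phi (phi - gamma * phi')^T and b = sum phi * r,
        from (phi, reward, phi_next) samples. The paper uses a variation
        of LSTD; this is only the standard batch form for reference.
        """
        k = len(samples[0][0])
        A = np.zeros((k, k))
        b = np.zeros(k)
        for phi, r, phi_next in samples:
            phi = np.asarray(phi, dtype=float)
            phi_next = np.asarray(phi_next, dtype=float)
            A += np.outer(phi, phi - gamma * phi_next)
            b += r * phi
        return np.linalg.solve(A, b)

In this setup the weights returned by lstd_weights would be plugged into alphabeta as the evaluation parameters, so the learned value function guides the lookahead against whichever adversary and board size the agent faces.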