Learning to play strong poker

  • Authors:
  • Darse Billings; Lourdes Peña; Jonathan Schaeffer; Duane Szafron

  • Affiliations:
  • University of Alberta, Department of Computing Science, Edmonton, Alberta, T6G 2H1, Canada; Otto-von-Guericke-Universität, School of Computer Science / IWS, Universitätsplatz 2, D-106 Magdeburg, Germany; University of Alberta, Department of Computing Science, Edmonton, Alberta, T6G 2H1, Canada; University of Alberta, Department of Computing Science, Edmonton, Alberta, T6G 2H1, Canada

  • Venue:
  • Machines that learn to play games
  • Year:
  • 2001


Abstract

Poker is an interesting test-bed for artificial intelligence research. It is a game of imperfect knowledge, in which multiple competing agents must cope with risk management, opponent modeling, unreliable information, and deception, much like decision-making applications in the real world. Opponent modeling is one of the most difficult problems in decision-making applications, and in poker it is essential to achieving high performance. This chapter describes and evaluates the implicit and explicit learning in the poker program LOKI. LOKI implicitly "learns" sophisticated strategies by selectively sampling likely cards for the opponents and then simulating the remainder of the game. It learns explicitly by observing its opponents, constructing opponent models, and dynamically adapting its play to exploit patterns in the opponents' play. The result is a program capable of playing reasonably strong poker, but considerable research remains before it can play at a world-class level.
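The selective-sampling idea in the abstract can be illustrated with a minimal sketch. This is not LOKI's implementation: hand strengths are abstracted to integers, the `opponent_model` weight table and the function names are hypothetical, and a real program would deal concrete cards and play out the remaining betting rounds rather than comparing single numbers.

```python
import random

def simulate_win_rate(our_strength, opponent_model, trials=1000, seed=0):
    """Estimate win probability by selective sampling.

    Instead of enumerating all possible opponent holdings uniformly,
    draw opponent hands in proportion to the weights the opponent model
    assigns them (likely hands are sampled more often), then score each
    simulated showdown.  Strengths are toy integers standing in for
    evaluated poker hands.
    """
    rng = random.Random(seed)
    hands = list(opponent_model)
    weights = [opponent_model[h] for h in hands]
    wins = 0
    for _ in range(trials):
        # Weighted draw: the core of selective sampling.
        opp_strength = rng.choices(hands, weights=weights)[0]
        if our_strength > opp_strength:
            wins += 1
    return wins / trials

# A model that believes the opponent is usually strong (strength 8)
# and occasionally weak (strength 2); our hand has strength 5, so we
# win only against the weak holdings.
model = {2: 0.2, 8: 0.8}
estimate = simulate_win_rate(5, model, trials=5000)
```

As the model's weights shift (for example, after observing that an opponent bluffs often), the same simulation automatically re-prices the value of calling or raising, which is the sense in which the sampling "learns" implicitly.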