Planning and acting in partially observable stochastic domains

  • Authors:
  • Leslie Pack Kaelbling;Michael L. Littman;Anthony R. Cassandra

  • Affiliations:
Computer Science Department, Brown University, Box 1910, Providence, RI 02912-1910, USA and Department o ...;Department of Computer Science, Duke University, Durham, NC 27708-0129, USA;Microelectronics and Computer Technology Corporation (MCC), 3500 West Balcones Center Drive, Austin, TX 78759-5398, USA

  • Venue:
  • Artificial Intelligence
  • Year:
  • 1998

Abstract

In this paper, we bring techniques from operations research to bear on the problem of choosing optimal actions in partially observable stochastic domains. We begin by introducing the theory of Markov decision processes (MDPs) and partially observable MDPs (POMDPs). We then outline a novel algorithm for solving POMDPs offline and show how, in some cases, a finite-memory controller can be extracted from the solution to a POMDP. We conclude with a discussion of how our approach relates to previous work, the complexity of finding exact solutions to POMDPs, and some possibilities for finding approximate solutions.
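The core operation when acting in a POMDP is maintaining a belief state: a probability distribution over states, updated by Bayes' rule after each action and observation. The sketch below illustrates this standard update; the transition model `T`, observation model `O`, and the tiny two-state example are illustrative assumptions, not taken from the paper itself.

```python
def belief_update(belief, action, observation, T, O):
    """Return the posterior belief after taking `action` and seeing `observation`.

    belief[s]          -- current probability of being in state s
    T[action][s][s2]   -- P(s2 | s, action), the transition model
    O[action][s2][obs] -- P(obs | s2, action), the observation model
    """
    states = range(len(belief))
    # Unnormalized Bayes update: b'(s') proportional to
    # O(obs | s', a) * sum_s T(s' | s, a) * b(s)
    new_belief = [
        O[action][s2][observation]
        * sum(T[action][s][s2] * belief[s] for s in states)
        for s2 in states
    ]
    norm = sum(new_belief)  # equals P(observation | belief, action)
    if norm == 0:
        raise ValueError("observation has zero probability under this belief")
    return [p / norm for p in new_belief]


# Hypothetical two-state example: a "listen" action that leaves the state
# unchanged, with a sensor that reports the true state 85% of the time.
T = {"listen": [[1.0, 0.0], [0.0, 1.0]]}
O = {"listen": [[0.85, 0.15], [0.15, 0.85]]}
b = belief_update([0.5, 0.5], "listen", 0, T, O)  # observe evidence for state 0
```

Starting from a uniform belief, one such observation shifts the belief to (0.85, 0.15); repeated observations sharpen it further, which is exactly the information a finite-memory controller must summarize.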