Recursive Markov Decision Processes and Recursive Stochastic Games

  • Authors:
  • Kousha Etessami; Mihalis Yannakakis

  • Affiliations:
  • LFCS, School of Informatics, University of Edinburgh; Department of Computer Science, Columbia University

  • Venue:
  • ICALP'05: Proceedings of the 32nd International Colloquium on Automata, Languages and Programming
  • Year:
  • 2005


Abstract

We introduce Recursive Markov Decision Processes (RMDPs) and Recursive Simple Stochastic Games (RSSGs), and study the decidability and complexity of algorithms for their analysis and verification. These models extend Recursive Markov Chains (RMCs), introduced in [EY05a, EY05b] as a natural model for verification of probabilistic procedural programs and related systems involving both recursion and probabilistic behavior. RMCs define a class of denumerable Markov chains with a rich theory generalizing that of stochastic context-free grammars and multi-type branching processes, and they are also intimately related to probabilistic pushdown systems. RMDPs and RSSGs extend RMCs with one controller or two adversarial players, respectively. Such extensions are useful for modeling nondeterministic and concurrent behavior, as well as a system’s interactions with its environment. We provide upper and lower bounds for deciding, given an RMDP (or RSSG) A and a probability p, whether player 1 has a strategy to force termination at a desired exit with probability at least p. We also address “qualitative” termination, where p = 1, and model checking questions.
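
For intuition only (this is a reader's sketch, not the paper's algorithm): in this line of work, optimal termination probabilities are characterized as the least fixed point of a monotone system of min/max polynomial equations, which value iteration from the zero vector approaches from below. The Python sketch below illustrates that idea on a hypothetical single-variable RMDP with a maximizing controller; the equation system, probabilities, and function names are invented for illustration.

```python
# Illustrative sketch (assumptions, not the paper's algorithm): for a
# maximizing 1-exit RMDP, termination values satisfy a monotone system of
# max-polynomial equations, and iterating that system from the zero vector
# converges to its least fixed point from below.
from typing import Callable, List


def least_fixed_point(F: Callable[[List[float]], List[float]],
                      dim: int,
                      max_iters: int = 100_000,
                      tol: float = 1e-12) -> List[float]:
    """Value iteration x_{k+1} = F(x_k) starting from the all-zero vector."""
    x = [0.0] * dim
    for _ in range(max_iters):
        nxt = F(x)
        if max(abs(a - b) for a, b in zip(nxt, x)) < tol:
            return nxt
        x = nxt
    return x


def F(x: List[float]) -> List[float]:
    """Hypothetical single-component RMDP with two controller actions.

    Action a: with prob 0.7 spawn two nested calls, with prob 0.3 exit.
    Action b: with prob 0.5 spawn two nested calls, with prob 0.45 exit,
              and with prob 0.05 diverge.
    The maximizing controller's termination value satisfies
        x = max(0.7*x^2 + 0.3, 0.5*x^2 + 0.45).
    """
    v = x[0]
    return [max(0.7 * v * v + 0.3, 0.5 * v * v + 0.45)]


if __name__ == "__main__":
    # Converges to roughly 0.684; near the fixed point, action b is the
    # controller's better choice in this made-up example.
    print(least_fixed_point(F, dim=1))
```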