Multi-objective model checking of Markov decision processes

  • Authors:
  • K. Etessami; M. Kwiatkowska; M. Y. Vardi; M. Yannakakis

  • Affiliations:
  • LFCS, School of Informatics, University of Edinburgh; School of Computer Science, University of Birmingham; Dept. of Computer Science, Rice University; Dept. of Computer Science, Columbia University

  • Venue:
  • TACAS'07: Proceedings of the 13th International Conference on Tools and Algorithms for the Construction and Analysis of Systems
  • Year:
  • 2007

Abstract

We study and provide efficient algorithms for multi-objective model checking problems for Markov Decision Processes (MDPs). Given an MDP, M, multiple linear-time (ω-regular or LTL) properties ϕ_i, and probabilities r_i ∈ [0, 1], i = 1, …, k, we ask whether there exists a strategy α for the controller such that, for all i, the probability that a trajectory of M controlled by α satisfies ϕ_i is at least r_i. We provide an algorithm that decides whether such a strategy exists and, if so, produces one, and which runs in time polynomial in the size of the MDP. Such a strategy may require the use of both randomization and memory. We also consider more general multi-objective ω-regular queries, which we motivate with an application to assume-guarantee compositional reasoning for probabilistic systems. Note that there can be trade-offs between different properties: satisfying property ϕ_1 with high probability may necessitate satisfying ϕ_2 with low probability. Viewing this as a multi-objective optimization problem, we want information about the "trade-off curve" or Pareto curve for maximizing the probabilities of different properties. We show that one can compute an approximate Pareto curve with respect to a set of ω-regular properties in time polynomial in the size of the MDP. Our quantitative upper bounds use LP methods. We also study qualitative multi-objective model checking problems, and we show that these can be analysed by purely graph-theoretic methods, even though the strategies may still require both randomization and memory.
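To make the LP-based approach concrete, the following is a minimal sketch (not the paper's full construction, which handles general ω-regular objectives via end-component analysis) of how a multi-objective achievability query can be checked as a linear-programming feasibility problem over expected occupation measures. It assumes a hypothetical toy MDP in which target states are absorbing and all other states are transient under every strategy; all state/action names and the `delta`, `targets`, and `thresholds` structures are illustrative, while `scipy.optimize.linprog` is a real solver interface.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical 3-state MDP: state 0 is the only transient (controllable)
# state; states 1 and 2 are absorbing targets. Action 'a' reaches state 1,
# action 'b' reaches state 2, so the two objectives genuinely trade off.
states = [0]                      # transient states
actions = {0: ['a', 'b']}         # available actions per transient state
init = {0: 1.0}                   # initial distribution over transient states
delta = {(0, 'a'): [(1, 1.0)],    # delta[(s, act)] = [(successor, prob), ...]
         (0, 'b'): [(2, 1.0)]}
targets = [{1}, {2}]              # target sets T_1, T_2
thresholds = [0.5, 0.5]           # probability bounds r_1, r_2

pairs = [(s, a) for s in states for a in actions[s]]  # LP variables y_(s,a)

# Flow conservation for each transient state s:
#   sum_a y_(s,a) - sum_(s',a') P(s' -a'-> s) * y_(s',a') = init(s)
A_eq = np.zeros((len(states), len(pairs)))
b_eq = np.array([init.get(s, 0.0) for s in states])
for j, (s, a) in enumerate(pairs):
    A_eq[states.index(s), j] += 1.0
    for succ, p in delta[(s, a)]:
        if succ in states:        # inflow back into a transient state
            A_eq[states.index(succ), j] -= p

# Objective constraints: Pr(reach T_i) = sum_(s,a) y_(s,a) * P(s -a-> T_i)
# must be >= r_i; linprog uses <=, so both sides are negated.
A_ub = np.zeros((len(targets), len(pairs)))
for i, T in enumerate(targets):
    for j, (s, a) in enumerate(pairs):
        A_ub[i, j] = -sum(p for succ, p in delta[(s, a)] if succ in T)
b_ub = [-r for r in thresholds]

# Pure feasibility check: zero objective vector.
res = linprog(c=np.zeros(len(pairs)), A_ub=A_ub, b_ub=b_ub,
              A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * len(pairs))
print("achievable:", res.success)
if res.success:
    for j, (s, a) in enumerate(pairs):
        print(f"y({s},{a}) = {res.x[j]:.3f}")
```

A feasible solution y yields a memoryless randomized strategy that, at state s, plays action a with probability y_(s,a) / Σ_a' y_(s,a'); in the toy instance above the only feasible strategies randomize at state 0, illustrating the abstract's point that randomization can be unavoidable (and, for general ω-regular objectives, memory as well).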