Parameter imprecision in finite state, finite action dynamic programs
Operations Research
Real and complex analysis, 3rd ed.
Bounded-parameter Markov decision process
Artificial Intelligence
Markov Decision Processes: Discrete Stochastic Dynamic Programming
Computing Minimum and Maximum Reachability Times in Probabilistic Systems
CONCUR '99 Proceedings of the 10th International Conference on Concurrency Theory
UAI'97 Proceedings of the Thirteenth Conference on Uncertainty in Artificial Intelligence
Reachability analysis of uncertain systems using bounded-parameter Markov decision processes
Artificial Intelligence
Verification of probabilistic systems is usually based on variants of Markov processes. For systems with continuous dynamics, Markov processes are generated using discrete approximation methods, which assume an exact model of the dynamic behavior. Realistic systems, however, operate in the presence of uncertainty and variability and are described by uncertain models. In this paper, we address the problem of probabilistic verification of uncertain systems using Bounded-parameter Markov Decision Processes (BMDPs). Proposed by Givan, Leach and Dean [1], BMDPs are a generalization of MDPs that allows uncertainty to be modeled by interval-valued transition probabilities. We first show how discrete approximation methods can be extended to model uncertain systems using BMDPs. We then focus on the problem of maximizing the probability of reaching a set of desirable states, develop an iterative algorithm for probabilistic verification, and present a detailed mathematical analysis of its convergence. Finally, we demonstrate the approach on a robot path-finding application.
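To make the reachability computation concrete, the following is a minimal sketch of interval value iteration on a toy BMDP, assuming transition probabilities given as intervals `(lo, hi)`. It computes a pessimistic (lower) bound on the maximal probability of reaching a goal set: the controller maximizes over actions while an adversarial "nature" resolves each interval to minimize the expected value. The data layout, function names, and the greedy mass-assignment step are illustrative assumptions, not the paper's exact algorithm.

```python
def worst_case_expectation(intervals, values):
    """Minimize sum_i p_i * values[i] over distributions p with
    lo_i <= p_i <= hi_i and sum_i p_i = 1.
    Greedy: push as much probability mass as possible onto
    low-value successors, in ascending order of value."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    p = [lo for lo, _ in intervals]          # start at the lower bounds
    slack = 1.0 - sum(p)                     # remaining mass to place
    for i in order:
        lo, hi = intervals[i]
        add = min(hi - lo, slack)
        p[i] += add
        slack -= add
    return sum(p[i] * values[i] for i in range(len(values)))

def max_reach_lower(states, actions, trans, goal, iters=100):
    """Lower bound on the maximal probability of reaching `goal`.
    trans[s][a] is a list of (successor, (lo, hi)) pairs."""
    V = {s: (1.0 if s in goal else 0.0) for s in states}
    for _ in range(iters):
        newV = dict(V)
        for s in states:
            if s in goal:
                continue                     # goal states stay at 1.0
            best = 0.0
            for a in actions:
                succs = trans[s][a]
                vals = [V[t] for t, _ in succs]
                ivs = [iv for _, iv in succs]
                best = max(best, worst_case_expectation(ivs, vals))
            newV[s] = best
        V = newV
    return V
```

For example, a three-state model where `s0` reaches the goal `s1` with probability in `[0.5, 0.7]` and a trap `s2` with probability in `[0.3, 0.5]` yields a worst-case reachability value of 0.5 at `s0`, since nature shifts the maximal allowed mass onto the trap.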