We prove that, given a Markov Decision Process (MDP) and a fixed subset F of its states, there exists a Markov policy that maximizes, from every state, the probability of reaching F infinitely often. Moreover, such a maximal policy is computable in time polynomial in the size of the MDP. This result can be applied to controlling a system with randomized or uncertain behavior so as to optimize a given property.
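The "reach F infinitely often" (Büchi) objective is typically solved by reducing it to a maximal *reachability* problem over a winning region of the MDP. The sketch below illustrates only that reachability building block: value iteration for the maximal probability of eventually reaching a target set on a toy MDP. The MDP encoding, state names, and function are our own illustrative assumptions, not the paper's algorithm.

```python
# Illustrative sketch (not the paper's method): value iteration for
# max reachability probability in an MDP. The Büchi case additionally
# requires identifying winning end components first.

# Toy MDP: state -> action -> list of (successor, probability).
mdp = {
    "s0": {"a": [("s1", 0.5), ("s2", 0.5)], "b": [("s0", 1.0)]},
    "s1": {"a": [("goal", 1.0)]},
    "s2": {"a": [("s0", 1.0)]},
    "goal": {"a": [("goal", 1.0)]},
}
target = {"goal"}

def max_reach_prob(mdp, target, iters=1000):
    """Approximate, for every state, the maximal probability of
    eventually reaching `target`, by iterating the Bellman operator."""
    v = {s: (1.0 if s in target else 0.0) for s in mdp}
    for _ in range(iters):
        v = {
            s: 1.0 if s in target else max(
                sum(p * v[t] for t, p in succ)
                for succ in mdp[s].values()
            )
            for s in mdp
        }
    return v

print(max_reach_prob(mdp, target))
```

In this toy MDP every state can force reaching `goal` with probability 1 (always pick action `a`), so value iteration converges to 1 at each state; in exact polynomial-time algorithms the fixed point is computed by linear programming rather than by iteration.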