We consider the problem of synthesizing policies, in domains where actions have probabilistic effects, that are optimal in the expected case among the optimal worst-case strong policies. We thus combine features from nondeterministic and probabilistic planning in a single framework. We present an algorithm that combines dynamic programming and model checking techniques to find plans satisfying the problem requirements: the strong preimage computation from model checking is used to avoid actions that lead to cycles or dead ends, reducing the model to a Markov Decision Process in which every policy is strong and worst-case optimal (i.e., successful and of minimum length with probability 1). We show that backward induction can then be used to select a policy in this reduced model. The resulting algorithm is presented in two versions (enumerative and symbolic); we show that the latter version supports planning with extended reachability goals.
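The two phases described above can be illustrated with a minimal enumerative sketch. This is not the paper's implementation; the toy state/action encoding and all function names are hypothetical. The first function iterates the strong preimage (a state is labelled only when some action sends *all* of its outcomes into the already-safe set, which prunes cycles and dead ends and yields worst-case distances); the second runs backward induction, restricted to worst-case-optimal actions, to pick the policy with minimum expected length.

```python
def strong_layers(states, transitions, goals):
    """Iterate the strong preimage from the goal set.

    transitions[s][a] is a list of (probability, next_state) outcomes.
    Returns dist: worst-case distance to the goal for every state from
    which a strong (cycle- and dead-end-free) policy exists; all other
    states are pruned by never being labelled.
    """
    dist = {s: 0 for s in goals}
    while True:
        new = {}
        for s in states:
            if s in dist:
                continue
            for outcomes in transitions.get(s, {}).values():
                # Strong preimage condition: EVERY outcome is already safe.
                if all(t in dist for _, t in outcomes):
                    d = 1 + max(dist[t] for _, t in outcomes)
                    if s not in new or d < new[s]:
                        new[s] = d
        if not new:          # fixpoint reached
            return dist
        dist.update(new)

def expected_optimal_policy(states, transitions, dist):
    """Backward induction over the reduced model: among the actions that
    achieve the worst-case distance dist[s], choose the one minimizing
    the expected number of steps to the goal."""
    V, policy = {}, {}
    for s in sorted(dist, key=dist.get):     # increasing worst-case distance
        if dist[s] == 0:
            V[s] = 0.0
            continue
        best = None
        for a, outcomes in transitions.get(s, {}).items():
            # Keep only strong, worst-case-optimal actions.
            if all(t in dist for _, t in outcomes) and \
               1 + max(dist[t] for _, t in outcomes) == dist[s]:
                ev = 1 + sum(p * V[t] for p, t in outcomes)
                if best is None or ev < best[0]:
                    best = (ev, a)
        V[s], policy[s] = best
    return policy, V
```

On a toy domain where one action reaches the goal with probability 0.5 (retrying from a closer state otherwise) and another always moves to that closer state, both actions are worst-case optimal (length 2 with probability 1), but backward induction prefers the probabilistic shortcut because its expected length is 1.5 rather than 2.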