Recent advances in solutions to Hybrid MDPs with discrete and continuous state and action spaces have significantly extended the class of MDPs for which exact solutions can be derived, albeit at the expense of a restricted transition noise model. In this paper, we work around the limitations of previous solutions by adopting a robust optimization approach in which Nature is allowed to adversarially determine transition noise within pre-specified confidence intervals. This allows one to derive an optimal policy with an arbitrary (user-specified) level of success probability and significantly extends the class of transition noise models for which Hybrid MDPs can be solved. This work also significantly extends results for the related "chance-constrained" approach in stochastic hybrid control to accommodate state-dependent noise. We demonstrate our approach on a variety of hybrid MDPs taken from AI planning, operations research, and control theory, noting that this is the first time robust solutions with strong guarantees over all states have been automatically derived for such problems.
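The robust-optimization idea in the abstract (the agent maximizes value while Nature adversarially picks transition noise inside a state-dependent confidence interval) can be illustrated with a minimal sketch. This is not the paper's symbolic algorithm: it is a plain discretized 1-D robust value iteration, and all dynamics, reward, and noise-bound functions below are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of a robust Bellman backup: Nature picks an additive
# transition noise w within a state-dependent interval [-eps(s), +eps(s)],
# and the agent maximizes the resulting worst-case value.

states = np.linspace(0.0, 10.0, 101)      # discretized continuous state, step 0.1
actions = [-1.0, 0.0, 1.0]                # move left / stay / move right
gamma = 0.95

def reward(s):
    return -abs(s - 7.0)                  # hypothetical goal at s = 7

def eps(s):
    return 0.1 + 0.05 * s                 # state-dependent noise confidence bound

def nearest(v):
    # map a continuous successor state back onto the grid (clipped to bounds)
    return int(np.clip(np.rint(v / 0.1), 0, len(states) - 1))

V = np.zeros(len(states))
for _ in range(200):                      # robust value-iteration sweeps
    V_new = np.empty_like(V)
    for i, s in enumerate(states):
        best = -np.inf
        for a in actions:
            # Nature adversarially minimizes over sampled noise in the interval
            worst = min(V[nearest(s + a + w)]
                        for w in np.linspace(-eps(s), eps(s), 5))
            best = max(best, reward(s) + gamma * worst)
        V_new[i] = best
    V = V_new
```

The inner `min` over sampled noise values is what makes the backup robust: the resulting policy is guaranteed against any noise realization (at the sampled resolution) inside the confidence interval, which is how a user-specified success probability translates into an interval width. The actual paper operates symbolically over continuous states rather than on a grid.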