Robotic systems need to plan control actions that are robust to the inherent uncertainty of the real world. This uncertainty arises from uncertain state estimation, disturbances, and modeling errors, as well as from stochastic mode transitions such as component failures. Chance-constrained control takes this uncertainty into account to ensure that the probability of failure, due to collision with obstacles, for example, is below a given threshold. In this paper, we present a novel method for chance-constrained predictive stochastic control of dynamic systems. The method approximates the distribution of the system state using a finite number of particles. By expressing these particles in terms of the control variables, we are able to approximate the original stochastic control problem as a deterministic one; furthermore, the approximation becomes exact as the number of particles tends to infinity. The method applies to arbitrary noise distributions, and for systems with linear or jump Markov linear dynamics, we show that the approximate problem can be solved using efficient mixed-integer linear programming techniques. We also introduce an importance weighting extension that enables the method to handle low-probability mode transitions such as failures. We demonstrate in simulation that the new method can control an aircraft in turbulence and can control a ground vehicle while remaining robust to brake failures.
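To illustrate the core idea of the particle approximation, the sketch below propagates a set of noise samples through scalar linear dynamics under a fixed control sequence and estimates the failure probability as the fraction of particles that ever cross an obstacle boundary. This is only a minimal Monte Carlo illustration with made-up numbers; it omits the paper's mixed-integer linear programming encoding (where this fraction is constrained as a function of the decision variables) and the importance-weighting extension.

```python
import numpy as np

def estimate_failure_probability(u_seq, x0, A, B, noise_std,
                                 boundary, n_particles=10000, seed=0):
    """Particle approximation of a chance constraint: simulate each
    sampled noise trajectory under the same control sequence and
    return the fraction of particles that ever exceed `boundary`.
    As n_particles grows, this estimate converges to the true
    failure probability."""
    rng = np.random.default_rng(seed)
    x = np.full(n_particles, x0, dtype=float)   # one state per particle
    failed = np.zeros(n_particles, dtype=bool)  # any-time violation flag
    for u in u_seq:
        w = rng.normal(0.0, noise_std, size=n_particles)
        x = A * x + B * u + w                   # scalar linear dynamics
        failed |= x > boundary                  # record boundary crossings
    return failed.mean()

# Hypothetical scalar example: small control inputs, obstacle at x = 1.0.
p_fail = estimate_failure_probability(u_seq=[0.1, 0.1, 0.1], x0=0.0,
                                      A=1.0, B=1.0, noise_std=0.05,
                                      boundary=1.0)
print(f"estimated failure probability: {p_fail:.4f}")
```

Because each particle's state is an explicit (here affine) function of the controls, a chance constraint such as "failure probability below 5%" becomes a deterministic constraint on the count of violating particles, which is what makes the deterministic reformulation in the paper possible.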