This paper presents a new robust and adaptive framework for Markov Decision Processes that accounts for errors in the transition probabilities. Robust policies are typically computed off-line and can be extremely conservative when implemented in the real system. Adaptive policies, on the other hand, are well suited for on-line implementation, but may exhibit undesirable transient performance while the model is updated through learning. A new method that exploits the individual strengths of the two approaches is presented in this paper. This robust and adaptive framework protects the adaptation process from worst-case performance during model updating, and is shown to converge to the true, optimal value function in the limit of a large number of state-transition observations. The proposed framework is investigated in simulation and in actual flight experiments, and is shown to improve transient behavior during adaptation and overall mission performance.
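The core computation such a framework rests on can be sketched as robust value iteration: transition probabilities are estimated from observed transition counts, and each Bellman backup takes the worst case over an uncertainty set around the estimate. The sketch below is illustrative only, not the paper's method: it uses a fixed L1-ball uncertainty radius `beta` (the paper's set adapts as observations accumulate), and all function and variable names are assumptions.

```python
import numpy as np

def worst_case_expectation(p_hat, V, beta):
    """Minimize p @ V over {p in simplex : ||p - p_hat||_1 <= beta}.

    Greedy exact solution: shift probability mass onto the lowest-value
    successor, taking it from the highest-value successors first.
    """
    p = p_hat.copy()
    order = np.argsort(V)                   # successors, lowest value first
    lo = order[0]
    budget = min(beta / 2.0, 1.0 - p[lo])   # L1 radius beta allows moving beta/2 mass
    p[lo] += budget
    for s in order[::-1]:                   # remove that mass from the best successors
        if s == lo:
            continue
        take = min(p[s], budget)
        p[s] -= take
        budget -= take
        if budget <= 1e-12:
            break
    return float(p @ V)

def robust_value_iteration(counts, rewards, gamma=0.9, beta=0.2, iters=500):
    """Robust value iteration with transitions estimated from observed counts.

    counts[a, s] holds observed transition counts from state s under action a;
    the empirical estimate p_hat plays the role of the learned model.
    beta is the (fixed, illustrative) uncertainty radius; beta=0 recovers
    nominal value iteration on the estimated model.
    """
    n_actions, n_states, _ = counts.shape
    V = np.zeros(n_states)
    for _ in range(iters):
        Q = np.empty((n_actions, n_states))
        for a in range(n_actions):
            for s in range(n_states):
                p_hat = counts[a, s] / counts[a, s].sum()
                Q[a, s] = rewards[s] + gamma * worst_case_expectation(p_hat, V, beta)
        V = Q.max(axis=0)                   # greedy (robust) policy improvement
    return V
```

As more transitions are observed, `p_hat` concentrates on the true probabilities; shrinking `beta` toward zero with the observation count is what would give convergence of the robust values to the nominal optimal value function, mirroring the limit result stated in the abstract.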