We consider Markov decision processes where the values of the parameters are uncertain. This uncertainty is described by a sequence of nested sets (that is, each set contains the previous one), each of which corresponds to a probabilistic guarantee at a different confidence level. Together, these sets specify a family of admissible probability distributions over the unknown parameters. This formulation models the case where the decision maker is aware of, and wants to exploit, some (possibly imprecise) a priori information about the distribution of the parameters, and it arises naturally in practice, where methods for estimating confidence regions of parameters abound. We propose a decision criterion based on distributional robustness: the optimal strategy maximizes the expected total reward under the most adversarial admissible probability distribution. We show that finding the optimal distributionally robust strategy can be reduced to a standard robust MDP in which the parameters are known to belong to a single uncertainty set; hence, it can be computed in polynomial time under mild technical conditions.
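The reduction above lands on a standard robust MDP, for which robust value iteration is the usual solution method: each Bellman backup lets an adversary pick the worst transition distribution inside the uncertainty set. The sketch below is not the paper's nested-set construction; it is a minimal illustration of the robust Bellman operator, assuming the uncertainty set is an L1 ball around a nominal transition estimate (a common tractable choice), with hypothetical helper names `worst_case_distribution` and `robust_value_iteration`.

```python
import numpy as np

def worst_case_distribution(p_nom, v, budget):
    """Reward-minimizing distribution within an L1 ball of radius
    `budget` around the nominal distribution `p_nom`.
    Greedy solution of the inner problem: shift probability mass
    (at most budget/2 in total) from the highest-value states onto
    the lowest-value state."""
    p = p_nom.astype(float).copy()
    k = np.argmin(v)                      # adversary piles mass on the worst state
    eps = min(budget / 2.0, 1.0 - p[k])   # mass we are allowed to move
    p[k] += eps
    # remove the same total mass, highest-value states first
    for j in np.argsort(v)[::-1]:
        if j == k:
            continue
        take = min(p[j], eps)
        p[j] -= take
        eps -= take
        if eps <= 1e-12:
            break
    return p

def robust_value_iteration(P_nom, R, gamma, budget, iters=1000):
    """Robust Bellman iteration:
        V(s) = max_a  min_{p in U(s,a)}  R[s,a] + gamma * p @ V,
    where P_nom[s, a] is the nominal next-state distribution and
    U(s, a) is an L1 ball of radius `budget` around it."""
    n_states, n_actions = R.shape
    V = np.zeros(n_states)
    for _ in range(iters):
        Q = np.empty((n_states, n_actions))
        for s in range(n_states):
            for a in range(n_actions):
                p = worst_case_distribution(P_nom[s, a], V, budget)
                Q[s, a] = R[s, a] + gamma * (p @ V)
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < 1e-8:  # contraction has converged
            return V_new
        V = V_new
    return V
```

Because the inner minimization over an L1 ball has this closed-form greedy solution, each robust backup costs only a sort per state-action pair, which is what makes the overall computation polynomial, consistent with the tractability claim above.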