Risk-Averse Two-Stage Stochastic Linear Programming: Modeling and Decomposition
Operations Research
We introduce the concept of a Markov risk measure and use it to formulate risk-averse control problems for two Markov decision models: a finite-horizon model and a discounted infinite-horizon model. For both models we derive risk-averse dynamic programming equations and a value iteration method. For the infinite-horizon problem we develop a risk-averse policy iteration method and prove its convergence. We also propose a version of the Newton method for solving a nonsmooth equation arising in the policy iteration method, and prove its global convergence. Finally, we discuss relations to min–max Markov decision models.
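To illustrate the flavor of the risk-averse dynamic programming equations, the following is a minimal sketch (not the paper's own code) of risk-averse value iteration for a discounted finite-state MDP. It replaces the expectation in the standard Bellman operator with a one-step coherent risk measure; here we assume the mean-upper-semideviation measure ρ(Z) = E[Z] + κ·E[(Z − E[Z])₊] as a concrete choice. All names (`semideviation_risk`, `risk_averse_value_iteration`) and the example MDP are illustrative.

```python
import numpy as np

def semideviation_risk(p, z, kappa):
    # Mean-upper-semideviation risk measure of cost vector z under
    # distribution p; coherent for 0 <= kappa <= 1.
    m = p @ z
    return m + kappa * (p @ np.maximum(z - m, 0.0))

def risk_averse_value_iteration(P, c, gamma=0.9, kappa=0.5,
                                tol=1e-8, max_iter=10_000):
    # P: (A, S, S) transition probabilities, P[a, s] is the next-state
    #    distribution under action a in state s.
    # c: (A, S) immediate costs.
    # Risk-averse Bellman update:
    #   v(s) = min_a [ c(a, s) + gamma * rho_{s,a}(v(S')) ].
    # Because rho is monotone and translation-equivariant, the operator
    # is a gamma-contraction, so iteration converges.
    A, S, _ = P.shape
    v = np.zeros(S)
    for _ in range(max_iter):
        q = np.array([[c[a, s] + gamma * semideviation_risk(P[a, s], v, kappa)
                       for a in range(A)] for s in range(S)])
        v_new = q.min(axis=1)
        if np.max(np.abs(v_new - v)) < tol:
            v = v_new
            break
        v = v_new
    return v, q.argmin(axis=1)  # value function and greedy policy
```

Setting `kappa=0` recovers ordinary (risk-neutral) value iteration, so the risk-averse value function dominates the risk-neutral one: the semideviation term charges an extra premium for dispersion of the future cost.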