In this paper, we consider Markov Decision Processes (MDPs) with error states, i.e., states that are undesirable or dangerous to enter. We define the risk of a policy as the probability of entering such a state when the policy is followed. We consider the problem of finding good policies whose risk is below a user-specified threshold, and formalize it as a constrained MDP with two criteria. The first criterion corresponds to the value function originally given. We show that the risk can be formulated as a second criterion based on a cumulative return whose definition is independent of the original value function. We present a model-free, heuristic reinforcement learning algorithm that aims at finding good deterministic policies. It is based on a weighted combination of the original value function and the risk, where the weight parameter is adapted in order to find a feasible solution of the constrained problem that performs well with respect to the value function. The algorithm was successfully applied to the control of a feed tank with stochastic inflows that lies upstream of a distillation column. This control task was originally formulated as an optimal control problem with chance constraints and was solved, under certain assumptions on the model, to obtain an optimal solution. The strength of our learning algorithm is that it remains applicable even when some of these restrictive assumptions are relaxed.
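The abstract does not reproduce the algorithm itself; the following is a minimal sketch of how a weighted two-criteria Q-learning scheme of this kind could look in tabular form. The environment interface (env.reset, env.step returning an is_error flag, env.actions), the parameter names, and the simple additive adaptation of the weight xi are illustrative assumptions, not the authors' pseudocode.

```python
import random
from collections import defaultdict

def risk_constrained_q_learning(env, omega, episodes=5000,
                                alpha=0.1, gamma=0.95,
                                xi=1.0, xi_step=0.05, eps=0.1):
    """Sketch: learn a value estimate Q and a risk estimate R in parallel,
    rank actions by the weighted criterion xi * Q - R, and adapt xi so that
    the estimated risk at the start state stays below the threshold omega.
    The env interface and the adaptation rule are assumptions for illustration.
    """
    Q = defaultdict(float)   # estimate of the original (discounted) value criterion
    R = defaultdict(float)   # estimate of the risk: probability of reaching an error state

    def greedy(s):
        # Deterministic policy induced by the weighted combination of value and risk.
        return max(env.actions(s), key=lambda a: xi * Q[(s, a)] - R[(s, a)])

    for _ in range(episodes):
        s = env.reset()
        s0 = s
        done = False
        while not done:
            # Epsilon-greedy exploration around the weighted-greedy policy.
            a = random.choice(env.actions(s)) if random.random() < eps else greedy(s)
            s2, r, done, is_error = env.step(a)

            if done:
                q_target = r
                # Risk "reward" is 1 exactly when an error state is entered.
                r_target = 1.0 if is_error else 0.0
            else:
                a2 = greedy(s2)
                q_target = r + gamma * Q[(s2, a2)]
                # Risk is propagated undiscounted along the greedy policy.
                r_target = R[(s2, a2)]

            Q[(s, a)] += alpha * (q_target - Q[(s, a)])
            R[(s, a)] += alpha * (r_target - R[(s, a)])
            s = s2

        # Heuristic weight adaptation: if the estimated risk of the greedy policy
        # at the start state violates omega, shift emphasis towards risk (smaller xi);
        # otherwise relax towards the value criterion.
        est_risk = R[(s0, greedy(s0))]
        xi = max(0.0, xi - xi_step) if est_risk > omega else xi + xi_step

    return Q, R, xi
```

In this sketch the feasibility check uses only the learned risk estimate at the start state; in practice one would average over several evaluation episodes before adjusting xi.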