This paper takes a first step toward integrating two subfields of machine learning, namely preference learning and reinforcement learning (RL). An important motivation for a "preference-based" approach to reinforcement learning is the possibility of extending the type of feedback an agent can learn from. In particular, while conventional RL methods are essentially confined to dealing with numerical rewards, there are many applications in which this kind of information is not naturally available and only qualitative reward signals are provided instead. Building on novel methods for preference learning, our general goal is therefore to equip the RL agent with qualitative policy models, such as ranking functions that sort the available actions from most to least promising, together with algorithms for learning such models from qualitative feedback. Concretely, we build on an existing method for approximate policy iteration based on roll-outs. Whereas that approach relies on classification methods for generalization and policy learning, we instead employ a specific type of preference learning method called label ranking. The advantages of our preference-based policy iteration method are illustrated by means of two case studies.
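To make the idea concrete, the following is a minimal, self-contained sketch of preference-based policy iteration on an assumed toy chain-world; it is not the authors' implementation, and all names (step, rollout, collect_preferences, learn_ranker, greedy_policy) are illustrative. Roll-out outcomes are used only to induce an ordering over actions, standing in for qualitative feedback, and a tabular win-count replaces the pairwise label ranker that the paper's method would learn.

import random
from itertools import combinations

N_STATES, ACTIONS, HORIZON, GOAL = 10, (-1, +1), 30, 9

def step(s, a):
    # Deterministic chain dynamics: move left/right, clamped to [0, N_STATES-1].
    return max(0, min(N_STATES - 1, s + a))

def rollout(s, a, policy):
    # Ordinal outcome of taking `a` in `s` and then following `policy`:
    # the number of steps needed to reach GOAL (fewer is better).
    s = step(s, a)
    for t in range(HORIZON):
        if s == GOAL:
            return t
        s = step(s, policy(s))
    return HORIZON  # goal not reached within the horizon

def collect_preferences(policy, n_rollouts=5):
    # For every state, roll out each action and keep only the *order* of
    # outcomes, i.e. triples (state, preferred_action, dominated_action);
    # the numeric scores are never exposed to the learner.
    prefs = []
    for s in range(N_STATES):
        score = {a: sum(rollout(s, a, policy) for _ in range(n_rollouts))
                 for a in ACTIONS}
        for a, b in combinations(ACTIONS, 2):
            if score[a] != score[b]:
                prefs.append((s, a, b) if score[a] < score[b] else (s, b, a))
    return prefs

def learn_ranker(prefs):
    # Tabular stand-in for a label ranker: count pairwise wins per action
    # in each state (a real label ranker would fit one classifier per
    # action pair and generalize across states via state features).
    wins = {s: {a: 0 for a in ACTIONS} for s in range(N_STATES)}
    for s, preferred, _ in prefs:
        wins[s][preferred] += 1
    return wins

def greedy_policy(wins):
    # Improved policy: in each state, take the top-ranked action,
    # breaking ties at random.
    return lambda s: max(ACTIONS, key=lambda a: (wins[s][a], random.random()))

policy = lambda s: random.choice(ACTIONS)  # start from a uniformly random policy
for _ in range(3):                          # a few preference-based iterations
    policy = greedy_policy(learn_ranker(collect_preferences(policy)))

print([policy(s) for s in range(N_STATES)])  # mostly +1: "move toward the goal"

Note the design point this sketch is meant to surface: the policy learner only ever consumes comparisons between roll-out outcomes, so the same loop would work if the environment returned purely qualitative signals that can be ordered but not added or averaged, which is precisely the setting that motivates the preference-based formulation.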