Lexicographic preference models (LPMs) are an intuitive representation that captures many real-world preferences exhibited by human decision makers. Previous algorithms for learning LPMs produce a single "best guess" LPM that is consistent with the observations. Our approach is more democratic: rather than committing to a single LPM, we approximate the target preference using the votes of a collection of consistent LPMs. We present two variations of this method, variable voting and model voting, and show empirically that these democratic algorithms outperform existing methods. We give versions of both algorithms for the case where the preferred value of each attribute is known and for the case where it is unknown. We also introduce an intuitive yet powerful form of background knowledge that prunes the set of candidate LPMs, show how it can be incorporated into variable and model voting, and demonstrate that doing so significantly improves performance, especially when the number of observations is small.
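The core idea, voting over all LPMs consistent with the observed comparisons rather than committing to one, can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes binary attributes whose preferred value is 1, represents an LPM as an importance ordering of attribute indices, and implements only the model-voting variant by brute-force enumeration (the helper names `lpm_prefers`, `consistent_lpms`, and `model_vote` are hypothetical).

```python
from itertools import permutations

def lpm_prefers(order, a, b):
    # Under an LPM given by an importance ordering of attribute indices,
    # a is preferred to b iff a wins on the most important attribute
    # where they differ (assuming value 1 is preferred for every attribute).
    for i in order:
        if a[i] != b[i]:
            return a[i] > b[i]
    return False  # outcomes identical: no strict preference

def consistent_lpms(n_attrs, observations):
    # observations: list of (preferred, dispreferred) outcome pairs.
    # Return every attribute ordering that agrees with all observations.
    return [order for order in permutations(range(n_attrs))
            if all(lpm_prefers(order, p, q) for p, q in observations)]

def model_vote(models, a, b):
    # Model voting: each consistent LPM casts one vote; predict that
    # a is preferred to b iff a strict majority of models say so.
    votes = sum(1 if lpm_prefers(m, a, b) else -1 for m in models)
    return votes > 0

# Example: observing (1,0,0) preferred to (0,1,1) forces attribute 0
# to be most important, leaving two consistent orderings.
models = consistent_lpms(3, [((1, 0, 0), (0, 1, 1))])
```

Brute-force enumeration over all n! orderings is only feasible for small attribute counts; the point here is the democratic prediction rule, not efficiency.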