Previous algorithms for learning lexicographic preference models (LPMs) produce a single "best guess" LPM that is consistent with the observations. Our approach is more democratic: rather than committing to one LPM, we approximate the target using the votes of a collection of consistent LPMs. We present two variations of this method, variable voting and model voting, and show empirically that these democratic algorithms outperform the existing methods. We also introduce an intuitive yet powerful learning bias that prunes some of the candidate LPMs. We show how this bias can be combined with variable and model voting, and demonstrate that it significantly improves the learning curve, especially when the number of observations is small.
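The model-voting idea can be illustrated with a small sketch. Note that this is an assumption-laden illustration, not the paper's algorithm: it assumes binary attributes where value 1 is always preferred, represents an LPM simply as an importance ordering of the attributes, enumerates all orderings consistent with the observed pairwise preferences, and predicts a new comparison by majority vote. The function names (`lpm_prefers`, `consistent_models`, `model_vote`) are hypothetical.

```python
from itertools import permutations

def lpm_prefers(order, x, y):
    """True if x is preferred to y under the LPM given by 'order'
    (attributes listed from most to least important; value 1 preferred).
    This representation is an assumption for illustration."""
    for i in order:
        if x[i] != y[i]:
            return x[i] > y[i]
    return False  # identical vectors: no strict preference

def consistent_models(n_attrs, observations):
    """All attribute orderings consistent with the observations,
    where each observation is a pair (better, worse)."""
    return [order for order in permutations(range(n_attrs))
            if all(lpm_prefers(order, b, w) for b, w in observations)]

def model_vote(models, x, y):
    """Predict 'x preferred to y' by majority vote of consistent LPMs."""
    votes = sum(1 if lpm_prefers(m, x, y) else -1 for m in models)
    return votes > 0

# One observation over 3 binary attributes leaves several consistent
# orderings; the vote aggregates them instead of picking one.
models = consistent_models(3, [((1, 0, 1), (0, 1, 1))])
print(len(models), model_vote(models, (1, 1, 0), (0, 1, 1)))
```

Brute-force enumeration of all n! orderings is only feasible for toy attribute counts; it serves here just to make the "collection of consistent LPMs" concrete.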