Bayesian approaches to preference learning using Gaussian processes (GPs) are attractive because they explicitly model uncertainty in users' latent utility functions; unfortunately, existing techniques have cubic time complexity in the number of users, which renders this approach intractable for collaborative preference learning over a large user base. Exploiting the observation that user populations often decompose into communities of shared preferences, we model user preferences as an infinite Dirichlet process (DP) mixture of communities and learn (a) the expected number of preference communities represented in the data, (b) a GP-based preference model over items tailored to each community, and (c) the mixture weights representing each user's fraction of community membership. This yields a learning and inference process that scales linearly in the number of users rather than cubically, and it additionally supports analysis of individual community preferences and their associated members. We evaluate our approach on a variety of preference data sources, including data collected via Amazon Mechanical Turk, showing that our method is more scalable than, and as accurate as, previous GP-based preference learning work.
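To make the generative structure behind components (a)–(c) concrete, here is a minimal sketch, not the paper's actual model or inference algorithm: all sizes, the kernel, the noise scale, and the truncation level T are illustrative assumptions. It uses a truncated stick-breaking construction of the DP over communities, one GP-distributed utility function over items per community, and a noisy pairwise preference likelihood.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative problem sizes (assumptions, not from the paper).
n_items, n_users, n_obs_per_user = 20, 50, 15
alpha = 1.0          # DP concentration parameter
T = 10               # truncation level for the stick-breaking construction

# Item features and a squared-exponential kernel over items.
X = rng.normal(size=(n_items, 2))

def rbf_kernel(A, B, lengthscale=1.0, variance=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

K = rbf_kernel(X, X) + 1e-6 * np.eye(n_items)  # jitter for stability

# (a) Stick-breaking weights of a truncated DP: how much mass each
# preference community carries.
v = rng.beta(1.0, alpha, size=T)
sticks = np.concatenate([[1.0], np.cumprod(1.0 - v[:-1])])
pi = v * sticks

# (b) One GP-distributed latent utility function over items per community.
community_utils = rng.multivariate_normal(np.zeros(n_items), K, size=T)

# (c) Community membership per user; a hard assignment here for brevity
# (the model above gives each user soft mixture weights over communities).
z = rng.choice(T, size=n_users, p=pi / pi.sum())

def sample_preference(user, i, j, noise=0.1):
    """Sample a pairwise response: does `user` prefer item i to item j?"""
    u = community_utils[z[user]]
    return (u[i] - u[j] + noise * rng.normal()) > 0

# Generate pairwise preference observations (user, i, j, prefers_i).
data = []
for user in range(n_users):
    for _ in range(n_obs_per_user):
        i, j = rng.choice(n_items, size=2, replace=False)
        data.append((user, i, j, sample_preference(user, i, j)))

print(f"communities with weight > 1%: {(pi > 0.01).sum()}")
print("example observation (user, i, j, prefers_i):", data[0])
```

The sketch also hints at the scalability argument: each GP is placed over items within a community rather than jointly over all users, so users enter the model only through their community memberships, giving cost linear in the number of users instead of cubic.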