Probabilistic matrix factorization (PMF) methods have shown great promise in collaborative filtering. In this paper, we consider several variants and generalizations of the PMF framework, inspired by three broad questions: Are the prior distributions used in existing PMF models suitable, or can one obtain better predictive performance with different priors? Are there suitable extensions that leverage side information? Are there benefits to taking row and column biases into account? We develop new families of PMF models to address these questions, along with efficient approximate inference algorithms for learning and prediction. Through extensive experiments on movie-recommendation datasets, we show that simpler models that directly capture correlations among latent factors can outperform existing PMF models, that side information can improve prediction accuracy, and that accounting for row/column biases leads to further gains in predictive performance.
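To make the row/column-bias idea concrete, the following is a minimal sketch of matrix factorization with a global offset plus row and column bias terms, fit by plain gradient ascent on the regularized squared-error objective. All names, hyperparameters, and the fitting procedure are illustrative assumptions; the paper's actual models and approximate inference algorithms are not reproduced here.

```python
import numpy as np

def fit_pmf(R, mask, k=5, lam=0.05, lr=0.01, epochs=2000, seed=0):
    """Fit R[i,j] ~ mu + a[i] + b[j] + U[i] @ V[j] on entries where mask == 1.

    A hypothetical minimal implementation: MAP-style point estimation with
    Gaussian priors (L2 regularization), not the paper's inference scheme.
    """
    rng = np.random.default_rng(seed)
    n, m = R.shape
    U = 0.1 * rng.standard_normal((n, k))   # row (e.g., user) latent factors
    V = 0.1 * rng.standard_normal((m, k))   # column (e.g., movie) latent factors
    a = np.zeros(n)                          # row biases
    b = np.zeros(m)                          # column biases
    mu = R[mask == 1].mean()                 # global offset
    for _ in range(epochs):
        pred = mu + a[:, None] + b[None, :] + U @ V.T
        err = mask * (R - pred)              # residuals on observed entries only
        # Gradient ascent on the log-posterior (equivalently, descent on
        # regularized squared error), updating all parameters jointly.
        U += lr * (err @ V - lam * U)
        V += lr * (err.T @ U - lam * V)
        a += lr * (err.sum(axis=1) - lam * a)
        b += lr * (err.sum(axis=0) - lam * b)
    return mu, a, b, U, V

def predict(mu, a, b, U, V):
    """Reconstruct the full ratings matrix from the fitted parameters."""
    return mu + a[:, None] + b[None, :] + U @ V.T
```

The bias terms absorb per-row and per-column rating offsets (e.g., generous users, popular movies), leaving the latent factors to model the remaining interaction structure; dropping `a` and `b` recovers plain PMF.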