Consider a typical recommendation problem. A company has historical records of products sold to a large customer base. These records may be compactly represented as a sparse customer-times-product ``who-bought-what'' binary matrix. Given this matrix, the goal is to build a model that provides recommendations for which products should be sold next to the existing customer base. Such problems may naturally be formulated as collaborative filtering tasks. However, this is a {\it one-class} setting, that is, the only known entries in the matrix are one-valued. If a customer has not bought a product yet, it does not imply that the customer has a low propensity to {\it potentially} be interested in that product. In the absence of entries explicitly labeled as negative examples, one may resort to considering unobserved customer-product pairs as either missing data or as surrogate negative instances. In this paper, we propose an approach to explicitly deal with this kind of ambiguity by instead treating the unobserved entries as optimization variables. These variables are optimized in conjunction with learning a weighted, low-rank non-negative matrix factorization (NMF) of the customer-product matrix, similar to how Transductive SVMs implement the low-density separation principle for semi-supervised learning. Experimental results show that our approach gives significantly better recommendations in comparison to various competing alternatives on one-class collaborative filtering tasks.
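To make the idea concrete, the following is a minimal NumPy sketch, not the paper's actual algorithm: it alternates multiplicative NMF updates with a re-optimization of the unobserved targets, where a hypothetical density penalty `lam` (an assumption, standing in for a low-density regularizer) shrinks unobserved targets toward zero rather than fixing them as negatives or discarding them.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "who-bought-what" binary matrix: 1 = observed purchase, 0 = unobserved.
X = np.array([[1, 0, 1, 0],
              [1, 1, 0, 0],
              [0, 0, 1, 1]], dtype=float)
observed = X == 1      # mask of known (one-valued) entries
k = 2                  # target rank of the factorization
lam = 0.1              # hypothetical density penalty on unobserved targets

# Non-negative factors W, H and the target matrix T; observed entries of T
# stay fixed at 1, unobserved entries are treated as optimization variables.
W = rng.random((X.shape[0], k))
H = rng.random((k, X.shape[1]))
T = X.copy()

for _ in range(200):
    # Multiplicative updates for a low-rank NMF of the current targets T.
    W *= (T @ H.T) / np.maximum(W @ (H @ H.T), 1e-9)
    H *= (W.T @ T) / np.maximum((W.T @ W) @ H, 1e-9)
    # Re-optimize the unobserved targets: soft-threshold the reconstruction
    # toward zero (proximal step for an L1 density penalty), clipped to [0, 1].
    WH = W @ H
    T[~observed] = np.clip(WH[~observed] - lam, 0.0, 1.0)

# Recommend, per customer, the unobserved product with the highest score.
scores = np.where(observed, -np.inf, W @ H)
recommendations = scores.argmax(axis=1)
```

Because observed entries are masked out of `scores`, each recommendation is guaranteed to be a product the customer has not yet bought; the shrinkage step is what distinguishes this from simply imputing the missing entries with the model's own predictions.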