Collaborative filtering techniques rely on aggregated user preference data to make personalized predictions. In many cases, users are reluctant to explicitly express their preferences, and many recommender systems have to infer them from implicit user behaviors, such as clicking a link on a webpage or playing a music track. Clicks and plays are reliable indicators of the items a user liked (i.e., positive training examples), but the items a user did not like (negative training examples) are not directly observed. Previous approaches either pick negative training samples at random from unseen items or build heuristics into the learning model, leading to biased solutions and prolonged training. In this paper, we propose to dynamically choose negative training samples from the ranked list produced by the current prediction model and to update our model iteratively. Experiments conducted on three large-scale datasets show that our approach not only reduces the training time, but also leads to significant performance gains.
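The core idea above can be illustrated with a minimal sketch: instead of drawing a negative item uniformly from the unseen items, draw several candidates, score them with the current factor model, and keep the one the model currently ranks highest, then apply a standard BPR-style pairwise update. All names (`sample_negative`, `bpr_step`), the toy dimensions, and the candidate-pool size are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

n_users, n_items, k = 50, 200, 8
U = rng.normal(scale=0.1, size=(n_users, k))   # user latent factors
V = rng.normal(scale=0.1, size=(n_items, k))   # item latent factors

# Toy implicit feedback: each user has a handful of observed (positive) items.
positives = {u: set(rng.choice(n_items, size=5, replace=False))
             for u in range(n_users)}

def sample_negative(u, n_candidates=10):
    """Dynamic negative sampling (sketch): draw several unseen candidates
    and keep the one the *current* model scores highest, i.e. the item
    most likely to be mis-ranked above the positives."""
    while True:
        cands = [i for i in rng.integers(0, n_items, size=n_candidates)
                 if i not in positives[u]]
        if cands:  # retry in the rare case every candidate was a positive
            break
    scores = V[cands] @ U[u]
    return cands[int(np.argmax(scores))]

def bpr_step(u, i, j, lr=0.05, reg=0.01):
    """One BPR-style pairwise SGD update pushing item i above item j for user u."""
    pu, vi, vj = U[u].copy(), V[i].copy(), V[j].copy()
    g = 1.0 / (1.0 + np.exp(pu @ (vi - vj)))   # gradient weight: sigmoid(-x_uij)
    U[u] += lr * (g * (vi - vj) - reg * pu)
    V[i] += lr * (g * pu - reg * vi)
    V[j] += lr * (-g * pu - reg * vj)

# Iterate: the negative sampler always reflects the latest model parameters.
for _ in range(2000):
    u = int(rng.integers(n_users))
    i = int(rng.choice(list(positives[u])))    # observed positive item
    j = sample_negative(u)                     # model-dependent hard negative
    bpr_step(u, i, j)
```

The sampler re-scores candidates with the up-to-date factors on every step, which is what makes the sampling "dynamic": early in training it behaves like uniform sampling, while later it concentrates on the informative negatives near the top of the ranked list.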