Rating sparsity is a critical issue for collaborative filtering. For example, the well-known Netflix movie rating data contain ratings for only about 1% of all user-item pairs. One way to address this sparsity problem is to develop more effective methods for training rating prediction models. To this end, in this paper, we introduce a collective training paradigm that automatically and effectively augments the training ratings. Essentially, the collective training paradigm builds multiple different Collaborative Filtering (CF) models separately and augments the training ratings of each CF model with the partial predictions that the other CF models make for unknown ratings. Along this line, we develop two algorithms based on collective training, Bi-CF and Tri-CF, which collectively train two and three different CF models, respectively, by iteratively augmenting the training ratings of each individual model. We also design different criteria to guide the selection of augmented training ratings for Bi-CF and Tri-CF. Finally, the experimental results show that Bi-CF and Tri-CF significantly outperform baseline methods, such as neighborhood-based and SVD-based models.
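The collective training loop described above can be sketched in miniature. The following is a minimal, hypothetical illustration, not the paper's actual method: the two "CF models" are stand-in user-mean and item-mean predictors, and the selection criterion for augmented ratings is assumed to be agreement between the two models (the paper's own criteria are not specified in the abstract). Unknown ratings are encoded as `NaN`.

```python
import numpy as np

def predict_user_mean(R):
    """Toy CF model A: fill each unknown rating with the user's mean rating."""
    mu = np.nanmean(R, axis=1, keepdims=True)
    return np.where(np.isnan(R), mu, R)

def predict_item_mean(R):
    """Toy CF model B: fill each unknown rating with the item's mean rating."""
    mu = np.nanmean(R, axis=0, keepdims=True)
    return np.where(np.isnan(R), mu, R)

def bi_cf(R, n_iters=3, top_k=1):
    """Bi-CF-style collective training sketch (assumed details):
    each model's training ratings are augmented with the other model's
    predictions for the unknown pairs where the two models agree most
    closely -- a hypothetical stand-in for the paper's selection criteria."""
    R_a = R.copy()  # training ratings for model A (user-mean)
    R_b = R.copy()  # training ratings for model B (item-mean)
    for _ in range(n_iters):
        pred_a = predict_user_mean(R_a)
        pred_b = predict_item_mean(R_b)
        unknown = np.argwhere(np.isnan(R_a) & np.isnan(R_b))
        if len(unknown) == 0:
            break
        gap = np.abs(pred_a - pred_b)
        # select the top_k unknown pairs with the smallest disagreement
        order = np.argsort([gap[i, j] for i, j in unknown])[:top_k]
        for idx in order:
            i, j = unknown[idx]
            R_b[i, j] = pred_a[i, j]  # A's prediction augments B's ratings
            R_a[i, j] = pred_b[i, j]  # B's prediction augments A's ratings
    # combine the two collectively trained models for the final prediction
    return (predict_user_mean(R_a) + predict_item_mean(R_b)) / 2.0

# Tiny example: 3 users x 3 items, NaN marks an unknown rating.
R = np.array([[5.0, 3.0, np.nan],
              [4.0, np.nan, 1.0],
              [np.nan, 2.0, 1.0]])
pred = bi_cf(R)
```

A Tri-CF variant would run three models in the same fashion, with each model's training ratings augmented by the other two; the extensions to the paper's neighborhood-based and SVD-based models would replace the mean predictors here.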