Collaborative filtering with collective training

  • Authors:
  • Yong Ge (Rutgers University, Newark, NJ, USA); Hui Xiong (Rutgers University, Newark, NJ, USA); Alexander Tuzhilin (Leonard N. Stern School of Business, NYU, New York City, NY, USA); Qi Liu (University of Science and Technology of China, Hefei, China)

  • Venue:
  • Proceedings of the fifth ACM conference on Recommender systems
  • Year:
  • 2011

Abstract

Rating sparsity is a critical issue for collaborative filtering. For example, the well-known Netflix movie rating data contain ratings for only about 1% of user-item pairs. One way to address this rating sparsity problem is to develop more effective methods for training rating prediction models. To this end, in this paper, we introduce a collective training paradigm to automatically and effectively augment the training ratings. Essentially, the collective training paradigm builds multiple different Collaborative Filtering (CF) models separately and augments the training ratings of each CF model with the partial predictions of the other CF models for unknown ratings. Along this line, we develop two algorithms based on collective training, Bi-CF and Tri-CF, which collectively train two and three different CF models, respectively, by iteratively augmenting the training ratings of each individual model. We also design different criteria to guide the selection of augmented training ratings for Bi-CF and Tri-CF. Finally, the experimental results show that the Bi-CF and Tri-CF algorithms can significantly outperform baseline methods, such as neighborhood-based and SVD-based models.
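The Bi-CF loop described in the abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: simple user-mean and item-mean predictors stand in for the two CF models, and a plain support-count score stands in for the selection criteria the paper designs.

```python
import numpy as np

def predict_user_mean(R, mask):
    """Predict every cell as the mean of that user's observed ratings."""
    counts = mask.sum(axis=1, keepdims=True)
    sums = (R * mask).sum(axis=1, keepdims=True)
    global_mean = (R * mask).sum() / mask.sum()
    means = np.where(counts > 0, sums / np.maximum(counts, 1), global_mean)
    return np.broadcast_to(means, R.shape).copy()

def predict_item_mean(R, mask):
    """Predict every cell as the mean of that item's observed ratings."""
    return predict_user_mean(R.T, mask.T).T

def bi_cf(R, mask, n_iter=3, k=2):
    """Sketch of collective training with two CF models (Bi-CF).

    Each round, each model's most 'confident' predictions for unknown
    ratings are added to the *other* model's training ratings.  The
    confidence score used here (support count of observed ratings) is
    a placeholder assumption, not the paper's criterion.
    """
    R_a, mask_a = R.copy(), mask.copy()  # training ratings of model A
    R_b, mask_b = R.copy(), mask.copy()  # training ratings of model B
    for _ in range(n_iter):
        pred_a = predict_user_mean(R_a, mask_a)
        pred_b = predict_item_mean(R_b, mask_b)
        conf_a = np.broadcast_to(mask_a.sum(axis=1, keepdims=True), R.shape)
        conf_b = np.broadcast_to(mask_b.sum(axis=0, keepdims=True), R.shape)
        # Cross-augmentation: A's confident predictions feed B, and vice versa.
        for pred, conf, R_t, mask_t in ((pred_a, conf_a, R_b, mask_b),
                                        (pred_b, conf_b, R_a, mask_a)):
            scores = np.where(mask_t, -1.0, conf)  # consider unknown cells only
            top = np.argsort(scores, axis=None)[::-1][:k]
            for r, c in zip(*np.unravel_index(top, R.shape)):
                if scores[r, c] >= 0:              # skip already-known cells
                    R_t[r, c] = pred[r, c]
                    mask_t[r, c] = True
    # Final prediction: average the two collectively trained models.
    return (predict_user_mean(R_a, mask_a) + predict_item_mean(R_b, mask_b)) / 2
```

In practice the two base learners would be the stronger models the paper evaluates (e.g. neighborhood-based and SVD-based CF), and Tri-CF extends the same cross-augmentation to three models.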