Latent factor models with additive and hierarchically-smoothed user preferences

  • Authors:
  • Amr Ahmed;Bhargav Kanagal;Sandeep Pandey;Vanja Josifovski;Lluis Garcia Pueyo;Jeff Yuan

  • Affiliations:
  • Google Inc., Mountain View, USA;Google Inc., Mountain View, USA;Twitter, San Francisco, USA;Google Inc., Mountain View, USA;Google Inc., Mountain View, USA;Yahoo! Research, Sunnyvale, USA

  • Venue:
  • Proceedings of the sixth ACM international conference on Web search and data mining
  • Year:
  • 2013

Abstract

Items in recommender systems are usually associated with annotated attributes: e.g., brand and price for products, or publishing agency for news articles. Such attributes are highly informative and should be exploited for accurate recommendation. While learning a user preference model over these attributes yields an interpretable recommender system and can handle the cold-start problem, it suffers from two major drawbacks: data sparsity and the inability to model random effects. Latent-factor collaborative filtering models, on the other hand, have shown great promise in recommender systems, but their performance on rare items is poor. In this paper we propose a novel model, LFUM, which provides the advantages of both approaches. We learn user preferences over the attributes using a personalized Bayesian hierarchical model that additively combines a globally learned preference model with user-specific preferences. To combat data sparsity, we smooth these preferences over the item taxonomy using an efficient forward-filtering and backward-smoothing inference algorithm. Our inference algorithms handle both discrete attributes (e.g., item brands) and continuous attributes (e.g., item prices). We combine the user preferences with latent-factor models and train the resulting collaborative filtering system end-to-end using the BPR ranking algorithm. In an extensive experimental analysis, we show that the proposed model outperforms several commonly used baselines, and we carry out an ablation study quantifying the benefit of each component of the model.
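
To make the model structure described in the abstract concrete, the sketch below combines a latent-factor score with an additive (global plus user-specific) preference over a single discrete attribute and trains it with BPR-style pairwise updates. This is a minimal illustrative sketch, not the authors' implementation: the toy data, the single `brand` attribute, and the function names (`score`, `bpr_step`) are assumptions, and the hierarchical taxonomy smoothing is omitted.

```python
# Minimal sketch (not the authors' code) of an LFUM-style scorer: a latent-factor
# term plus an additive user preference over one discrete attribute ("brand"),
# trained with BPR-style pairwise updates. Toy sizes, the attribute, and all
# names are assumptions; the taxonomy-smoothing component is omitted.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, n_brands, k = 3, 4, 2, 2

# Latent factors (collaborative-filtering component).
U = rng.normal(scale=0.1, size=(n_users, k))
V = rng.normal(scale=0.1, size=(n_items, k))

# Additive attribute preferences: global weights shared by all users,
# plus user-specific offsets (the "additive model" in the abstract).
w_global = np.zeros(n_brands)
w_user = np.zeros((n_users, n_brands))
item_brand = np.array([0, 1, 0, 1])  # brand id of each item

def score(u, i):
    """Latent-factor score plus (global + user) preference for item i's brand."""
    return U[u] @ V[i] + w_global[item_brand[i]] + w_user[u, item_brand[i]]

def bpr_step(u, i_pos, i_neg, lr=0.05, reg=0.01):
    """One SGD step on the BPR objective: rank i_pos above i_neg for user u."""
    x = score(u, i_pos) - score(u, i_neg)
    g = 1.0 / (1.0 + np.exp(x))        # gradient weight: 1 - sigmoid(x)
    Uu, Vp, Vn = U[u].copy(), V[i_pos].copy(), V[i_neg].copy()
    U[u]     += lr * (g * (Vp - Vn) - reg * Uu)
    V[i_pos] += lr * ( g * Uu - reg * Vp)
    V[i_neg] += lr * (-g * Uu - reg * Vn)
    b_pos, b_neg = item_brand[i_pos], item_brand[i_neg]
    if b_pos != b_neg:                 # brand weights only move when brands differ
        for b, sgn in ((b_pos, 1.0), (b_neg, -1.0)):
            w_global[b]  += lr * (sgn * g - reg * w_global[b])
            w_user[u, b] += lr * (sgn * g - reg * w_user[u, b])

# Toy training loop over uniformly sampled (user, preferred, non-preferred) triples.
for _ in range(200):
    u = int(rng.integers(n_users))
    i_pos, i_neg = rng.choice(n_items, size=2, replace=False)
    bpr_step(u, i_pos, i_neg)
```

In the full model, the user-specific attribute weights would be smoothed toward their ancestors in the item taxonomy via the forward-filtering/backward-smoothing algorithm, rather than regularized independently toward zero as in this toy version.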