Sample selection for MCMC-based recommender systems

  • Authors:
  • Thierry Silbermann, Immanuel Bayer, Steffen Rendle

  • Affiliations:
  • University of Konstanz, Konstanz, Germany (all authors)

  • Venue:
  • Proceedings of the 7th ACM Conference on Recommender Systems (RecSys)
  • Year:
  • 2013

Abstract

Bayesian inference with Markov Chain Monte Carlo (MCMC) has been shown to provide high prediction quality in recommender systems. The advantage over learning methods such as coordinate descent / alternating least squares (ALS) or (stochastic) gradient descent (SGD) is that MCMC takes uncertainty into account; moreover, MCMC can easily integrate priors to learn regularization values. For factorization models, MCMC inference can be done with efficient Gibbs samplers. However, MCMC algorithms are not point estimators: they generate a chain of models, and the whole chain is used to compute predictions. For large-scale models such as factorization methods with millions or billions of model parameters, saving the whole chain of models is very storage intensive and can even become infeasible in practice. In this paper, we address this problem and show how a small subset of the chain of models can approximate the predictive distribution well. We exploit the fact that models from the chain are correlated and propose online selection techniques that store only a small subset of the models. We perform an empirical analysis on the large-scale Netflix dataset with several Bayesian factorization models, including matrix factorization and SVD++. We show that the proposed selection techniques approximate the predictions well with only a small subset of model samples.
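The abstract does not specify the paper's selection techniques in detail, but the core idea — that a small subset of a correlated MCMC chain can approximate the full-chain prediction average — can be illustrated with a minimal toy sketch. Below, a simulated AR(1) sequence stands in for the predictions of successive Gibbs-sampled models (the chain and its parameters are hypothetical, not from the paper), and simple thinning plays the role of an online selection rule that stores every k-th model:

```python
import random

def mcmc_prediction_chain(n_samples=2000, rho=0.9, seed=0):
    """Toy stand-in for a chain of per-item predictions from successive
    Gibbs-sampled models: an AR(1) process, so consecutive samples are
    correlated, just as models from adjacent MCMC iterations are."""
    rng = random.Random(seed)
    preds = []
    x = 0.0
    for _ in range(n_samples):
        x = rho * x + rng.gauss(0.0, 1.0)
        preds.append(3.5 + 0.1 * x)  # fluctuates around a "true" rating of 3.5
    return preds

def thinned_average(preds, keep_every):
    """Keep only every k-th model's prediction (a simple online selection
    rule) and average the stored subset."""
    subset = preds[::keep_every]
    return sum(subset) / len(subset), len(subset)

preds = mcmc_prediction_chain()
full_avg = sum(preds) / len(preds)          # average over the whole chain
thin_avg, kept = thinned_average(preds, keep_every=20)

print(f"full chain ({len(preds)} models): {full_avg:.4f}")
print(f"thinned subset ({kept} models):   {thin_avg:.4f}")
```

Because successive models are correlated, the thinned subset carries almost as much information as the full chain, so the subset average tracks the full-chain average at a fraction of the storage cost — the effect the paper's selection techniques exploit at scale.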