Improving Maximum Margin Matrix Factorization

  • Authors:
  • Markus Weimer; Alexandros Karatzoglou; Alex Smola

  • Affiliations:
  • Technische Universität Darmstadt, Germany; INSA de Rouen, LITIS, France; NICTA, Canberra, Australia 2601

  • Venue:
  • ECML PKDD '08 Proceedings of the 2008 European Conference on Machine Learning and Knowledge Discovery in Databases - Part I
  • Year:
  • 2008

Abstract

Maximum Margin Matrix Factorization (MMMF) has been proposed as a learning approach to the task of collaborative filtering, with promising results. In our recent paper [2], we proposed to extend the general MMMF framework to allow for structured (ranking) losses in addition to the squared error loss.

In this paper, we introduce a novel algorithm to compute the ordinal regression ranking loss which is significantly faster than the state of the art. In addition, we propose several extensions to the MMMF model. We introduce offset terms to cater for user and item biases: users exhibit vastly different rating frequencies, ranging from a single rating per user to thousands of them, and similarly, some items receive thousands of ratings while others are rated only once. We introduce an adaptive regularizer to allow for more complex models for those items and users with many ratings. Finally, we show equivalence between a recently introduced extension and a graph kernel approach described in [3]. Both aim at providing meaningful predictions for users with very little training data by virtue of the recommender graph.

We performed an evaluation of these extensions on two standard data sets: EachMovie and MovieLens. These experiments show that the introduced extensions do improve the predictive performance over the original MMMF formulation, even though we did not formally optimize the parameters.
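To make the abstract's model extensions concrete, the following is a minimal sketch of a matrix factorization with per-user and per-item offset (bias) terms and a rating-count-based adaptive regularizer, trained with plain SGD on the squared error loss. All names (dimensions, the learning rate, the count-weighted penalty) are illustrative assumptions, not the paper's actual algorithm, and the structured ranking losses are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, d = 30, 40, 5  # illustrative sizes, not from the paper

# Low-rank factors plus per-user / per-item offset (bias) terms.
U = rng.normal(scale=0.1, size=(n_users, d))
V = rng.normal(scale=0.1, size=(n_items, d))
b_user = np.zeros(n_users)
b_item = np.zeros(n_items)

# Sparse observed ratings as (user, item, rating) triples on a 1..5 scale.
obs = [(int(rng.integers(n_users)), int(rng.integers(n_items)),
        float(rng.integers(1, 6))) for _ in range(300)]

def predict(u, i):
    """Score = low-rank interaction plus user and item offsets."""
    return U[u] @ V[i] + b_user[u] + b_item[i]

# Adaptive regularizer (one plausible choice, an assumption here):
# weight each row's penalty by how many ratings it participates in,
# so heavily rated users/items may carry more complex models relative
# to their data than a single global penalty would allow.
user_counts = np.bincount([u for u, _, _ in obs], minlength=n_users)
item_counts = np.bincount([i for _, i, _ in obs], minlength=n_items)

def objective(lam=0.1):
    loss = sum((predict(u, i) - r) ** 2 for u, i, r in obs)
    reg = lam * (np.sum(user_counts * np.sum(U ** 2, axis=1))
                 + np.sum(item_counts * np.sum(V ** 2, axis=1)))
    return loss + reg

obj_before = objective()

# A few SGD sweeps on the squared-error variant.
lr, lam = 0.01, 0.1
for _ in range(50):
    for u, i, r in obs:
        err = predict(u, i) - r
        U[u] -= lr * (err * V[i] + lam * U[u])
        V[i] -= lr * (err * U[u] + lam * V[i])
        b_user[u] -= lr * err
        b_item[i] -= lr * err

obj_after = objective()
print(obj_before > obj_after)
```

The offsets absorb each user's and item's average rating level, so the low-rank term only has to model interactions; the count weighting in the penalty is one simple way to realize the abstract's idea of regularizing users and items with many ratings differently from those with few.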