Applying the learning rate adaptation to the matrix factorization based collaborative filtering

  • Authors:
  • Xin Luo, Yunni Xia, Qingsheng Zhu

  • Affiliations:
  • College of Computer Science, Chongqing University, Chongqing 400044, China; Chongqing Key Laboratory of Software Theory & Technology, Chongqing 400044, China (all three authors)

  • Venue:
  • Knowledge-Based Systems
  • Year:
  • 2013

Abstract

Matrix Factorization (MF) based Collaborative Filtering (CF) has proven to be a highly accurate and scalable approach to recommender systems. In MF based CF, the learning rate is a key factor affecting both recommendation accuracy and convergence rate; however, this essential parameter is difficult to set, since the recommender must balance recommendation accuracy against convergence rate. In this work, we choose Regularized Matrix Factorization (RMF) based CF as the base model to study the effect of the learning rate in MF based CF, aiming to resolve the dilemma of learning rate tuning through learning rate adaptation. We first empirically validate the effect that changing the learning rate has on recommendation performance. We then integrate three sophisticated learning rate adaptation strategies into RMF: Deterministic Step Size Adaptation (DSSA), Incremental Delta-Bar-Delta (IDBD), and Stochastic Meta-Descent (SMD). Thereafter, by analyzing the characteristics of the parameter updates in RMF, we further propose Gradient Cosine Adaptation (GCA). Experimental results on five large public datasets demonstrate that with GCA, RMF maintains a good balance between accuracy and convergence rate, especially with small learning rate values.
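
The abstract describes RMF trained by stochastic gradient descent: for each observed rating r_ui, the model minimizes the squared error (r_ui - p_u^T q_i)^2 plus L2 regularization on the latent factors, and the learning rate governs how far each update steps along the negative gradient. As a minimal sketch under those standard assumptions, the Python fragment below pairs an RMF/SGD trainer with a cosine-of-successive-gradients heuristic in the spirit of GCA; the function name, the 0.5 adaptation gain, and all hyperparameter values are hypothetical choices for illustration, and the rule shown is not claimed to be the paper's exact GCA formula.

```python
import numpy as np

def rmf_sgd_cosine_lr(ratings, n_users, n_items, k=20, lr=0.005, reg=0.02, epochs=50):
    """Minimal RMF trained by SGD, with a cosine-of-successive-gradients
    learning-rate heuristic. Illustrative only: the adaptation rule, the
    0.5 gain, and all hyperparameters are assumptions, not the paper's GCA."""
    rng = np.random.default_rng(0)
    P = rng.normal(scale=0.1, size=(n_users, k))   # user latent factors
    Q = rng.normal(scale=0.1, size=(n_items, k))   # item latent factors
    prev_grad = None
    for _ in range(epochs):
        grad_sum = np.zeros(k)                     # aggregate gradient signal per epoch
        for u, i, r in ratings:                    # (user, item, rating) triples
            e = r - P[u] @ Q[i]                    # prediction error
            gp = -e * Q[i] + reg * P[u]            # gradient w.r.t. P[u]
            gq = -e * P[u] + reg * Q[i]            # gradient w.r.t. Q[i]
            P[u] -= lr * gp
            Q[i] -= lr * gq
            grad_sum += gp
        if prev_grad is not None:
            denom = np.linalg.norm(grad_sum) * np.linalg.norm(prev_grad)
            if denom > 0:
                cos = float(grad_sum @ prev_grad) / denom
                lr *= 1.0 + 0.5 * cos              # grow when gradients align, shrink when they oppose
        prev_grad = grad_sum.copy()
    return P, Q
```

The intuition matches the abstract's motivation: when successive gradients point in similar directions the step size can safely grow, and when they oppose each other it shrinks, which keeps a small initial learning rate from stalling convergence.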