Sparse Learning-to-Rank via an Efficient Primal-Dual Algorithm

  • Authors:
Hanjiang Lai, Yan Pan, Cong Liu, Liang Lin, and Jie Wu

  • Affiliations:
Sun Yat-sen University, Guangzhou (Lai, Pan, Liu, Lin); Temple University, Philadelphia (Wu)

  • Venue:
  • IEEE Transactions on Computers
  • Year:
  • 2013

Abstract

Learning-to-rank for information retrieval has gained increasing interest in recent years. Inspired by the success of sparse models, we consider the problem of sparse learning-to-rank, where the learned ranking models are constrained to have only a few nonzero coefficients. We begin by formulating the sparse learning-to-rank problem as a convex optimization problem with a sparsity-inducing $\ell_1$ constraint. Since the $\ell_1$ constraint is nondifferentiable, the critical issue is how to solve the optimization problem efficiently. To address this issue, we propose a learning algorithm from the primal-dual perspective. Furthermore, we prove that, after at most $O(\frac{1}{\epsilon})$ iterations, the proposed algorithm is guaranteed to obtain an $\epsilon$-accurate solution. This convergence rate is better than that of the popular subgradient descent algorithm, i.e., $O(\frac{1}{\epsilon^2})$. Empirical evaluation on several public benchmark data sets demonstrates the effectiveness of the proposed algorithm: 1) compared to methods that learn dense models, learning a ranking model with sparsity constraints significantly improves ranking accuracy; 2) compared to other methods for sparse learning-to-rank, the proposed algorithm tends to obtain sparser models and yields superior gains in both ranking accuracy and training time; 3) compared to several state-of-the-art algorithms, the ranking accuracies of the proposed algorithm are competitive and stable.
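
The paper's primal-dual solver is not reproduced here, but the setting it targets is an $\ell_1$-constrained problem of the form $\min_{w} L(w)$ subject to $\|w\|_1 \le s$, where $L$ is a convex ranking loss. As a minimal, hypothetical Python sketch of that setting, the following implements the standard Euclidean projection onto the $\ell_1$ ball (Duchi et al., 2008) inside a projected subgradient loop for a pairwise hinge ranking loss; note this is the $O(\frac{1}{\epsilon^2})$ subgradient baseline the abstract compares against, not the proposed $O(\frac{1}{\epsilon})$ primal-dual algorithm. All function names, step sizes, and the synthetic data are illustrative assumptions.

```python
import numpy as np

def project_l1_ball(v, radius):
    """Euclidean projection onto {w : ||w||_1 <= radius} (Duchi et al., 2008)."""
    if np.sum(np.abs(v)) <= radius:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]                      # magnitudes, descending
    cssv = np.cumsum(u)
    ks = np.arange(1, v.size + 1)
    rho = np.nonzero(u - (cssv - radius) / ks > 0)[0][-1]
    theta = (cssv[rho] - radius) / (rho + 1.0)        # soft-threshold level
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def pairwise_hinge_subgradient(w, diffs):
    """Subgradient of sum_i max(0, 1 - w . diffs[i]) over document pairs.

    Each row of `diffs` is (relevant doc features - irrelevant doc features).
    """
    active = diffs @ w < 1.0                          # margin-violating pairs
    return -diffs[active].sum(axis=0)

# Projected subgradient descent on synthetic pairwise data (illustrative only).
rng = np.random.default_rng(0)
diffs = rng.normal(size=(500, 100)) + 0.1             # mildly separable pairs
w = np.zeros(100)
for t in range(1, 1001):
    g = pairwise_hinge_subgradient(w, diffs)
    w = project_l1_ball(w - g / (50.0 * np.sqrt(t)), radius=3.0)
print("nonzero coefficients:", np.count_nonzero(w), "of", w.size)
```

The $\ell_1$ projection is what produces the sparsity: coordinates whose magnitude falls below the threshold $\theta$ are zeroed out exactly, so each iterate stays inside the ball with only a few nonzero coefficients.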