Learning multiple metrics for ranking

  • Authors:
  • Xiubo Geng; Xue-Qi Cheng

  • Affiliations:
  • Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China (both authors)

  • Venue:
  • Frontiers of Computer Science in China
  • Year:
  • 2011

Abstract

Directly optimizing an information retrieval (IR) metric has become a hot topic in the field of learning to rank. Conventional wisdom holds that it is better to train with the same loss function that will be used for evaluation, but in practice we often observe the opposite. For example, directly optimizing average precision can achieve higher performance than directly optimizing precision@3 even when the ranking results are evaluated in terms of precision@3. This motivates us to combine multiple metrics when optimizing IR metrics; for simplicity, we study learning with two metrics. Since the learning process is usually conducted in a restricted hypothesis space, e.g., a linear hypothesis space, it is generally difficult to maximize both metrics at the same time. To tackle this problem, we propose a relaxed approach: we maximize one metric while incorporating the other as a constraint. By restricting the feasible hypothesis space, we obtain a more robust ranking model. Empirical results on the benchmark data set LETOR show that the relaxed approach is superior to the direct linear combination approach and also outperforms other baselines.
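
The relaxed approach described in the abstract amounts to a constrained optimization problem. The sketch below uses generic notation as an illustrative assumption, not the paper's own formulation: M_1 and M_2 stand for the two IR metrics, tau for a lower bound on the constrained metric, and f_w for a linear scoring function.

% Hedged sketch of the relaxed two-metric formulation: maximize one
% metric while keeping the other above a tolerance. M_1, M_2, tau, and
% f_w are assumed names for illustration, not the paper's notation.
\begin{align*}
  \max_{w}\quad & M_1(f_w) && \text{e.g., average precision}\\
  \text{s.t.}\quad & M_2(f_w) \ge \tau && \text{e.g., precision@3, with tolerance } \tau\\
  & f_w(x) = w^{\top} x && \text{restricted (linear) hypothesis space}
\end{align*}

Tightening tau shrinks the feasible hypothesis space, which is the mechanism the abstract credits for producing a more robust ranking model; a direct linear combination such as alpha * M_1 + (1 - alpha) * M_2, by contrast, trades the two metrics off without guaranteeing a floor on either.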