Tendency correlation analysis for direct optimization of evaluation measures in information retrieval

  • Authors:
  • Yin He; Tie-Yan Liu

  • Affiliations:
  • School of Computer Science and Technology, University of Science and Technology of China, Hefei, Anhui, People's Republic of China; Microsoft Research Asia, 4F, Sigma Center, Beijing, People's Republic of China

  • Venue:
  • Information Retrieval
  • Year:
  • 2010

Abstract

Direct optimization of evaluation measures has become an important branch of learning to rank for information retrieval (IR). Since IR evaluation measures are difficult to optimize due to their non-continuity and non-differentiability, most direct optimization methods instead optimize surrogate functions, which we call surrogate measures. A critical issue regarding these methods is whether optimizing the surrogate measures really leads to the optimization of the original IR evaluation measures. In this work, we perform a formal analysis of this issue. We propose a concept named "tendency correlation" to describe the relationship between a surrogate measure and its corresponding IR evaluation measure. We show that when a surrogate measure has arbitrarily strong tendency correlation with an IR evaluation measure, optimizing the surrogate leads to effective optimization of the original IR evaluation measure. We then analyze the tendency correlations of the surrogate measures optimized in a number of direct optimization methods. We prove that the surrogate measures in SoftRank and ApproxRank can have arbitrarily strong tendency correlation with the original IR evaluation measures, regardless of the data distribution, when certain parameters are appropriately set. In contrast, the surrogate measures in SVM-MAP, DORM-NDCG, PermuRank-MAP, and SVM-NDCG cannot have arbitrarily strong tendency correlation with the original IR evaluation measures on certain data distributions. Therefore, SoftRank and ApproxRank are theoretically sounder than SVM-MAP, DORM-NDCG, PermuRank-MAP, and SVM-NDCG, and are expected to yield better ranking performance. Our theoretical findings can explain the experimental results observed on public benchmark datasets.
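
To make the surrogate-measure idea concrete, the sketch below (not taken from the paper; the function names, the gain and discount choices, and the smoothing parameter alpha are illustrative assumptions) contrasts the exact, sort-based NDCG with an ApproxRank-style smoothed version in which each document's rank is approximated by sigmoids of score differences. The exact measure is piecewise constant in the scores, while the surrogate is differentiable and, as alpha grows, tracks the exact value ever more closely; this informally mirrors the notion of a surrogate whose tendency correlation with the original measure can be made arbitrarily strong by setting a parameter appropriately.

```python
import numpy as np

def dcg(rels_in_rank_order):
    """DCG with gain (2^rel - 1) and log2 position discount."""
    rels = np.asarray(rels_in_rank_order, dtype=float)
    positions = np.arange(1, len(rels) + 1)
    return np.sum((2.0 ** rels - 1.0) / np.log2(positions + 1.0))

def ndcg(scores, rels):
    """Exact NDCG: depends on the scores only through the induced ranking,
    so it is non-continuous and non-differentiable in the scores."""
    scores, rels = np.asarray(scores, float), np.asarray(rels, float)
    order = np.argsort(-scores)
    idcg = dcg(np.sort(rels)[::-1])
    return dcg(rels[order]) / idcg if idcg > 0 else 0.0

def approx_ndcg(scores, rels, alpha=10.0):
    """Smoothed surrogate in the spirit of ApproxRank: each document's rank is
    approximated by a sum of sigmoids of score differences, so the measure is
    differentiable in the scores; larger alpha gives a tighter approximation."""
    scores, rels = np.asarray(scores, float), np.asarray(rels, float)
    diff = scores[None, :] - scores[:, None]       # diff[i, j] = s_j - s_i
    soft_gt = 1.0 / (1.0 + np.exp(-alpha * diff))  # soft indicator that doc j is ranked above doc i
    approx_rank = 1.0 + soft_gt.sum(axis=1) - np.diag(soft_gt)  # drop self-comparisons (0.5 each)
    idcg = dcg(np.sort(rels)[::-1])
    gains = (2.0 ** rels - 1.0) / np.log2(approx_rank + 1.0)
    return gains.sum() / idcg if idcg > 0 else 0.0

scores = [2.3, 0.7, 1.5, -0.2]
rels = [3, 0, 2, 1]
print(ndcg(scores, rels))                    # exact value
print(approx_ndcg(scores, rels, alpha=1.0))  # loose surrogate
print(approx_ndcg(scores, rels, alpha=50.0)) # nearly matches the exact value
```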