OHSUMED: an interactive retrieval evaluation and new large test collection for research
SIGIR '94 Proceedings of the 17th annual international ACM SIGIR conference on Research and development in information retrieval
A decision-theoretic generalization of on-line learning and an application to boosting
Journal of Computer and System Sciences - Special issue on the 26th annual ACM Symposium on Theory of Computing (STOC '94), May 23–25, 1994, and the second annual European Conference on Computational Learning Theory (EuroCOLT '95), March 13–15, 1995
A re-examination of text categorization methods
Proceedings of the 22nd annual international ACM SIGIR conference on Research and development in information retrieval
Modern Information Retrieval
Cumulated gain-based evaluation of IR techniques
ACM Transactions on Information Systems (TOIS)
Optimizing search engines using clickthrough data
Proceedings of the eighth ACM SIGKDD international conference on Knowledge discovery and data mining
An efficient boosting algorithm for combining preferences
The Journal of Machine Learning Research
Discriminative models for information retrieval
Proceedings of the 27th annual international ACM SIGIR conference on Research and development in information retrieval
Learning to rank using gradient descent
ICML '05 Proceedings of the 22nd international conference on Machine learning
Adapting ranking SVM to document retrieval
SIGIR '06 Proceedings of the 29th annual international ACM SIGIR conference on Research and development in information retrieval
Learning to rank: from pairwise approach to listwise approach
Proceedings of the 24th international conference on Machine learning
A support vector method for optimizing average precision
SIGIR '07 Proceedings of the 30th annual international ACM SIGIR conference on Research and development in information retrieval
Ranking with multiple hyperplanes
SIGIR '07 Proceedings of the 30th annual international ACM SIGIR conference on Research and development in information retrieval
AdaRank: a boosting algorithm for information retrieval
SIGIR '07 Proceedings of the 30th annual international ACM SIGIR conference on Research and development in information retrieval
SoftRank: optimizing non-smooth rank metrics
WSDM '08 Proceedings of the 2008 International Conference on Web Search and Data Mining
Knowledge-based image retrieval system
Knowledge-Based Systems
Listwise approach to learning to rank: theory and algorithm
Proceedings of the 25th international conference on Machine learning
Learning filtering rulesets for ranking refinement in relevance feedback
Knowledge-Based Systems
Aggregating preference ranking with fuzzy Data Envelopment Analysis
Knowledge-Based Systems
A novel image retrieval model based on the most relevant features
Knowledge-Based Systems
A novel two-level nearest neighbor classification algorithm using an adaptive distance metric
Knowledge-Based Systems
Probabilistic outputs for twin support vector machines
Knowledge-Based Systems
Statistical cross-language Web content quality assessment
Knowledge-Based Systems
An adaptive learning to rank algorithm: Learning automata approach
Decision Support Systems
Efficient gradient descent algorithm for sparse models with application in learning-to-rank
Knowledge-Based Systems
The problem of "learning to rank" is a popular research topic in the Information Retrieval (IR) and machine learning communities. Some existing listwise methods, such as AdaRank, directly use IR measures as performance functions to quantify how well a ranking function predicts rankings. However, IR measures account only for document ranks; they do not consider how well the algorithm predicts the relevance scores of documents. Such methods therefore do not make the best use of the available prior knowledge and may lead to suboptimal performance. Hence, we conduct research that combines document ranks with relevance scores. We propose a novel performance function that encodes the relevance scores, and we further define performance functions that combine it with MAP and NDCG, respectively. Experimental results on benchmark data collections show that our methods significantly outperform the state-of-the-art AdaRank baselines.
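The combination the abstract describes can be illustrated with a small sketch. The NDCG computation below follows the standard definition; the relevance-score term and the convex combination weight `alpha` are assumptions for illustration, since the abstract does not specify the exact form of the proposed performance function:

```python
import math

def ndcg(relevances, k=None):
    """NDCG over a ranked list of graded relevance labels (standard definition)."""
    def dcg(rels):
        return sum((2 ** r - 1) / math.log2(i + 2) for i, r in enumerate(rels))
    rels = relevances[:k] if k else relevances
    ideal = sorted(relevances, reverse=True)
    ideal = ideal[:k] if k else ideal
    ideal_dcg = dcg(ideal)
    return dcg(rels) / ideal_dcg if ideal_dcg > 0 else 0.0

def score_fit(predicted, true, r_max=2):
    """Hypothetical relevance-score term: 1 minus mean normalized squared error
    between predicted and true relevance scores (an assumed encoding)."""
    err = sum((p - t) ** 2 for p, t in zip(predicted, true)) / (len(true) * r_max ** 2)
    return 1.0 - err

def combined_performance(ranked_true, predicted, alpha=0.5):
    """Assumed convex combination of rank quality (NDCG) and score fit."""
    return alpha * ndcg(ranked_true) + (1 - alpha) * score_fit(predicted, ranked_true)
```

A listwise learner such as AdaRank could plug `combined_performance` in place of plain NDCG or MAP when weighting weak rankers, so that both the induced ordering and the predicted relevance scores contribute to the performance signal.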