We investigate the problem of learning to rank for document retrieval from the perspective of learning with multiple objective functions. We present solutions to two open problems in learning to rank. First, we show how multiple measures can be combined into a single graded measure that can be learned. This solves the problem of learning from a 'scorecard' of measures by making such scorecards comparable; we show results where a standard web relevance measure (NDCG) is used as the top-tier measure and a relevance measure derived from click data is used as the second-tier measure. The second-tier measure improves significantly while the top-tier measure is left largely unchanged. Second, we note that the learning-to-rank problem can itself be viewed as changing as the ranking model learns: for example, early in learning, adjusting the rank of all documents can be advantageous, but later during training, it becomes more desirable to concentrate on correcting the top few documents for each query. We show how an analysis of these problems leads to an improved, iteration-dependent cost function that interpolates between a cost function that is more appropriate for early learning and one that is more appropriate for late-stage learning. The approach yields a significant improvement in accuracy with models of the same size. We investigate these ideas using LambdaMART, a state-of-the-art ranking algorithm.
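The tiered combination of measures described above can be sketched as follows. This is an illustrative reconstruction, not the paper's exact formulation: the function name and the weight `epsilon` are hypothetical, chosen so that the second-tier measure can only separate rankers that the top-tier measure cannot distinguish.

```python
def combined_measure(primary, secondary, epsilon=1e-3):
    """Combine a top-tier measure (e.g. NDCG) with a second-tier
    measure (e.g. a click-derived relevance score) into one graded,
    comparable score. `epsilon` is a hypothetical weight assumed
    small enough that the secondary measure never overrides an
    ordering induced by the primary measure."""
    return primary + epsilon * secondary

# Two rankers tied on the top-tier measure are separated by the
# second tier; a ranker that is better on the top tier still wins
# even with a worse second-tier score.
tied_low  = combined_measure(0.82, 0.40)
tied_high = combined_measure(0.82, 0.70)
better_primary = combined_measure(0.90, 0.00)
```

A multiplicative or lexicographic scheme would serve the same purpose; the key property is that the combined score orders scorecards consistently with the tier priorities.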
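The iteration-dependent cost can likewise be sketched as a simple linear interpolation. The schedule below (linear in the iteration count) and the names `early_cost`, `late_cost`, and `n_iters` are assumptions for illustration; the paper's analysis determines the actual interpolation.

```python
def interpolated_cost(early_cost, late_cost, iteration, n_iters):
    """Blend a cost suited to early learning (rewarding rank
    improvements anywhere in the list) with one suited to late-stage
    learning (concentrating on the top few documents per query).
    alpha runs from 0 at the first iteration to 1 at the last."""
    alpha = iteration / n_iters
    return (1.0 - alpha) * early_cost + alpha * late_cost
```

In a boosting-style learner such as LambdaMART, this blended cost would be re-evaluated at each boosting iteration, so the gradients gradually shift emphasis toward the top of the ranking as training proceeds.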