Optimizing unified loss for web ranking specialization
CIKM '10 Proceedings of the 19th ACM international conference on Information and knowledge management
Many ranking algorithms based on machine learning techniques have been proposed for information retrieval and Web search. However, most existing approaches do not explicitly account for the fact that queries vary significantly in their ranking characteristics and therefore call for different ranking models. In this paper, we apply a divide-and-conquer framework for ranking specialization, i.e., learning multiple ranking models that address query difference. We first generate a representation for each query by aggregating its ranking features through pseudo feedback, and employ unsupervised clustering to identify a set of ranking-sensitive query topics from the training queries. To learn multiple ranking models, one for each ranking-sensitive query topic, we define a global loss function that combines the ranking risks of all query topics, and we propose a unified SVM-based learning process to minimize this global loss. Moreover, we employ an ensemble approach to generate the ranking result for each test query, applying the ranking models of its most appropriate query topics. We conduct experiments on a benchmark dataset for learning ranking functions as well as on a dataset from a commercial search engine. Experimental results show that our proposed approach significantly improves ranking performance over existing single-model approaches as well as straightforward local ranking approaches, and that the automatically identified ranking-sensitive topics are more useful for enhancing ranking performance than pre-defined query categorization.
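The pipeline described above — aggregate ranking features into a query representation, cluster training queries into ranking-sensitive topics, learn one ranking model per topic, and combine topic models at test time — can be sketched as follows. This is a minimal illustrative sketch on synthetic data, not the authors' implementation: the mean-feature query representation, the k-means clustering, the independent per-topic pairwise SVMs (the paper instead minimizes a single unified loss over all topics jointly), and the softmax centroid-distance ensemble weights are all simplifying assumptions.

```python
# Hedged sketch of divide-and-conquer ranking specialization.
# All names, the synthetic data, and the simplifications noted in the
# comments are illustrative assumptions, not the paper's actual method.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

def query_representation(doc_features):
    # Aggregate the ranking features of a query's (pseudo-feedback)
    # documents into one vector -- here simply their mean.
    return doc_features.mean(axis=0)

def pairwise_transform(X, y):
    # Turn graded relevance labels into pairwise preference examples,
    # the standard RankSVM construction.
    Xp, yp = [], []
    for i in range(len(y)):
        for j in range(len(y)):
            if y[i] > y[j]:
                Xp.append(X[i] - X[j]); yp.append(1)
                Xp.append(X[j] - X[i]); yp.append(-1)
    return np.array(Xp), np.array(yp)

# Synthetic training queries from two "topics" whose relevant documents
# lie along different feature directions.
queries = []
for direction in (np.array([1.0, 0.0]), np.array([0.0, 1.0])):
    for _ in range(10):
        rel = direction * 2 + rng.normal(0, 0.1, 2)   # relevant doc
        irr = -direction + rng.normal(0, 0.1, 2)      # irrelevant doc
        queries.append((np.vstack([rel, irr]), np.array([1, 0])))

# 1) Identify ranking-sensitive query topics by clustering query vectors.
Q = np.array([query_representation(X) for X, _ in queries])
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(Q)

# 2) Learn one pairwise ranking model per topic (simplification: trained
#    independently rather than via the paper's unified global loss).
models = {}
for k in range(2):
    Xp_all, yp_all = [], []
    for (X, y), c in zip(queries, km.labels_):
        if c == k:
            Xp, yp = pairwise_transform(X, y)
            Xp_all.append(Xp); yp_all.append(yp)
    models[k] = LinearSVC(C=1.0).fit(np.vstack(Xp_all), np.hstack(yp_all))

def rank_scores(X_docs):
    # 3) Ensemble for a test query: weight each topic model by the
    #    query's proximity to the topic centroid (softmax weights).
    q = query_representation(X_docs)
    d = np.linalg.norm(km.cluster_centers_ - q, axis=1)
    w = np.exp(-d) / np.exp(-d).sum()
    return sum(w[k] * m.decision_function(X_docs) for k, m in models.items())
```

On this toy data, `rank_scores` assigns a higher score to a topic's relevant document than to its irrelevant one, because the nearest topic model dominates the ensemble weights.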