Learning to rank arises in many information retrieval applications, ranging from Web search engines and online advertising to recommendation systems. In learning to rank, the performance of a ranking model is strongly affected by the number of labeled examples in the training set; on the other hand, obtaining labeled training examples is very expensive and time-consuming. This creates a great need for active learning approaches that select the most informative examples for ranking learning; however, the literature still contains very limited work addressing active learning for ranking. In this paper, we propose a general active learning framework, Expected Loss Optimization (ELO), for ranking. The ELO framework is applicable to a wide range of ranking functions. Under this framework, we derive a novel algorithm, Expected DCG Loss Optimization (ELO-DCG), to select the most informative examples. Furthermore, we investigate both query-level and document-level active learning for ranking and propose a two-stage ELO-DCG algorithm that incorporates both query and document selection into active learning. Extensive experiments on real-world Web search data sets demonstrate the great potential and effectiveness of the proposed framework and algorithms.
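The core idea of selecting examples by expected DCG loss can be illustrated with a minimal sketch. This is not the paper's exact ELO-DCG derivation; it assumes a hypothetical setup where the model's uncertainty about each document's relevance grade is represented by a list of sampled grades, and estimates a query's expected loss by Monte Carlo as the gap between the best achievable DCG under those samples and the DCG of the ranking the current model would produce.

```python
import math
import random

def dcg(relevances):
    """Standard DCG: sum of (2^rel - 1) / log2(rank + 1), ranks starting at 1."""
    return sum((2 ** r - 1) / math.log2(i + 2) for i, r in enumerate(relevances))

def expected_dcg_loss(doc_score_samples, n_samples=200, rng=random):
    """Monte Carlo estimate of E[DCG(ideal ranking) - DCG(model ranking)].

    doc_score_samples: one list of sampled relevance grades per document,
    standing in for the model's predictive distribution (an assumption of
    this sketch, not the paper's formulation).
    """
    n_docs = len(doc_score_samples)
    # The model's ranking: sort documents by mean predicted relevance.
    means = [sum(s) / len(s) for s in doc_score_samples]
    model_order = sorted(range(n_docs), key=lambda d: -means[d])
    total_loss = 0.0
    for _ in range(n_samples):
        # Draw one plausible "true" grade per document.
        rels = [rng.choice(s) for s in doc_score_samples]
        best = dcg(sorted(rels, reverse=True))      # DCG of the ideal ordering
        got = dcg([rels[d] for d in model_order])   # DCG of the model's ordering
        total_loss += best - got
    return total_loss / n_samples

def select_query(queries, budget=1):
    """Pick the queries with the largest expected DCG loss (most informative)."""
    scored = sorted(queries.items(),
                    key=lambda kv: -expected_dcg_loss(kv[1]))
    return [q for q, _ in scored[:budget]]
```

A query whose document grades the model is certain about contributes zero expected loss, so the selector naturally prefers queries where labeling would change the ranking the most; a two-stage variant in the spirit of the paper could then apply the same per-document criterion within each selected query.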