Inferring rankings over elements of a set of objects, such as documents or images, is a key learning problem for important applications such as Web search and recommender systems. Crowdsourcing services provide an inexpensive and efficient means of acquiring preferences over objects via labeling by sets of annotators. We propose a new model that predicts a gold-standard ranking by combining pairwise comparisons collected via crowdsourcing. In contrast to traditional rank aggregation methods, the approach learns and accounts for the quality of each annotator's contributions. In addition, we minimize the cost of assessment by generalizing the traditional active learning scenario to jointly select the annotator and the pair to assess, taking into account the annotator's quality, the uncertainty over the ordering of the pair, and the current model uncertainty. We formalize this as an active learning strategy that incorporates an exploration-exploitation tradeoff and implement it with an efficient online Bayesian updating scheme. Using simulated and real-world data, we demonstrate that the active learning strategy achieves significant reductions in labeling cost while maintaining accuracy.
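The abstract names three ingredients: a Bayesian belief over item scores that is updated online, a per-annotator quality estimate, and an acquisition rule that jointly selects the annotator and the pair while trading off exploration and exploitation. The sketch below is a minimal illustration of how these pieces can fit together, not the paper's exact model: the Gaussian item beliefs with a probit preference link, the Beta-style annotator accuracy counts, the moment-matching-style update, and the acquisition score are all simplifying assumptions, and the class and parameter names are ours.

```python
import math

class ActivePairwiseRanker:
    """Illustrative sketch: Bayesian pairwise ranking with joint
    (annotator, pair) active selection. Assumptions, not the paper's model."""

    def __init__(self, items, annotators, prior_var=1.0, noise_var=1.0):
        # Gaussian belief N(mu, var) over each item's latent score.
        self.mu = {i: 0.0 for i in items}
        self.var = {i: prior_var for i in items}
        # Per-annotator (agree, disagree) counts, a Beta-like accuracy estimate.
        self.acc = {a: [1.0, 1.0] for a in annotators}
        self.noise_var = noise_var

    def p_prefer(self, i, j):
        # Probability that item i beats item j under the current belief
        # (Thurstone-style probit link over the score difference).
        s = math.sqrt(self.var[i] + self.var[j] + 2 * self.noise_var)
        z = (self.mu[i] - self.mu[j]) / s
        return 0.5 * (1.0 + math.erf(z / math.sqrt(2)))

    def quality(self, a):
        # Point estimate of annotator accuracy from the agreement counts.
        agree, disagree = self.acc[a]
        return agree / (agree + disagree)

    def select(self, pairs, annotators, explore=0.1):
        # Acquisition score: pair-ordering uncertainty (p * (1 - p) peaks
        # when the model is unsure which item wins) weighted by annotator
        # quality, plus an exploration bonus toward annotators with few
        # observations. This exact form is an ad hoc illustration.
        def score(a, pair):
            p = self.p_prefer(*pair)
            n = sum(self.acc[a])
            return self.quality(a) * p * (1.0 - p) + explore / math.sqrt(n)

        return max(((a, pr) for a in annotators for pr in pairs),
                   key=lambda ap: score(*ap))

    def update(self, a, i, j, i_won):
        # Online update: credit the annotator by agreement with the model's
        # current prediction, then shift the item means apart by a step
        # proportional to annotator quality and belief variance.
        p = self.p_prefer(i, j)
        agrees = (p >= 0.5) == i_won
        self.acc[a][0 if agrees else 1] += 1.0
        step = self.quality(a) * (1.0 if i_won else -1.0)
        lr_i = self.var[i] / (self.var[i] + self.noise_var)
        lr_j = self.var[j] / (self.var[j] + self.noise_var)
        self.mu[i] += step * lr_i
        self.mu[j] -= step * lr_j
        # Shrink variances as evidence accumulates.
        self.var[i] *= 1.0 - 0.5 * lr_i
        self.var[j] *= 1.0 - 0.5 * lr_j


# Minimal usage with a simulated oracle (true order: c > b > a) and two
# annotators; all names here are hypothetical.
if __name__ == "__main__":
    annotators = ["ann1", "ann2"]
    ranker = ActivePairwiseRanker(["a", "b", "c"], annotators)
    pairs = [("a", "b"), ("a", "c"), ("b", "c")]
    for _ in range(20):
        ann, (i, j) = ranker.select(pairs, annotators)
        ranker.update(ann, i, j, i_won=(i > j))  # simulated noiseless label
    print(sorted(ranker.mu, key=ranker.mu.get, reverse=True))
```

The split between `select` and `update` mirrors the exploration-exploitation loop described in the abstract: each round spends one label where the product of annotator quality and ordering uncertainty is highest, then folds the label back into both the item beliefs and the annotator's quality estimate.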