Learning to rank: from pairwise approach to listwise approach
Proceedings of the 24th international conference on Machine learning
Get another label? improving data quality and data mining using multiple, noisy labelers
Proceedings of the 14th ACM SIGKDD international conference on Knowledge discovery and data mining
Collecting high quality overlapping labels at low cost
Proceedings of the 33rd international ACM SIGIR conference on Research and development in information retrieval
Learning to rank from a noisy crowd
Proceedings of the 34th international ACM SIGIR conference on Research and development in Information Retrieval
We study how to best use crowdsourced relevance judgments for learning to rank [1, 7]. We integrate two lines of prior work: unreliable crowd-based binary annotation for binary classification [5, 3], and aggregating graded relevance judgments from reliable experts for ranking [7]. To model varying performance of the crowd, we simulate annotation noise with varying magnitude and distributional properties. Evaluation on three LETOR test collections reveals a striking trend contrary to prior studies: single labeling outperforms consensus methods in maximizing learner accuracy relative to annotator effort. We also see surprising consistency of the learning curve across noise distributions, as well as greater challenge with the adversarial case for multi-class labeling.
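The single-labeling vs. consensus trade-off the abstract describes can be sketched with a small simulation: under a fixed budget of noisy judgments, either label many examples once or fewer examples several times with majority voting. This is a minimal illustration, not the paper's actual setup; the labeler `accuracy` of 0.7, the budget split, and the 3-vote consensus are illustrative assumptions.

```python
import random

def noisy_label(true_label, accuracy, rng):
    """Return the true binary label with probability `accuracy`, else flip it."""
    return true_label if rng.random() < accuracy else 1 - true_label

def single_labeling(true_labels, accuracy, rng):
    """One noisy judgment per example."""
    return [noisy_label(t, accuracy, rng) for t in true_labels]

def consensus_labeling(true_labels, accuracy, k, rng):
    """Majority vote over k noisy judgments per example (k odd)."""
    labels = []
    for t in true_labels:
        votes = sum(noisy_label(t, accuracy, rng) for _ in range(k))
        labels.append(1 if votes > k // 2 else 0)
    return labels

rng = random.Random(0)
truth = [rng.randint(0, 1) for _ in range(3000)]

# Fixed budget of 3000 judgments: label 3000 examples once,
# or 1000 examples three times each and take a majority vote.
single = single_labeling(truth, 0.7, rng)
acc_single = sum(s == t for s, t in zip(single, truth)) / len(truth)

voted = consensus_labeling(truth[:1000], 0.7, 3, rng)
acc_vote = sum(v == t for v, t in zip(voted, truth[:1000])) / len(voted)
```

Per-label accuracy is higher for consensus (for an 0.7-accurate labeler, 3-way voting yields about 0.78), but it covers a third as many examples; the paper's finding is that for training a ranker, the broader coverage of single labeling wins per unit of annotator effort.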