We describe a probabilistic approach to supervised learning when multiple experts/annotators provide (possibly noisy) labels but no absolute gold standard is available. The proposed algorithm evaluates the reliability of the different experts and simultaneously estimates the actual hidden labels. Experimental results indicate that the proposed method outperforms the commonly used majority-voting baseline.
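The abstract does not spell out the algorithm itself, but approaches of this kind typically alternate between estimating annotator reliability and re-estimating the hidden labels via EM. Below is a minimal illustrative sketch in that spirit (a Dawid–Skene-style model simplified to a single symmetric accuracy parameter per annotator, for binary labels); the function name and all modeling choices are assumptions for illustration, not the authors' exact method.

```python
import numpy as np

def em_aggregate(labels, n_iter=50):
    """Jointly estimate hidden true labels and per-annotator accuracies.

    labels: (n_items, n_annotators) array of binary (0/1) noisy labels.
    Returns (posterior P(true label = 1) for each item,
             estimated accuracy for each annotator).
    """
    mu = labels.mean(axis=1)  # initialize posteriors with the majority vote
    for _ in range(n_iter):
        # M-step: each annotator's accuracy is their expected agreement
        # with the current soft label estimates (one symmetric parameter
        # per annotator; richer models use sensitivity and specificity).
        acc = (mu[:, None] * labels + (1 - mu[:, None]) * (1 - labels)).mean(axis=0)
        acc = np.clip(acc, 1e-6, 1 - 1e-6)
        p = np.clip(mu.mean(), 1e-6, 1 - 1e-6)  # prior on the positive class
        # E-step: posterior over each item's hidden label given all votes
        log_pos = np.log(p) + (labels * np.log(acc)
                               + (1 - labels) * np.log(1 - acc)).sum(axis=1)
        log_neg = np.log(1 - p) + ((1 - labels) * np.log(acc)
                                   + labels * np.log(1 - acc)).sum(axis=1)
        mu = 1.0 / (1.0 + np.exp(log_neg - log_pos))
    return mu, acc
```

On simulated data with a few reliable annotators and a few near-adversarial ones, thresholding the returned posteriors recovers the hidden labels more accurately than a plain majority vote, because unreliable annotators are automatically down-weighted (or inverted) once their estimated accuracy falls below chance.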