Many scalable data mining tasks rely on active learning to provide the most useful, accurately labeled instances. However, what if there are multiple labeling sources ('oracles' or 'experts') with different but unknown reliabilities? With the recent advent of inexpensive and scalable online annotation tools, such as Amazon's Mechanical Turk, the labeling process has become more vulnerable to noise, and typically there is no prior knowledge of each individual labeler's accuracy. This paper addresses exactly this challenge: how to jointly learn the accuracy of the labeling sources and obtain the most informative labels for the active learning task at hand, while minimizing total labeling effort. More specifically, we present IEThresh (Interval Estimate Threshold) as a strategy to intelligently select the expert(s) with the highest estimated labeling accuracy. IEThresh estimates a confidence interval for the reliability of each expert and filters out those whose upper confidence bound falls below a threshold; this criterion jointly accounts for an expert's expected accuracy (the interval's mean) and the need to better estimate that accuracy (the interval's width, driven by variance). Our framework is flexible enough to work with a wide range of noise levels and outperforms baselines such as querying all available experts and random expert selection. In particular, IEThresh achieves a given level of accuracy with less than half of the queries issued by all-experts labeling and less than a third of the queries required by random expert selection on datasets such as the UCI mushroom dataset. The results show that our method naturally balances exploration and exploitation: as it gains knowledge of which experts to rely upon, it selects them with increasing frequency.
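The selection rule described above can be illustrated with a short sketch. The code below is a minimal, hedged interpretation of the interval-estimation idea, not the authors' reference implementation: it assumes each expert's "reward" history is a sequence of 0/1 agreement scores (e.g. agreement with the majority label on previously queried instances), builds a t-based upper confidence bound on each expert's mean reward, and keeps the experts whose bound clears a fraction (here 0.7, an illustrative value) of the best bound. The reward definition, the relative cutoff, and all names (`upper_interval`, `select_experts`) are assumptions made for illustration.

```python
import numpy as np
from scipy import stats


def upper_interval(rewards, alpha=0.05):
    """Upper end of a t-based confidence interval on an expert's mean reward.

    `rewards` is one expert's history of 0/1 agreement scores; this reward
    definition is an assumption for this sketch, not taken from the abstract.
    """
    rewards = np.asarray(rewards, dtype=float)
    n = len(rewards)
    if n < 2:
        # No usable variance estimate yet: stay optimistic so the expert
        # keeps being explored.
        return float("inf")
    mean = rewards.mean()
    sd = rewards.std(ddof=1)
    t_crit = stats.t.ppf(1.0 - alpha / 2.0, df=n - 1)
    return mean + t_crit * sd / np.sqrt(n)


def select_experts(histories, epsilon=0.7, alpha=0.05):
    """Keep experts whose upper confidence bound clears a relative threshold.

    `histories` maps expert id -> list of past reward observations. The cutoff
    is taken relative to the best bound (epsilon * max bound); the abstract
    only says "below a threshold", so this relative form is an illustrative
    choice.
    """
    ui = {e: upper_interval(h, alpha) for e, h in histories.items()}
    cutoff = epsilon * max(ui.values())
    return [e for e, u in ui.items() if u >= cutoff]


# Toy example: 'a' is accurate, 'b' is noisy, 'c' has few observations so far.
histories = {
    "a": [1, 1, 1, 1, 1, 1, 1, 1, 1, 0],
    "b": [1, 0, 0, 1, 0, 1, 0, 0, 1, 0],
    "c": [1, 1, 0, 1],
}
print(select_experts(histories))  # ['a', 'c']: noisy 'b' is filtered out,
                                  # under-sampled 'c' is kept for exploration
```

This wide-interval optimism is what produces the exploration/exploitation balance the abstract describes: under-sampled experts keep large upper bounds and continue to be queried, while experts observed to be consistently accurate are selected with increasing frequency.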