Activity annotation in videos is necessary to create training datasets for most activity recognition systems, but it is a very time-consuming and repetitive task. Crowdsourcing is gaining popularity as a way to distribute annotation tasks to a large pool of taggers. We present, for the first time, an approach that achieves good-quality activity annotation in videos through crowdsourcing on the Amazon Mechanical Turk (AMT) platform. Taggers must annotate the start boundary, end boundary, and label of every occurrence of an activity in a video. We present two strategies to detect non-serious taggers from their temporal annotations. Individual filtering checks the consistency of each tagger's answers against the characteristics of the dataset to identify and remove non-serious taggers. Collaborative filtering checks the agreement in annotations among taggers. After the filtering techniques have detected and removed non-serious taggers, majority voting is applied to the remaining AMT temporal tags to generate a single final AMT activity annotation set. We conduct experiments to obtain activity annotations from AMT on subsets of two rich datasets frequently used in activity recognition. The results show that our proposed filtering strategies can increase accuracy by up to 40%. The final annotation set is of quality comparable to expert annotation, with high accuracy (76% to 92%).
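The abstract describes a pipeline of filtering followed by majority voting over temporal tags. The sketch below illustrates only the collaborative-filtering and voting ideas on frame-level labels; it is a minimal illustration, not the authors' implementation. The segment representation, the `min_agreement` threshold, the `null` background label, and all function names are assumptions (individual filtering is omitted, since it depends on dataset-specific consistency criteria not detailed in the abstract).

```python
from collections import Counter

def to_frames(segments, n_frames, background="null"):
    """Expand (start, end, label) segments into a per-frame label sequence."""
    frames = [background] * n_frames
    for start, end, label in segments:
        for t in range(start, min(end, n_frames)):
            frames[t] = label
    return frames

def pairwise_agreement(a, b):
    """Fraction of frames on which two taggers assign the same label."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def collaborative_filter(taggers, min_agreement=0.4):
    """Keep taggers whose mean agreement with all others passes a threshold.

    A low mean agreement is treated as a sign of a non-serious tagger.
    """
    kept = {}
    for name, frames in taggers.items():
        others = [f for n, f in taggers.items() if n != name]
        mean_agr = sum(pairwise_agreement(frames, o) for o in others) / len(others)
        if mean_agr >= min_agreement:
            kept[name] = frames
    return kept

def majority_vote(taggers):
    """Per-frame majority label over the kept taggers, merged into segments."""
    frame_lists = list(taggers.values())
    voted = [Counter(col).most_common(1)[0][0] for col in zip(*frame_lists)]
    segments, start = [], 0
    for t in range(1, len(voted) + 1):
        if t == len(voted) or voted[t] != voted[start]:
            segments.append((start, t, voted[start]))
            start = t
    return segments

# Toy usage: three taggers, 10 frames; tagger "c" disagrees with the
# others on every frame and is removed by collaborative filtering.
taggers = {
    "a": to_frames([(2, 7, "walk")], 10),
    "b": to_frames([(3, 7, "walk")], 10),
    "c": to_frames([(0, 10, "run")], 10),
}
kept = collaborative_filter(taggers)
print(majority_vote(kept))
# -> [(0, 2, 'null'), (2, 7, 'walk'), (7, 10, 'null')]
# (per-frame ties break by tagger insertion order in this sketch)
```

Voting on frames rather than on raw (start, end) tuples sidesteps the problem that honest taggers rarely agree on exact boundaries; the merged majority segments recover consensus boundaries implicitly.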