This paper presents a new evaluation procedure and tool for crowdsourcing micro-level multimedia annotations and shows that such annotations can reach a quality comparable to that of expert annotations. We propose a new evaluation procedure, called MM-Eval (Micro-level Multimedia Evaluation), which compares fine-grained, time-aligned annotations using Krippendorff's alpha, and we introduce two new metrics to evaluate the types of disagreement between coders. We also introduce OCTAB (Online Crowdsourcing Tool for Annotations of Behaviors), a web-based annotation tool that enables precise and convenient annotation of multimedia behaviors directly from the Amazon Mechanical Turk interface. In an experiment using this tool and evaluation procedure, we show that a majority vote among the annotations of three crowdsourced workers achieves a quality comparable to that of local expert annotations.
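To make the two ingredients named above concrete, here is a minimal Python sketch, not the authors' MM-Eval implementation: Krippendorff's alpha for nominal labels computed over time-aligned annotations discretized into frames, plus majority-vote fusion of three workers' labels. The function names and sample data are illustrative assumptions, not part of the original paper or tool.

```python
from collections import Counter
from itertools import permutations

def krippendorff_alpha_nominal(units):
    """Krippendorff's alpha for nominal data.

    `units` is a list of units (e.g. video frames); each unit is the
    list of labels the coders assigned to it (missing labels omitted).
    """
    # Coincidence matrix: o[(c, k)] accumulates ordered label pairs
    # within each unit, weighted by 1 / (m_u - 1).
    o = Counter()
    for labels in units:
        m = len(labels)
        if m < 2:
            continue  # a unit with fewer than 2 labels carries no pairable information
        for c, k in permutations(labels, 2):
            o[(c, k)] += 1.0 / (m - 1)
    # Marginal totals n_c and grand total n.
    n_c = Counter()
    for (c, _k), w in o.items():
        n_c[c] += w
    n = sum(n_c.values())
    # Observed vs. expected disagreement (nominal delta: 1 iff labels differ).
    d_o = sum(w for (c, k), w in o.items() if c != k)
    d_e = sum(n_c[c] * n_c[k] for c in n_c for k in n_c if c != k) / (n - 1)
    return 1.0 - d_o / d_e if d_e else 1.0

def majority_vote(frame_labels):
    """Fuse one frame's worker labels by majority vote (ties break arbitrarily)."""
    return Counter(frame_labels).most_common(1)[0][0]

# Hypothetical example: three workers label 6 frames for one behavior (1 = present).
workers = [[0, 1, 1, 0, 0, 1],
           [0, 1, 1, 1, 0, 1],
           [0, 0, 1, 0, 0, 1]]
frames = [list(f) for f in zip(*workers)]   # one unit per frame
print(krippendorff_alpha_nominal(frames))   # inter-coder agreement
print([majority_vote(f) for f in frames])   # fused crowd annotation
```

Under this framing, comparing the fused crowd track against an expert track reduces to running the same alpha computation on the pair, which is one way the reported crowd-versus-expert comparison could be operationalized.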