Recent discussions of annotator agreement have centered mostly on its calculation and interpretation and on the correct choice of indices. Although these discussions are important, they address only the "back end" of the story, namely, what to do once the data have been collected. Just as important, in our view, is knowing how agreement is reached in the first place and which factors influence coder agreement as part of the annotation process or setting, since this knowledge can provide concrete guidelines for planning and setting up annotation projects. To investigate whether there are factors that consistently affect annotator agreement, we conducted a meta-analysis of annotation studies reporting agreement percentages. Our meta-analysis synthesized factors reported in 96 annotation studies from three domains (word-sense disambiguation, prosodic transcription, and phonetic transcription) and was based on a total of 346 agreement indices. The analysis identified seven factors that influence reported agreement values: the annotation domain, the number of categories in a coding scheme, the number of annotators in a project, whether annotators received training, the intensity of that training, the annotation purpose, and the method used to calculate percentage agreement. Based on these results, we develop practical recommendations for the assessment, interpretation, calculation, and reporting of coder agreement. We also briefly discuss theoretical implications for the concept of annotation quality.
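As a minimal illustration of why the calculation method matters (one of the seven factors above), the following sketch, which is not taken from the paper and uses made-up labels, contrasts raw percentage agreement with chance-corrected agreement (Cohen's kappa) for two hypothetical annotators:

```python
from collections import Counter

def percent_agreement(a, b):
    """Fraction of items on which two annotators assign the same label."""
    assert len(a) == len(b) and len(a) > 0
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    """Chance-corrected agreement: (p_o - p_e) / (1 - p_e), where p_e is the
    agreement expected if each annotator labeled independently at random
    according to their own observed label distribution."""
    n = len(a)
    p_o = percent_agreement(a, b)
    counts_a, counts_b = Counter(a), Counter(b)
    p_e = sum(counts_a[lab] * counts_b[lab]
              for lab in set(a) | set(b)) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical two-category annotations (e.g. noun vs. verb sense tags).
ann1 = ["N", "N", "V", "N", "V", "N"]
ann2 = ["N", "V", "V", "N", "V", "N"]

print(percent_agreement(ann1, ann2))  # 5/6 ≈ 0.833
print(cohens_kappa(ann1, ann2))       # 2/3 ≈ 0.667
```

The same pair of annotations yields a noticeably higher value under raw percentage agreement than under kappa, which is why agreement figures computed by different methods (another factor the meta-analysis flags) are not directly comparable.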