This paper describes a method for evaluating inter-annotator reliability in an email corpus annotated for message type (e.g., question, answer, social chat) when annotators are allowed to assign multiple labels to a single message. An augmentation of Cohen's kappa statistic is proposed that permits all data to be included in the reliability measure and, further, permits the identification of more or less reliably annotated data points.
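To make the setting concrete, here is a minimal sketch of standard Cohen's kappa alongside one possible multi-label extension. The abstract does not specify the paper's exact augmentation, so the partial-credit scheme below (Jaccard overlap between label sets, with chance agreement estimated over all annotator-label-set pairs) is an illustrative assumption, not the authors' method.

```python
# Hedged sketch: standard Cohen's kappa for single labels, plus an
# ILLUSTRATIVE multi-label variant. The Jaccard-based partial-credit
# scheme is an assumption for demonstration; the paper's actual
# augmentation may differ.
from collections import Counter


def cohen_kappa(a, b):
    """Standard Cohen's kappa for two annotators, one label per item."""
    assert len(a) == len(b) and a
    n = len(a)
    # Observed agreement: fraction of items with identical labels.
    p_o = sum(x == y for x, y in zip(a, b)) / n
    # Expected (chance) agreement from each annotator's label distribution.
    ca, cb = Counter(a), Counter(b)
    p_e = sum(ca[lab] * cb[lab] for lab in set(a) | set(b)) / (n * n)
    return (p_o - p_e) / (1 - p_e)


def jaccard(s, t):
    """Overlap between two label sets; 1.0 if both are empty."""
    s, t = set(s), set(t)
    return len(s & t) / len(s | t) if s | t else 1.0


def multilabel_kappa(a, b):
    """Kappa-like score when each item carries a SET of labels.

    Observed agreement gives partial credit via Jaccard overlap;
    chance agreement averages the overlap over all cross pairings
    of the two annotators' label sets (an illustrative choice).
    """
    assert len(a) == len(b) and a
    n = len(a)
    p_o = sum(jaccard(x, y) for x, y in zip(a, b)) / n
    p_e = sum(jaccard(x, y) for x in a for y in b) / (n * n)
    return (p_o - p_e) / (1 - p_e)
```

Because agreement is computed per item, the per-item Jaccard scores also indicate which messages are annotated more or less reliably, mirroring the abstract's goal of identifying unreliable data points.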