Many multimodal corpora have been collected and annotated in recent years. Unfortunately, many of the coding schemes used for these corpora have been shown to be unreliable. This poor reliability may stem either from the nature of multimodal data itself or from the statistical methods used to assess reliability. In this paper we review the statistical measures currently used to assess agreement on multimodal corpus annotation, and we propose alternative statistical methods to the well-known kappa statistic.
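For readers unfamiliar with the kappa statistic mentioned above, the following is a minimal illustrative sketch (not taken from the paper) of Cohen's kappa, which corrects raw inter-coder agreement for chance: kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement and p_e the agreement expected from the coders' marginal label distributions. The gesture labels in the example are hypothetical.

```python
from collections import Counter

def cohen_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders labelling the same items."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    # Observed agreement: fraction of items both coders label identically.
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Chance agreement: expected overlap of the two marginal distributions.
    freq_a = Counter(coder_a)
    freq_b = Counter(coder_b)
    p_e = sum(freq_a[label] * freq_b[label] for label in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical example: two annotators labelling ten gesture segments.
a = ["beat", "deictic", "beat", "iconic", "beat",
     "beat", "deictic", "iconic", "beat", "deictic"]
b = ["beat", "deictic", "iconic", "iconic", "beat",
     "beat", "beat", "iconic", "beat", "deictic"]
print(f"kappa = {cohen_kappa(a, b):.3f}")  # 0.8 raw agreement -> ~0.683 kappa
```

Note that because p_e is derived from each coder's own label frequencies, this is Cohen's variant of kappa; other chance-corrected coefficients discussed in the agreement literature (e.g., Scott's pi or Krippendorff's alpha) differ mainly in how chance agreement is estimated.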