Exploiting 'subjective' annotations

  • Authors: Dennis Reidsma; Rieks op den Akker
  • Affiliation: University of Twente, Enschede, The Netherlands
  • Venue: HumanJudge '08: Proceedings of the Workshop on Human Judgements in Computational Linguistics
  • Year: 2008

Abstract

Many interesting phenomena in conversation can only be annotated as a subjective task, requiring interpretative judgements from annotators. This leads to data annotated with lower levels of agreement, not only because of errors in the annotation but also because of genuine differences in how annotators interpret conversations. This paper is an attempt to find out how subjective annotations with a low level of agreement can profitably be used for machine-learning purposes. We analyse the (dis)agreements between annotators for two different cases in a multimodal annotated corpus and explicitly relate the results to the performance of machine-learning algorithms on the annotated data. Finally, we present two new concepts, 'subjective entity' classifiers and 'consensus objective' classifiers, and give recommendations for using subjective data in machine-learning applications.
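As background for the agreement analysis mentioned in the abstract, the sketch below shows one common way to quantify pairwise inter-annotator agreement, Cohen's kappa, which compares observed agreement against agreement expected by chance. This is an illustrative, self-contained Python sketch; the label values and example data are hypothetical and are not taken from the corpus or experiments described in the paper.

    # Minimal sketch: Cohen's kappa for two annotators' label sequences.
    # Labels and example data below are hypothetical.
    from collections import Counter

    def cohens_kappa(labels_a, labels_b):
        """Cohen's kappa for two equal-length sequences of categorical labels."""
        assert len(labels_a) == len(labels_b)
        n = len(labels_a)

        # Observed agreement: fraction of items both annotators labelled identically.
        observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n

        # Chance agreement, estimated from each annotator's marginal label distribution.
        freq_a, freq_b = Counter(labels_a), Counter(labels_b)
        labels = set(labels_a) | set(labels_b)
        expected = sum((freq_a[l] / n) * (freq_b[l] / n) for l in labels)

        return (observed - expected) / (1 - expected)

    if __name__ == "__main__":
        # Hypothetical labels from two annotators on ten utterances.
        ann1 = ["A", "A", "B", "B", "A", "B", "A", "A", "B", "A"]
        ann2 = ["A", "B", "B", "B", "A", "A", "A", "A", "B", "B"]
        print(f"kappa = {cohens_kappa(ann1, ann2):.2f}")  # -> kappa = 0.40

A low kappa value on such a measure is exactly the situation the paper addresses: it may reflect annotation errors, but for subjective tasks it can also reflect legitimate differences in interpretation between annotators.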