Analyzing disagreements

  • Authors: Beata Beigman Klebanov; Eyal Beigman; Daniel Diermeier
  • Affiliations: Northwestern University (all authors)
  • Venue: HumanJudge '08: Proceedings of the Workshop on Human Judgements in Computational Linguistics
  • Year: 2008


Abstract

We address the problem of distinguishing between two sources of disagreement in annotations: genuine subjectivity and slips of attention. The latter are especially likely when the classification task has a default class, as in tasks where annotators must find instances of a phenomenon of interest, such as the metaphor detection task discussed here. We apply and extend a data analysis technique proposed by Beigman Klebanov and Shamir (2006), first distilling reliably deliberate (non-chance) annotations, and then estimating the amount of attention slips versus genuine disagreement within those reliably deliberate annotations.
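The abstract does not spell out the distillation procedure, but the first step, separating reliably deliberate annotations from chance-level ones, can be illustrated with a simple binomial model. The sketch below is an assumption-laden illustration, not the authors' actual method: it assumes each annotator marks a non-metaphor item positive by chance (a slip) with some small probability, and keeps only items whose positive-vote count would be unlikely under that null. The function names, the 5% slip rate, and the item counts are all hypothetical.

```python
import math

def binom_sf(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p), computed from the exact pmf."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(k, n + 1))

def reliable_threshold(n_annotators, chance_rate, alpha=0.05):
    """Smallest vote count k such that k or more positive votes
    would be unlikely (tail probability < alpha) under pure chance."""
    for k in range(n_annotators + 1):
        if binom_sf(k, n_annotators, chance_rate) < alpha:
            return k
    return n_annotators + 1  # no count is significant at this alpha

# Hypothetical setup: 9 annotators, each slipping (marking a
# non-metaphor by accident) with probability 0.05.
threshold = reliable_threshold(9, 0.05)  # threshold == 3

# Hypothetical positive-vote counts per item.
items = {"item_1": 4, "item_2": 1, "item_3": 7}
deliberate = [it for it, votes in items.items() if votes >= threshold]
```

Under these assumptions, `item_2` (one vote) is indistinguishable from chance and is filtered out, while `item_1` and `item_3` survive as reliably deliberate; the second step of the paper's analysis, apportioning the remaining disagreement between slips and genuine subjectivity, would then operate only on the surviving items.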