An agreement measure for determining inter-annotator reliability of human judgements on affective text

  • Authors: Plaban Kr. Bhowmick, Pabitra Mitra, Anupam Basu
  • Affiliation: Indian Institute of Technology, Kharagpur, India (all authors)

  • Venue: HumanJudge '08, Proceedings of the Workshop on Human Judgements in Computational Linguistics
  • Year: 2008

Abstract

An affective text may be judged to belong to multiple affect categories, as it may evoke different affects with varying degrees of intensity. Affect classification of text often requires annotating a text corpus with affect categories, a task typically performed by a number of human judges. This paper presents a new agreement measure, inspired by the Kappa coefficient, for computing inter-annotator reliability when annotators are free to assign a text to more than one category. The extended reliability coefficient is applied to measure the quality of an affective text corpus, and an analysis of the factors that influence corpus quality is provided.
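
The paper's extended multi-label coefficient is not given in this abstract. As background only, the sketch below computes the classical Cohen's Kappa that such measures generalize, assuming two annotators who each assign a single affect category per text; the function name and example labels are purely illustrative and do not reproduce the authors' measure.

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Classical Cohen's kappa for two annotators, one label per item.

    kappa = (P_o - P_e) / (1 - P_e), where P_o is observed agreement and
    P_e is the agreement expected by chance from the annotators' marginals.
    """
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)

    # Observed agreement: fraction of items with identical labels.
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n

    # Chance agreement: product of each annotator's marginal label frequencies.
    freq_a = Counter(labels_a)
    freq_b = Counter(labels_b)
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n)
              for c in freq_a.keys() | freq_b.keys())

    return (p_o - p_e) / (1 - p_e)

# Illustrative example: two judges labelling five sentences with affect categories.
judge1 = ["joy", "anger", "joy", "fear", "sadness"]
judge2 = ["joy", "anger", "fear", "fear", "joy"]
print(round(cohens_kappa(judge1, judge2), 3))  # 0.444
```

The measure proposed in the paper relaxes the single-label assumption above, allowing each judge to assign a set of affect categories to a text and correcting for chance agreement accordingly.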