Crowdsourcing micro-level multimedia annotations: the challenges of evaluation and interface

  • Authors:
  • Sunghyun Park; Gelareh Mohammadi; Ron Artstein; Louis-Philippe Morency

  • Affiliations:
  • University of Southern California, Los Angeles, CA, USA; École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland; University of Southern California, Los Angeles, CA, USA; University of Southern California, Los Angeles, CA, USA

  • Venue:
  • Proceedings of the ACM Multimedia 2012 Workshop on Crowdsourcing for Multimedia
  • Year:
  • 2012

Abstract

This paper presents a new evaluation procedure and tool for crowdsourcing micro-level multimedia annotations and shows that such annotations can achieve a quality comparable to that of expert annotations. We propose a new evaluation procedure, called MM-Eval (Micro-level Multimedia Evaluation), which compares fine-grained, time-aligned annotations using Krippendorff's alpha metric, and we introduce two new metrics that characterize the types of disagreement between coders. We also introduce OCTAB (Online Crowdsourcing Tool for Annotations of Behaviors), a web-based annotation tool that enables precise and convenient annotation of multimedia behaviors directly from the Amazon Mechanical Turk interface. In an experiment using this tool and evaluation procedure, we show that a majority vote over the annotations of 3 crowdsourced workers achieves a quality comparable to that of local expert annotations.
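
The abstract names two building blocks without giving their computational details: a frame-level majority vote over worker annotations and agreement measured with Krippendorff's alpha on time-aligned labels. The sketch below is an illustrative Python approximation of those two steps only, not the authors' MM-Eval implementation (which also includes two disagreement-type metrics not described here). It assumes annotations have already been discretized into per-frame nominal labels with no missing values; all function and variable names are hypothetical.

```python
import numpy as np

def majority_vote(labels):
    """Frame-wise majority vote over worker annotations.

    labels: (num_workers, num_frames) array of integer category codes.
    Returns a (num_frames,) array with the most frequent label per frame
    (ties resolved toward the lower category code).
    """
    counts = np.apply_along_axis(
        np.bincount, 0, labels, minlength=labels.max() + 1)
    return counts.argmax(axis=0)

def krippendorff_alpha_nominal(labels):
    """Krippendorff's alpha for nominal data, no missing values.

    labels: (num_coders, num_units) array of integer category codes.
    Uses the coincidence-matrix formulation: alpha = 1 - D_o / D_e.
    """
    num_coders, num_units = labels.shape
    categories = np.unique(labels)
    index = {c: i for i, c in enumerate(categories)}
    # Coincidence matrix o[c, k]: how often values c and k are paired
    # within the same unit, across all coder pairs.
    o = np.zeros((len(categories), len(categories)))
    for u in range(num_units):
        counts = np.zeros(len(categories))
        for value in labels[:, u]:
            counts[index[value]] += 1
        o += (np.outer(counts, counts) - np.diag(counts)) / (num_coders - 1)
    n_c = o.sum(axis=0)              # per-category marginal totals
    n = n_c.sum()                    # total number of pairable values
    d_o = o.sum() - np.trace(o)      # observed disagreement (off-diagonal mass)
    d_e = (np.outer(n_c, n_c).sum() - (n_c ** 2).sum()) / (n - 1)
    return 1.0 - d_o / d_e

# Hypothetical example: three workers labeling 8 frames with a binary code
# (e.g. smile / no smile).
workers = np.array([
    [0, 0, 1, 1, 1, 0, 0, 1],
    [0, 1, 1, 1, 0, 0, 0, 1],
    [0, 0, 1, 1, 1, 0, 1, 1],
])
print("majority vote:", majority_vote(workers))
print("alpha among workers:", round(krippendorff_alpha_nominal(workers), 3))
```

In this framing, the paper's quality claim corresponds to computing alpha between the majority-vote track and an expert's frame-level track and comparing it to the agreement observed among experts themselves.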