Adaptive time windows for real-time crowd captioning

  • Authors:
  • Matthew J. Murphy; Christopher D. Miller; Walter S. Lasecki; Jeffrey P. Bigham

  • Affiliations:
  • University of Rochester, Rochester, USA (all authors)

  • Venue:
  • CHI '13 Extended Abstracts on Human Factors in Computing Systems
  • Year:
  • 2013

Abstract

Real-time captioning provides deaf and hard of hearing users with access to live spoken language. The most common source of real-time captions is professional stenographers, but they are expensive (up to $200/hr). Recent work shows that groups of non-experts can collectively caption speech in real time by directing workers to different portions of the speech and automatically merging the pieces together. This work uses 'one size fits all' segment durations regardless of an individual worker's ability or preferences. In this paper, we explore the effect of adaptively scaling the amount of content presented to each worker based on their past and recent performance: for instance, giving fast typists longer segments and giving workers shorter segments as they fatigue. Studies with 24 remote crowd workers, using ground truth in segment calculations, show that this approach improves average coverage by over 54%, and F1 score (the harmonic mean of precision and recall) by over 44%.
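
The abstract does not publish the exact adaptation rule, but the core idea of per-worker segment scaling can be sketched as follows. This is a minimal illustration only: the AdaptiveWindow class, the duration constants, the exponential-moving-average update, and the 50/50 blend of long-term and recent performance are all assumptions for exposition, not the authors' implementation.

    # Illustrative sketch only: the class, constants, and update rule are
    # assumptions; the paper does not specify its segment-sizing formula.
    class AdaptiveWindow:
        def __init__(self, base_duration=4.0, min_duration=2.0, max_duration=8.0):
            self.base = base_duration    # default segment length in seconds
            self.min = min_duration
            self.max = max_duration
            self.long_term = 1.0         # slowly updated typing performance
            self.recent = 1.0            # quickly updated, captures fatigue

        def update(self, words_typed, words_spoken, alpha=0.3):
            """Record one finished segment; coverage is the fraction captioned."""
            coverage = words_typed / max(words_spoken, 1)
            # Exponential moving averages: `recent` reacts quickly, so the
            # window shrinks soon after a worker starts to fatigue, while
            # `long_term` keeps fast typists on longer segments over time.
            self.recent = alpha * coverage + (1 - alpha) * self.recent
            self.long_term = 0.05 * coverage + 0.95 * self.long_term

        def next_duration(self):
            """Scale the base segment length by blended performance, clamped."""
            score = 0.5 * self.long_term + 0.5 * self.recent
            return min(self.max, max(self.min, self.base * score))

    # Usage: a worker who captioned 18 of 20 spoken words gets a slightly
    # shorter next segment than the 4-second default.
    w = AdaptiveWindow()
    w.update(words_typed=18, words_spoken=20)
    print(round(w.next_duration(), 2))  # 3.93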