Confidence-based stopping criteria for active learning for data annotation

  • Authors:
  • Jingbo Zhu; Huizhen Wang; Eduard Hovy; Matthew Ma

  • Affiliations:
  • Northeastern University, China; Northeastern University, China; University of Southern California, Marina del Rey, CA; Scientific Works, Princeton Junction, NJ

  • Venue:
  • ACM Transactions on Speech and Language Processing (TSLP)
  • Year:
  • 2010

Abstract

The labor-intensive task of labeling data is a serious bottleneck for many supervised learning approaches in natural language processing. Active learning aims to reduce the human labeling cost of supervised learning methods. Determining when to stop the active learning process is an important practical issue in real-world applications. This article addresses the stopping-criterion problem for active learning and presents four simple stopping criteria based on confidence estimation over the unlabeled data pool: the maximum-uncertainty, overall-uncertainty, selected-accuracy, and minimum-expected-error methods. Further, to obtain a proper threshold for a stopping criterion in a specific task, the article presents a strategy that uses the label-change factor to dynamically update the predefined threshold of a stopping criterion during the active learning process. To empirically analyze the effectiveness of each stopping criterion, we design comparison experiments on seven real-world datasets covering three representative natural language processing applications: word sense disambiguation, text classification, and opinion analysis.
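
The abstract only names the four criteria, so the following Python sketch shows one plausible reading of them. It is not the authors' implementation: it assumes a probabilistic classifier with a scikit-learn-style `predict_proba`, uses entropy as the uncertainty measure, and treats `theta` as a fixed, hand-set threshold, whereas the article updates the threshold dynamically via the label-change factor.

```python
import numpy as np

def entropy(probs, eps=1e-12):
    """Shannon entropy of each row of class-posterior probabilities."""
    return -(probs * np.log(probs + eps)).sum(axis=1)

def should_stop(model, pool_X, criterion, theta):
    """Pool-based stopping criteria (sketch).

    `model` is assumed to expose a scikit-learn-style predict_proba,
    `pool_X` is the remaining unlabeled feature matrix, and `theta` is
    a task-specific threshold (hypothetical fixed value here).
    """
    probs = model.predict_proba(pool_X)
    if criterion == "max_uncertainty":
        # Stop when even the most uncertain unlabeled example is confident.
        return entropy(probs).max() < theta
    if criterion == "overall_uncertainty":
        # Stop when the average uncertainty over the whole pool is low.
        return entropy(probs).mean() < theta
    if criterion == "min_expected_error":
        # Expected 0/1 error if the current model labeled the pool itself:
        # the per-example expected error is 1 minus the max posterior.
        return (1.0 - probs.max(axis=1)).mean() < theta
    raise ValueError(f"unknown criterion: {criterion}")

def selected_accuracy_stop(oracle_labels, model_predictions, theta):
    """Selected-accuracy criterion (sketch): stop when the model already
    agrees with the oracle on the most recently queried batch at a rate
    of at least theta."""
    agree = np.mean(np.asarray(oracle_labels) == np.asarray(model_predictions))
    return agree >= theta
```

In a typical loop one would call `should_stop` after each query round; the selected-accuracy criterion differs from the other three in that it consults the oracle's answers on the most recent batch rather than confidence over the unlabeled pool.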