Distributed asynchronous online learning for natural language processing

  • Authors:
  • Kevin Gimpel; Dipanjan Das; Noah A. Smith

  • Affiliations:
  • Carnegie Mellon University, Pittsburgh, PA; Carnegie Mellon University, Pittsburgh, PA; Carnegie Mellon University, Pittsburgh, PA

  • Venue:
  • CoNLL '10 Proceedings of the Fourteenth Conference on Computational Natural Language Learning
  • Year:
  • 2010

Abstract

Recent speed-ups for training large-scale models like those found in statistical NLP exploit distributed computing (either on multicore or "cloud" architectures) and rapidly converging online learning algorithms. Here we aim to combine the two. We focus on distributed, "mini-batch" learners that make frequent updates asynchronously (Nedić et al., 2001; Langford et al., 2009). We generalize existing asynchronous algorithms and experiment extensively with structured prediction problems from NLP, including discriminative, unsupervised, and non-convex learning scenarios. Our results show asynchronous learning can provide substantial speed-ups compared to distributed and single-processor mini-batch algorithms, with no signs of error arising from the approximate nature of the technique.
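
To make the idea of asynchronous mini-batch updates concrete, here is a minimal Python sketch, not the authors' algorithm or experimental setup: several worker threads repeatedly draw a mini-batch, compute a gradient against a possibly stale copy of the shared parameters, and apply their updates without waiting for one another. The toy logistic-regression problem, data sizes, step size, and thread count are all illustrative assumptions.

```python
# Sketch of asynchronous mini-batch SGD with shared parameters (illustrative only).
import threading
import numpy as np

rng = np.random.default_rng(0)

# Synthetic binary classification data (assumed toy problem, not from the paper).
N, D = 2000, 20
X = rng.normal(size=(N, D))
true_w = rng.normal(size=D)
y = (X @ true_w + 0.1 * rng.normal(size=N) > 0).astype(float)

w = np.zeros(D)          # shared parameter vector, updated asynchronously
lock = threading.Lock()  # guards only the (cheap) in-place parameter update


def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))


def worker(n_steps=200, batch_size=32, step_size=0.1):
    local_rng = np.random.default_rng(threading.get_ident() % 2**32)
    for _ in range(n_steps):
        idx = local_rng.integers(0, N, size=batch_size)
        # Read possibly stale parameters; other workers' in-flight updates
        # are not synchronized with this read (the "asynchronous" part).
        w_snapshot = w.copy()
        p = sigmoid(X[idx] @ w_snapshot)
        grad = X[idx].T @ (p - y[idx]) / batch_size
        with lock:
            w[:] = w - step_size * grad  # update computed from the stale snapshot


threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

acc = ((sigmoid(X @ w) > 0.5) == y).mean()
print(f"training accuracy after asynchronous updates: {acc:.3f}")
```

The gradients here are computed from stale parameter snapshots, which is exactly the source of approximation the abstract refers to; the abstract's claim is that, empirically, this staleness did not introduce measurable error in their NLP experiments.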