Learning to translate: a statistical and computational analysis

  • Authors:
  • Marco Turchi; Tijl De Bie; Cyril Goutte; Nello Cristianini

  • Affiliations:
  • European Commission, Joint Research Centre, IPSC, GlobeSec, Ispra, Italy and Intelligent Systems Laboratory, University of Bristol, Bristol, UK; Intelligent Systems Laboratory, University of Bristol, Bristol, UK; Interactive Language Technologies, National Research Council Canada, Gatineau, QC, Canada; Intelligent Systems Laboratory, University of Bristol, Bristol, UK

  • Venue:
  • Advances in Artificial Intelligence
  • Year:
  • 2012

Abstract

We present an extensive experimental study of phrase-based statistical machine translation from the point of view of its learning capabilities. Very accurate learning curves are obtained using high-performance computing, and extrapolations of the projected performance of the system under different conditions are provided. Our experiments confirm existing, mostly unpublished beliefs about the learning capabilities of statistical machine translation systems. We also provide insight into the way statistical machine translation learns from data, including the respective influence of the translation and language models, the impact of phrase length on performance, and various unlearning and perturbation analyses. Our results support and illustrate the fact that performance improves by a constant amount for each doubling of the data, across different language pairs and different systems. This fundamental limitation seems to be a direct consequence of Zipf's law governing textual data. Although the rate of improvement may depend on both the data and the estimation method, it is unlikely that the general shape of the learning curve will change without major changes in the modeling and inference phases. Possible research directions that address this issue include the integration of linguistic rules or the development of active learning procedures.
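The log-linear trend described in the abstract (a constant gain for each doubling of the training data) can be written as score(n) ≈ a + b·log2(n). The sketch below fits and extrapolates such a curve; the corpus sizes, BLEU values, and variable names are illustrative assumptions for exposition, not data or code from the paper.

```python
# Minimal sketch: fit a log-linear learning curve score(n) = a + b * log2(n),
# i.e. a constant gain b per doubling of the training-set size n.
# The (size, BLEU) pairs below are hypothetical placeholders.

import numpy as np

sizes = np.array([10_000, 20_000, 40_000, 80_000, 160_000, 320_000])
bleu = np.array([18.1, 20.0, 21.8, 23.9, 25.7, 27.6])

# Ordinary least squares on [1, log2(n)] -> coefficients (a, b).
X = np.column_stack([np.ones_like(sizes, dtype=float), np.log2(sizes)])
coef, *_ = np.linalg.lstsq(X, bleu, rcond=None)
a, b = coef
print(f"intercept a = {a:.2f}, gain per doubling b = {b:.2f} BLEU")

# Extrapolate under the same log-linear assumption.
for n in (640_000, 1_280_000):
    print(f"predicted BLEU at {n:,} sentence pairs: {a + b * np.log2(n):.1f}")
```

Under this model, going from 320,000 to 640,000 sentence pairs yields roughly the same absolute BLEU gain as going from 10,000 to 20,000, which is the diminishing-returns behavior the abstract attributes to the Zipfian distribution of textual data.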