Learning dependency translation models as collections of finite-state head transducers

  • Authors:
  • Hiyan Alshawi; Shona Douglas; Srinivas Bangalore

  • Affiliations:
  • Shannon Laboratory, AT&T Labs (all authors)

  • Venue:
  • Computational Linguistics - Special issue on finite-state methods in NLP
  • Year:
  • 2000

Abstract

The paper defines weighted head transducers, finite-state machines that perform middle-out string transduction. These transducers are strictly more expressive than the special case of standard left-to-right finite-state transducers. Dependency transduction models are then defined as collections of weighted head transducers that are applied hierarchically. A dynamic programming search algorithm is described for finding the optimal transduction of an input string with respect to a dependency transduction model. A method for automatically training a dependency transduction model from a set of input-output example strings is presented. The method first searches for hierarchical alignments of the training examples guided by correlation statistics, and then constructs the transitions of head transducers that are consistent with these alignments. Experimental results are given for applying the training method to translation from English to Spanish and Japanese.
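The middle-out transduction described above can be illustrated with a minimal sketch. The representation below is a simplified assumption, not the paper's exact formalism: each transition reads one dependent of the head, writes its translation, and attaches it to the left or right of the growing output string, with a weight accumulated as a cost. The `Transition` class, the `transduce` function, and the tiny English-to-Spanish lexicon are all hypothetical illustrations.

```python
from dataclasses import dataclass

@dataclass
class Transition:
    read: str        # input dependent word
    write: str       # its translation in the output language
    out_side: str    # 'L' or 'R': side of the head to attach in the output
    weight: float    # cost (e.g. negative log probability; lower is better)

def transduce(head_out, left_deps, right_deps, lexicon):
    """Middle-out transduction: start from the head's translation and
    grow the output outward as each dependent is consumed."""
    output = [head_out]
    cost = 0.0
    for word in left_deps + right_deps:
        t = lexicon[word]
        cost += t.weight
        if t.out_side == 'L':
            output.insert(0, t.write)   # attach to the left of the output
        else:
            output.append(t.write)      # attach to the right of the output
    return ' '.join(output), cost

# Hypothetical example: English "white house" (head: "house") becomes
# Spanish "casa blanca" -- the adjective switches sides of the head.
lexicon = {'white': Transition('white', 'blanca', 'R', 0.2)}
print(transduce('casa', ['white'], [], lexicon))   # -> ('casa blanca', 0.2)
```

The point of the sketch is that output order is independent of input order: a dependent read on the head's left can be written on its right, which is the reordering power that makes head transducers strictly more expressive than left-to-right transducers.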