A unified approach in speech-to-speech translation: integrating features of speech recognition and machine translation

  • Authors:
  • Ruiqiang Zhang;Genichiro Kikui;Hirofumi Yamamoto;Taro Watanabe;Frank Soong;Wai Kit Lo

  • Affiliations:
  • ATR Spoken Language Translation Research Laboratories, Seika-cho, Soraku-gun, Kyoto, Japan (all authors)

  • Venue:
  • COLING '04 Proceedings of the 20th international conference on Computational Linguistics
  • Year:
  • 2004

Abstract

Building on a statistically trained speech translation system, this study combines distinctive features from its two modules, speech recognition and statistical machine translation, in a log-linear model. The translation hypotheses are then rescored with this model, improving translation performance. The feature weights of the log-linear model were optimized against standard translation evaluation metrics, including BLEU, NIST, multiple-reference word error rate, and its position-independent counterpart. Experimental results show significant improvement over the baseline IBM Model 4 on all automatic evaluation metrics; the largest gain was in BLEU, by 7.9% absolute.
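
The rescoring described in the abstract amounts to scoring each translation hypothesis as a weighted sum of log-domain feature values and re-ranking the N-best list accordingly. The following is a minimal sketch of that idea only; the feature names, weights, and data layout are illustrative assumptions, not the paper's actual feature set or implementation.

```python
# Minimal sketch of log-linear rescoring of translation hypotheses.
# Feature names and values are illustrative assumptions, not the paper's
# actual features or data format.

def loglinear_score(features, weights):
    """Score = sum_i lambda_i * f_i(hypothesis), with features in the log domain."""
    return sum(weights[name] * value for name, value in features.items())

def rescore(hypotheses, weights):
    """Re-rank an N-best list of hypotheses by log-linear score (best first)."""
    return sorted(hypotheses,
                  key=lambda h: loglinear_score(h["features"], weights),
                  reverse=True)

# Hypothetical features drawn from the ASR module (acoustic, ASR language model)
# and the SMT module (translation model, target language model).
weights = {"acoustic": 0.3, "asr_lm": 0.2, "tm": 0.3, "target_lm": 0.2}
nbest = [
    {"text": "hypothesis A",
     "features": {"acoustic": -120.5, "asr_lm": -35.2, "tm": -48.1, "target_lm": -22.7}},
    {"text": "hypothesis B",
     "features": {"acoustic": -118.9, "asr_lm": -36.8, "tm": -51.4, "target_lm": -21.3}},
]
best = rescore(nbest, weights)[0]
print(best["text"])
```

In the paper's setting, the weights themselves are tuned so that an automatic metric such as BLEU is maximized on a development set; the sketch above only shows how fixed weights would be applied at rescoring time.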