Evaluation methodology and metrics employed to assess the TRANSTAC two-way, speech-to-speech translation systems

  • Authors:
  • Gregory A. Sanders, Brian A. Weiss, Craig Schlenoff, Michelle P. Steves (National Institute of Standards and Technology); Sherri Condon (The MITRE Corporation)

  • Affiliations:
  • National Institute of Standards and Technology, 100 Bureau Drive, Gaithersburg, MD 20899, USA; The MITRE Corporation, 7515 Colshire Drive, Mailstop H305, McLean, VA 22102, USA

  • Venue:
  • Computer Speech and Language
  • Year:
  • 2013

Abstract

One of the most difficult challenges that military personnel face when operating in foreign countries is clear and successful communication with the local population. To address this issue, the Defense Advanced Research Projects Agency (DARPA) is funding academic institutions and industrial organizations through the Spoken Language Communication and Translation System for Tactical Use (TRANSTAC) program to develop practical machine translation systems. The goal of the TRANSTAC program is to demonstrate capabilities to rapidly develop and field free-form, two-way, speech-to-speech translation systems that enable speakers of different languages to communicate with one another in real-world tactical situations without an interpreter. Evaluations of these technologies are a significant part of the program and DARPA has asked the National Institute of Standards and Technology (NIST) to lead this effort. This article presents the experimental design of the TRANSTAC evaluations and the metrics, both quantitative and qualitative, that were used to comprehensively assess the systems' performance.