The effect of machine translation on the performance of Arabic-English QA system

  • Authors:
  • Azzah Al-Maskari; Mark Sanderson

  • Affiliations:
  • University of Sheffield, Sheffield, UK; University of Sheffield, Sheffield, UK

  • Venue:
  • MLQA '06 Proceedings of the Workshop on Multilingual Question Answering
  • Year:
  • 2006


Abstract

The aim of this paper is to investigate how much the effectiveness of a Question Answering (QA) system is affected by the performance of Machine Translation (MT) based question translation. Nearly 200 questions were selected from TREC QA tracks and run through a question answering system, which answered 42.6% of them correctly in a monolingual run. These questions were then translated manually from English into Arabic, translated back into English using an MT system, and resubmitted to the QA system, which answered only 10.2% of the translated questions correctly. An analysis of which kinds of translation error affected which questions concluded that factoid-type questions are less prone to translation error than other types.
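The two accuracy figures reported in the abstract can be compared directly. A minimal sketch (the variable names are illustrative, not from the paper) of the absolute and relative performance drop caused by the round-trip translation:

```python
# Accuracy figures reported in the abstract
mono_acc = 0.426  # monolingual run (original English questions)
mt_acc = 0.102    # after English -> Arabic -> English round-trip via MT

# Absolute drop in percentage points, and the fraction of monolingual
# performance that survives the translation step
absolute_drop = mono_acc - mt_acc
relative_retained = mt_acc / mono_acc

print(f"absolute drop: {absolute_drop:.1%}")   # 32.4 percentage points
print(f"retained: {relative_retained:.1%}")    # about 23.9% of monolingual
```

In other words, the MT-based question translation cost the system roughly three quarters of its monolingual effectiveness.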