Applying Wikipedia's multilingual knowledge to cross-lingual question answering
NLDB'07 Proceedings of the 12th international conference on Applications of Natural Language to Information Systems
This paper presents a study of the negative effect of Machine Translation (MT) on the precision of Cross-Lingual Question Answering (CL-QA). For this research, an English-Spanish Question Answering (QA) system is used, together with the sets of 200 official questions from CLEF 2004 and 2006. The cross-lingual experimental evaluation using MT reveals that the precision of the system drops by around 30% with respect to the monolingual Spanish task. Our main contribution is a taxonomy of the identified errors caused by using MT, along with proposals for overcoming them. An experimental evaluation shows that our approach performs better than MT tools, and it contributed to this CL-QA system being ranked first in the English-Spanish QA task at CLEF 2006.
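The translate-then-answer architecture described above can be illustrated with a minimal sketch. All names here are hypothetical (the paper does not publish code): a toy word-for-word MT stub feeds a monolingual Spanish QA lookup, and a single MT error on a named entity is enough to break answer extraction downstream, the kind of failure the error taxonomy classifies.

```python
def mt_en_to_es(question: str, lexicon: dict) -> str:
    # Toy word-by-word MT stub: unknown tokens pass through unchanged,
    # loosely simulating untranslated named entities.
    return " ".join(lexicon.get(w, w) for w in question.lower().split())

def monolingual_qa(question_es: str, kb: dict):
    # Toy Spanish QA system: exact-match lookup stands in for the real
    # retrieval and answer-extraction pipeline.
    return kb.get(question_es)

lexicon = {"who": "quién", "wrote": "escribió"}
kb = {"quién escribió don quijote": "Cervantes"}

# Clean translation: the cross-lingual pipeline finds the answer.
q_es = mt_en_to_es("Who wrote Don Quijote", lexicon)
print(monolingual_qa(q_es, kb))  # Cervantes

# An MT error on part of a named entity breaks extraction downstream,
# illustrating how MT noise degrades CL-QA precision.
bad_lexicon = dict(lexicon, don="señor")
q_bad = mt_en_to_es("Who wrote Don Quijote", bad_lexicon)
print(monolingual_qa(q_bad, kb))  # None
```

This is only a schematic of the failure mode; the paper's proposals replace or correct such MT output rather than passing it through unmodified.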