In this paper we present the experiments performed at Tokyo Institute of Technology for the CLEF2006 Multiple Language Question Answering (QA@CLEF) track. Our approach to QA centres on a non-linguistic, data-driven, statistical classification model that uses the redundancy of the web to find correct answers. For the cross-language aspect we employed publicly available web-based text translation tools to translate the question from the source language into the target language, then used the corresponding monolingual QA system to find the answers. The hypothesised correct answers were then projected back onto the appropriate closed-domain corpus. Correct and supported answer performance on the monolingual tasks was around 14% for both Spanish and French. Performance on the cross-language tasks ranged from 5% for Spanish-English to 12% for French-Spanish. Our method of projecting answers onto documents was shown not to work well: in the worst case, on the French-English task, we lost 84% of our otherwise correct answers. Ignoring the need for correct support information, exact answer accuracy increased to 29% and 21% on the Spanish and French monolingual tasks, respectively.
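The pipeline the abstract describes — translate the question, answer it with a redundancy-based monolingual QA system over web snippets, then project the hypothesised answer back onto the closed-domain corpus — can be sketched as follows. This is a minimal illustrative sketch only: the function names, the toy translation lexicon, the capitalised-token candidate heuristic, and the corpus data are all stand-ins, not the authors' actual components.

```python
from collections import Counter


def translate(question, source_lang, target_lang):
    # Stand-in for a publicly available web-based translation tool
    # (source language -> target language). Toy lexicon for illustration.
    lexicon = {
        "Quelle est la capitale de la France ?":
        "What is the capital of France?",
    }
    return lexicon.get(question, question)


def extract_candidates(snippets):
    # Redundancy-based answer extraction: count candidate strings across
    # retrieved web snippets and rank them by frequency, on the assumption
    # that the correct answer is repeated often on the web. The
    # capitalised-token heuristic here is a simplification for the sketch.
    counts = Counter()
    for snippet in snippets:
        for token in snippet.split():
            if token[0].isupper():
                counts[token] += 1
    return [answer for answer, _ in counts.most_common()]


def project_onto_corpus(answer, corpus):
    # Answer projection: look for a closed-domain document that contains
    # (and so can support) the web-derived answer string. Returns the
    # supporting document id, or None if no support is found -- the
    # failure mode the abstract reports for otherwise correct answers.
    for doc_id, text in corpus.items():
        if answer in text:
            return doc_id
    return None
```

A cross-language run would chain the three steps: translate a French question to English, extract the most redundant candidate from English web snippets, then try to find a supporting document in the target-language corpus. The heavy loss of correct answers reported in the abstract corresponds to `project_onto_corpus` returning `None` even when the candidate answer is right.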