Overview of the CLEF 2004 multilingual question answering track
CLEF'04 Proceedings of the 5th conference on Cross-Language Evaluation Forum: Multilingual Information Access for Text, Speech and Images
Question answering pilot task at CLEF 2004
CLEF'04 Proceedings of the 5th conference on Cross-Language Evaluation Forum: Multilingual Information Access for Text, Speech and Images
The key to the first CLEF with Portuguese: topics, questions and answers in CHAVE
CLEF'04 Proceedings of the 5th conference on Cross-Language Evaluation Forum: Multilingual Information Access for Text, Speech and Images
Question Answering in Restricted Domains: An Overview
Computational Linguistics
A Multilingual Framework for Searching Definitions on Web Snippets
KI '07 Proceedings of the 30th annual German conference on Advances in Artificial Intelligence
A Machine Learning Approach for an Indonesian-English Cross Language Question Answering System
IEICE - Transactions on Information and Systems
Enhancing Cross-Language Question Answering by Combining Multiple Question Translations
CICLing '07 Proceedings of the 8th International Conference on Computational Linguistics and Intelligent Text Processing
Esfinge: a question answering system in the web using the web
EACL '06 Proceedings of the Eleventh Conference of the European Chapter of the Association for Computational Linguistics: Posters & Demonstrations
Learning of graph-based question answering rules
TextGraphs-1 Proceedings of the First Workshop on Graph Based Methods for Natural Language Processing
Answering questions with an n-gram based passage retrieval engine
Journal of Intelligent Information Systems
Improving question answering by combining multiple systems via answer validation
CICLing'08 Proceedings of the 9th International Conference on Computational Linguistics and Intelligent Text Processing
Overview of the CLEF 2008 multilingual question answering track
CLEF'08 Proceedings of the 9th Cross-language evaluation forum conference on Evaluating systems for multilingual and multimodal information access
Biomedical question answering: A survey
Computer Methods and Programs in Biomedicine
FIDJI: using syntax for validating answers in multiple documents
Information Retrieval
Question answering for Portuguese: how much is needed?
SBIA'10 Proceedings of the 20th Brazilian conference on Advances in artificial intelligence
Cross-lingual slot filling from comparable corpora
BUCC '11 Proceedings of the 4th Workshop on Building and Using Comparable Corpora: Comparable Corpora and the Web
A text mining approach for definition question answering
FinTAL'06 Proceedings of the 5th international conference on Advances in Natural Language Processing
The Œdipe system at CLEF-QA 2005
CLEF'05 Proceedings of the 6th International Conference on Cross-Language Evaluation Forum: Accessing Multilingual Information Repositories
An XML-based system for Spanish question answering
CLEF'05 Proceedings of the 6th International Conference on Cross-Language Evaluation Forum: Accessing Multilingual Information Repositories
Term translation validation by retrieving bi-terms
CLEF'05 Proceedings of the 6th International Conference on Cross-Language Evaluation Forum: Accessing Multilingual Information Repositories
Priberam's question answering system for Portuguese
CLEF'05 Proceedings of the 6th International Conference on Cross-Language Evaluation Forum: Accessing Multilingual Information Repositories
AliQAn, Spanish QA system at CLEF-2005
CLEF'05 Proceedings of the 6th International Conference on Cross-Language Evaluation Forum: Accessing Multilingual Information Repositories
Question answering experiments for Finnish and French
CLEF'05 Proceedings of the 6th International Conference on Cross-Language Evaluation Forum: Accessing Multilingual Information Repositories
SPARTE, a test suite for recognising textual entailment in Spanish
CICLing'06 Proceedings of the 7th international conference on Computational Linguistics and Intelligent Text Processing
Question answering at the Cross-Language Evaluation Forum 2003–2010
Language Resources and Evaluation
Architecture and evaluation of BRUJA, a multilingual question answering system
Information Retrieval
Overview of the CLEF 2006 multilingual question answering track
CLEF'06 Proceedings of the 7th International Conference on Cross-Language Evaluation Forum: Evaluation of Multilingual and Multi-modal Information Retrieval
Overview of the answer validation exercise 2006
CLEF'06 Proceedings of the 7th International Conference on Cross-Language Evaluation Forum: Evaluation of Multilingual and Multi-modal Information Retrieval
Priberam's question answering system in a cross-language environment
CLEF'06 Proceedings of the 7th International Conference on Cross-Language Evaluation Forum: Evaluation of Multilingual and Multi-modal Information Retrieval
N-gram vs. keyword-based passage retrieval for question answering
CLEF'06 Proceedings of the 7th International Conference on Cross-Language Evaluation Forum: Evaluation of Multilingual and Multi-modal Information Retrieval
The effect of entity recognition on answer validation
CLEF'06 Proceedings of the 7th International Conference on Cross-Language Evaluation Forum: Evaluation of Multilingual and Multi-modal Information Retrieval
The general aim of the third CLEF Multilingual Question Answering Track was to set up a common, replicable evaluation framework for testing both monolingual and cross-language Question Answering (QA) systems that process queries and documents in several European languages. Nine target languages and ten source languages were used to define 8 monolingual and 73 cross-language tasks, and twenty-four groups participated in the exercise. Overall results showed a general increase in performance compared with the previous year. The best performing monolingual system, irrespective of target language, answered 64.5% of the questions correctly (in the monolingual Portuguese task), while the average of the best performances for each target language was 42.6%. The cross-language step, by contrast, entailed a considerable drop in performance. In addition to accuracy, the organisers also measured the relation between the correctness of an answer and a system's stated confidence in it, showing that the best systems did not always provide the most reliable confidence scores. We give an overview of the 2005 QA track, detail the procedure followed to build the test sets, and present a general analysis of the results.
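The two kinds of measurement mentioned in the abstract, plain accuracy and a confidence-aware score, can be illustrated with a short sketch. The run data below is hypothetical, and the K1-style confidence-weighted score shown here is only one way to relate correctness and stated confidence; the track's official measures are defined in the overview itself.

```python
# Hypothetical QA run: (judged correct?, system-stated confidence in [0, 1]).
answers = [
    (True, 0.9), (False, 0.8), (True, 0.6), (True, 0.4), (False, 0.1),
]

# Accuracy: fraction of questions answered correctly.
accuracy = sum(c for c, _ in answers) / len(answers)

# A K1-style confidence-weighted score: correct answers add their stated
# confidence, wrong answers subtract it, so overconfident errors are
# penalised. Ranges over [-1, 1]; higher is better.
k1 = sum((1 if c else -1) * conf for c, conf in answers) / len(answers)

print(f"accuracy = {accuracy:.2f}")  # accuracy = 0.60
print(f"k1       = {k1:+.2f}")       # k1       = +0.20
```

A system with 60% accuracy but well-calibrated confidence can score higher on such a measure than a slightly more accurate system that is confident when wrong, which is exactly the effect the abstract reports for the best-performing systems.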