QA@INEX aims to evaluate a complex question-answering task. In this task, the question set comprises precise factoid questions that expect short answers, as well as more complex questions that can be answered by several sentences or by aggregating text from different documents. Question answering, XML/passage retrieval, and automatic summarization are combined in order to come closer to real information needs. This paper presents the groundwork carried out in 2009 to define the tasks and a novel evaluation methodology to be used in 2010.