Question answering (QA) is the task of automatically answering a question posed in natural language. Several QA approaches currently exist, and recent evaluation results indicate that most of them are complementary: different systems perform well on different kinds of questions. This suggests that an appropriate combination of several systems should improve on their individual results. This paper focuses on that problem, namely selecting the correct answer from a given set of responses produced by different QA systems. In particular, it proposes a supervised multi-stream approach that decides on the correctness of answers based on a set of features describing: (i) the compatibility between question and answer types, (ii) the redundancy of answers across streams, and (iii) the overlap and non-overlap between the question-answer pair and the support text. Experimental results are encouraging: evaluated over a set of 190 questions in Spanish, using answers from 17 different QA systems, our multi-stream QA approach reached an estimated QA performance of 0.74, significantly outperforming both the estimated performance of the best individual system (0.53) and the result of the best traditional multi-stream QA approach (0.60).
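To make the three feature groups concrete, the following is a minimal sketch of how such features could be extracted for a candidate answer before being passed to a supervised classifier. This is an illustrative reconstruction, not the authors' implementation: the function name, the bag-of-words similarity, and the exact feature definitions are assumptions for illustration.

```python
def tokens(text):
    """Lowercased bag-of-words tokenization (illustrative assumption)."""
    return set(text.lower().split())

def answer_features(question, answer, support, expected_type,
                    answer_type, stream_answers):
    """Build the three feature groups described in the abstract.

    stream_answers: the candidate answers returned by all QA streams.
    Returns [type_match, redundancy, overlap, non_overlap].
    """
    q, a, s = tokens(question), tokens(answer), tokens(support)

    # (i) compatibility between question and answer types
    type_match = 1.0 if expected_type == answer_type else 0.0

    # (ii) redundancy: fraction of streams whose answer shares
    #      at least one token with this candidate (assumed criterion)
    similar = sum(1 for other in stream_answers if tokens(other) & a)
    redundancy = similar / max(len(stream_answers), 1)

    # (iii) overlap and non-overlap between the question-answer
    #       pair and the support text
    qa = q | a
    overlap = len(qa & s) / max(len(qa), 1)
    non_overlap = len(qa - s) / max(len(qa), 1)

    return [type_match, redundancy, overlap, non_overlap]
```

In a full system, vectors like these, computed for each candidate answer from each stream, would be fed to a trained classifier that labels the answer correct or incorrect; the highest-confidence correct answer would then be selected.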