A framework for merging and ranking of answers in DeepQA
IBM Journal of Research and Development
We describe a general result-ranking approach for multi-phase, multi-strategy information systems, applied here to the task of question answering (QA). Many information systems incorporate multiple steps, and each step or phase may incorporate multiple component algorithms to achieve acceptable robustness and overall performance. Such systems may produce and rank a large number of candidate results. Prior work includes many models that rank a particular type of information object (e.g., a retrieved document, a factoid answer) using features specific to that information type, without attempting to use non-local features (e.g., features of the upstream information source). We propose an approach that allows each phase in a system to leverage information propagated from preceding phases to inform its ranking decisions. This is accomplished through a system object graph that represents all of the objects created during system execution, the dependencies among objects (e.g., provenance), and the ranking feature values extracted for each object. We evaluate the effectiveness of the proposed ranking approach in a multi-phase question answering system built by recombining pre-existing software modules. Experimental results show that our approach significantly outperforms comparable answer-ranking models.
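The system object graph described above can be sketched as follows. This is a minimal illustrative model, not the paper's DeepQA implementation: the names (`SystemObject`, `propagated_features`, `score`) and the linear scoring function are assumptions chosen to show how provenance links let a later phase rank a candidate using features of its upstream objects.

```python
# Hypothetical sketch of a system object graph for multi-phase ranking.
# All class/function names are illustrative, not from the paper.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class SystemObject:
    """One object created during system execution (e.g., a query, passage, answer)."""
    kind: str
    features: Dict[str, float]  # local ranking features for this object
    provenance: List["SystemObject"] = field(default_factory=list)  # upstream objects

def propagated_features(obj: SystemObject) -> Dict[str, float]:
    """Merge an object's local features with features propagated from its
    upstream sources, prefixing each with the source object's kind."""
    merged = dict(obj.features)
    for src in obj.provenance:
        for name, value in propagated_features(src).items():
            merged[f"{src.kind}.{name}"] = value
    return merged

def score(obj: SystemObject, weights: Dict[str, float]) -> float:
    """Linear ranking score over local plus propagated (non-local) features."""
    return sum(weights.get(name, 0.0) * value
               for name, value in propagated_features(obj).items())

# Example: an answer candidate whose rank depends on a feature of the
# retrieved passage it came from (its provenance), not just its own features.
passage = SystemObject("passage", {"retrieval_score": 0.8})
answer = SystemObject("answer", {"type_match": 1.0}, provenance=[passage])
weights = {"type_match": 0.5, "passage.retrieval_score": 0.3}
print(round(score(answer, weights), 2))  # 0.5*1.0 + 0.3*0.8 = 0.74
```

Representing provenance explicitly, rather than flattening features at each phase, keeps the ranking model free to weight upstream evidence independently at every stage.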