A key requirement for high-performing question-answering (QA) systems is access to high-quality reference corpora from which answers to questions can be hypothesized and evaluated. However, the topic of source acquisition and engineering has received little attention to date, largely because most existing systems were developed under organized evaluation efforts that included reference corpora as part of the task specification. The task of answering Jeopardy!™ questions, on the other hand, does not come with such a well-circumscribed set of relevant resources. Therefore, it became part of the IBM Watson™ effort to develop well-defined procedures for acquiring high-quality resources that can effectively support a high-performing QA system. To this end, we developed three procedures: source acquisition, source transformation, and source expansion. Source acquisition is an iterative process in which new collections are acquired to cover salient topics identified as gaps in existing resources through principled error analysis. Source transformation refers to the process in which information is extracted from existing sources, in whole or in part, and represented in a form that the system can most easily use. Finally, source expansion attempts to increase the coverage of each known topic by adding new information, as well as lexical and syntactic variations of existing information, extracted from large external collections. In this paper, we discuss the methodology that we developed for IBM Watson to acquire, transform, and expand textual resources. We demonstrate the effectiveness of each technique through its impact on candidate recall and on end-to-end QA performance.
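The core idea of source expansion can be illustrated with a minimal sketch. The function names, the overlap-based relevance score, and the threshold below are hypothetical simplifications, not the statistical model actually used in Watson: given a seed document for a topic, candidate passages from an external collection are scored for topical relevance, and the best-scoring passages are appended to the source.

```python
# Hypothetical sketch of source expansion: score candidate passages from an
# external collection by lexical overlap with a seed document, then append
# the most relevant ones to increase the topic's coverage.
import re


def tokenize(text):
    """Lowercase and split text into alphanumeric tokens."""
    return re.findall(r"[a-z0-9]+", text.lower())


def overlap_score(seed_tokens, passage):
    """Fraction of passage tokens that also occur in the seed document."""
    tokens = tokenize(passage)
    if not tokens:
        return 0.0
    seed = set(seed_tokens)
    return sum(1 for t in tokens if t in seed) / len(tokens)


def expand_source(seed_text, candidate_passages, threshold=0.5, top_k=2):
    """Append up to top_k passages whose relevance score clears the threshold."""
    seed_tokens = tokenize(seed_text)
    scored = [(overlap_score(seed_tokens, p), p) for p in candidate_passages]
    kept = [p for s, p in sorted(scored, reverse=True) if s >= threshold][:top_k]
    return seed_text + "\n" + "\n".join(kept)


seed = "IBM Watson is a question answering system built on DeepQA."
candidates = [
    "Watson is a question answering computer system developed by IBM.",
    "The weather in Paris is mild in spring.",
]
expanded = expand_source(seed, candidates)
```

In this toy run, the paraphrase of the seed sentence is retained (it supplies a lexical variation of existing information), while the off-topic passage is discarded. A real system would replace the overlap score with a trained statistical relevance model, as the paper describes.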