WordNet: a lexical database for English
Communications of the ACM
Question-answering by predictive annotation
SIGIR '00 Proceedings of the 23rd annual international ACM SIGIR conference on Research and development in information retrieval
MODULA-2: A Software Development Approach
The structure of the Merriam-Webster pocket dictionary
Extracting semantic hierarchies from a large on-line dictionary
ACL '85 Proceedings of the 23rd annual meeting on Association for Computational Linguistics
Automatic acquisition of hyponyms from large text corpora
COLING '92 Proceedings of the 14th conference on Computational linguistics - Volume 2
Yago: a core of semantic knowledge
Proceedings of the 16th international conference on World Wide Web
A comparison of statistical significance tests for information retrieval evaluation
Proceedings of the sixteenth ACM conference on Conference on information and knowledge management
DBpedia - A crystallization point for the Web of Data
Web Semantics: Science, Services and Agents on the World Wide Web
Answer type validation in question answering systems
RIAO '10 Adaptivity, Personalization and Fusion of Heterogeneous Information
A comparison of hard filters and soft evidence for answer typing in Watson
ISWC'12 Proceedings of the 11th international conference on The Semantic Web - Volume Part II
An extensible language interface for robot manipulation
AGI'12 Proceedings of the 5th international conference on Artificial General Intelligence
Introduction to "This is Watson"
IBM Journal of Research and Development
Question analysis: how Watson reads a clue
IBM Journal of Research and Development
Textual resource acquisition and engineering
IBM Journal of Research and Development
Automatic knowledge extraction from documents
IBM Journal of Research and Development
Finding needles in the haystack: search and candidate generation
IBM Journal of Research and Development
Textual evidence gathering and analysis
IBM Journal of Research and Development
Relation extraction and scoring in DeepQA
IBM Journal of Research and Development
Structured data and inference in DeepQA
IBM Journal of Research and Development
Special questions and techniques
IBM Journal of Research and Development
Identifying implicit relationships
IBM Journal of Research and Development
Fact-based question decomposition in DeepQA
IBM Journal of Research and Development
A framework for merging and ranking of answers in DeepQA
IBM Journal of Research and Development
Learning joint query interpretation and response ranking
Proceedings of the 22nd international conference on World Wide Web
Many questions explicitly indicate the type of answer required. One popular approach to answering those questions is to develop recognizers to identify instances of common answer types (e.g., countries, animals, and food) and consider only answers on those lists. Such a strategy is poorly suited to answering questions from the Jeopardy!™ television quiz show. Jeopardy! questions have an extremely broad range of types of answers, and the most frequently occurring types cover only a small fraction of all answers. We present an alternative approach to dealing with answer types. We generate candidate answers without regard to type, and for each candidate, we employ a variety of sources and strategies to judge whether the candidate has the desired type. These sources and strategies provide a set of type coercion scores for each candidate answer. We use these scores to give preference to answers with more evidence of having the right type. Our question-answering system is significantly more accurate with type coercion than it is without type coercion; these components have a combined impact of nearly 5% on the accuracy of the IBM Watson™ question-answering system.
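The abstract contrasts hard type filters (discard any candidate not on a recognized type list) with soft type coercion (keep every candidate and let type evidence adjust its rank). A minimal sketch of the soft approach is below; the source names, scores, and weights are illustrative assumptions, not the actual Watson components.

```python
# Sketch of soft answer-type coercion: each candidate answer receives
# type-coercion (TyCor) scores from several independent sources, and a
# weighted combination of those scores is added to the candidate's base
# score. No candidate is discarded outright; weak type evidence only
# lowers its rank. Source names and weights here are hypothetical.

def combine_tycor_scores(scores, weights):
    """Weighted sum of per-source type-coercion scores (illustrative)."""
    return sum(weights.get(src, 0.0) * s for src, s in scores.items())

def rank_candidates(candidates, weights):
    """Rank candidates by base score plus combined type evidence.

    candidates: list of (answer, base_score, {source: tycor_score}).
    Unlike a hard filter, a poorly typed candidate stays in the list
    and can still win if its other evidence is strong enough.
    """
    scored = [
        (answer, base + combine_tycor_scores(tycor, weights))
        for answer, base, tycor in candidates
    ]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

# Hypothetical example: the question expects a country. "Sydney" has a
# slightly higher base score, but its type evidence is weak, so the
# soft-evidence combination prefers "Australia".
weights = {"wordnet": 0.4, "yago": 0.4, "ner": 0.2}
candidates = [
    ("Australia", 0.50, {"wordnet": 0.9, "yago": 0.95, "ner": 1.0}),
    ("Sydney",    0.55, {"wordnet": 0.1, "yago": 0.05, "ner": 0.2}),
]
ranked = rank_candidates(candidates, weights)
```

The key design point, per the abstract, is that type evidence is a preference rather than a gate: a hard filter would have silently dropped any answer outside its type lists, which fails on the long tail of Jeopardy! answer types.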