The ability to evaluate intermediate results in a Question Answering (QA) system, which we call introspection, is necessary in architectures based on planning or on processing loops. In particular, it is needed to determine whether an earlier phase must be retried, or whether the response "No Answer" must be offered. We examine an introspection task: performing a cursory evaluation of the search engine output in a QA system. We cast this task as a concept-learning problem and evaluate two classifiers that use features based on score progression in the ranked list returned by the search engine and on candidate answer types. Our experiments showed promising results, achieving a 25% relative improvement over a majority-class baseline on unseen data.
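To make the feature design concrete, the sketch below illustrates the kind of introspection check the abstract describes: summary statistics over the score progression in the search engine's ranked list, combined with a candidate-answer-type flag, feeding a simple decision rule. The feature names, the threshold rule, and all function names are illustrative assumptions, not the paper's actual classifiers.

```python
# Hypothetical sketch of introspection features over a search engine's
# ranked result list. The features and the threshold rule are assumptions
# for illustration; the paper trains learned classifiers on such features.

def score_features(scores):
    """Summarize how retrieval scores progress down the ranked list."""
    top = scores[0]
    # A sharp drop after rank 1 often signals one clearly best candidate.
    drop = scores[0] - scores[1] if len(scores) > 1 else 0.0
    mean = sum(scores) / len(scores)
    # Average per-rank decline across the whole list.
    slope = (scores[-1] - scores[0]) / (len(scores) - 1) if len(scores) > 1 else 0.0
    return {"top": top, "drop": drop, "mean": mean, "slope": slope}

def looks_answerable(scores, has_expected_answer_type, drop_threshold=0.1):
    """Cursory introspection check: a pronounced score drop after rank 1,
    together with a candidate of the expected answer type, suggests the
    list is worth passing to answer extraction; otherwise the system
    might retry an earlier phase or return "No Answer"."""
    feats = score_features(scores)
    return has_expected_answer_type and feats["drop"] >= drop_threshold
```

For example, a list with scores `[0.9, 0.5, 0.4]` and a type-matching candidate would pass this check, while a flat list like `[0.5, 0.5, 0.5]` would not, regardless of answer type.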