Question Answering (QA) systems are often built modularly, with a text retrieval component feeding into an answer extraction component. Conventional wisdom holds that the higher the quality of the retrieval results fed to the answer extraction module, the better the extracted answers, and hence the overall system accuracy. This turns out to be a poor assumption, because text retrieval and answer extraction are tightly coupled: improvements in retrieval quality can be lost at the answer extraction module, which cannot necessarily recognize the additional answer candidates that improved retrieval provides. Going forward, improving accuracy on the QA task will require greater coordination between the text retrieval and answer extraction modules.
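The coupling described above can be illustrated with a minimal sketch (all function names and the toy corpus are illustrative assumptions, not the paper's method): retrieval surfaces more relevant passages, but a rigid extractor that only recognizes answers in its own vocabulary gains nothing from them.

```python
import string

def retrieve(question, corpus, top_k):
    # Toy retrieval: rank passages by word overlap with the question.
    def terms(text):
        cleaned = text.lower().translate(str.maketrans("", "", string.punctuation))
        return set(cleaned.split())
    q = terms(question)
    return sorted(corpus, key=lambda p: -len(q & terms(p)))[:top_k]

def extract(passages, recognizable):
    # Toy extractor: returns the first token it can recognize as an
    # answer candidate; anything outside `recognizable` is invisible
    # to it, so extra passages from better retrieval may go unused.
    for passage in passages:
        for token in passage.split():
            word = token.strip(".,")
            if word in recognizable:
                return word
    return None

corpus = [
    "Paris is the capital of France.",
    "Lutetia was an early name for the city of Paris.",
]
question = "What city is the capital of France?"

# Retrieving one passage vs. two: the second, also-relevant passage
# adds no new recognizable candidate, so system output is unchanged.
weak = extract(retrieve(question, corpus, top_k=1), {"Paris"})
strong = extract(retrieve(question, corpus, top_k=2), {"Paris"})
```

In this sketch `weak` and `strong` are identical: doubling the retrieval budget improves the evidence available, but the extraction module's fixed candidate recognizer caps the end-to-end accuracy, which is the coupling the abstract warns about.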