Question-answering by predictive annotation. In Proceedings of the 23rd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR '00).
Building a question answering test collection. In Proceedings of the 23rd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR '00).
Variations in relevance judgments and the measurement of retrieval effectiveness. Information Processing and Management: an International Journal.
The TREC question answering track. Natural Language Engineering.
The role of lexico-semantic feedback in open-domain textual question-answering. In Proceedings of the 39th Annual Meeting of the Association for Computational Linguistics (ACL '01).
Question answering using maximum entropy components. In Proceedings of the Second Meeting of the North American Chapter of the Association for Computational Linguistics (NAACL '01).
Traditional text retrieval systems return a ranked list of documents in response to a user's request. While a ranked list of documents is sometimes an appropriate response, frequently it is not: usually it would be better for the system to provide the answer itself rather than require the user to search for it within a set of documents. The Text REtrieval Conference (TREC) is sponsoring a question answering "track" to foster research on the problem of retrieving answers rather than document lists.

TREC is a workshop series sponsored by the National Institute of Standards and Technology and the U.S. Department of Defense [7]. The purpose of the conference series is to encourage research on text retrieval for realistic applications by providing large test collections, uniform scoring procedures, and a forum for organizations interested in comparing results. The conference has focused primarily on the traditional IR problem of retrieving a ranked list of documents in response to a statement of information need, but it has also included other tasks, called tracks, that focus on new areas or particularly difficult aspects of information retrieval. A question answering track was introduced in TREC-8 (1999). The track has generated widespread interest in the QA problem [2, 3, 4] and has documented significant improvements in question answering system effectiveness over its two-year history.

This paper provides a brief summary of the findings of the TREC question answering track to date and discusses the future directions of the track. The paper is extracted from a fuller description of the track given in "The TREC Question Answering Track" [8]; complete details can be found in the TREC proceedings.
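As context for the "uniform scoring procedures" mentioned above: in its first two years the QA track scored runs by mean reciprocal rank (MRR), where a system returns up to five ranked answer strings per question and a question's score is the reciprocal of the rank of the first correct string, as described in the fuller track report [8]. The following is a minimal sketch of that computation; the function and variable names are illustrative, not the track's official scoring code.

def reciprocal_rank(ranked_answers, is_correct):
    """Score one question: 1/rank of the first correct answer, else 0."""
    # Only the top five responses count under the track's rules.
    for rank, answer in enumerate(ranked_answers[:5], start=1):
        if is_correct(answer):
            return 1.0 / rank
    return 0.0

def mean_reciprocal_rank(runs, judgments):
    """Average the per-question reciprocal ranks over all questions.

    runs:       question id -> ranked list of answer strings (best first)
    judgments:  question id -> predicate deciding whether a string is correct
    """
    scores = [reciprocal_rank(answers, judgments[qid])
              for qid, answers in runs.items()]
    return sum(scores) / len(scores)

# Example: the first correct answer sits at rank 2, so the question scores 0.5.
runs = {"Q1": ["1957", "1958", "1959"]}
judgments = {"Q1": lambda s: s == "1958"}
print(mean_reciprocal_rank(runs, judgments))  # 0.5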