The INEX Question Answering track (QA@INEX) aims to evaluate a complex question-answering task over Wikipedia. The set of questions comprises factoid, precise questions that expect short answers, as well as more complex questions that can be answered by several sentences or by an aggregation of texts from different documents. Long answers were evaluated by the Kullback-Leibler (KL) divergence between n-gram distributions, which allowed summarization systems to participate. Most of them generated a readable extract of sentences from documents ranked highly by a state-of-the-art document retrieval engine. Participants also tested several methods of question disambiguation. Evaluation was carried out on a pool of real questions from OverBlog and Yahoo! Answers. Results tend to show that the baseline restricted-focused IR system minimizes KL divergence but sacrifices readability, whereas summarization systems tend to use longer, standalone sentences, improving readability at the cost of a higher KL divergence.
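The KL-divergence evaluation described above can be sketched as follows. This is a minimal illustration, not the track's official scoring tool: it assumes unigram distributions, whitespace tokenization, and a simple epsilon floor for n-grams absent from the answer, whereas the actual QA@INEX evaluation may use other n-gram orders and smoothing.

```python
from collections import Counter
import math

def ngram_dist(text, n=1):
    """Normalized n-gram frequency distribution of a text (unigrams by default)."""
    tokens = text.lower().split()
    grams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    counts = Counter(grams)
    total = sum(counts.values())
    return {g: c / total for g, c in counts.items()}

def kl_divergence(p, q, epsilon=1e-9):
    """KL(p || q); n-grams missing from q are floored at epsilon,
    so unseen reference n-grams are heavily penalized."""
    return sum(pv * math.log(pv / q.get(g, epsilon)) for g, pv in p.items())

# Hypothetical reference (pooled source texts) and system answer:
reference = "the track evaluates complex question answering over wikipedia articles"
answer = "complex question answering is evaluated over wikipedia"
score = kl_divergence(ngram_dist(reference), ngram_dist(answer))
```

A lower score indicates that the answer's n-gram distribution stays closer to the reference distribution, which is why extract-based focused IR output scores well on this measure even when its readability is poor.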