Research on relevance feedback (RFB) in information retrieval (IR) has yielded mixed results. Success with RFB seems to depend on the searcher's willingness to provide feedback and on their ability to identify relevant documents or query keys. This paper simulates a range of user scenarios that vary the amount and quality of RFB. In addition, we experiment with query-biased sentence extraction for query reformulation. The baselines are initial no-feedback queries and queries based on pseudo-relevance feedback. The core question is: under which conditions is RFB based on sentence extraction successful? The answer depends on the user's behavior, on how the feedback query is formulated, and on the evaluation methods. A small amount of feedback from a short browsing window improves the final ranking the most; longer browsing yields more feedback and better queries, but it also consumes the available relevant documents.
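To illustrate the kind of reformulation described above, the following is a minimal sketch (not the paper's actual implementation) of query-biased sentence extraction for feedback query expansion: sentences in user-marked feedback documents are ranked by their overlap with the query, and frequent terms from the top-ranked sentences are appended to the query. The function names, parameters, and the simple overlap scoring are illustrative assumptions; a real system would also apply stemming, stopword removal, and principled term weighting.

```python
import re
from collections import Counter

def tokenize(text):
    # Lowercase word tokens; a real system would also stem and drop stopwords.
    return re.findall(r"[a-z]+", text.lower())

def query_biased_sentences(query, document, k=2):
    """Rank a document's sentences by query-term overlap and return the
    top-k sentences (query-biased extraction, simplified)."""
    q_terms = set(tokenize(query))
    sentences = re.split(r"(?<=[.!?])\s+", document.strip())
    return sorted(
        sentences,
        key=lambda s: len(q_terms & set(tokenize(s))),
        reverse=True,
    )[:k]

def reformulate_query(query, feedback_docs, k=2, n_terms=5):
    """Expand the query with the most frequent terms drawn from the
    query-biased sentences of the user's feedback documents."""
    q_terms = set(tokenize(query))
    counts = Counter()
    for doc in feedback_docs:
        for sent in query_biased_sentences(query, doc, k):
            counts.update(t for t in tokenize(sent) if t not in q_terms)
    expansion = [t for t, _ in counts.most_common(n_terms)]
    return query + " " + " ".join(expansion)
```

Varying how many feedback documents are passed in, and how many sentences (`k`) are extracted from each, corresponds to the browsing-window scenarios simulated in the paper: a short window supplies few but focused expansion terms, while a long window supplies more terms at the cost of exhausting the relevant documents.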