In this paper, we focus on the effect of graded relevance on the results of interactive information retrieval (IR) experiments based on assigned search tasks in a test collection. A group of 26 subjects searched four Text REtrieval Conference (TREC) topics using automatic and interactive query expansion based on relevance feedback. The TREC- and user-suggested pools of relevant documents were reassessed on a four-level relevance scale. The results show that the users could identify nearly all highly relevant documents but only about half of the marginal ones, and that they also selected a fair number of irrelevant documents for query expansion. The findings suggest that the effectiveness of query expansion is closely related to the searchers' success in retrieving and identifying highly relevant documents for feedback. The implications of the results for interpreting the findings of past experiments with liberal relevance thresholds are also discussed.
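To make the four-level scale concrete, here is a minimal illustrative sketch (not code from the paper) of how graded relevance judgments can feed an evaluation measure: cumulated gain, one common family of graded-relevance metrics. The document identifiers, relevance levels, and function name are all hypothetical.

```python
# Illustrative sketch: scoring a ranked result list against a
# four-level relevance scale (0 = irrelevant ... 3 = highly relevant)
# using cumulated gain. All identifiers below are made up for the example.

def cumulated_gain(ranked_doc_ids, judgments):
    """Return the cumulated gain at each rank position.

    judgments maps doc_id -> relevance level on the 0-3 scale;
    unjudged documents are treated as irrelevant (level 0).
    """
    total = 0
    gains = []
    for doc_id in ranked_doc_ids:
        total += judgments.get(doc_id, 0)  # add this document's graded gain
        gains.append(total)
    return gains

# Hypothetical ranking: one highly relevant (3), one unjudged,
# and one marginally relevant (1) document.
judgments = {"d1": 3, "d3": 1}
print(cumulated_gain(["d1", "d2", "d3"], judgments))  # [3, 3, 4]
```

Under a binary (liberal) threshold, the marginal document `d3` would count as much as the highly relevant `d1`; the graded scale lets the measure reward rankings that place highly relevant documents first, which is the distinction the experiment turns on.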