We propose a method for evaluating relevance feedback by simulating real users. The user simulation applies a model that defines the user's relevance threshold for accepting individual documents as feedback in a graded relevance environment, the user's patience in browsing the initial list of retrieved documents, and his or her effort in providing the feedback. We evaluate the results using cumulated gain-based evaluation combined with freezing all documents seen by the user, which simulates the point of view of a user browsing the documents during the retrieval process. We demonstrate the method with a simulation in a laboratory setting and present the "branching" curve sets characteristic of this evaluation method. Both the averaged and topic-by-topic results indicate that, when the freezing approach is adopted, giving feedback of mixed quality makes sense in various usage scenarios, even though the modeled users prefer finding the most relevant documents in particular.
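The simulation and evaluation loop described above can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the function names, the toy relevance grades, and the assumed reranked list are all hypothetical, and the feedback-driven reranking itself is left abstract.

```python
# Sketch of the simulated-user evaluation loop (illustrative, not the
# paper's actual code). A modeled user browses the top of the initial
# ranking up to a patience limit, accepts documents at or above a graded
# relevance threshold as feedback, the already-seen documents are frozen
# at the top of the final list, and the result is scored with cumulated
# gain (CG).

def simulate_feedback(ranked_docs, relevance, threshold=2, patience=3):
    """Browse up to `patience` top documents; accept those whose graded
    relevance meets the user's `threshold` as positive feedback."""
    seen = ranked_docs[:patience]
    feedback = [d for d in seen if relevance.get(d, 0) >= threshold]
    return seen, feedback

def freeze_and_rerank(seen, reranked):
    """Freeze the documents already seen at the top of the list, then
    append the feedback-reranked remainder with seen documents removed."""
    return list(seen) + [d for d in reranked if d not in seen]

def cumulated_gain(ranked_docs, relevance, cutoff=20):
    """Cumulated gain at each rank up to `cutoff`: the running sum of
    graded relevance scores down the ranked list."""
    cg, total = [], 0
    for d in ranked_docs[:cutoff]:
        total += relevance.get(d, 0)
        cg.append(total)
    return cg

# Toy data: graded relevance on a 0-3 scale (assumed for illustration).
relevance = {"d1": 3, "d2": 0, "d3": 1, "d4": 2, "d5": 0, "d6": 3}
initial = ["d2", "d1", "d5", "d3", "d4", "d6"]

seen, fb = simulate_feedback(initial, relevance, threshold=2, patience=3)
# Assume the feedback query produces this reordering of the collection:
reranked = ["d6", "d4", "d3", "d1", "d2", "d5"]
final = freeze_and_rerank(seen, reranked)

print(final)                             # frozen prefix, then reranked rest
print(cumulated_gain(final, relevance))  # CG curve over the final list
```

Varying the threshold and patience parameters across runs yields one CG curve per modeled user, which is what produces the "branching" curve sets mentioned in the abstract.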