Term-weighting approaches in automatic text retrieval
Information Processing and Management: an International Journal
Automatic text processing: the transformation, analysis, and retrieval of information by computer
On term selection for query expansion
Journal of Documentation
SIGIR '92 Proceedings of the 15th annual international ACM SIGIR conference on Research and development in information retrieval
Relevance feedback and inference networks
SIGIR '93 Proceedings of the 16th annual international ACM SIGIR conference on Research and development in information retrieval
A user-centred evaluation of ranking algorithms for interactive query expansion
SIGIR '93 Proceedings of the 16th annual international ACM SIGIR conference on Research and development in information retrieval
The effect of adding relevance information in a relevance feedback environment
SIGIR '94 Proceedings of the 17th annual international ACM SIGIR conference on Research and development in information retrieval
Pivoted document length normalization
SIGIR '96 Proceedings of the 19th annual international ACM SIGIR conference on Research and development in information retrieval
Incremental relevance feedback for information filtering
SIGIR '96 Proceedings of the 19th annual international ACM SIGIR conference on Research and development in information retrieval
The effect of accessing nonmatching documents on relevance feedback
ACM Transactions on Information Systems (TOIS)
How reliable are the results of large-scale information retrieval experiments?
Proceedings of the 21st annual international ACM SIGIR conference on Research and development in information retrieval
Variations in relevance judgments and the measurement of retrieval effectiveness
Proceedings of the 21st annual international ACM SIGIR conference on Research and development in information retrieval
A Statistical Model for Relevance Feedback in Information Retrieval
Journal of the ACM (JACM)
A probabilistic model of information retrieval: development and comparative experiments
Information Processing and Management: an International Journal
A probabilistic model of information retrieval: development and comparative experiments Part 2
Information Processing and Management: an International Journal
An information-theoretic approach to automatic query expansion
ACM Transactions on Information Systems (TOIS)
The effect of pool depth on system evaluation in TREC
Journal of the American Society for Information Science and Technology
Selecting expansion terms in automatic query expansion
Proceedings of the 24th annual international ACM SIGIR conference on Research and development in information retrieval
The effect of topic set size on retrieval experiment error
SIGIR '02 Proceedings of the 25th annual international ACM SIGIR conference on Research and development in information retrieval
Improving retrieval feedback with multiple term-ranking function combination
ACM Transactions on Information Systems (TOIS)
A test of genetic algorithms in relevance feedback
Information Processing and Management: an International Journal
Genetic algorithms in relevance feedback: a second test and new contributions
Information Processing and Management: an International Journal
Enhanced web document retrieval using automatic query expansion
Journal of the American Society for Information Science and Technology
A survey on the use of relevance feedback for information access systems
The Knowledge Engineering Review
Tuning before feedback: combining ranking discovery and blind feedback for robust retrieval
Proceedings of the 27th annual international ACM SIGIR conference on Research and development in information retrieval
Comparison of using passages and documents for blind relevance feedback in information retrieval
Proceedings of the 27th annual international ACM SIGIR conference on Research and development in information retrieval
Optimization of some factors affecting the performance of query expansion
Information Processing and Management: an International Journal
The SMART Retrieval System—Experiments in Automatic Document Processing
Adapting pivoted document-length normalization for query size: Experiments in Chinese and English
ACM Transactions on Asian Language Information Processing (TALIP)
A retrospective study of a hybrid document-context based retrieval model
Information Processing and Management: an International Journal
On rank-based effectiveness measures and optimization
Information Retrieval
Building a framework for the probability ranking principle by a family of expected weighted rank
ACM Transactions on Information Systems (TOIS)
A Survey of Automatic Query Expansion in Information Retrieval
ACM Computing Surveys (CSUR)
This paper investigates how to formulate effective queries automatically using full or partial relevance information (i.e., the terms that occur in relevant documents) in the context of relevance feedback (RF). The effects of adding relevance information in the RF environment are studied via controlled experiments whose conditions are formalized into a set of assumptions; these assumptions form the framework of our study, called the idealized relevance feedback (IRF) framework. In our IRF setting, we confirm the previous findings of relevance feedback studies. In addition, our experiments show that better retrieval effectiveness can be obtained when (i) term weights are normalized by their ranks, (ii) weighted terms are selected from the top K retrieved documents, (iii) the terms of the initial title queries are retained, and (iv) the best query size for each topic is used instead of the average best query size; together, these choices yield up to five percentage points of improvement in mean average precision (MAP). We also reach a new level of retrieval effectiveness of about 55-60% MAP, compared with the 40+% reported in previous work. This new level of effectiveness was found to carry over to a TREC ad hoc test collection containing about double the number of documents of the TREC-3 test collection used in previous work.
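The query-formulation steps the abstract lists (rank-normalized term weights, term selection from the top K relevant documents, retention of the initial title-query terms, and a per-topic query size) can be sketched as follows. This is a minimal illustration, not the paper's actual weighting formula: the function name, the simple frequency-based term scores, and the 1/rank normalization are all assumptions made for the example.

```python
from collections import Counter

def expand_query(initial_query, relevant_docs, k=10, query_size=20):
    """Hypothetical sketch of RF query expansion: score candidate
    terms from the top-k relevant documents, normalize each term's
    weight by its rank, keep the initial title-query terms, and
    truncate to a per-topic query size."""
    # Aggregate raw term frequencies over the top-k relevant documents
    # (a stand-in for whatever term-scoring function is actually used).
    counts = Counter()
    for doc in relevant_docs[:k]:
        counts.update(doc.split())
    # Rank terms by raw weight, then divide each weight by its rank:
    # a simple form of rank-based weight normalization (finding i).
    ranked = counts.most_common()
    expansion = {term: w / (rank + 1) for rank, (term, w) in enumerate(ranked)}
    # Always keep the initial title-query terms (finding iii).
    for term in initial_query.split():
        expansion[term] = expansion.get(term, 0.0) + 1.0
    # Keep only the best query_size terms; finding (iv) suggests
    # tuning this size per topic rather than using one average size.
    best = sorted(expansion.items(), key=lambda kv: kv[1], reverse=True)
    return dict(best[:query_size])
```

In this sketch the per-topic tuning of finding (iv) corresponds to choosing `query_size` separately for each topic, e.g. by sweeping it over a validation run and keeping the value that maximizes average precision.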