The classification of blind relevance feedback (BRF) terms described in this paper aims to increase precision or recall by determining which terms decrease, increase, or leave unchanged the corresponding information retrieval (IR) performance metric. Classification and IR experiments are performed on the German and English GIRT data using the BM25 retrieval model. Several basic memory-based classifiers are trained on different feature sets, grouping together features from different query expansion (QE) approaches. Combined classifiers use the results of the basic classifiers and correctness predictions as features. For term classification, the best combined classifiers improve on the best basic classifiers by 22.9% in precision and 5.8% in recall for German, and by 26.4% and 1.9% for English. IR experiments based on this term classification have also been performed: filtering out different types of BRF terms shows that selecting only feedback terms predicted to increase precision improves average precision significantly compared to experiments without BRF. MAP improves by 19.8% over the best standard BRF experiment (11% for German). BRF term classification also increases the number of relevant retrieved documents, geometric MAP, and P@10 in comparison to standard BRF. Experiments based on an optimal classification show that there is potential for improving IR effectiveness even further.
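The pipeline the abstract describes — generate candidate BRF expansion terms, classify each term's predicted effect on the target metric, and expand the query only with terms predicted to increase precision — can be sketched roughly as follows. This is a toy illustration, not the paper's system: the dictionary lookup stands in for the memory-based classifiers, and the BM25 variant, parameters, and mini-corpus are assumptions for demonstration.

```python
import math
from collections import Counter

def bm25(query, doc, docs, k1=1.2, b=0.75):
    """Score one tokenized doc against a query with a standard BM25 variant."""
    N = len(docs)
    avgdl = sum(len(d) for d in docs) / N
    tf = Counter(doc)
    score = 0.0
    for t in query:
        df = sum(1 for d in docs if t in d)
        if df == 0 or tf[t] == 0:
            continue
        idf = math.log(1 + (N - df + 0.5) / (df + 0.5))
        norm = tf[t] * (k1 + 1) / (tf[t] + k1 * (1 - b + b * len(doc) / avgdl))
        score += idf * norm
    return score

# Hypothetical stand-in for the paper's memory-based term classifiers:
# each candidate BRF term is mapped to its predicted effect on precision.
PREDICTED_EFFECT = {
    "terms": "increase",
    "expansion": "increase",
    "text": "neutral",
    "cooking": "decrease",
}

def filtered_expansion(query, candidates):
    """Keep only candidate feedback terms predicted to increase precision."""
    keep = [t for t in candidates if PREDICTED_EFFECT.get(t) == "increase"]
    return query + keep

# Illustrative mini-corpus and query (assumptions, not the GIRT data).
docs = [
    ["relevance", "feedback", "improves", "terms"],
    ["query", "expansion", "uses", "terms"],
    ["unrelated", "text", "about", "cooking"],
]
query = ["relevance", "feedback"]
expanded = filtered_expansion(query, ["terms", "expansion", "text", "cooking"])
scores = [bm25(expanded, d, docs) for d in docs]
```

Filtering drops the "decrease" and "neutral" candidates before re-retrieval, which is the step the paper finds to improve MAP over standard BRF (where all top-ranked feedback terms would be added unconditionally).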