Towards interactive query expansion
SIGIR '88 Proceedings of the 11th annual international ACM SIGIR conference on Research and development in information retrieval
Query expansion using local and global document analysis
SIGIR '96 Proceedings of the 19th annual international ACM SIGIR conference on Research and development in information retrieval
Relevance based language models
SIGIR '01 Proceedings of the 24th annual international ACM SIGIR conference on Research and development in information retrieval
Using terminological feedback for web search refinement: a log-based study
SIGIR '03 Proceedings of the 26th annual international ACM SIGIR conference on Research and development in information retrieval
Re-examining the potential effectiveness of interactive query expansion
SIGIR '03 Proceedings of the 26th annual international ACM SIGIR conference on Research and development in information retrieval
The NRRC reliable information access (RIA) workshop
SIGIR '04 Proceedings of the 27th annual international ACM SIGIR conference on Research and development in information retrieval
A framework for selective query expansion
Proceedings of the thirteenth ACM international conference on Information and knowledge management
Proceedings of the sixteenth ACM international conference on Information and knowledge management
An information-pattern-based approach to novelty detection
Information Processing and Management: an International Journal
Effective and efficient user interaction for long queries
SIGIR '08 Proceedings of the 31st annual international ACM SIGIR conference on Research and development in information retrieval
Dynamic active probing of helpdesk databases
Proceedings of the VLDB Endowment
We explore interactive methods to further improve the performance of pseudo-relevance feedback. Studies \cite{ria} suggest that new methods for tackling difficult queries are required. Our approach is to gather more information about the query from the user by asking her simple questions. The equally simple responses are used to modify the original query. Our experiments using the TREC Robust Track queries show that we can obtain a significant improvement in mean average precision, averaging around 5% over pseudo-relevance feedback. This improvement is also spread across more queries than with ordinary pseudo-relevance feedback, as suggested by geometric mean average precision.
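To see why geometric mean average precision (GMAP) indicates that gains are spread across more queries, the sketch below contrasts it with ordinary MAP using the standard definitions (GMAP is the geometric mean of per-query average precision, with scores floored at a small epsilon as in the TREC Robust Track). The two example score lists are hypothetical, not from the paper's experiments.

```python
import math

def mean_average_precision(ap_scores):
    """Arithmetic mean of per-query average precision (MAP)."""
    return sum(ap_scores) / len(ap_scores)

def geometric_map(ap_scores, eps=1e-5):
    """Geometric mean of per-query AP (GMAP), as in the TREC Robust Track.
    Scores are floored at eps so a single zero does not collapse the mean;
    this makes GMAP far more sensitive to the worst-performing queries."""
    log_sum = sum(math.log(max(ap, eps)) for ap in ap_scores)
    return math.exp(log_sum / len(ap_scores))

# Two hypothetical systems with identical MAP but different spread:
uniform = [0.30, 0.30, 0.30, 0.30]  # gains spread evenly across queries
skewed  = [0.55, 0.55, 0.05, 0.05]  # gains concentrated on a few queries

print(mean_average_precision(uniform))  # 0.30
print(mean_average_precision(skewed))   # 0.30 -- MAP cannot tell them apart
print(round(geometric_map(uniform), 3))  # 0.3
print(round(geometric_map(skewed), 3))   # 0.166 -- GMAP penalizes the skew
```

Because GMAP multiplies per-query scores rather than summing them, a method that improves many queries a little scores higher than one that improves a few queries a lot, which is the behavior the abstract's last sentence appeals to.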