Towards interactive query expansion
SIGIR '88 Proceedings of the 11th annual international ACM SIGIR conference on Research and development in information retrieval
Experiments with query acquisition and use in document retrieval systems
SIGIR '90 Proceedings of the 13th annual international ACM SIGIR conference on Research and development in information retrieval
Term relevance feedback and query expansion: relation to design
SIGIR '94 Proceedings of the 17th annual international ACM SIGIR conference on Research and development in information retrieval
A case for interaction: a study of interactive information retrieval behavior and effectiveness
Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
The potential and actual effectiveness of interactive query expansion
Proceedings of the 20th annual international ACM SIGIR conference on Research and development in information retrieval
Interactive query expansion: a user-based evaluation in a relevance feedback environment
Journal of the American Society for Information Science
Information Processing and Management: an International Journal - Special issue on interactivity at the text retrieval conference (TREC)
Using clustering and classification approaches in interactive retrieval
Information Processing and Management: an International Journal - Special issue on interactivity at the text retrieval conference (TREC)
TREC interactive with Cheshire II
Information Processing and Management: an International Journal - Special issue on interactivity at the text retrieval conference (TREC)
Hierarchical presentation of expansion terms
Proceedings of the 2002 ACM symposium on Applied computing
Using terminological feedback for web search refinement: a log-based study
Proceedings of the 26th annual international ACM SIGIR conference on Research and development in information retrieval
Query length in interactive information retrieval
Proceedings of the 26th annual international ACM SIGIR conference on Research and development in information retrieval
Re-examining the potential effectiveness of interactive query expansion
Proceedings of the 26th annual international ACM SIGIR conference on Research and development in information retrieval
Journal of the American Society for Information Science and Technology
Evaluation of the real and perceived value of automatic and interactive query expansion
Proceedings of the 27th annual international ACM SIGIR conference on Research and development in information retrieval
The loquacious user: a document-independent source of terms for query expansion
Proceedings of the 28th annual international ACM SIGIR conference on Research and development in information retrieval
Web Search: Public Searching of the Web
Evaluating sources of query expansion terms
SIGIR '06 Proceedings of the 29th annual international ACM SIGIR conference on Research and development in information retrieval
Term feedback for information retrieval with language models
SIGIR '07 Proceedings of the 30th annual international ACM SIGIR conference on Research and development in information retrieval
Making mind and machine meet: a study of combining cognitive and algorithmic relevance feedback
SIGIR '07 Proceedings of the 30th annual international ACM SIGIR conference on Research and development in information retrieval
Information Processing and Management: an International Journal
Toward automatic facet analysis and need negotiation: Lessons from mediated search
ACM Transactions on Information Systems (TOIS)
A comparison of query and term suggestion features for interactive searching
Proceedings of the 32nd international ACM SIGIR conference on Research and development in information retrieval
Undergraduates' evaluations of assigned search topics
Proceedings of the 32nd international ACM SIGIR conference on Research and development in information retrieval
Information Technology and Management
Methods for Evaluating Interactive Information Retrieval Systems with Users
Foundations and Trends in Information Retrieval
Interactive retrieval based on faceted feedback
Proceedings of the 33rd international ACM SIGIR conference on Research and development in information retrieval
Identifying queries in the wild, wild web
Proceedings of the third symposium on Information interaction in context
Developing a detailed view of query reformulation: one step in an incremental approach
Proceedings of the 73rd ASIS&T Annual Meeting on Navigating Streams in an Information Ecosystem - Volume 47
Filtering semi-structured documents based on faceted feedback
Proceedings of the 34th international ACM SIGIR conference on Research and development in Information Retrieval
Interactive sense feedback for difficult queries
Proceedings of the 20th ACM international conference on Information and knowledge management
Information vs interaction: examining different interaction models over consistent metadata
Proceedings of the 4th Information Interaction in Context Symposium
Directing exploratory search: reinforcement learning from user interactions with keywords
Proceedings of the 2013 international conference on Intelligent user interfaces
Directing exploratory search with interactive intent modeling
Proceedings of the 22nd ACM international conference on Conference on information & knowledge management
Term relevance feedback has a long history in information retrieval, but research on interactive term relevance feedback has yielded mixed results. In this paper, we investigate several aspects of eliciting term relevance feedback: the display of document surrogates, the technique for identifying or selecting terms, and the sources of expansion terms. We conducted a between-subjects experiment (n=61) comparing three term relevance feedback interfaces using the 2005 TREC HARD collection, and evaluated each interface with respect to query length and retrieval performance. Results show that queries created with each experimental interface significantly outperformed the corresponding baseline queries, although there were no performance differences between interface conditions. Results also show that pseudo-relevance feedback runs outperformed both baseline and experimental runs on recall-oriented measures, but that user-generated terms improved precision.
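For readers unfamiliar with the pseudo-relevance feedback runs mentioned above, the core idea can be sketched as follows: treat the top-ranked documents from an initial retrieval as if they were relevant, and mine them for expansion terms. This is a minimal illustrative sketch using a simple TF-IDF term-scoring heuristic; the function name, scoring formula, and toy data are assumptions for illustration, not the method evaluated in the paper.

```python
from collections import Counter
import math

def prf_expand(query_terms, ranked_docs, k=3, n_terms=2):
    """Select expansion terms from the top-k pseudo-relevant documents.

    Candidate terms are scored by their frequency in the feedback
    documents, weighted by inverse document frequency over the ranked
    list (a simple TF-IDF heuristic chosen for illustration).
    """
    feedback = ranked_docs[:k]                 # assume top-k docs are relevant
    tf = Counter(t for doc in feedback for t in doc)
    n = len(ranked_docs)
    df = Counter()                             # document frequency per term
    for doc in ranked_docs:
        df.update(set(doc))
    scores = {
        t: tf[t] * math.log(n / df[t])
        for t in tf if t not in query_terms    # skip original query terms
    }
    top = sorted(scores, key=scores.get, reverse=True)[:n_terms]
    return list(query_terms) + top

# Toy example: expand the query ["solar"] against a small ranked list.
docs = [
    ["solar", "panel", "energy"],
    ["solar", "cell", "energy"],
    ["wind", "turbine", "energy"],
    ["stock", "market"],
]
expanded = prf_expand(["solar"], docs, k=3, n_terms=2)
```

In an interactive setting of the kind the paper studies, the scored terms would instead be shown to the user for selection rather than appended automatically, which is the distinction behind the precision gains reported for user-generated terms.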