The TREC robust retrieval track explores methods for improving the consistency of retrieval technology by focusing on poorly performing topics. The task is traditional ad hoc retrieval, but the evaluation methodology emphasizes each system's least effective topics. The 2005 edition of the track took 50 topics that had been demonstrated to be difficult on one document collection and ran them against a different document collection. Relevance information from the first collection could be exploited, if desired, when producing a query for the second collection. As in previous years, the most effective retrieval strategy was to expand queries with terms derived from additional corpora. The relative difficulty of topics differed across the two document sets.
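The track's emphasis on a system's worst topics is commonly quantified with the geometric mean of per-topic average precision (GMAP) rather than the arithmetic mean (MAP). The minimal sketch below, using hypothetical per-topic scores, shows how the geometric mean is dragged down by near-zero topics that the arithmetic mean averages away.

    import math

    # Hypothetical per-topic average precision (AP) scores for one run;
    # the two near-zero values stand in for the "poorly performing"
    # topics the robust track targets.
    ap_scores = [0.45, 0.30, 0.02, 0.55, 0.01]

    EPSILON = 1e-5  # floor so the logarithm is defined when AP is zero

    # Arithmetic mean (MAP): failing topics barely move the score.
    map_score = sum(ap_scores) / len(ap_scores)

    # Geometric mean (GMAP), computed via logs: a single near-zero
    # topic drags the whole score down, rewarding consistency.
    gmap_score = math.exp(
        sum(math.log(max(ap, EPSILON)) for ap in ap_scores) / len(ap_scores)
    )

    print(f"MAP  = {map_score:.3f}")   # 0.266 -- looks respectable
    print(f"GMAP = {gmap_score:.3f}")  # 0.108 -- the failing topics dominate

With these made-up numbers the run looks reasonable under MAP but markedly weaker under GMAP, which is precisely the consistency signal the track's evaluation is designed to surface.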