SIGIR '02 Proceedings of the 25th annual international ACM SIGIR conference on Research and development in information retrieval
Current statistical approaches to IR have proven effective and reliable in both research and commercial settings. However, experimental environments such as TREC show that retrieval results vary widely by both topic (the question asked) and system. This holds for basic IR systems as well as for more advanced implementations using, for example, query expansion. Some retrieval approaches work well on one topic but poorly on a second, while others fail on the first topic yet succeed on the second. If it could be determined in advance which approach would work well, a guided approach could strongly improve performance. Unfortunately, despite many efforts, no one knows how to choose good approaches on a per-topic basis.
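The potential gain from such guided, per-topic selection can be illustrated with a small sketch. The scores below are invented for illustration (not from any real TREC run); `mean_ap` and `oracle_mean_ap` are hypothetical helpers, and the "oracle" simply picks the better approach for each topic, giving an upper bound on what perfect per-topic selection could achieve:

```python
# Hypothetical per-topic average-precision scores for two retrieval
# approaches on the same three topics. Values are illustrative only.
ap_baseline = {"t1": 0.60, "t2": 0.10, "t3": 0.45}
ap_expanded = {"t1": 0.20, "t2": 0.50, "t3": 0.40}  # e.g. with query expansion

def mean_ap(scores):
    """Mean average precision over all topics for one approach."""
    return sum(scores.values()) / len(scores)

def oracle_mean_ap(a, b):
    """MAP of an oracle that picks, per topic, whichever approach scores higher."""
    return sum(max(a[t], b[t]) for t in a) / len(a)

print(round(mean_ap(ap_baseline), 3))               # each approach alone
print(round(mean_ap(ap_expanded), 3))
print(round(oracle_mean_ap(ap_baseline, ap_expanded), 3))  # guided selection
```

Here neither approach dominates across topics, yet the per-topic oracle outperforms both single-approach means, which is exactly why predicting the right approach in advance would be so valuable.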