Selectively diversifying web search results
CIKM '10 Proceedings of the 19th ACM international conference on Information and knowledge management
Re-ranking search results using an additional retrieved list
Information Retrieval
User perspectives on query difficulty
ICTIR'11 Proceedings of the Third international conference on Advances in information retrieval theory
A unified framework for post-retrieval query-performance prediction
ICTIR'11 Proceedings of the Third international conference on Advances in information retrieval theory
Navigating the user query space
SPIRE'11 Proceedings of the 18th international conference on String processing and information retrieval
From "identical" to "similar": fusing retrieved lists based on inter-document similarities
Journal of Artificial Intelligence Research
Predicting Query Performance by Query-Drift Estimation
ACM Transactions on Information Systems (TOIS)
Automatically detecting the quality of the query and its implications in IR-based concept location
ASE '11 Proceedings of the 2011 26th IEEE/ACM International Conference on Automated Software Engineering
Evaluating the specificity of text retrieval queries to support software engineering tasks
Proceedings of the 34th International Conference on Software Engineering
User evaluation of query quality
SIGIR '12 Proceedings of the 35th international ACM SIGIR conference on Research and development in information retrieval
Query performance prediction for IR
SIGIR '12 Proceedings of the 35th international ACM SIGIR conference on Research and development in information retrieval
Automatic query performance assessment during the retrieval of software artifacts
Proceedings of the 27th IEEE/ACM International Conference on Automated Software Engineering
Predicting query performance for fusion-based retrieval
Proceedings of the 21st ACM international conference on Information and knowledge management
Back to the roots: a probabilistic framework for query-performance prediction
Proceedings of the 21st ACM international conference on Information and knowledge management
Predicting the performance of passage retrieval for question answering
Proceedings of the 21st ACM international conference on Information and knowledge management
Query-performance prediction and cluster ranking: two sides of the same coin
Proceedings of the 21st ACM international conference on Information and knowledge management
On the usefulness of query features for learning to rank
Proceedings of the 21st ACM international conference on Information and knowledge management
Estimating query difficulty for news prediction retrieval
Proceedings of the 21st ACM international conference on Information and knowledge management
Efficient and effective retrieval using selective pruning
Proceedings of the sixth ACM international conference on Web search and data mining
Using document-quality measures to predict web-search effectiveness
ECIR'13 Proceedings of the 35th European conference on Advances in Information Retrieval
Estimating query representativeness for query-performance prediction
Proceedings of the 36th international ACM SIGIR conference on Research and development in information retrieval
Shame to be sham: addressing content-based grey hat search engine optimization
Proceedings of the 36th international ACM SIGIR conference on Research and development in information retrieval
Learning to combine representations for medical records search
Proceedings of the 36th international ACM SIGIR conference on Research and development in information retrieval
Automatic query reformulations for text retrieval in software engineering
Proceedings of the 2013 International Conference on Software Engineering
Query quality prediction and reformulation for source code search: the Refoqus tool
Proceedings of the 2013 International Conference on Software Engineering
Query-Performance Prediction Using Minimal Relevance Feedback
Proceedings of the 2013 Conference on the Theory of Information Retrieval
Increasing evaluation sensitivity to diversity
Information Retrieval
Many information retrieval (IR) systems suffer from wide variance in performance across users' queries. Even for systems that succeed very well on average, the quality of the results returned for some queries is poor. It is therefore desirable for IR systems to be able to identify "difficult" queries so they can be handled properly. Understanding why some queries are inherently more difficult than others is essential for IR, and a good answer to this question would help search engines reduce the variance in their performance and thus better serve their users' needs. Query difficulty estimation attempts to quantify the quality of the search results retrieved for a query from a given collection of documents. This book discusses the reasons that cause search engines to fail for some queries, and then reviews recent approaches for estimating query difficulty in the IR field. It then describes a common methodology for evaluating the prediction quality of those estimators, and reports experiments with some of the predictors applied by various IR methods over several TREC benchmarks. Finally, it discusses potential applications that can utilize query difficulty estimators by handling each query individually and selectively, based upon its estimated difficulty.

Table of Contents: Introduction - The Robustness Problem of Information Retrieval / Basic Concepts / Query Performance Prediction Methods / Pre-Retrieval Prediction Methods / Post-Retrieval Prediction Methods / Combining Predictors / A General Model for Query Difficulty / Applications of Query Difficulty Estimation / Summary and Conclusions
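To make the idea of a pre-retrieval predictor concrete, here is a minimal sketch (not taken from the book) of one standard signal of this kind: the average inverse document frequency of the query terms. The intuition is that queries made of common, unspecific terms tend to be harder for a retrieval system than queries containing rare, discriminative terms. The corpus, function name, and smoothing are illustrative assumptions.

```python
import math
from collections import Counter

def avg_idf(query_terms, corpus):
    """Average inverse document frequency of the query terms.

    A simple pre-retrieval difficulty signal: higher values suggest a
    specific query; low values indicate vague terms that occur in many
    documents. Uses add-one smoothing so unseen terms are defined.
    """
    n_docs = len(corpus)
    df = Counter()  # document frequency of each term
    for doc in corpus:
        for term in set(doc.lower().split()):
            df[term] += 1
    idfs = [math.log((n_docs + 1) / (df[t.lower()] + 1)) for t in query_terms]
    return sum(idfs) / len(idfs) if idfs else 0.0

# Toy collection: "retrieval" occurs in every document, "cranfield" in one.
corpus = [
    "information retrieval systems",
    "query difficulty in retrieval",
    "the cranfield retrieval experiments",
]
specific = avg_idf(["cranfield"], corpus)
vague = avg_idf(["retrieval"], corpus)
assert specific > vague  # the rarer term signals an easier, more focused query
```

Pre-retrieval predictors such as this one are cheap because they need only collection statistics, before any search is run; the post-retrieval methods surveyed in the book instead analyze the retrieved result list itself, trading extra computation for better prediction quality.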