We investigate using topic prediction data, as a compact summary of document content, to compute measures of search result quality. Unlike existing quality measures such as query clarity, which require the full content of the top-ranked results, class-based statistics can be computed efficiently online, because class information is compact enough to precompute and store in the index. In an empirical study, we compare the performance of class-based statistics to their language-model counterparts for predicting two measures: query difficulty and expansion risk. Our findings suggest that using class predictions can offer performance comparable to full language models while reducing computational overhead.
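To illustrate the idea, here is a minimal sketch of a class-based analogue of query clarity: instead of comparing a language model of the top-ranked results against the collection language model, it compares the distribution of predicted topic classes in the top-ranked results against the collection-wide class distribution, via KL divergence. The function name, smoothing scheme, and data layout are illustrative assumptions, not the paper's exact formulation.

```python
import math
from collections import Counter

def class_based_clarity(topk_classes, collection_class_probs, num_classes,
                        smoothing=1e-6):
    """Hypothetical class-based clarity score: KL divergence between the
    class distribution of the top-ranked results and the collection-wide
    class distribution. Higher values indicate topically focused results.

    topk_classes: list of predicted class ids for the top-ranked documents
    collection_class_probs: dict mapping class id -> collection probability
    """
    counts = Counter(topk_classes)
    total = len(topk_classes)
    kl = 0.0
    for c in range(num_classes):
        # Smoothed probability of class c among the top-ranked results.
        p = (counts.get(c, 0) + smoothing) / (total + smoothing * num_classes)
        # Collection probability of class c, floored to avoid division by zero.
        q = max(collection_class_probs.get(c, 0.0), smoothing)
        kl += p * math.log(p / q)
    return kl

# Example: a uniform 4-class collection distribution (an assumption).
collection = {0: 0.25, 1: 0.25, 2: 0.25, 3: 0.25}
focused = class_based_clarity([0] * 10, collection, 4)   # all results in one class
spread = class_based_clarity([0, 1, 2, 3] * 2, collection, 4)  # results spread out
```

Because the class distribution has far fewer dimensions than a term-level language model, this score can be computed online from precomputed per-document class predictions stored in the index, which is the efficiency argument the abstract makes. A low score would suggest an ambiguous or difficult query whose results scatter across topics.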