Although a great deal of research has examined automatic techniques for estimating query quality, relatively few studies have looked at how people judge query quality. This study investigated the topic through a laboratory experiment with 40 subjects. Subjects were shown eight information problems (five fact-finding, three exploratory) and asked to evaluate queries for these problems on several quality attributes. Subjects then evaluated search engine results pages (SERPs) for each query, which were manipulated to exhibit different levels of retrieval performance. Afterwards, subjects re-evaluated the queries, were interviewed about their evaluation approaches, and, as a reliability check, repeated the rating procedure for two information problems. Results showed that for fact-finding information problems, longer queries received higher ratings (both initial and post-SERP), and that post-SERP query ratings were influenced more by the proportion of relevant documents among those viewed than by the ranks of the relevant documents. For exploratory information problems, subjects' ratings were highly correlated both with the number of relevant documents in the SERP and with the proportion of relevant documents viewed. Subjects adopted several different approaches to evaluating query quality, which led to different quality ratings. Finally, during the reliability check, subjects' initial evaluations were fairly stable, but their post-SERP evaluations increased significantly.
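The two SERP-derived quantities discussed above can be made concrete with a short sketch. This is illustrative only, not the authors' analysis code; the function names and the choice of mean rank as the rank-based summary are assumptions, since the abstract does not specify exact formulas.

```python
def proportion_relevant_viewed(viewed, relevant):
    """Fraction of the documents a subject viewed that are relevant.

    `viewed` is the ordered list of document ids the subject opened;
    `relevant` is the set of ids judged relevant.
    """
    if not viewed:
        return 0.0
    return sum(1 for d in viewed if d in relevant) / len(viewed)


def mean_rank_of_relevant(serp, relevant):
    """Average 1-based rank of relevant documents in the SERP
    (lower values mean relevant results appear earlier)."""
    ranks = [i for i, d in enumerate(serp, start=1) if d in relevant]
    return sum(ranks) / len(ranks) if ranks else float("inf")


# Hypothetical example: a 5-result SERP where the subject viewed 3 documents.
serp = ["d1", "d2", "d3", "d4", "d5"]
viewed = ["d1", "d3", "d4"]
relevant = {"d1", "d4"}

print(proportion_relevant_viewed(viewed, relevant))  # 2 of 3 viewed relevant
print(mean_rank_of_relevant(serp, relevant))         # relevant at ranks 1 and 4
```

Under the study's finding for fact-finding problems, two SERPs with the same proportion of relevant viewed documents would tend to receive similar post-SERP ratings even if the rank-based measure differed.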