Some simple effective approximations to the 2-Poisson model for probabilistic weighted retrieval. SIGIR '94: Proceedings of the 17th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval.
Pivoted document length normalization. SIGIR '96: Proceedings of the 19th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval.
Managing Gigabytes (2nd ed.): Compressing and Indexing Documents and Images.
Do batch and user evaluations give the same results? SIGIR '00: Proceedings of the 23rd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval.
Why batch and user evaluations do not give the same results. SIGIR '01: Proceedings of the 24th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval.
Predicting query performance. SIGIR '02: Proceedings of the 25th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval.
User interface effects in past batch versus user experiments. SIGIR '02: Proceedings of the 25th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval.
Introduction to Modern Information Retrieval.
Query quality: user ratings and system predictions. SIGIR '10: Proceedings of the 33rd International ACM SIGIR Conference on Research and Development in Information Retrieval.
Using query context models to construct topical search engines. Proceedings of the Third Symposium on Information Interaction in Context.
A comparison of user and system query performance predictions. CIKM '10: Proceedings of the 19th ACM International Conference on Information and Knowledge Management.
User evaluation of query quality. SIGIR '12: Proceedings of the 35th International ACM SIGIR Conference on Research and Development in Information Retrieval.
Recently, the concept of a clarity score was introduced to measure the ambiguity of a query relative to the collection in which the query issuer is seeking information [Cronen-Townsend et al. Proc. ACM SIGIR2002, Tampere Finland, August 2002]. If the query is expressed in the "same language" as the collection as a whole, it receives a low clarity score; otherwise it receives a high score, where similarity is measured by the relative entropy (Kullback-Leibler divergence) between the query and collection language models. Cronen-Townsend et al. show that clarity scores correlate positively with average precision, so a query with a high clarity score is likely to rank relevant documents highly in the result list. Other authors, however, have shown that high precision does not necessarily translate into improved user performance. In this paper we examine the correlation between user performance and clarity score. Using log files from user experiments conducted within the framework of the TREC Interactive Track, we measure the clarity score of every user query together with the user's actual performance on the search task. Our results show no correlation between the clarity of a query and user performance. They also show that users were able to improve their queries slightly, in that subsequent queries had somewhat higher clarity scores than initial queries, but this improvement depended neither on the quality of the system used nor on the user's searching experience.
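A minimal sketch of the clarity computation described above, assuming simple whitespace tokenisation and a toy collection. Note that the method of Cronen-Townsend et al. estimates the query language model from top-ranked retrieved documents; this simplified version smooths the raw query term counts directly against the collection model instead.

```python
import math
from collections import Counter

def query_model(query_tokens, coll_counts, coll_total, lam=0.6):
    """Smoothed unigram query model: mix query term frequencies with the
    collection model (Jelinek-Mercer smoothing, mixing weight lam)."""
    q_counts = Counter(query_tokens)
    n = len(query_tokens)
    def p(w):
        return lam * q_counts[w] / n + (1 - lam) * coll_counts[w] / coll_total
    return p

def clarity(query_tokens, collection_tokens, lam=0.6):
    """Relative entropy (KL divergence, in bits) of the smoothed query model
    from the collection model, summed over the collection vocabulary.
    Query terms absent from the collection are ignored in this sketch."""
    coll_counts = Counter(collection_tokens)
    total = len(collection_tokens)
    p_q = query_model(query_tokens, coll_counts, total, lam)
    score = 0.0
    for w in coll_counts:
        pq, pc = p_q(w), coll_counts[w] / total
        if pq > 0:
            score += pq * math.log2(pq / pc)
    return score

# Hypothetical toy collection: a specific query diverges more from the
# collection language model than a very common term, so it scores higher.
coll = ("the cat sat on the mat the dog ran in the park "
        "the stock market fell today").split()
print(clarity("stock market".split(), coll) > clarity("the".split(), coll))
```

As a sanity check, a "query" identical to the whole collection has a smoothed model equal to the collection model, so its clarity is zero, matching the intuition that a maximally ambiguous query is expressed in exactly the collection's language.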