Information retrieval research has demonstrated that system performance does not always correlate positively with user performance, and that users often assign positive evaluation scores to search systems even when they are unable to complete their tasks successfully. This research investigated the relationship between objective measures of system performance and users' perceptions of that performance. Subjects evaluated the performance of four search systems whose search results were manipulated systematically to produce different orderings and numbers of relevant documents. Three laboratory studies were conducted with a total of eighty-one subjects. The first two studies examined how the ordering of five relevant and five nonrelevant documents in a ten-result list affected subjects' evaluations; the third examined how varying the number of relevant documents in a ten-result list affected those evaluations. Results demonstrate linear relationships between subjects' evaluations and both the position of relevant documents in the results list and the total number of relevant documents retrieved. Of the two, the number of relevant documents retrieved was the stronger predictor of subjects' evaluation ratings and led subjects to use a greater range of evaluation scores.
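As a rough illustration of the kind of relationship reported above, the sketch below fits a simple linear model of evaluation rating against the number of relevant documents in a ten-result list. The data values, the 1-7 rating scale, and the use of an ordinary least-squares fit are illustrative assumptions, not the study's actual data or analysis method.

```python
# Minimal sketch (hypothetical data): checking for a linear relationship
# between the number of relevant documents in a ten-result list and the
# evaluation rating a subject assigns to the system.
import numpy as np

# Hypothetical observations: (relevant documents in the top 10, rating on a 1-7 scale).
# These values are illustrative only; they are not the study's data.
n_relevant = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
rating     = np.array([1.2, 1.8, 2.5, 3.1, 3.6, 4.2, 4.9, 5.4, 5.9, 6.3, 6.8])

# Ordinary least-squares fit: rating ~ slope * n_relevant + intercept.
slope, intercept = np.polyfit(n_relevant, rating, deg=1)

# Pearson correlation indicates how well a straight line describes the trend.
r = np.corrcoef(n_relevant, rating)[0, 1]

print(f"rating ~ {slope:.2f} * n_relevant + {intercept:.2f}  (r = {r:.2f})")
```

An analogous fit against the rank positions of the relevant documents would correspond to the ordering manipulation in the first two studies; per the abstract, the count-based predictor was the stronger of the two.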