Several recent studies have found only a weak relationship between the performance of a retrieval system and the "success" achievable by human searchers. We hypothesize that searchers succeed precisely because they adapt their behavior. To explore the possible causal relation between system performance and search behavior, we control system performance, hoping to elicit adaptive search behaviors. Thirty-six subjects each completed 12 searches using either a standard system or one of two degraded systems. Using a general linear model, we isolate the main effect of system performance by measuring and removing the main effects of searcher variation, topic difficulty, and the position of each search in the time series. We find that searchers using our degraded systems are as successful as those using the standard system, but that, in achieving this success, they alter their behavior in ways that could be measured, in real time, by a suitably instrumented system. Our findings suggest, quite generally, that some aspects of behavioral dynamics may provide unobtrusive indicators of system performance.
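The main-effects decomposition behind the general linear model can be sketched simply: in a balanced design, each factor's main effect at a given level is that level's mean outcome minus the grand mean, so subtracting one factor's effects leaves the others estimable. The sketch below uses hypothetical toy data (the factor names, levels, and scores are illustrative, not the study's); the actual analysis fits a full general linear model with searcher, topic, system, and search-position factors.

```python
from itertools import product
from statistics import mean


def main_effects(records, factor):
    """Return {level: level_mean - grand_mean} for one factor of a balanced design.

    This is the classic additive main-effects estimate: with equal cell counts,
    the other factors average out of each level mean, so the deviation from the
    grand mean isolates this factor's effect.
    """
    grand = mean(r["score"] for r in records)
    by_level = {}
    for r in records:
        by_level.setdefault(r[factor], []).append(r["score"])
    return {level: mean(scores) - grand for level, scores in by_level.items()}


# Hypothetical balanced data: 2 systems x 3 topics, with an additive ground
# truth in which the degraded system costs exactly 1.0 point of "success".
records = []
for system, topic in product(["standard", "degraded"], ["t1", "t2", "t3"]):
    base = {"t1": 5.0, "t2": 6.0, "t3": 7.0}[topic]
    score = base - (1.0 if system == "degraded" else 0.0)
    records.append({"system": system, "topic": topic, "score": score})

# Topic difficulty averages out, leaving the system effect of +/-0.5 around
# the grand mean.
print(main_effects(records, "system"))  # → {'standard': 0.5, 'degraded': -0.5}
```

This recovers each factor's contribution only because the design is balanced; with unequal cell counts one would fit the factors jointly (e.g. ordinary least squares on dummy-coded factors), which is what a general linear model does.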