We consider experiments that measure the quality of a web search algorithm by the total time users take to complete assigned search tasks with that algorithm. We first verify from our data that, for the task types under consideration, a user's total search time is in fact negatively related to that user's satisfaction. Second, we fit a model with the user's total search time as the response in order to compare two search algorithms. Finally, we propose an alternative experimental design and demonstrate that it substantially improves on our current design in terms of variance reduction and efficiency.
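The abstract does not specify the alternative design, but a common way to reduce variance in this kind of comparison is a within-subjects (paired) design, in which each user completes tasks under both algorithms so that user-level variability cancels out of the estimated difference. The sketch below is purely illustrative, assuming hypothetical parameter values (a user baseline of 120 s with a 30 s spread, a 10 s algorithm effect) rather than anything from the paper; it simulates both designs and compares the standard deviation of the estimated difference in mean total search time.

```python
import random

random.seed(0)


def simulate_estimate(n_users, algo_effect, paired):
    """Return one estimate of the B-minus-A difference in mean search time.

    Each user has a baseline speed (large between-user variance);
    the algorithm adds a small effect plus task-level noise.
    In the paired design every user tries both algorithms, so the
    user baseline cancels out of the difference.
    (All parameter values here are illustrative assumptions.)
    """
    if paired:
        diffs = []
        for _ in range(n_users):
            baseline = random.gauss(120, 30)          # user-level variability
            t_a = baseline + random.gauss(0, 5)       # time under algorithm A
            t_b = baseline + algo_effect + random.gauss(0, 5)  # under B
            diffs.append(t_b - t_a)
        return sum(diffs) / n_users
    # between-subjects: disjoint user groups, baselines do not cancel
    a = [random.gauss(120, 30) + random.gauss(0, 5) for _ in range(n_users)]
    b = [random.gauss(120, 30) + algo_effect + random.gauss(0, 5)
         for _ in range(n_users)]
    return sum(b) / n_users - sum(a) / n_users


def estimator_sd(paired, reps=500, n_users=50, effect=10.0):
    """Monte Carlo standard deviation of the difference estimator."""
    ests = [simulate_estimate(n_users, effect, paired) for _ in range(reps)]
    mean = sum(ests) / reps
    var = sum((e - mean) ** 2 for e in ests) / (reps - 1)
    return var ** 0.5


print(f"between-subjects SD: {estimator_sd(paired=False):.2f}")
print(f"within-subjects  SD: {estimator_sd(paired=True):.2f}")
```

With these assumed parameters the paired estimator's standard deviation is several times smaller than the between-subjects one, because the large between-user component of variance drops out of each within-user difference; this is the sense in which such a design can yield "a substantial improvement ... in terms of variance reduction and efficiency."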