Real-life information retrieval takes place in sessions, in which users search by iterating between various cognitive, perceptual and motor subtasks through an interactive interface. Sessions may follow diverse strategies which, together with the interface characteristics, affect user effort (cost), experience and session effectiveness. In this paper we propose a pragmatic evaluation approach based on scenarios with explicit subtask costs. We study the limits of effectiveness of diverse interactive search strategies in two searching environments (the scenarios) under overall cost constraints, based on a comprehensive simulation of 20 million sessions in each scenario. We analyze the effectiveness of the session strategies over time, and the properties of the most and least effective sessions in each case. Furthermore, we contrast the proposed evaluation approach with traditional rank-based evaluation, and show how the latter may hide essential factors that affect users' performance and satisfaction, and may even produce counter-intuitive results.
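To make the idea of cost-constrained session simulation concrete, here is a minimal sketch of one such simulation. All specifics are illustrative assumptions, not the paper's actual parameters: the subtask costs in `COSTS`, the time budget `BUDGET`, the relevance probability `P_RELEVANT`, and the simple "scan k snippets per query" strategy are hypothetical stand-ins for the scenarios and strategies studied in the paper.

```python
import random

# Hypothetical subtask costs in seconds (illustrative values only)
COSTS = {"query": 10.0, "snippet": 3.0, "read": 30.0}
BUDGET = 120.0       # overall session cost constraint
P_RELEVANT = 0.3     # assumed chance a scanned snippet leads to a relevant document

def simulate_session(snippets_per_query, seed=None):
    """Simulate one search session under a fixed time budget.

    The simulated user repeatedly issues a query, scans a fixed number of
    result snippets, and reads each document whose snippet looks relevant,
    stopping as soon as the next subtask would exceed the budget.
    Returns (gain, time_spent), where gain counts relevant documents read.
    """
    rng = random.Random(seed)
    time_spent, gain = 0.0, 0
    while time_spent + COSTS["query"] <= BUDGET:
        time_spent += COSTS["query"]            # formulate and issue a query
        for _ in range(snippets_per_query):
            if time_spent + COSTS["snippet"] > BUDGET:
                return gain, time_spent
            time_spent += COSTS["snippet"]      # scan one result snippet
            if rng.random() < P_RELEVANT:
                if time_spent + COSTS["read"] > BUDGET:
                    return gain, time_spent
                time_spent += COSTS["read"]     # read the full document
                gain += 1                       # one relevant document found
    return gain, time_spent

def average_gain(snippets_per_query, runs=10_000):
    """Average session gain for one strategy over many simulated sessions."""
    return sum(simulate_session(snippets_per_query, seed=i)[0]
               for i in range(runs)) / runs
```

Comparing `average_gain(3)` against `average_gain(10)`, for example, contrasts a reformulation-heavy strategy with a deep-scanning one under the same budget, which is the kind of strategy-versus-cost trade-off the evaluation approach is designed to expose.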