The aim of an Information Retrieval (IR) application is to support the user in accessing relevant information effectively and efficiently. It is well known that performance, in terms of finding relevant information, depends heavily both on the IR application itself (i.e., the IR system exposed through the application's interface) and on how the application is used (i.e., how the user interacts with the system through the interface). A very pragmatic evaluation question therefore arises at the application level: what effectiveness does the user experience while using the application? To answer this question, we represent the usage of an application as the stream of documents the user encounters while interacting with it. This representation enables us to monitor and track performance over time and usage. This stream-based, time-centric view of the IR process, in contrast to a rank-list, topic/task-centric view, allows evaluation to be performed on any IR-based application. To illustrate the difference and the utility of this approach, we demonstrate how a new suite of usage-based effectiveness measures can be applied. This work provides the conceptual foundations for measuring, monitoring, and modeling the performance of any IR application that needs to be evaluated over time and in context.
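To make the stream-based view concrete, the following is a minimal illustrative sketch (in Python); it is not the measure suite from the paper. It models a session as a timestamped stream of document encounters and computes a simple cumulative, usage-based precision after each encounter. The `Encounter` type, the function name, and the binary relevance judgments are all hypothetical assumptions introduced here for illustration; graded relevance or other measures could be substituted.

```python
from dataclasses import dataclass
from typing import Iterable, List

@dataclass
class Encounter:
    """One document the user encounters while interacting with the application."""
    timestamp: float  # seconds since session start
    doc_id: str
    relevant: bool    # binary relevance judgment (an assumption for this sketch)

def usage_precision(stream: Iterable[Encounter]) -> List[float]:
    """Cumulative fraction of relevant documents seen after each encounter.

    A stream-based analogue of precision: instead of scoring a static
    ranked list per topic, we score the sequence of documents the user
    actually encounters, so effectiveness can be tracked over time and usage.
    """
    scores: List[float] = []
    seen = relevant_seen = 0
    for event in stream:
        seen += 1
        relevant_seen += int(event.relevant)
        scores.append(relevant_seen / seen)
    return scores

# Example: a short session in which the user encounters four documents.
session = [
    Encounter(1.2, "d17", True),
    Encounter(3.5, "d04", False),
    Encounter(7.9, "d23", True),
    Encounter(9.0, "d08", True),
]
print(usage_precision(session))  # -> [1.0, 0.5, 0.666..., 0.75]
```

Because the measure is computed per encounter rather than per query, the same sketch applies unchanged whether the documents arrive from ranked result lists, browsing, filtering, or recommendation, which is precisely the generality the stream-based view is meant to provide.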