Measuring is a key to scientific progress. This is particularly true for research concerning complex systems, whether natural or human-built. Multilingual and multimedia information access systems, such as search engines, are increasingly complex: they need to satisfy diverse user needs and support challenging tasks. Their development calls for proper evaluation methodologies to ensure that they meet the expected user requirements and provide the desired effectiveness. In this context, failure analysis is crucial to understanding the behaviour of complex systems. Unfortunately, it is an especially challenging activity, requiring vast amounts of human effort to inspect the output of a system query by query in order to understand what went well or badly. It is therefore fundamental to provide automated tools that examine system behaviour, both visually and analytically. Moreover, once you understand the reason behind a failure, you still need to conduct a "what-if" analysis to determine which of the possible solutions is the most promising and effective before actually starting to modify your system. This paper provides an analytical model for examining the performance of IR systems, based on the discounted cumulative gain family of metrics, together with a visualization for interacting with and exploring the performance of the system under examination. Moreover, we propose a machine learning approach to learn the ranking model of the examined system, so that a "what-if" analysis can be conducted and the effect of a given solution can be explored visually before it has to be actually implemented.
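For reference, the sketch below shows the discounted cumulative gain (DCG) family of metrics that the analytical model builds on, written in Python; the log2 discount and the nDCG normalisation used here are common conventions and are assumptions on our part, not necessarily the exact formulation adopted in the paper.

```python
# Minimal sketch of DCG and nDCG over a ranked list of relevance gains.
# Log base and cut-off handling are assumptions; adapt to the exact variant in use.
import math

def dcg(gains, k=None):
    """Discounted cumulative gain of a ranked list, optionally cut at rank k."""
    gains = gains if k is None else gains[:k]
    return sum(g / math.log2(rank + 2) for rank, g in enumerate(gains))

def ndcg(gains, k=None):
    """DCG normalised by the DCG of the ideal (descending-gain) ranking."""
    ideal = dcg(sorted(gains, reverse=True), k)
    return dcg(gains, k) / ideal if ideal > 0 else 0.0

# Example: relevance judgements of the top five documents returned for one query.
print(ndcg([3, 2, 3, 0, 1], k=5))
```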