Evaluation plays a crucial role in Information Retrieval (IR), since it allows possible points of failure of an IR approach to be identified and addressed, thereby improving its effectiveness. Tools that support researchers and analysts in analyzing results and in investigating strategies to improve IR system performance can make this analysis easier and more effective. In this paper, we discuss a Visual Analytics-based approach that supports the analyst in deciding whether re-ranking is worth investigating to improve the effectiveness measured after a retrieval run. Our approach builds on effectiveness measures that exploit graded relevance judgements, and it provides a principled yet intuitive way to support the analysis. A prototype is described and used to discuss case studies based on TREC data.
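As a concrete illustration of the kind of graded-relevance measure the approach relies on, the following is a minimal Python sketch of discounted cumulated gain (DCG) and its normalized variant (nDCG); the exact measures and discount function used by the prototype are assumptions here, not details taken from the paper.

```python
import math

def dcg(gains, base=2):
    """Discounted cumulated gain over a ranked list of graded
    relevance judgements (e.g. 0-3), with a logarithmic rank discount."""
    total = 0.0
    for rank, gain in enumerate(gains, start=1):
        # No discount up to rank `base`, logarithmic discount afterwards
        discount = math.log(rank, base) if rank > base else 1.0
        total += gain / discount
    return total

def ndcg(gains, base=2):
    """DCG normalized by the ideal (descending) ordering of the gains."""
    ideal = dcg(sorted(gains, reverse=True), base)
    return dcg(gains, base) / ideal if ideal > 0 else 0.0

# Example: graded judgements for the top 5 documents of a hypothetical run;
# prints a value between 0 and 1, where 1 means the ideal ranking.
print(ndcg([3, 2, 3, 0, 1]))
```

In a re-ranking analysis of this kind, comparing the nDCG of the observed ranking against the ideal ordering of the same judged documents gives an upper bound on what re-ranking alone could gain, which is the sort of signal an analyst would weigh before investing in a re-ranking strategy.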