Information Retrieval (IR) experimental evaluation is an essential part of the research and development of information access methods and tools. Shared data sets and evaluation scenarios make it possible to compare methods and systems, understand their behaviour, and track performance and progress over time. At the same time, experimental evaluation is an expensive activity in terms of the human effort, time, and costs required to carry it out. Software and hardware infrastructures that support the operation of experimental evaluation, as well as the management, enrichment, and exploitation of the scientific data it produces, are a key contribution to reducing this effort and cost and to carrying out systematic and thorough analysis and comparison of systems and methods, overall acting as enablers of scientific and technical advancement in the field. This paper describes the specification of an IR evaluation infrastructure by conceptually modeling the entities involved in IR experimental evaluation and their relationships, by defining the architecture of the proposed infrastructure, and by specifying the APIs for accessing it.
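To give a rough feel for the kind of entities such an infrastructure has to model and expose, the following Python sketch outlines a minimal conceptual model of an evaluation campaign. It is purely illustrative: the class names (Topic, RelevanceJudgment, Run, Experiment) and the precision_at helper are assumptions made here for exposition and do not reproduce the conceptual model or the APIs actually specified in the paper.

# Illustrative sketch only: entity names and methods are assumptions,
# not the schema or API defined in the paper.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class Topic:
    """A search topic (information need) used in an evaluation campaign."""
    topic_id: str
    title: str


@dataclass
class RelevanceJudgment:
    """An assessor's judgment of a document's relevance to a topic."""
    topic_id: str
    document_id: str
    relevance: int  # e.g. 0 = not relevant, 1 = relevant


@dataclass
class Run:
    """A system's ranked results: topic_id -> ordered list of document ids."""
    run_id: str
    rankings: Dict[str, List[str]] = field(default_factory=dict)


@dataclass
class Experiment:
    """Ties together the components of one evaluation: topics, judgments, runs."""
    experiment_id: str
    topics: List[Topic]
    judgments: List[RelevanceJudgment]
    runs: List[Run] = field(default_factory=list)

    def precision_at(self, run_id: str, topic_id: str, k: int) -> float:
        """Precision@k for one run on one topic, computed from stored judgments."""
        relevant = {j.document_id for j in self.judgments
                    if j.topic_id == topic_id and j.relevance > 0}
        run = next(r for r in self.runs if r.run_id == run_id)
        retrieved = run.rankings.get(topic_id, [])[:k]
        return sum(1 for d in retrieved if d in relevant) / k if k else 0.0


# Minimal usage example.
exp = Experiment(
    experiment_id="exp-2012",
    topics=[Topic("T1", "information retrieval evaluation")],
    judgments=[RelevanceJudgment("T1", "D3", 1), RelevanceJudgment("T1", "D7", 1)],
)
exp.runs.append(Run("sys-A", {"T1": ["D3", "D1", "D7", "D9", "D2"]}))
print(exp.precision_at("sys-A", "T1", k=5))  # 0.4

In an infrastructure of the kind the abstract describes, entities like these would be persisted and enriched centrally and made available to researchers through programmatic APIs, rather than recomputed ad hoc by each participant.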