Scientific data of an evaluation campaign: do we properly deal with them?
CLEF'06 Proceedings of the 7th international conference on Cross-Language Evaluation Forum: evaluation of multilingual and multi-modal information retrieval
Information Retrieval (IR) evaluation campaigns produce valuable scientific data, which should be carefully preserved so that they remain available for further study. A complete record of all analyses and interpretations should be maintained so that they can be reused, whether to replicate particular results or to support new research, and so that they can be referred to or cited at any time. In this paper, we describe a data curation approach for the scientific data produced by evaluation campaigns. The medium- to long-term aim is to create a large-scale Digital Library System (DLS) for scientific data that supports services for the creation, interpretation, and use of multidisciplinary and multilingual digital content.