Benchmarks have proven to be an important tool for advancing science in the fields of information analysis and retrieval. Running a benchmark involves obtaining large amounts of data, annotating the data, and distributing them to the participants. Distribution is currently done mostly via download, which can take hours for large data sets and, in countries with slow Internet connections, even days. Shipping physical hard disks has also been used to distribute very large data sets (for example by TRECVid), but even this becomes infeasible once data sets reach sizes of 5–10 TB. Cloud computing makes it possible to host very large data sets in a central place at limited cost. Instead of the data being distributed to the participants, the participants run their algorithms on virtual machines of the cloud providers, close to the data. This text presents reflections and ideas from a concrete project on using cloud-based benchmarking paradigms for medical image analysis and retrieval. Two evaluation campaigns using the proposed technology are planned for 2013 and 2014.
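To make the "bring the algorithm to the data" paradigm concrete, the sketch below shows what a participant's job might look like on such a cloud virtual machine: the benchmark data set is attached as a read-only volume instead of being downloaded, and results are written to a local directory that the organizers collect for evaluation. This is a minimal illustration only; the mount points, file layout, and the process_image() stub are hypothetical and not taken from the project described above.

```python
# Minimal sketch of a participant's run on a benchmark VM.
# Assumptions (not from the source text): the organizers attach the data set
# at /mnt/benchmark-data as a read-only volume, and results written under
# /home/participant/results are collected for evaluation.

from pathlib import Path

DATASET_ROOT = Path("/mnt/benchmark-data")        # hypothetical read-only data volume
RESULTS_ROOT = Path("/home/participant/results")  # hypothetical writable output area


def process_image(image_path: Path) -> str:
    """Placeholder for the participant's actual analysis or retrieval algorithm."""
    # Here: just report the file name and size as a stand-in result.
    return f"{image_path.name}\t{image_path.stat().st_size}"


def main() -> None:
    RESULTS_ROOT.mkdir(parents=True, exist_ok=True)
    with open(RESULTS_ROOT / "run1.tsv", "w") as out:
        # Iterate over the centrally hosted data in place;
        # no multi-terabyte download or disk shipment is needed.
        for image_path in sorted(DATASET_ROOT.rglob("*.dcm")):
            out.write(process_image(image_path) + "\n")


if __name__ == "__main__":
    main()
```

The design point is that only the (small) algorithm and the (small) result files move over the network, while the multi-terabyte image collection never leaves the cloud provider's storage.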