Overview of the ImageCLEFmed 2007 Medical Retrieval and Medical Annotation Tasks

  • Authors:
  • Henning Müller; Thomas Deselaers; Thomas M. Deserno; Jayashree Kalpathy-Cramer; Eugene Kim; William Hersh

  • Affiliations:
  • Medical Informatics, University and Hospitals of Geneva, Switzerland, and Business Information Systems, University of Applied Sciences, Sierre, Switzerland; Computer Science Dept., RWTH Aachen University, Germany; Dept. of Medical Informatics, RWTH Aachen University, Germany; Oregon Health and Science University (OHSU), Portland, USA; Oregon Health and Science University (OHSU), Portland, USA; Oregon Health and Science University (OHSU), Portland, USA

  • Venue:
  • Advances in Multilingual and Multimodal Information Retrieval
  • Year:
  • 2008

Abstract

This paper describes the medical image retrieval and medical image annotation tasks of ImageCLEF 2007. Separate sections describe each of the two tasks, including participation and an analysis of the major findings from the results. A total of 13 groups participated in the medical retrieval task and 10 in the medical annotation task. The medical retrieval task added two new data sets, for a total of over 66,000 images. Topics were derived from a log file of the PubMed biomedical literature search system, creating realistic information needs with a clear user model. In 2007, the medical annotation task was organized in a new format: a hierarchical classification had to be performed, and classification could be stopped at any level of the hierarchy. This required significant changes to the algorithms, which had to integrate a confidence level into their decisions in order to judge where to stop classifying and avoid mistakes in the hierarchy. Scoring took both errors and unclassified parts into account.
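
The following is a minimal, illustrative sketch in Python of the kind of scoring the abstract describes, where stopping early (leaving levels unclassified) is cheaper than committing to a wrong label, and a mistake high in the hierarchy makes the deeper levels irrelevant. It is not the official ImageCLEF/IRMA error measure; the function name `hierarchical_error`, the equal per-level weights, and the half-penalty for unclassified positions are assumptions made for illustration only.

```python
# Illustrative sketch of hierarchical scoring with optional early stopping.
# NOT the official ImageCLEF 2007 annotation error measure; weights and the
# half-penalty for "don't know" are assumptions chosen for demonstration.

def hierarchical_error(truth, prediction, wildcard="*"):
    """Return an error in [0, 1] for one image.

    truth      -- list of labels along the hierarchy, e.g. ["plain", "chest", "pa"]
    prediction -- list of the same length; wildcard marks "not classified"
    """
    depth = len(truth)
    error = 0.0
    for level, (t, p) in enumerate(zip(truth, prediction)):
        weight = 1.0 / depth              # hypothetical: equal weight per level
        if p == wildcard:
            error += 0.5 * weight         # unclassified: half penalty
        elif p != t:
            # Wrong decision: full penalty here and for all deeper levels,
            # since the rest of the path is meaningless after a mistake.
            error += weight * (depth - level)
            break
    return error


# Stopping early costs less than guessing wrong at the second level:
print(hierarchical_error(["plain", "chest", "pa"], ["plain", "*", "*"]))         # ~0.33
print(hierarchical_error(["plain", "chest", "pa"], ["plain", "abdomen", "pa"]))  # ~0.67
```

Under this sketch, a system that is unsure at some node can emit the wildcard for the remaining levels and accept a moderate penalty, which captures the trade-off the abstract mentions between making errors and leaving parts of the code unclassified.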