Baseline results for the ImageCLEF 2006 medical automatic annotation task

  • Authors:
  • Mark O. Güld; Christian Thies; Benedikt Fischer; Thomas M. Deserno

  • Affiliations:
  • Department of Medical Informatics, RWTH Aachen, Aachen, Germany (all authors)

  • Venue:
  • CLEF'06 Proceedings of the 7th International Conference on Cross-Language Evaluation Forum: Evaluation of Multilingual and Multi-modal Information Retrieval
  • Year:
  • 2006

Abstract

The ImageCLEF 2006 medical automatic annotation task comprises 11,000 images from 116 categories, compared to 10,000 images from 57 categories in the similar task of 2005. As a baseline for comparison, a run using the same classifiers with the identical parameterization as in 2005 was submitted. In addition, the parameterization of the classifier combination was optimized on a 9,000/1,000 split of the 2006 training data. In particular, texture-based classifiers are combined in parallel with classifiers that use spatial intensity information to model common variability among medical images. However, all individual classifiers are based on global features, i.e., a single feature vector describes the entire image. The parameterization from 2005 yields an error rate of 21.7%, which ranks 13th among the 28 submitted runs. The optimized classifier yields an error rate of 21.4% (rank 12), which is not a significant improvement.
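The parallel combination of global-feature classifiers described in the abstract can be illustrated with a minimal sketch: each classifier produces a distance between the query image's global feature vector and every training image, the distances are normalized so their scales are comparable, summed with weights, and the query receives the category of the nearest training image. The function name `classify_combined`, the Euclidean metric, and the concrete weights below are illustrative assumptions, not the authors' implementation; in this reading, the combination weights belong to the parameterization that would be optimized on the 9,000/1,000 split.

```python
import numpy as np

def classify_combined(query_feats, train_feats, train_labels, weights):
    """Nearest-neighbor decision over a weighted parallel combination
    of per-classifier distances computed on global feature vectors."""
    combined = np.zeros(len(train_labels))
    for name, w in weights.items():
        # Distance from the query to every training image in this
        # classifier's feature space (Euclidean here as a stand-in;
        # the actual classifiers may use other distance measures).
        d = np.linalg.norm(train_feats[name] - query_feats[name], axis=1)
        combined += w * (d / d.sum())  # normalize scales before summing
    return train_labels[np.argmin(combined)]

# Hypothetical usage with two global-feature classifiers:
rng = np.random.default_rng(0)
train = {"texture": rng.random((100, 32)), "intensity": rng.random((100, 64))}
labels = rng.integers(0, 116, size=100)  # 116 categories as in 2006
query = {"texture": rng.random(32), "intensity": rng.random(64)}
print(classify_combined(query, train, labels,
                        weights={"texture": 0.4, "intensity": 0.6}))
```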