A Scalable Architecture for Cross-Modal Semantic Annotation and Retrieval

  • Authors:
  • Manuel Möller; Michael Sintek

  • Affiliations:
  • German Research Center for Artificial Intelligence (DFKI) GmbH, Kaiserslautern, Germany (both authors)

  • Venue:
  • KI '08: Proceedings of the 31st Annual German Conference on Advances in Artificial Intelligence
  • Year:
  • 2008


Abstract

Even within constrained domains like medicine, there are no truly generic methods for automatic image parsing and annotation. Although the precision and sophistication of image understanding methods have improved to cope with the increasing amount and complexity of the data, these improvements have not led to more flexible or generic image understanding techniques. Instead, the analysis methods remain object-specific and modality-dependent. Consequently, current image search techniques still depend on the manual and subjective association of keywords with images for retrieval. Manually annotating the vast numbers of images that are generated and archived in medical practice is not an option.