Theory and Implementation on Automatic Adaptive Metadata Generation for Image Retrieval

  • Authors:
  • Hideyasu Sasaki; Yasushi Kiyoki

  • Affiliations:
  • Ritsumeikan University, 1-1-1, Noji-higashi, Kusatsu, Shiga, 525-8577 Japan, hsasaki@alumni.uchicago.edu; Keio University, 5322 Endo Fujisawa, Kanagawa, 252-8520 Japan, kiyoki@mdbl.sfc.keio.ac.jp

  • Venue:
  • Proceedings of the 2006 conference on Information Modelling and Knowledge Bases XVII
  • Year:
  • 2006

Abstract

In this paper, we present the detailed theory and implementation of an automatic adaptive metadata generation system based on content analysis of sample images, together with a variety of experimental results. Instead of relying on costly human-created metadata, our method ranks sample images by distance computation of their structural similarity to query images, and automatically generates metadata as textual labels that represent the geometric structural properties of the sample images most similar to the query images. First, our system screens out improper query images for metadata generation by using content-based image retrieval (CBIR), which computes structural similarity between sample images and query images; the screening module selects proper threshold values automatically. Second, the system generates metadata by selecting the sample indexes attached to the sample images that are structurally similar to the query images. Third, the system detects improper metadata and re-generates proper metadata by identifying wrongly selected labels. Our system improves metadata generation by 23.5% in recall ratio and 37% in fallout ratio over using the results of content analysis alone, even under more practical experimental settings. The system is extensible to various specific object domains through the inclusion of computer vision techniques.
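The three-stage pipeline described in the abstract (screening, label selection, with screened-out queries left for re-generation) can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the feature representation, the Euclidean distance metric, the fixed threshold, the choice of k, and the sample labels are all assumptions made for the example.

```python
# Sketch of an adaptive metadata-generation pipeline (illustrative only).
from math import sqrt

def distance(a, b):
    # Euclidean distance between structural feature vectors (an assumed
    # stand-in for the paper's structural-similarity computation).
    return sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def generate_metadata(query, samples, threshold, k=3):
    """Rank sample images by structural similarity to the query and
    return the textual labels of the k most similar samples.

    samples: list of (feature_vector, label) pairs.
    Returns None when the query is screened out as improper, i.e. when
    even the best-matching sample exceeds the distance threshold.
    """
    ranked = sorted(samples, key=lambda s: distance(query, s[0]))
    # Stage 1 (screening): reject queries with no sufficiently
    # similar sample image.
    if not ranked or distance(query, ranked[0][0]) > threshold:
        return None
    # Stage 2 (metadata generation): collect the labels attached to
    # the k nearest samples, preserving rank order, dropping duplicates.
    labels = []
    for feat, label in ranked[:k]:
        if label not in labels:
            labels.append(label)
    return labels

# Hypothetical two-dimensional features and domain labels.
samples = [([0.0, 1.0], "lateral x-ray"),
           ([0.1, 0.9], "lateral x-ray"),
           ([5.0, 5.0], "frontal x-ray")]
print(generate_metadata([0.05, 0.95], samples, threshold=1.0))
```

Stage 3 of the paper (detecting and re-generating improper metadata) would sit on top of this loop, re-invoking selection after discarding wrongly chosen labels; it is omitted here for brevity.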