eID: a system for exploration of image databases

  • Authors:
  • Daniela Stan; Ishwar K. Sethi

  • Affiliations:
  • Intelligent Information Engineering Laboratory, Department of Computer Science & Engineering, Oakland University, Rochester, MI (both authors)

  • Venue:
  • Information Processing and Management: an International Journal
  • Year:
  • 2003


Abstract

This paper describes an exploration system for large image databases that helps the user understand the database as a whole, discover hidden relationships, and formulate insights with minimum effort. While the proposed system works with any type of low-level feature representation of images, we describe it using color information. The system is built in three stages. In the feature extraction stage, images are represented in a way that allows efficient storage and yields retrieval results closer to human perception. The second stage builds a hierarchy of clusters in which each cluster prototype, the electronic IDentification (eID) of that cluster, is designed to summarize the cluster in a manner suited for quick human comprehension of its components. Formally, an eID is the image most similar to the other images of its cluster; that is, the image that maximizes the sum of the squares of the similarity values to the other images of that cluster. Besides summarizing the image database at a certain level of detail, an eID image provides access either to another set of eID images on a lower level of the hierarchy or to a group of images perceptually similar to itself. In the third stage, the multi-dimensional scaling technique provides a tool for visualizing the database at different levels of detail. Moreover, it enables semi-automatic annotation: the image database is organized so that perceptually similar images are grouped together to form perceptual contexts. As a result, instead of trying to assign all possible meanings to an image, the user interprets and annotates an image in the context in which it appears, dramatically reducing the time taken to annotate large collections of images.
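The eID selection rule stated above, choosing the image that maximizes the sum of squared similarities to the other images of its cluster, can be sketched directly from that definition. The following is a minimal illustration, not the authors' implementation; the function name and the toy similarity matrix are assumptions for demonstration.

```python
import numpy as np

def select_eid(similarity: np.ndarray) -> int:
    """Pick a cluster's eID: the index of the image that maximizes
    the sum of squared similarity values to the other images.

    similarity: symmetric (n, n) matrix of pairwise similarity scores.
    (Hypothetical helper; the paper does not name its routines.)
    """
    s = similarity.astype(float).copy()
    np.fill_diagonal(s, 0.0)           # exclude self-similarity
    scores = (s ** 2).sum(axis=1)      # sum of squared similarities to others
    return int(np.argmax(scores))

# Toy cluster of four images; image 2 is the most similar to the rest,
# so it would serve as the cluster's eID.
sim = np.array([
    [1.0, 0.2, 0.6, 0.1],
    [0.2, 1.0, 0.7, 0.3],
    [0.6, 0.7, 1.0, 0.5],
    [0.1, 0.3, 0.5, 1.0],
])
print(select_eid(sim))  # → 2
```

Squaring the similarities weights strong matches more heavily than weak ones, so the chosen prototype tends to sit near the perceptual center of the cluster.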