Localized content based image retrieval

  • Authors:
  • Rouhollah Rahmani; Sally A. Goldman; Hui Zhang; John Krettek; Jason E. Fritts

  • Affiliations:
  • Washington University, St. Louis, MO (all authors)

  • Venue:
  • Proceedings of the 7th ACM SIGMM international workshop on Multimedia information retrieval
  • Year:
  • 2005


Abstract

Classic Content-Based Image Retrieval (CBIR) takes a single non-annotated query image and retrieves similar images from an image repository. Such a search must rely upon a holistic (or global) view of the image. Yet often the desired content of an image is not holistic, but is localized. Specifically, we define Localized Content-Based Image Retrieval as a CBIR task where the user is only interested in a portion of the image, and the rest of the image is irrelevant. Many classic CBIR systems use relevance feedback to obtain images labeled as desirable or not desirable. Yet these labeled images are typically used only to re-weight the features within a global similarity measure. In this paper we present a localized CBIR system, Accio, that uses labeled images in conjunction with a multiple-instance learning algorithm to first identify the desired object and re-weight the features, and then to rank images in the database using a similarity measure that is based upon individual regions within the image. We evaluate our system using a five-category natural scenes image repository and a benchmark data set, SIVAL, which we have constructed with 25 object categories.
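The region-based ranking the abstract describes can be illustrated with a minimal sketch. Here each image is a "bag" of region feature vectors (the multiple-instance view), and an image's score is driven by its best-matching region rather than a global descriptor, so only the relevant portion of the image matters. The function names, the weighted-distance similarity, and the max-over-regions scoring rule are illustrative assumptions, not the paper's exact formulation:

```python
import math

def region_similarity(region, target, weights):
    # Weighted Euclidean distance between a region's feature vector and the
    # learned target point, converted to a similarity in (0, 1].
    d = math.sqrt(sum(w * (r - t) ** 2 for r, t, w in zip(region, target, weights)))
    return 1.0 / (1.0 + d)

def rank_images(images, target, weights):
    # images: mapping of image name -> list of region feature vectors.
    # An image's score is that of its single best-matching region, so images
    # whose desired content is localized are not penalized for irrelevant
    # background regions.
    scored = [(max(region_similarity(r, target, weights) for r in regions), name)
              for name, regions in images.items()]
    return [name for score, name in sorted(scored, reverse=True)]

# Example: image "a" contains one region exactly matching the target, plus an
# unrelated background region; image "b" has only a distant region.
images = {
    "a": [(0.0, 0.0), (5.0, 5.0)],
    "b": [(4.0, 4.0)],
}
print(rank_images(images, target=(0.0, 0.0), weights=(1.0, 1.0)))
```

In a relevance-feedback loop, the target point and per-feature weights would be learned from the user's positively and negatively labeled images rather than fixed by hand.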