Impediments to general purpose Content Based Image search

  • Authors:
  • Melanie A. Veltman, Michael Wirth, JingBo Ni

  • Affiliations:
  • University of Guelph, Guelph, ON, Canada

  • Venue:
  • C3S2E '09 Proceedings of the 2nd Canadian Conference on Computer Science and Software Engineering
  • Year:
  • 2009

Abstract

Challenges faced by prevailing text-metadata paradigms for online image search have inspired a wealth of research in Content Based Image Retrieval (CBIR). A multitude of approaches have been introduced in the literature, yet relatively few image search engines have been made publicly available on the web. Aside from challenges facing the user, such as describing a visual query using keywords or finding an appropriate example image to initiate a visual search, all systems must inevitably grapple with the sensory and semantic gaps [Smeulders et al. 2000], which essentially represent a loss of information in the abstraction process. In this work, we challenge commonly suggested approaches to improving CBIR and illustrate the drawbacks of relying on textual data, as well as visual data, in general CBIR search. We provide cogent examples using the online visual search engines Behold™, Tiltomo Beta, Pixilimar, and Riya™ Beta. These examples demonstrate the effect of semantic ambiguities in natural language, which extend to search terms and text tags.
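
The semantic gap the abstract refers to can be illustrated with a minimal, hypothetical sketch (not taken from the paper): a CBIR system that compares low-level colour histograms can rate two semantically different scenes as near-identical, because the abstraction from pixels to features discards the information a human would use to tell them apart. The `colour_histogram` and `histogram_intersection` helpers and the synthetic "beach"/"desert" images below are illustrative assumptions, not code from any of the cited systems.

```python
# Illustrative sketch of the semantic gap: two synthetic scenes with different
# semantics ("beach" vs. "desert") but nearly identical colour statistics.
import numpy as np

def colour_histogram(image, bins=8):
    """Concatenated per-channel histogram, normalised to sum to 1."""
    hist = np.concatenate([
        np.histogram(image[..., c], bins=bins, range=(0, 256))[0]
        for c in range(image.shape[-1])
    ]).astype(float)
    return hist / hist.sum()

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]; 1 means the histograms coincide."""
    return np.minimum(h1, h2).sum()

rng = np.random.default_rng(0)
sand  = rng.normal([194, 178, 128], 12, size=(32, 64, 3)).clip(0, 255)
water = rng.normal([90, 150, 200], 12, size=(32, 64, 3)).clip(0, 255)
sky   = rng.normal([95, 155, 205], 12, size=(32, 64, 3)).clip(0, 255)

beach  = np.concatenate([water, sand])  # blue water above yellow sand
desert = np.concatenate([sky, sand])    # blue sky above yellow sand

sim = histogram_intersection(colour_histogram(beach), colour_histogram(desert))
print(f"histogram similarity: {sim:.2f}")  # high, although the scenes differ semantically
```

A text-tag analogue of the same problem is polysemy: the keyword "jaguar" retrieves both animals and cars, so neither purely visual nor purely textual descriptions resolve the ambiguity on their own.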