Synobins: an intermediate level towards annotation and semantic retrieval

  • Authors: Daniela Stan Raicu; Ishwar K. Sethi

  • Affiliations: Intelligent Multimedia Processing Laboratory, School of Computer Science, Telecommunications, and Information Systems, DePaul University, Chicago, IL; IIE Laboratory, Department of Computer Science & Engineering, Oakland University, Rochester, MI

  • Venue: EURASIP Journal on Applied Signal Processing
  • Year: 2006


Abstract

To reason about the meaning of an image, useful information should accompany that image; however, images often carry little or no textual information about the objects they depict, which is precisely why content-based image retrieval (CBIR) systems must exploit the correlations present in the raw pixel data alone. In this paper, we propose a new type of image feature consisting of patterns of colors and intensities that capture the latent associations among images and primitive features while eliminating noise and redundancy. We introduce the synobin, a new term in the CBIR literature and the analogue of a synonym in text retrieval, to name a bin that is synonymous with other bins of a color feature in the sense that they are used similarly across the image database. Formally, a group of synobins is given by the most important bins participating in forming a useful pattern, that is, the bins with the highest coefficients in the linear combination defining that pattern. Incorporating our feature model into a CBIR system moves image retrieval beyond simple matching of images on their primitive features and lays the groundwork for learning image semantics from visual content. A system built on the proposed feature model can learn associations not only between semantic concepts and images, but also between semantic concepts and patterns. We evaluate the performance of our system in terms of retrieval accuracy and the perceptual similarity order among retrieved images. Compared to standard image retrieval methods, our preliminary results show that even when the feature space is reduced to only 3%-5% of its initial size, accuracy and perceptual similarity remain the same or improve, depending on the image category.
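The abstract defines a synobin group as the bins with the highest coefficients in the linear combination that forms a latent pattern. As an illustration only, the sketch below assumes those patterns come from a singular value decomposition of the image-by-bin histogram matrix (in the spirit of latent semantic analysis); the paper's abstract does not fix the decomposition, and the names extract_synobins, n_patterns, and top_k are hypothetical, not from the paper.

    import numpy as np

    def extract_synobins(histograms, n_patterns=5, top_k=4):
        # histograms: (n_images, n_bins) matrix of color-histogram features.
        # Center so each latent pattern reflects co-usage of bins across images.
        X = histograms - histograms.mean(axis=0)
        # SVD: each row of Vt is a linear combination of bins (a latent pattern).
        _, _, Vt = np.linalg.svd(X, full_matrices=False)
        groups = []
        for pattern in Vt[:n_patterns]:
            # Bins with the largest absolute coefficients form one synobin group.
            groups.append(np.argsort(-np.abs(pattern))[:top_k].tolist())
        return groups

    # Toy usage: 8 images with 16-bin normalized color histograms.
    rng = np.random.default_rng(0)
    H = rng.random((8, 16))
    H /= H.sum(axis=1, keepdims=True)
    print(extract_synobins(H, n_patterns=3, top_k=4))

Keeping only the top few patterns replaces the original bin space with a handful of pattern coordinates, which is consistent with the 3%-5% feature-space reduction reported above.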