Scene Retrieval of Natural Images

  • Authors:
  • J. F. Serrano; J. H. Sossa; C. Avilés; R. Barrón; G. Olague; J. Villegas

  • Affiliations:
  • Centro de Investigación en Computación, Instituto Politécnico Nacional (CIC-IPN), UPALM Zacatenco, Lindavista, México C.P. 07738 (J. F. Serrano, J. H. Sossa, R. Barrón, J. Villegas); Departamento de Electrónica, Universidad Autónoma Metropolitana-Azcapotzalco, México C.P. 02200 (C. Avilés); Centro de Investigación Científica y de Educación Superior de Ensenada (CICESE), Zona Playitas, BC, México C.P. 22860 (G. Olague)

  • Venue:
  • CIARP '09 Proceedings of the 14th Iberoamerican Conference on Pattern Recognition: Progress in Pattern Recognition, Image Analysis, Computer Vision, and Applications
  • Year:
  • 2009

Abstract

Feature extraction is a key issue in Content Based Image Retrieval (CBIR), and a number of describing features have been proposed in the literature for this goal. In this work, a feature extraction and classification methodology for the retrieval of natural images is described. The proposal combines fixed and randomly selected points for feature extraction. The describing features are the mean, the standard deviation, and the homogeneity (from the co-occurrence matrix) of a sub-image extracted from the three channels: H, S, and I. A K-means algorithm and a 1-NN classifier are used to build an indexed database of 300 images. One advantage of the proposal is that the images do not need to be manually labeled for their retrieval. Our experiments show an average retrieval accuracy of 80.71% for images not belonging to the training set. A comparison with two similar works is also presented; our proposal performs better in both cases.
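
The abstract gives only a high-level description of the pipeline, so the Python sketch below is a minimal, non-authoritative reading of it, not the authors' implementation. The patch size, the number of random points, the number of clusters k, and the use of scikit-image's HSV conversion as a stand-in for the paper's HSI color space are all assumptions introduced here for illustration.

```python
import numpy as np
from skimage.color import rgb2hsv
from skimage.feature import graycomatrix, graycoprops
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier

def patch_stats(patch_u8):
    """Mean, standard deviation, and co-occurrence homogeneity of one channel patch."""
    glcm = graycomatrix(patch_u8, distances=[1], angles=[0],
                        levels=256, symmetric=True, normed=True)
    return [patch_u8.mean(), patch_u8.std(),
            graycoprops(glcm, 'homogeneity')[0, 0]]

def describe_image(rgb, n_random=8, half=16, rng=None):
    """Describe an image by per-channel statistics of sub-images taken around
    one fixed point (the centre, as a guess) and several random points."""
    if rng is None:
        rng = np.random.default_rng(0)
    hsv = rgb2hsv(rgb)              # HSV used here as a stand-in for HSI
    h, w = hsv.shape[:2]
    points = [(h // 2, w // 2)]     # hypothetical fixed point
    points += [(rng.integers(half, h - half), rng.integers(half, w - half))
               for _ in range(n_random)]
    feats = []
    for r, c in points:
        for ch in range(3):         # H, S and (here) V channels
            patch = hsv[r - half:r + half, c - half:c + half, ch]
            feats.extend(patch_stats((patch * 255).astype(np.uint8)))
    return np.asarray(feats)

def build_index(descriptors, k=10):
    """Cluster the database descriptors with K-means and train a 1-NN
    classifier on the unsupervised cluster labels, which is why no
    manual labeling of the 300 images would be required."""
    kmeans = KMeans(n_clusters=k, n_init=10, random_state=0).fit(descriptors)
    knn = KNeighborsClassifier(n_neighbors=1).fit(descriptors, kmeans.labels_)
    return kmeans, knn
```

Under this reading, a query is handled by computing the same descriptor for the query image, letting the 1-NN classifier assign it to one of the K-means clusters, and returning the database images indexed under that cluster. The sketch assumes each image is at least 2*half pixels on each side so that every sampled sub-image fits inside the image.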