Letter: Learning similarity for semantic image classification

  • Authors:
  • Dianhui Wang, Joon Shik Lim, Myung-Mook Han, Byung-Wook Lee

  • Affiliations:
  • Dianhui Wang: Department of Computer Science and Computer Engineering, La Trobe University, Melbourne, VIC 3086, Australia; and Software College, Kyungwon University, Seongnam, Gyeonggi-Do, 405-760, South Korea
  • Joon Shik Lim, Myung-Mook Han, Byung-Wook Lee: Software College, Kyungwon University, Seongnam, Gyeonggi-Do, 405-760, South Korea

  • Venue:
  • Neurocomputing
  • Year:
  • 2005


Abstract

While people compare images using semantic concepts, computers compare images using low-level visual features that often have little to do with those semantics. To narrow the gap between the high-level semantics of visual objects and the low-level features extracted from them, this paper develops a framework of learning similarity (LS) using neural networks for semantic image classification, in which an LS-based k-nearest neighbors classifier (k-NN^L) assigns a label to an unknown image by majority vote among its k most similar training images. Experimental results on an image database show that the k-NN^L classifier outperforms both the Euclidean distance-based k-NN classifier (k-NN^E) and back-propagation network classifiers (BPNC).
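The classification scheme in the abstract can be illustrated with a minimal sketch (hypothetical, not the authors' implementation): a k-NN classifier with a pluggable similarity function. In the paper, the similarity is learned by a neural network (k-NN^L); here a simple Euclidean-based similarity stands in as the k-NN^E baseline. The feature vectors, labels, and function names below are illustrative assumptions.

```python
from collections import Counter
import math

def euclidean_similarity(x, y):
    # Higher is more similar: negate the Euclidean distance (k-NN^E baseline).
    return -math.dist(x, y)

def knn_classify(query, data, labels, k, similarity=euclidean_similarity):
    """Label a query by majority vote among its k most similar samples.

    Swapping `similarity` for a learned (e.g. neural-network) function
    yields the k-NN^L variant described in the abstract.
    """
    # Rank training samples by similarity to the query, most similar first.
    ranked = sorted(range(len(data)),
                    key=lambda i: similarity(query, data[i]),
                    reverse=True)
    # Majority vote among the k most similar samples.
    votes = Counter(labels[i] for i in ranked[:k])
    return votes.most_common(1)[0][0]

# Toy low-level feature vectors with semantic labels (illustrative only).
data = [(0.1, 0.2), (0.2, 0.1), (0.9, 0.8), (0.8, 0.9)]
labels = ["beach", "beach", "forest", "forest"]
print(knn_classify((0.15, 0.15), data, labels, k=3))  # → beach
```

The design point is that the distance/similarity measure is the only part that changes between k-NN^E and k-NN^L; the voting rule is identical, so any learned similarity can be dropped in without altering the classifier.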