Pose independent object classification from small number of training samples based on kernel principal component analysis of local parts

  • Authors: Kazuhiro Hotta
  • Affiliation: The University of Electro-Communications, Department of Information and Communication Engineering, 1-5-1 Chofugaoka, Chofu, Tokyo 182-8585, Japan
  • Venue: Image and Vision Computing
  • Year: 2009

Abstract

This paper presents a pose-independent classification method that works from a small number of training samples, based on kernel principal component analysis (KPCA) of local parts. Pose changes induce large non-linear variations in the feature space of global features, so conventional methods require training images of multiple poses. Local features, however, are affected by pose changes far less than global features, because pose mainly alters the global configuration; the distributions of local parts cropped from different poses therefore differ only slightly. Once the distribution of local parts cropped from typical poses is modeled, the model is robust to unknown poses. Since this distribution is non-linear, KPCA is used to model a feature space specialized to each class. The class-featuring information compression (CLAFIC) method is used to compute the similarity to each class subspace; with KPCA, the similarity to a class is a weighted sum of the similarities to its training local parts. Because many local parts are cropped from the input, voting, summation, and median rules are used to combine the similarities of all local parts. Robustness to pose variation is evaluated on face images of 300 subjects in five poses. Although only frontal and profile views are used for training, the recognition rates for unknown poses exceed 90%. Effectiveness is shown by comparison with linear PCA of local parts and with methods based on global features. Moreover, the proposed method is easily applied to the recognition of various kinds of 3D objects, because it requires neither many training poses nor preprocessing such as accurate correspondence between images. Robustness to pose variation and ease of application are demonstrated on the COIL-100 database.
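
The abstract outlines a pipeline (per-class KPCA of local parts, a CLAFIC-style subspace similarity for each part, and voting/summation/median combination rules) that can be sketched in code. The Python sketch below is only an illustration of that pipeline under assumptions not taken from the paper: an RBF kernel via scikit-learn's KernelPCA, a fixed patch size and stride for cropping local parts, and the squared norm of the kernel-subspace projection as the per-part similarity. The class and helper names are hypothetical.

```python
# Minimal sketch of pose-robust classification with per-class kernel PCA of
# local parts and a CLAFIC-style subspace similarity, combined across parts
# by voting, summation, or median rules.  Patch size, kernel parameters, and
# the similarity normalization are illustrative assumptions, not the paper's
# exact formulation.
import numpy as np
from sklearn.decomposition import KernelPCA


def extract_local_parts(image, patch=8, stride=4):
    """Crop overlapping local parts (flattened patches) from a 2D image."""
    h, w = image.shape
    parts = [image[r:r + patch, c:c + patch].ravel()
             for r in range(0, h - patch + 1, stride)
             for c in range(0, w - patch + 1, stride)]
    return np.asarray(parts, dtype=float)


class KPCALocalPartClassifier:
    def __init__(self, n_components=20, gamma=0.01):
        self.n_components = n_components
        self.gamma = gamma
        self.models = {}  # one kernel principal subspace per class

    def fit(self, images_per_class):
        """images_per_class: dict mapping class label -> list of 2D images."""
        for label, images in images_per_class.items():
            parts = np.vstack([extract_local_parts(im) for im in images])
            kpca = KernelPCA(n_components=self.n_components,
                             kernel="rbf", gamma=self.gamma)
            kpca.fit(parts)
            self.models[label] = kpca

    def _part_similarities(self, part):
        """CLAFIC-style similarity of one local part to every class:
        squared norm of its projection onto the class kernel subspace."""
        return {label: float(np.sum(kpca.transform(part[None, :])[0] ** 2))
                for label, kpca in self.models.items()}

    def predict(self, image, rule="sum"):
        parts = extract_local_parts(image)
        labels = list(self.models)
        sims = [self._part_similarities(p) for p in parts]
        scores = np.array([[s[l] for l in labels] for s in sims])  # (parts, classes)
        if rule == "vote":    # each local part votes for its best class
            votes = np.bincount(scores.argmax(axis=1), minlength=len(labels))
            return labels[int(votes.argmax())]
        if rule == "median":  # robust pooling of per-part similarities
            return labels[int(np.median(scores, axis=0).argmax())]
        return labels[int(scores.sum(axis=0).argmax())]  # summation rule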