An Analysis of Linear Subspace Approaches for Computer Vision and Pattern Recognition

  • Authors:
  • Pei Chen; David Suter

  • Affiliations:
  • ARC Centre for Perceptive and Intelligent Machines in Complex Environments, Department of Electrical and Computer Systems Engineering, Monash University, Australia 3800 (both authors)

  • Venue:
  • International Journal of Computer Vision
  • Year:
  • 2006

Abstract

Linear subspace analysis (LSA) has become rather ubiquitous in a wide range of problems arising in pattern recognition and computer vision. The essence of these approaches is that certain structures are intrinsically (or approximately) low dimensional: for example, the factorization approach to structure from motion (SFM) and the principal component analysis (PCA)-based approach to face recognition. In LSA, the singular value decomposition (SVD) is usually the basic mathematical tool. However, an analysis of its performance in the presence of noise has been lacking. We present such an analysis here. First, the "denoising capacity" of the SVD is analysed. Specifically, given a rank-r matrix corrupted by noise, how much noise remains in the rank-r projection of that corrupted matrix? Second, we study the "learning capacity" of an LSA-based recognition system in a noise-corrupted environment. Specifically, LSA systems that attempt to capture a data class as belonging to a rank-r column space will be affected by noise in both the training samples (measurement noise means the training samples will not produce the "true subspace") and the test sample (which also carries measurement noise on top of the ideal clean sample belonging to the "true subspace"). These results should help predict performance and guide the design of better systems in computer vision, particularly in tasks such as SFM and face recognition. Our analysis agrees with certain observed phenomena, and these observations, together with our simulations, verify the correctness of our theory.
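
The following is a minimal numerical sketch (not taken from the paper) of the "denoising capacity" question the abstract poses: a rank-r matrix is corrupted by noise, the noisy matrix is projected onto its best rank-r approximation via the SVD, and the residual noise is compared with the noise originally added. All dimensions, the rank, and the noise level are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, r, sigma = 100, 50, 3, 0.05  # illustrative sizes, rank, and noise level

# Ground-truth rank-r matrix A and its noise-corrupted observation.
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
noise = sigma * rng.standard_normal((m, n))
A_noisy = A + noise

# Best rank-r approximation of the noisy matrix (truncated SVD).
U, s, Vt = np.linalg.svd(A_noisy, full_matrices=False)
A_hat = U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]

# Residual noise after the rank-r projection vs. the noise that was added.
print("||noise||_F            :", np.linalg.norm(noise))
print("||A_hat - A||_F        :", np.linalg.norm(A_hat - A))
print("fraction of noise kept :", np.linalg.norm(A_hat - A) / np.linalg.norm(noise))
```

In such an experiment the residual error ||A_hat - A||_F is typically much smaller than ||noise||_F, which is the empirical behaviour the paper's analysis quantifies.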