Linear discriminant analysis using rotational invariant L1 norm

  • Authors:
  • Xi Li, Weiming Hu, Hanzi Wang, Zhongfei Zhang

  • Affiliations:
  • National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing, China (Xi Li, Weiming Hu); University of Adelaide, Australia (Hanzi Wang); State University of New York, Binghamton, NY 13902, USA (Zhongfei Zhang)

  • Venue:
  • Neurocomputing
  • Year:
  • 2010

Abstract

Linear discriminant analysis (LDA) is a well-known scheme for supervised subspace learning, widely used in computer vision and pattern recognition. However, an intrinsic limitation of LDA is its sensitivity to outliers, which stems from using the Frobenius norm to measure the inter-class and intra-class distances. In this paper, we propose a novel rotational invariant L1 norm (i.e., R1 norm) based discriminant criterion (referred to as DCL1), which better characterizes intra-class compactness and inter-class separability by using the rotational invariant L1 norm instead of the Frobenius norm. Based on the DCL1, three subspace learning algorithms (i.e., 1DL1, 2DL1, and TDL1) are developed for vector-based, matrix-based, and tensor-based representations of data, respectively. They substantially reduce the influence of outliers, resulting in robust classification. Theoretical analysis and experimental evaluations demonstrate the promise and effectiveness of the proposed DCL1 and its algorithms.
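To make the distinction concrete, the following is a minimal NumPy sketch (function names are illustrative, not from the paper) of the R1 norm of a data matrix whose rows are samples: it sums the Euclidean length of each row, so an outlier row contributes linearly rather than quadratically as under the squared Frobenius norm, while an orthogonal rotation of the feature space leaves the value unchanged.

```python
import numpy as np

def frobenius_norm(X):
    # Frobenius norm: square root of the sum of all squared entries
    return np.sqrt((X ** 2).sum())

def r1_norm(X):
    # R1 norm: sum over samples (rows) of each row's Euclidean length;
    # an outlier's magnitude enters linearly, not quadratically
    return np.sqrt((X ** 2).sum(axis=1)).sum()

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))          # 50 samples in a 3-D feature space

# Rotational invariance: a random orthogonal transform of the
# feature space changes neither norm
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
assert np.isclose(r1_norm(X @ Q), r1_norm(X))
assert np.isclose(frobenius_norm(X @ Q), frobenius_norm(X))
```

Unlike the entrywise L1 norm, the R1 norm keeps this rotational invariance, which is why it can replace the Frobenius norm in the discriminant criterion without tying the result to a particular coordinate system.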