Separable linear discriminant analysis

  • Authors:
  • Jianhua Zhao; Philip L. H. Yu; Lei Shi; Shulan Li

  • Affiliations:
  • School of Statistics and Mathematics, Yunnan University of Finance and Economics, Kunming, 650221, China; Department of Statistics and Actuarial Science, The University of Hong Kong, Hong Kong; School of Statistics and Mathematics, Yunnan University of Finance and Economics, Kunming, 650221, China; School of Accountancy, Yunnan University of Finance and Economics, Kunming, 650221, China

  • Venue:
  • Computational Statistics & Data Analysis
  • Year:
  • 2012


Abstract

Linear discriminant analysis (LDA) is a popular technique for supervised dimension reduction. Because LDA typically suffers from the curse of dimensionality when applied to 2D data, several two-dimensional LDA (2DLDA) methods have been proposed in recent years. Among them, the Y2DLDA method introduced by Ye et al. (2005) is an important development. Its idea is to exploit the underlying 2D data structure to seek an optimal bilinear transformation. However, the proposed algorithm is not guaranteed to converge. In this paper, we show that using a bilinear transformation for 2D data is equivalent to modeling the covariance matrix of the 2D data as a separable covariance matrix. Based on this result, we propose a novel 2DLDA method called separable LDA (SLDA). The main contributions of SLDA are: (1) it reveals interesting theoretical relationships between LDA and some 2DLDA methods; (2) it provides a building block for mixture extensions; (3) unlike Y2DLDA, it admits a neat analytical solution, as in LDA. Empirical results show that the proposed SLDA achieves better recognition performance than Y2DLDA while being computationally much more efficient.
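
The equivalence claimed in the abstract rests on two standard linear-algebra identities: vec(AXB) = (Bᵀ ⊗ A) vec(X), and the fact that if X = LZRᵀ with vec(Z) having identity covariance, then Cov(vec(X)) = (RRᵀ) ⊗ (LLᵀ), i.e. a Kronecker (separable) covariance. The following is a minimal NumPy sketch, not the authors' code, verifying both identities numerically; all matrix names and sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Bilinear transform of a 2D sample X: Y = A @ X @ B.
# Vectorization identity: vec(A X B) = (B^T kron A) vec(X),
# where vec() stacks columns (Fortran order in NumPy).
A = rng.standard_normal((3, 5))   # left (row-side) transform
B = rng.standard_normal((4, 2))   # right (column-side) transform
X = rng.standard_normal((5, 4))   # one 2D data sample

lhs = (A @ X @ B).flatten(order="F")
rhs = np.kron(B.T, A) @ X.flatten(order="F")
assert np.allclose(lhs, rhs)

# Separability: if X = L Z R^T with Cov(vec(Z)) = I, then
# Cov(vec(X)) = (R kron L)(R kron L)^T = (R R^T) kron (L L^T),
# a Kronecker product of a column covariance and a row covariance.
L = rng.standard_normal((5, 5))
R = rng.standard_normal((4, 4))
cov_vec = np.kron(R, L) @ np.kron(R, L).T
assert np.allclose(cov_vec, np.kron(R @ R.T, L @ L.T))
print("vectorization and separability identities verified")
```

In this view, searching over bilinear transformations of the 2D data is the same as searching over transformations of vec(X) under a covariance constrained to Kronecker-product form, which is what makes the separable-covariance reformulation possible.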