A fast kernel-based nonlinear discriminant analysis for multi-class problems

  • Authors:
  • Yong Xu
  • David Zhang
  • Zhong Jin
  • Miao Li
  • Jing-Yu Yang

  • Affiliations:
  • Bio-Computing Research Center and Shenzhen Graduate School, Harbin Institute of Technology, Shenzhen, China and Department of Computer Science & Technology, Nanjing University of Science & Technol ...
  • The Biometrics Research Center and Department of Computing, Hong Kong Polytechnic University, Kowloon, Hong Kong
  • Department of Computer Science & Technology, Nanjing University of Science & Technology, Nanjing, China
  • Bio-Computing Research Center and Shenzhen Graduate School, Harbin Institute of Technology, Shenzhen, China
  • Department of Computer Science & Technology, Nanjing University of Science & Technology, Nanjing, China

  • Venue:
  • Pattern Recognition
  • Year:
  • 2006

Abstract

Nonlinear discriminant analysis can be transformed into the form of kernel-based discriminant analysis, so the corresponding discriminant direction can be obtained by solving linear equations. Viewed in the feature space, this nonlinear discriminant analysis is still a linear method, and it is provably equivalent to Fisher discriminant analysis in that space. We observe that a linear combination of a subset of the training samples, called ''significant nodes'', can approximately replace the full training set in expressing the corresponding discriminant vector in the feature space. This paper proposes an efficient algorithm that determines the ''significant nodes'' one by one. The principle behind their selection is simple and well founded, and the resulting algorithm runs at an acceptable computational cost. Classification then requires only the kernel functions between a test sample and the ''significant nodes''. We call the proposed method the fast kernel-based nonlinear method (FKNM). Notably, the number of ''significant nodes'' may be much smaller than the number of training samples; as a result, for two-class classification problems the FKNM is much more efficient than the naive kernel-based nonlinear method (NKNM). The FKNM can also be applied to multi-class problems via two approaches: one-against-the-rest and one-against-one. Although one-against-one is often held to be superior to one-against-the-rest in classification efficiency, for the FKNM one-against-the-rest appears to be the more efficient choice. Experiments on benchmark and real datasets show that, for both two-class and multi-class classification, the FKNM is effective, feasible, and highly efficient.
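The abstract's core idea, that a discriminant vector expanded over all training samples can be approximated by a greedily chosen subset of ''significant nodes'', so that classifying a test sample needs only its kernel values against those nodes, can be illustrated with a small sketch. The selection criterion below (regularized kernel least squares against ±1 class targets) is a stand-in assumption, not the paper's exact rule, and the function names, RBF kernel, and parameters (`n_nodes`, `gamma`, `ridge`) are hypothetical choices for illustration:

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Gaussian (RBF) kernel matrix between the rows of X and the rows of Y
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def select_significant_nodes(X, y, n_nodes=5, gamma=1.0, ridge=1e-3):
    """Greedy forward selection of 'significant nodes' (illustrative only):
    at each step, add the training sample whose inclusion best fits the
    +/-1 class targets in a regularized kernel least-squares sense."""
    t = np.where(y == 1, 1.0, -1.0)     # two-class targets
    K = rbf_kernel(X, X, gamma)          # full kernel matrix (training only)
    selected, remaining = [], list(range(len(X)))
    for _ in range(n_nodes):
        best_err, best_j = np.inf, None
        for j in remaining:
            cols = selected + [j]
            Ks = K[:, cols]
            # expansion coefficients over the candidate node set
            a = np.linalg.solve(Ks.T @ Ks + ridge * np.eye(len(cols)),
                                Ks.T @ t)
            err = np.sum((Ks @ a - t) ** 2)
            if err < best_err:
                best_err, best_j = err, j
        selected.append(best_j)
        remaining.remove(best_j)
    Ks = K[:, selected]
    alpha = np.linalg.solve(Ks.T @ Ks + ridge * np.eye(len(selected)),
                            Ks.T @ t)
    return X[selected], alpha

def decision_values(X_test, nodes, alpha, gamma=1.0):
    # classification touches only the kernels between test samples and nodes,
    # which is where the speed-up over the full-expansion method comes from
    return rbf_kernel(X_test, nodes, gamma) @ alpha
```

The sketch keeps the cost structure described in the abstract: training examines the full kernel matrix once, but the stored model is only the selected nodes and their coefficients, so test-time cost scales with the (typically much smaller) number of nodes rather than the number of training samples. A one-against-the-rest multi-class extension would simply run this two-class routine once per class and pick the class with the largest decision value.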