Subspace selection is a powerful tool in data mining. An important subspace method is Fisher-Rao linear discriminant analysis (LDA), which has been successfully applied in many fields, such as biometrics, bioinformatics, and multimedia retrieval. However, LDA has a critical drawback: the projection to a subspace tends to merge classes that are close together in the original feature space. If the separated classes are sampled from Gaussian distributions, all with identical covariance matrices, then LDA maximizes the arithmetic mean of the Kullback-Leibler (KL) divergences between pairs of classes. We generalize this point of view to obtain a framework for choosing a subspace by 1) generalizing the KL divergence to the Bregman divergence and 2) generalizing the arithmetic mean to a general mean. The framework is named general averaged divergence analysis (GADA). Within this framework, we study a geometric mean divergence analysis (GMDA) method based on the geometric mean. Extensive experiments on synthetic data show that GMDA significantly outperforms LDA and several representative LDA extensions.
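The intuition behind replacing the arithmetic mean with a geometric mean can be illustrated numerically. The sketch below (an illustrative toy example, not the authors' algorithm; the helper `kl_gaussian_shared_cov`, the class means, and the small epsilon floor in the geometric mean are all assumptions for this demo) computes the pairwise KL divergences between Gaussian classes with a shared covariance after a candidate projection. When the projection merges two nearby classes, the arithmetic mean of the divergences stays large, but the geometric mean collapses toward zero, which is exactly why a GMDA-style objective penalizes class merging.

```python
import numpy as np

def kl_gaussian_shared_cov(mu_i, mu_j, cov, W):
    """KL divergence between two Gaussians with identical covariance,
    evaluated after projecting onto the columns of W (hypothetical helper).
    For shared covariance S, KL = 0.5 * d^T S^{-1} d with d = mean difference."""
    d = W.T @ (mu_i - mu_j)
    S = W.T @ cov @ W
    return float(0.5 * d.T @ np.linalg.solve(S, d))

# Three class means in 2-D; classes 0 and 1 are close together (assumed data).
mus = [np.array([0.0, 0.0]), np.array([0.2, 0.0]), np.array([0.0, 5.0])]
cov = np.eye(2)

# Candidate 1-D projection onto the y-axis: it merges classes 0 and 1.
W = np.array([[0.0], [1.0]])

divs = [kl_gaussian_shared_cov(mus[i], mus[j], cov, W)
        for i in range(3) for j in range(i + 1, 3)]

# LDA-style objective: arithmetic mean of pairwise divergences stays large.
arith = np.mean(divs)

# GMDA-style objective: geometric mean (epsilon floor avoids log(0)).
geom = np.exp(np.mean(np.log(np.maximum(divs, 1e-12))))

# One pairwise divergence is ~0 because classes 0 and 1 are merged,
# so the geometric mean is tiny while the arithmetic mean is not.
```

With these numbers, the merged pair contributes a divergence of 0, the other two pairs contribute 12.5 each, so the arithmetic mean is about 8.3 while the geometric mean is driven to nearly zero, showing how the geometric mean makes close-class separation dominate the objective.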