In some pattern recognition tasks, the dimension of the sample space is larger than the number of samples in the training set. This is known as the "small sample size problem." Linear discriminant analysis (LDA) techniques cannot be applied directly in the small sample size case, and the same problem arises when kernel approaches are used for recognition. In this paper, we address the question: how should one choose the optimal projection vectors for feature extraction in the small sample size case? Based on our findings, we propose a new method, the kernel discriminative common vector method. The method first nonlinearly maps the original input space into an implicit higher-dimensional feature space, in which the data are expected to be linearly separable; the optimal projection vectors are then computed in this transformed space. The proposed method yields an optimal solution for maximizing a modified Fisher's linear discriminant criterion, discussed in the paper. Thus, under certain conditions, a 100% recognition rate is guaranteed for the training set samples. Experiments on test data also show that, in many situations, the generalization performance of the proposed method compares favorably with that of other kernel approaches.
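To make the idea concrete, the following is a minimal NumPy sketch of the *linear* discriminative common vector construction that underlies the abstract: in the small sample size case, each class's samples share a "common vector" obtained by projecting any class sample onto the null space of the within-class scatter, and the final projection vectors span the common vectors. This is only an illustrative sketch, not the paper's kernel algorithm (the kernel version would replace these explicit coordinates with kernel-matrix computations), and the function names `dcv_fit`/`dcv_predict` are our own.

```python
import numpy as np

def dcv_fit(X, y):
    """Illustrative linear discriminative common vector (DCV) sketch.

    X: (n_samples, n_features) with n_features > n_samples (small sample size).
    Returns a projection matrix W, the projected per-class common vectors,
    and the class labels in the order of the common vectors.
    """
    classes = np.unique(y)
    # Range of the within-class scatter S_w is spanned by the class-centered samples.
    diffs = np.vstack([X[y == c] - X[y == c].mean(axis=0) for c in classes])
    U, s, _ = np.linalg.svd(diffs.T, full_matrices=False)
    Q = U[:, s > 1e-10]                      # orthonormal basis of range(S_w)
    # Common vector of each class: strip the range(S_w) component from any
    # class sample (i.e., project it onto the null space of S_w).
    C = np.array([X[y == c][0] - Q @ (Q.T @ X[y == c][0]) for c in classes])
    # Final projection vectors: orthonormal basis spanning the centered common vectors.
    Cc = C - C.mean(axis=0)
    Uc, sc, _ = np.linalg.svd(Cc.T, full_matrices=False)
    W = Uc[:, sc > 1e-10]
    return W, C @ W, classes

def dcv_predict(W, proj_commons, classes, X):
    """Assign each sample to the class with the nearest projected common vector."""
    Z = X @ W
    d = ((Z[:, None, :] - proj_commons[None, :, :]) ** 2).sum(axis=-1)
    return classes[np.argmin(d, axis=1)]
```

Because every training sample of a class differs from its common vector only by a component in the range of the within-class scatter, and the projection W is orthogonal to that range, all training samples of a class project exactly onto the same point; this is the mechanism behind the 100% training recognition rate claimed in the abstract (provided the common vectors are distinct).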