The Common Vector (CV) method is a linear subspace classifier for datasets such as those arising in image and word recognition. In this approach, a class subspace is modeled from the common features of all samples in the corresponding class. Since each class subspace is modeled as a separate subspace in the feature domain, the subspaces may overlap, and information is lost in the common vector of a class; both effects reduce recognition performance. In multi-class problems, within-class and between-class scatter should be considered in the classification criterion. Since the within-class scatter $S_W$ and between-class scatter $S_B$ used in the Discriminative Common Vector (DCV) method are based on the assumption that all classes have similar covariance structures, these scatters cannot be used in the CV method. In general, a linear subspace classifier fails to extract the non-linear features of samples that describe the complexity of face images due to illumination, facial expression, and pose variations. In this paper, we propose a new method, the "Improved Kernel Common Vector" method, which addresses the above problems through its appealing properties. First, the inclusion of boosting parameters in the proposed between-class and within-class scatters takes neighboring class subspaces into account and relates each sample of a class to the samples of other classes, which increases recognition performance. Second, the common vector obtained using the proposed scatter spaces carries more significant discriminative information, which further increases recognition performance. Third, like all kernel methods, it handles non-linearity in a disciplined manner, extracting the non-linear features that represent the complexity of face images. Experimental results on the Yale B face database demonstrate the promising performance of the proposed methodology.
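To make the role of the two scatters concrete, the following is a minimal NumPy sketch of the standard within-class and between-class scatter matrices $S_W$ and $S_B$ that discriminant-analysis methods such as DCV build on. It illustrates only the textbook definitions, not the proposed boosted or kernelized scatters; the function name and data layout are illustrative assumptions.

```python
import numpy as np

def scatter_matrices(X, y):
    """Standard discriminant-analysis scatters (illustrative sketch).

    X: (n_samples, n_features) data matrix
    y: (n_samples,) integer class labels
    Returns (S_W, S_B), both (n_features, n_features).
    """
    classes = np.unique(y)
    mean_all = X.mean(axis=0)
    d = X.shape[1]
    S_W = np.zeros((d, d))
    S_B = np.zeros((d, d))
    for c in classes:
        Xc = X[y == c]
        mean_c = Xc.mean(axis=0)
        # Within-class: scatter of samples around their own class mean.
        centered = Xc - mean_c
        S_W += centered.T @ centered
        # Between-class: scatter of class means around the global mean,
        # weighted by class size.
        diff = (mean_c - mean_all).reshape(-1, 1)
        S_B += Xc.shape[0] * (diff @ diff.T)
    return S_W, S_B
```

A useful sanity check is that the two scatters decompose the total scatter: $S_W + S_B = \sum_i (x_i - \bar{x})(x_i - \bar{x})^T$. The proposed method modifies these definitions with boosting parameters over neighboring class subspaces and computes them implicitly in a kernel-induced feature space.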