How can we speed up matrix multiplication? SIAM Review.
A training algorithm for optimal margin classifiers. In COLT '92: Proceedings of the Fifth Annual Workshop on Computational Learning Theory.
The Nature of Statistical Learning Theory.
Least squares support vector machine classifiers. Neural Processing Letters.
Benchmarking least squares support vector machine classifiers. Machine Learning.
Dimensionality reduction for supervised learning with reproducing kernel Hilbert spaces. Journal of Machine Learning Research.
Optimization algorithms exploiting unitary constraints. IEEE Transactions on Signal Processing.
Optimizing the kernel in the empirical feature space. IEEE Transactions on Neural Networks.
This paper presents a novel dimension-reduction algorithm for kernel-based classification. In the feature space, the algorithm maximizes, for a given reduced dimension, the ratio of the squared between-class distance to the sum of the within-class variances of the training samples. It has lower computational complexity than the recently reported kernel dimension reduction (KDR) method for supervised learning. Simulations on large training datasets show that the proposed algorithm performs comparably to, or marginally better than, KDR while being computationally more efficient. We also applied the proposed dimension-reduction algorithm to face recognition, where the number of training samples is very small. The resulting face-recognition approach outperforms the eigenface approach based on principal component analysis (PCA) when the training data are complete, that is, representative of the whole dataset.
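The criterion the abstract describes is a Fisher-type ratio evaluated in the kernel feature space: a between-class separation term divided by the summed within-class variances. The sketch below, a minimal illustration in Python with NumPy, optimizes that ratio for the binary case via the classical kernel Fisher discriminant. It is not the paper's algorithm (which handles a general reduced dimension at lower complexity than KDR); the function name `kernel_fisher_direction`, the RBF kernel choice, and the parameters `gamma` and `reg` are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Pairwise squared Euclidean distances, mapped through a Gaussian kernel.
    d2 = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T
    return np.exp(-gamma * d2)

def kernel_fisher_direction(X, y, gamma=1.0, reg=1e-3):
    """Maximize the ratio of squared between-class distance to the sum of
    within-class variances in the kernel feature space (binary case only).
    Returns the expansion coefficients alpha of the discriminant direction."""
    K = rbf_kernel(X, X, gamma)
    n = len(y)
    classes = np.unique(y)
    assert len(classes) == 2, "this sketch handles the binary case only"
    # Class means expressed in the empirical kernel map.
    m = [K[:, y == c].mean(axis=1) for c in classes]
    diff = m[0] - m[1]
    M = np.outer(diff, diff)          # squared between-class distance term
    N = reg * np.eye(n)               # within-class scatter, regularized
    for c in classes:
        Kc = K[:, y == c]
        nc = Kc.shape[1]
        N += Kc @ (np.eye(nc) - np.full((nc, nc), 1.0 / nc)) @ Kc.T
    # The leading generalized eigenvector of (M, N) maximizes the ratio.
    evals, evecs = np.linalg.eig(np.linalg.solve(N, M))
    alpha = np.real(evecs[:, np.argmax(np.real(evals))])
    return alpha, K

# Usage: project training samples onto the learned direction.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 5)), rng.normal(2, 1, (50, 5))])
y = np.array([0] * 50 + [1] * 50)
alpha, K = kernel_fisher_direction(X, y, gamma=0.5)
z = K @ alpha  # one-dimensional reduced representation
```

In this formulation the reduced representation is a linear combination of kernel evaluations against the training set, so no explicit feature map is needed; extending the idea to a reduced dimension greater than one, as the paper proposes, would take the top several generalized eigenvectors instead of only the leading one.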