In this paper, a minimum class locality preserving variance support vector machine (MCLPV_SVM) algorithm is presented by incorporating the basic idea of locality preserving projections (LPP); it can be regarded as a modified support vector machine (SVM) and/or minimum class variance support vector machine (MCVSVM). In contrast to SVM and MCVSVM, MCLPV_SVM takes the intrinsic manifold structure of the data space fully into account while inheriting the characteristics of both methods. We discuss the linear case, the small-sample-size case, and the nonlinear case of MCLPV_SVM. Similarly to MCVSVM, the MCLPV_SVM optimization problem in the small-sample-size case is solved by dimensionality reduction via principal component analysis (PCA), and the problem in the nonlinear case is transformed into an equivalent linear MCLPV_SVM problem via kernel PCA (KPCA). Experimental results on real datasets, comparing MCLPV_SVM with SVM and MCVSVM, indicate the effectiveness of the proposed method.
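The general pipeline the abstract describes — map the data with (kernel) PCA, then solve a linear classification problem in the reduced space — can be illustrated with a minimal numpy sketch. This is not the authors' MCLPV_SVM: the RBF kernel, the toy two-ring dataset, and the least-squares classifier used here are illustrative stand-ins for the paper's margin-based optimization, and all names and parameter values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-class data: two concentric rings (not linearly separable in 2-D).
n = 100
theta = rng.uniform(0, 2 * np.pi, n)
r = np.where(np.arange(n) < n // 2, 1.0, 3.0)
X = np.c_[r * np.cos(theta), r * np.sin(theta)] + 0.1 * rng.standard_normal((n, 2))
y = np.where(np.arange(n) < n // 2, -1.0, 1.0)

def rbf_kernel(A, B, gamma=0.5):
    # Gram matrix of the Gaussian (RBF) kernel between rows of A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Kernel PCA: center the Gram matrix, then keep the leading eigenvectors.
K = rbf_kernel(X, X)
one = np.full((n, n), 1.0 / n)
Kc = K - one @ K - K @ one + one @ K @ one
vals, vecs = np.linalg.eigh(Kc)
idx = np.argsort(vals)[::-1][:5]            # top 5 components
alphas = vecs[:, idx] / np.sqrt(vals[idx])  # scale so projections have unit variance
Z = Kc @ alphas                             # training data in the KPCA space

# Stand-in linear classifier in the reduced space (regularized least squares,
# not the paper's variance-constrained margin problem).
Zb = np.c_[Z, np.ones(n)]                   # append a bias column
w = np.linalg.lstsq(Zb, y, rcond=None)[0]
pred = np.sign(Zb @ w)
acc = (pred == y).mean()
print(f"training accuracy in KPCA space: {acc:.2f}")
```

After the KPCA step the rings become (nearly) linearly separable, so even this crude linear rule classifies the training set well — which is the point of reducing the nonlinear case to an equivalent linear one.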