Learning large margin classifiers locally and globally. ICML '04: Proceedings of the Twenty-First International Conference on Machine Learning.
Face Recognition Using Laplacianfaces. IEEE Transactions on Pattern Analysis and Machine Intelligence.
Graph Embedding and Extensions: A General Framework for Dimensionality Reduction. IEEE Transactions on Pattern Analysis and Machine Intelligence.
Dimensionality Reduction of Multimodal Labeled Data by Local Fisher Discriminant Analysis. Journal of Machine Learning Research.
Locality sensitive discriminant analysis. IJCAI '07: Proceedings of the 20th International Joint Conference on Artificial Intelligence.
Discriminative orthogonal neighborhood-preserving projections for classification. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics.
On minimum class locality preserving variance support vector machine. Pattern Recognition.
A new local-global approach for classification. Neural Networks.
LIBSVM: A library for support vector machines. ACM Transactions on Intelligent Systems and Technology (TIST).
Locally Consistent Concept Factorization for Document Clustering. IEEE Transactions on Knowledge and Data Engineering.
A comparison of methods for multiclass support vector machines. IEEE Transactions on Neural Networks.
Maxi-Min Margin Machine: Learning Large Margin Classifiers Locally and Globally. IEEE Transactions on Neural Networks.
A Note on the Bias in SVMs for Multiclassification. IEEE Transactions on Neural Networks.
Research on large margin classifiers from the "local" and "global" view has become an active topic in machine learning and pattern recognition. Inspired by the Maxi-Min Margin Machine (M^4), a representative local and global learning machine, and by the idea of Locality Preserving Projections (LPP), we propose a novel large margin classifier, the Generalized Locality Preserving Maxi-Min Margin Machine (GLPM), in which within-class matrices are constructed from the labeled training points in a supervised way and then used to build the classifier. These within-class matrices preserve the intra-class manifold structure of the training sets, in addition to serving the role of the covariance matrices that determine the global projection direction in the M^4 model. Moreover, the connections among GLPM, M^4, and Local Fisher Discriminant Analysis (LFDA) are analyzed theoretically, and we show that GLPM can be regarded as a generalized M^4 machine. GLPM is also more robust, since it requires no assumption on the data distribution, whereas the M^4 machine assumes Gaussian-distributed data. Experiments on data sets from the machine learning repository demonstrate its advantage over M^4 in both local and global learning performance.
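The abstract does not spell out how the supervised within-class matrices are built, but constructions in this family (LPP, LFDA) typically connect same-class training points with heat-kernel weights and form a local scatter matrix from the resulting graph Laplacian. The sketch below illustrates that generic construction; the function name `within_class_scatter` and the single-bandwidth heat kernel are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def within_class_scatter(X, y, sigma=1.0):
    """LPP-style supervised within-class scatter (illustrative sketch).

    X: (n_samples, n_features) training points; y: (n_samples,) labels.
    Builds an affinity matrix W with heat-kernel weights between
    same-class pairs only, then returns S = X^T (D - W) X, where
    D - W is the graph Laplacian of the within-class graph.
    """
    # pairwise squared Euclidean distances
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    # connect only points that share a class label
    same = (y[:, None] == y[None, :]).astype(float)
    W = same * np.exp(-sq / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)          # no self-loops
    D = np.diag(W.sum(axis=1))        # degree matrix
    return X.T @ (D - W) @ X          # small when same-class points stay close
```

Minimizing a projection's value under this quadratic form favors directions along which same-class neighbors remain close, which is the "intra-class manifold preserving" behavior the abstract attributes to GLPM's within-class matrices.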