Nonlinear classification has been a non-trivial task in machine learning for decades. In recent years, kernel machines have successfully generalized inner-product-based linear classifiers to nonlinear ones by transforming data into a high- or infinite-dimensional feature space. However, because this transformation is implicit and the latent feature space is unobservable, it is hard to gain an intuitive understanding of how kernel machines work. In this paper, we propose a comprehensible framework for nonlinear classifier design, called the Manifold Mapping Machine (M^3). M^3 can generalize any linear classifier to a nonlinear one by explicitly transforming the data into a low-dimensional feature space. To demonstrate the effectiveness of the M^3 framework, we further present an algorithmic implementation of M^3 named the Supervised Spectral Space Classifier (S^3C). Compared with kernel classifiers, S^3C can achieve similar or even better data separation by mapping the data into a low-dimensional spectral space, where both the mapped data and the new feature space can be examined directly. Moreover, because discriminative information is integrated into the spectral space transformation, the classification performance of S^3C is more robust than that of kernel classifiers. Experimental results show that S^3C is superior to other state-of-the-art nonlinear classifiers on both synthetic and real-world data sets.
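The core idea of the M^3 framework — map the data explicitly into a low-dimensional spectral space, then run an ordinary linear classifier there — can be illustrated with off-the-shelf tools. The sketch below is only an analogy, not the authors' S^3C: it uses scikit-learn's *unsupervised* `SpectralEmbedding` (Laplacian eigenmaps over a k-NN affinity graph), whereas S^3C additionally integrates label information into the spectral transformation. All parameter choices (`n_neighbors=10`, two embedding components) are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_moons
from sklearn.manifold import SpectralEmbedding
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Two-moons data: a classic example that is NOT linearly separable
# in the original input space.
X, y = make_moons(n_samples=300, noise=0.05, random_state=0)

# Explicit map into a low-dimensional spectral space via Laplacian
# eigenmaps on a k-NN graph. Unlike a kernel machine, the mapped
# coordinates Z are directly observable and can be inspected/plotted.
embed = SpectralEmbedding(n_components=2, n_neighbors=10, random_state=0)
Z = embed.fit_transform(X)

# The same plain linear classifier, applied before and after the map.
raw = cross_val_score(LogisticRegression(), X, y, cv=5).mean()
spec = cross_val_score(LogisticRegression(), Z, y, cv=5).mean()
print(f"linear classifier on raw input:      {raw:.3f}")
print(f"linear classifier in spectral space: {spec:.3f}")
```

In the spectral space the two moons become (nearly) linearly separable, so the linear classifier's accuracy rises, mirroring the paper's claim that an explicit low-dimensional mapping can match the separation power of an implicit kernel transformation while keeping the new feature space examinable.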