Although pattern classification has been studied extensively over the past decades, training classifiers efficiently on large datasets remains a problem that requires particular attention. Many kernelized classification methods, such as the support vector machine (SVM) and support vector data description (SVDD), can be formulated as quadratic programming (QP) problems, but computing and operating on the associated kernel matrix incurs O(n^2) (or even up to O(n^3)) computational complexity, where n is the number of training patterns; this severely limits the applicability of these methods to large datasets. In this paper, a new classification method called the Maximum Vector-Angular Margin Classifier (MAMC) is first proposed. Based on the vector-angular margin, MAMC seeks an optimal vector c in the pattern feature space, and all test patterns are classified in terms of the maximum vector-angular margin ρ between the vector c and the training data points. It is then proved that the kernelized MAMC can be equivalently formulated as a kernelized Minimum Enclosing Ball (MEB) problem. This equivalence yields a distinctive merit of MAMC: it offers the flexibility of controlling the fraction of support vectors, as in ν-SVC, and it can be extended to a Maximum Vector-Angular Margin Core Vector Machine (MAMCVM) by coupling the Core Vector Machine (CVM) method with MAMC, so that fast training on large datasets is effectively achieved. Experimental results on artificial and real datasets validate the power of the proposed methods.
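To make the MEB connection concrete, the sketch below implements the simple Bădoiu-Clarkson core-set iteration for a (1+ε)-approximate minimum enclosing ball in input space; this is the style of algorithm that CVM (and hence MAMCVM) runs in the kernel-induced feature space to escape the O(n^2) kernel-matrix cost. It is a minimal illustrative sketch, not the authors' implementation: the function name approx_meb, the ε schedule, and the toy data are assumptions for demonstration only.

```python
import numpy as np

def approx_meb(X, eps=0.05):
    """(1+eps)-approximate minimum enclosing ball via the simple
    Badoiu-Clarkson iteration (the core-set idea underlying CVM).

    X : (n, d) array of points.
    Returns (center, radius).
    Illustrative input-space sketch; CVM applies the same scheme
    in the kernel-induced feature space.
    """
    c = X[0].astype(float)  # start the center at an arbitrary point
    for i in range(1, int(np.ceil(1.0 / eps**2)) + 1):
        # Furthest point from the current center acts as the next
        # "core" point; only O(n) work per iteration, and the number
        # of iterations depends on eps, not on n.
        d2 = np.sum((X - c) ** 2, axis=1)
        far = np.argmax(d2)
        # Move the center a shrinking step toward the furthest point.
        c += (X[far] - c) / (i + 1)
    radius = np.sqrt(np.max(np.sum((X - c) ** 2, axis=1)))
    return c, radius

# Toy usage: a (1+eps)-approximate MEB of 1000 random 2-D points.
rng = np.random.default_rng(0)
pts = rng.normal(size=(1000, 2))
center, radius = approx_meb(pts, eps=0.05)
print(center, radius)
```

The practical point of the equivalence proved in the paper is exactly this: once kernelized MAMC is recast as a kernelized MEB, the per-iteration cost depends on the core-set size rather than on n, which is what makes MAMCVM training feasible on large datasets.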