This paper presents a fast algorithm called Column Generation Newton (CGN) for kernel 1-norm support vector machines (SVMs). CGN combines the Column Generation (CG) algorithm and the Newton Linear Programming SVM (NLPSVM) method. NLPSVM was proposed for solving the 1-norm SVM, and CG is widely used in large-scale integer and linear programming. In each iteration of the kernel 1-norm SVM, NLPSVM has a time complexity of O(ℓ^3), where ℓ is the number of samples, and CG has a time complexity between O(n'^3) and O(ℓ^3), where n' is the number of columns of the coefficient matrix in the subproblem. CGN uses CG to generate a sequence of subproblems containing only the active constraints, and then applies NLPSVM to solve each subproblem. Since the subproblem in each iteration consists of only n' unbound constraints, CGN has a time complexity of O(n'^3), lower than that of both NLPSVM and CG. Moreover, CGN is faster than CG when the solution to the 1-norm SVM is sparse. A theorem is given showing that CGN converges in a finite number of steps. Experimental results on the Ringnorm and UCI data sets demonstrate the efficiency of CGN for solving the kernel 1-norm SVM.
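To make the column-generation idea concrete, the following is a minimal sketch of a CG loop for the kernel 1-norm SVM LP. It is not the authors' CGN implementation: the restricted subproblem is solved here with SciPy's generic `linprog` (HiGHS) rather than the Newton method (NLPSVM) the paper uses, the RBF kernel, toy data, `C`, and the pricing tolerance are all illustrative assumptions, and the 1-norm is handled by splitting each coefficient into nonnegative parts. A kernel column enters the working set when its reduced cost 1 - |Σ_i λ_i y_i K_ij| is negative, where λ are the dual prices of the margin constraints; the loop stops when no column prices out.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
# toy two-class data (stand-in for the Ringnorm/UCI sets used in the paper)
n = 60
X = np.vstack([rng.normal(-1, 1, (n // 2, 2)), rng.normal(1, 1, (n // 2, 2))])
y = np.hstack([-np.ones(n // 2), np.ones(n // 2)])

def rbf(A, B, gamma=0.5):
    d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d)

K = rbf(X, X)   # full kernel matrix; its columns are the CG "columns"
C = 1.0         # illustrative regularization constant

def solve_restricted(cols):
    """Solve the 1-norm SVM LP restricted to the kernel columns in `cols`.

    min  sum_j (a+_j + a-_j) + C * sum_i xi_i
    s.t. y_i (K_S (a+ - a-) + b)_i + xi_i >= 1,   all vars >= 0, b = b+ - b-.
    """
    m = len(cols)
    Ks = K[:, cols]
    c = np.hstack([np.ones(2 * m), 0.0, 0.0, C * np.ones(n)])
    # rewrite margin constraints as A_ub @ x <= -1
    A = np.hstack([-(y[:, None] * Ks), (y[:, None] * Ks),
                   -y[:, None], y[:, None], -np.eye(n)])
    res = linprog(c, A_ub=A, b_ub=-np.ones(n), method="highs")
    lam = -res.ineqlin.marginals   # dual prices of margin constraints, lam >= 0
    return res, lam

cols = [0]                         # start from a single column
for _ in range(50):
    res, lam = solve_restricted(cols)
    # pricing step: column j violates dual feasibility if |(lam*y) @ K_j| > 1
    scores = np.abs((lam * y) @ K)
    scores[cols] = 0.0
    j = int(np.argmax(scores))
    if scores[j] <= 1.0 + 1e-8:
        break                      # no violated column: optimal for the full LP
    cols.append(j)

print(f"working set size: {len(cols)}, objective: {res.fun:.4f}")
```

Each restricted LP involves only `len(cols)` kernel columns, which mirrors the paper's point: when the 1-norm solution is sparse, the working set stays small and the per-iteration cost is governed by n' rather than ℓ.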