Fast training of linear programming support vector machines using decomposition techniques
ANNPR'06 Proceedings of the Second international conference on Artificial Neural Networks in Pattern Recognition
In this paper, we propose three decomposition techniques for linear programming (LP) problems: (1) Method 1, in which we decompose the variables into a working set and a fixed set but do not decompose the constraints; (2) Method 2, in which we decompose only the constraints; and (3) Method 3, in which we decompose both the variables and the constraints. We prove that with Method 1 the value of the objective function is non-decreasing (non-increasing) for a maximization (minimization) problem, while with Method 2 it is non-increasing (non-decreasing) for a maximization (minimization) problem. Consequently, with Method 3, which combines Methods 1 and 2, the objective value is not guaranteed to be monotonic, and infinite loops may occur. We prove that infinite loops are resolved, and Method 3 converges in a finite number of steps, if the variables involved in an infinite loop are not released from the working set. We apply Methods 1 and 3 to LP support vector machines (SVMs) and discuss a method of further accelerating training by detecting an increase in the number of violations and restoring to the working set the variables released at the previous iteration. Computer experiments on microarray data, which have a huge number of input variables and a small number of constraints, demonstrate the effectiveness of Method 1 for training the primal LP SVM with linear kernels, and the superiority of Method 3 over Method 1 for nonlinear LP SVMs.
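Method 1 (decompose the variables into a working set and a fixed set, keep all constraints) can be sketched as block-wise LP solving: fix the non-working variables at their current values, solve the sub-LP over the working set, and repeat. The toy problem, the rotating working-set schedule, and the sub-LP solver (`scipy.optimize.linprog`) below are illustrative assumptions, not the paper's actual selection strategy; the sketch only illustrates the monotonicity property proved for Method 1.

```python
import numpy as np
from scipy.optimize import linprog


def method1_step(c, A, b, x, W):
    """One Method-1 step: optimize the working-set variables W of
    min c.x  s.t.  A x <= b, x >= 0, with the remaining variables
    fixed at their current values in x.  Constraints are NOT decomposed:
    the sub-LP keeps every row of A, with the fixed part moved to the RHS."""
    F = np.setdiff1d(np.arange(len(c)), W)
    rhs = b - A[:, F] @ x[F]               # A_W x_W <= b - A_F x_F
    res = linprog(c[W], A_ub=A[:, W], b_ub=rhs,
                  bounds=[(0, None)] * len(W))
    x_new = x.copy()
    if res.success:
        x_new[W] = res.x                   # current x_W is feasible, so the
    return x_new                           # sub-optimum cannot be worse


# Hypothetical toy LP: min c.x  s.t.  A x <= b, x >= 0
rng = np.random.default_rng(0)
n, m = 8, 5
A = rng.uniform(0.0, 1.0, size=(m, n))
b = np.ones(m)
c = -rng.uniform(0.5, 1.5, size=n)         # negative costs: minimization is non-trivial

x = np.zeros(n)                            # feasible start since b >= 0
objs = [c @ x]
for sweep in range(4):                     # naive rotation of size-4 working sets
    for start in range(0, n, 4):
        W = np.arange(start, min(start + 4, n))
        x = method1_step(c, A, b, x, W)
        objs.append(c @ x)

# Method 1's guarantee: objective non-increasing for a minimization problem,
# and every iterate stays feasible for the full constraint set.
assert all(objs[i + 1] <= objs[i] + 1e-9 for i in range(len(objs) - 1))
assert np.all(A @ x <= b + 1e-8) and np.all(x >= -1e-12)
```

Because every sub-LP retains the full constraint set, each iterate is feasible for the original problem and the objective can only improve or stay flat, which is exactly the monotonicity claim for Method 1; Method 2 (decomposing constraints) yields the opposite bound, and combining the two (Method 3) loses monotonicity unless looping variables are kept in the working set.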