Making large-scale support vector machine learning practical. Advances in Kernel Methods.
Fast training of support vector machines using sequential minimal optimization. Advances in Kernel Methods.
An Introduction to Support Vector Machines and Other Kernel-Based Learning Methods.
A Simple Decomposition Method for Support Vector Machines. Machine Learning.
Convergence of a Generalized SMO Algorithm for SVM Classifier Design. Machine Learning.
Polynomial-Time Decomposition Algorithms for Support Vector Machines. Machine Learning.
A note on the decomposition methods for support vector regression. Neural Computation.
Provably Fast Training Algorithms for Support Vector Machines. ICDM '01: Proceedings of the 2001 IEEE International Conference on Data Mining.
Support Vector Machines: Training and Applications.
Lagrangian support vector machines. The Journal of Machine Learning Research.
A Classification Framework for Anomaly Detection. The Journal of Machine Learning Research.
Estimating the Support of a High-Dimensional Distribution. Neural Computation.
Improvements to Platt's SMO Algorithm for SVM Classifier Design. Neural Computation.
Working Set Selection Using Second Order Information for Training Support Vector Machines. The Journal of Machine Learning Research.
Training support vector machines via SMO-type decomposition methods. ALT '05: Proceedings of the 16th International Conference on Algorithmic Learning Theory.
Fast rates for support vector machines. COLT '05: Proceedings of the 18th Annual Conference on Learning Theory.
General polynomial time decomposition algorithms. COLT '05: Proceedings of the 18th Annual Conference on Learning Theory.
Successive overrelaxation for support vector machines. IEEE Transactions on Neural Networks.
A fast iterative nearest point algorithm for support vector machine classifier design. IEEE Transactions on Neural Networks.
The analysis of decomposition methods for support vector machines. IEEE Transactions on Neural Networks.
On the convergence of the decomposition method for support vector machines. IEEE Transactions on Neural Networks.
Asymptotic convergence of an SMO algorithm without any assumptions. IEEE Transactions on Neural Networks.
A formal analysis of stopping criteria of decomposition methods for support vector machines. IEEE Transactions on Neural Networks.
Neighborhood Property-Based Pattern Selection for Support Vector Machines. Neural Computation.
General Polynomial Time Decomposition Algorithms. The Journal of Machine Learning Research.
Pegasos: Primal Estimated sub-GrAdient SOlver for SVM. Proceedings of the 24th International Conference on Machine Learning.
On the complexity of working set selection. Theoretical Computer Science.
Exponentiated Gradient Algorithms for Conditional Random Fields and Max-Margin Markov Networks. The Journal of Machine Learning Research.
A support vector machine with integer parameters. Neurocomputing.
Gaps in support vector optimization. COLT '07: Proceedings of the 20th Annual Conference on Learning Theory.
Generalized SMO-style decomposition algorithms. COLT '07: Proceedings of the 20th Annual Conference on Learning Theory.
Density-based similarity measures for content based search. Asilomar '09: Proceedings of the 43rd Asilomar Conference on Signals, Systems and Computers.
Computational Optimization and Applications.
Radial kernels and their reproducing kernel Hilbert spaces. Journal of Complexity.
The Journal of Machine Learning Research.
Multi kernel learning with online-batch optimization. The Journal of Machine Learning Research.
Stochastic dual coordinate ascent methods for regularized loss. The Journal of Machine Learning Research.
Computational Optimization and Applications.
We describe polynomial-time algorithms that produce approximate solutions with guaranteed accuracy for a class of QP problems that arise in the design of support vector machine classifiers. These algorithms employ a two-stage process: the first stage produces an approximate solution to a dual QP problem, and the second stage maps this approximate dual solution to an approximate primal solution. For the second stage we describe an O(n log n) algorithm that maps an approximate dual solution with accuracy (2(2K_n)^{1/2} + 8λ^{1/2})^{-2} λ ε_p^2 to an approximate primal solution with accuracy ε_p, where n is the number of data samples, K_n is the maximum kernel value over the data, and λ > 0 is the SVM regularization parameter. For the first stage we present new results for decomposition algorithms and describe new decomposition algorithms with guaranteed accuracy and run time. In particular, for τ-rate certifying decomposition algorithms we establish the optimality of τ = 1/(n-1). In addition, we extend the recent τ = 1/(n-1) algorithm of Simon (2004) to form two new composite algorithms that also achieve the τ = 1/(n-1) iteration bound of List and Simon (2005) but yield faster run times in practice. We also exploit the τ-rate certifying property of these algorithms to produce new stopping rules that are computationally efficient and that guarantee a specified accuracy for the approximate dual solution. Furthermore, for the dual QP problem corresponding to the standard classification problem, we describe operational conditions under which the Simon and composite algorithms possess an upper bound of O(n) on the number of iterations. For this same problem we also describe general conditions under which a matching lower bound exists for any decomposition algorithm that uses working sets of size 2. For the Simon and composite algorithms we also establish an O(n^2) bound on the overall run time for the first stage.
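The first-stage decomposition loop can be sketched as a minimal SMO-type solver for the standard C-SVM dual, using working sets of size 2 and a KKT-violation stopping rule. This is an illustrative sketch only: it uses maximal-violating-pair selection rather than the paper's rate-certifying selection, and the function name, tolerance, and iteration cap are assumptions.

```python
import numpy as np

def smo_solve(K, y, C, tol=1e-3, max_iter=10_000):
    """Minimal SMO-type decomposition (working sets of size 2) for the dual
        min_a  0.5 a^T Q a - e^T a,   Q_ij = y_i y_j K_ij,
        s.t.   0 <= a_i <= C,  sum_i y_i a_i = 0.
    Stops when the maximal KKT violation falls below `tol`."""
    n = len(y)
    a = np.zeros(n)
    G = -np.ones(n)                          # gradient of the dual at a = 0
    for _ in range(max_iter):
        yG = -y * G                          # KKT violation scores
        up = ((y > 0) & (a < C)) | ((y < 0) & (a > 0))
        low = ((y > 0) & (a > 0)) | ((y < 0) & (a < C))
        if not up.any() or not low.any():
            break
        i = np.where(up)[0][np.argmax(yG[up])]
        j = np.where(low)[0][np.argmin(yG[low])]
        if yG[i] - yG[j] <= tol:             # stopping rule: dual KKT gap
            break
        # Feasible direction d with d_i = y_i, d_j = -y_j keeps sum(y*a) fixed.
        curv = K[i, i] + K[j, j] - 2.0 * K[i, j]
        t = (yG[i] - yG[j]) / max(curv, 1e-12)   # unconstrained line minimum
        # Largest step along d before a_i or a_j leaves the box [0, C].
        t_max_i = C - a[i] if y[i] > 0 else a[i]
        t_max_j = a[j] if y[j] > 0 else C - a[j]
        t = min(t, t_max_i, t_max_j)
        a[i] += t * y[i]
        a[j] -= t * y[j]
        G += t * y * (K[:, i] - K[:, j])     # rank-two gradient update
    return a
```

The gradient update costs O(n) per iteration given the two kernel columns, which is where the per-iteration kernel-evaluation cost c_k enters the overall run-time bound.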
Combining the first and second stages gives an overall run time of O(n^2(c_k + 1)), where c_k is an upper bound on the cost of a single kernel evaluation. Pseudocode is presented for a complete algorithm that takes an accuracy ε_p as input and produces an approximate solution satisfying this accuracy in low-order polynomial time. Experiments are included to illustrate the new stopping rules and to compare the Simon and composite decomposition algorithms.
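The link between the two stages is the accuracy mapping: given a target primal accuracy ε_p, the first stage must be run to the corresponding dual accuracy. The helper below assumes the reconstruction of the abstract's bound, ε_d = (2(2K_n)^{1/2} + 8λ^{1/2})^{-2} λ ε_p^2, with K_n the maximum kernel value over the data; the function name is a hypothetical label, not from the paper.

```python
import math

def required_dual_accuracy(eps_p, lam, k_max):
    """Dual accuracy sufficient for primal accuracy eps_p, per the bound
    quoted in the abstract (assumed reading: k_max is the maximum kernel
    value K_n over the data, lam > 0 the SVM regularization parameter)."""
    denom = 2.0 * math.sqrt(2.0 * k_max) + 8.0 * math.sqrt(lam)
    return lam * eps_p ** 2 / denom ** 2
```

Note the quadratic dependence on ε_p: halving the target primal accuracy requires running the first-stage decomposition to a dual accuracy four times smaller.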