In this paper we propose a variant of the random coordinate descent method for solving linearly constrained convex optimization problems with composite objective functions. If the smooth part of the objective function has a Lipschitz continuous gradient, we prove that our method obtains an $\epsilon$-optimal solution in $\mathcal{O}(n^{2}/\epsilon)$ iterations, where $n$ is the number of blocks. For the class of problems with cheap coordinate derivatives, we show that the new method is faster than methods based on full-gradient information. We also analyze the rate of convergence in probability. For strongly convex functions, our method converges linearly. Extensive numerical tests confirm that, on very large problems, our method is much more efficient than methods based on full-gradient information.
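For illustration, the following is a minimal Python sketch, not the authors' implementation, of the kind of pairwise random coordinate step such methods build on: picking two coordinates $i \neq j$ and moving along $e_{i} - e_{j}$ leaves a single linear equality constraint invariant. The function name, the smooth quadratic objective $\frac{1}{2}\|Ax - y\|^{2}$, and the sum constraint are assumptions chosen for the example; the composite (nonsmooth) term and the block structure of the actual method are omitted.

```python
import numpy as np

def random_pair_coordinate_descent(A, y, b, iters=10000, seed=0):
    """Hypothetical sketch: minimize 0.5*||A x - y||^2 subject to
    sum(x) = b by repeatedly picking a random pair (i, j) and moving
    along e_i - e_j, which preserves the linear constraint."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.full(n, b / n)          # feasible start: sum(x) = b
    r = A @ x - y                  # residual, maintained incrementally
    for _ in range(iters):
        i, j = rng.choice(n, size=2, replace=False)
        d = A[:, i] - A[:, j]      # effect of the step on the residual
        denom = d @ d              # curvature along e_i - e_j
        if denom == 0.0:
            continue
        t = -(r @ d) / denom       # exact line search for the quadratic
        x[i] += t
        x[j] -= t
        r += t * d                 # O(m) update instead of a full A @ x
    return x
```

Each iteration touches only two columns of $A$, which is the "cheap coordinate derivatives" regime the abstract refers to: one step costs $\mathcal{O}(m)$, versus $\mathcal{O}(mn)$ for a full-gradient step.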