In this work we consider nonlinear minimization problems with a single linear equality constraint and box constraints. In particular, we are interested in problems whose number of variables is so large that traditional optimization methods cannot be applied directly. Many interesting real-world problems lead to large-scale constrained problems with this structure. For example, the special subclass of problems with a convex quadratic objective function plays a fundamental role in the training of Support Vector Machines, a technique for machine learning problems. For this subclass of convex quadratic problems, several convergent decomposition methods, based on the solution of a sequence of smaller subproblems, have been proposed. In this paper we define a new globally convergent decomposition algorithm that differs from previous methods in the rule for choosing the subproblem variables and in the presence of a proximal point modification in the objective function of the subproblems. In particular, the new rule for sequentially selecting the subproblems appears well suited to tackling large-scale problems, while the introduction of the proximal point term allows us to ensure global convergence of the algorithm in the general case of a nonconvex objective function. Finally, we report preliminary numerical results on support vector classification problems with up to 100,000 variables.
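As a reading aid, the problem class and the decomposition step described in the abstract can be stated as follows. This is a minimal sketch: the working set W, its complement \bar{W}, the proximal parameter \tau > 0, and the current iterate x^k are assumed notation, since the abstract fixes no symbols. The full problem is

\begin{equation*}
\min_{x \in \mathbb{R}^n} \; f(x)
\quad \text{s.t.} \quad a^{\top} x = b, \qquad l \le x \le u .
\end{equation*}

At iteration k, a working set W \subset \{1, \dots, n\} of variables is selected and updated while the remaining variables are held fixed at their current values; the subproblem objective carries the proximal point modification mentioned in the abstract:

\begin{equation*}
x_W^{k+1} \in \arg\min_{x_W} \; f\bigl(x_W, x_{\bar{W}}^{k}\bigr)
+ \frac{\tau}{2} \bigl\| x_W - x_W^{k} \bigr\|^2
\quad \text{s.t.} \quad a_W^{\top} x_W = b - a_{\bar{W}}^{\top} x_{\bar{W}}^{k},
\qquad l_W \le x_W \le u_W .
\end{equation*}

The proximal term regularizes each subproblem around the current iterate, which is the standard mechanism, and the one invoked in the abstract, for obtaining global convergence when f is nonconvex.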