A parallel version is proposed for a fundamental theorem of serial unconstrained optimization. The parallel theorem allows each of $k$ parallel processors to simultaneously use a different algorithm, such as a descent, Newton, quasi-Newton, or conjugate gradient method. Each processor can perform one or more steps of a serial algorithm on the portion of the gradient of the objective function assigned to it, independently of the other processors. A synchronization step is then performed which, for differentiable convex functions, consists of taking a strong convex combination of the $k$ points found by the $k$ processors. A more general synchronization step, applicable to convex as well as nonconvex functions, consists of taking the best point found by the $k$ processors, or any point that is better. The fundamental result that we establish is that any accumulation point of the parallel algorithm is stationary in the nonconvex case and is a global solution in the convex case. Computational testing on the Thinking Machines CM-5 multiprocessor indicates a speedup on the order of the number of processors employed.
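As a rough illustration of the scheme described above, the following minimal sketch simulates one such parallel iteration serially: each of the $k$ "processors" takes a gradient step on only its assigned block of variables from a common point, and a synchronization step then either forms a strong convex combination (convex case) or keeps the best candidate (general case). The function names (`pgd_step`, `synchronize`), the equal-weight combination, the fixed step size, and the quadratic test objective are assumptions for illustration only; the paper's theorem permits each processor to run any serial method, for one or more steps, on actual parallel hardware.

```python
# Illustrative sketch of the parallel gradient distribution idea, run serially.
# Hypothetical names and parameters; not the paper's exact algorithm.
import numpy as np

def pgd_step(x, grad, blocks, step=0.1):
    """Each of the k 'processors' takes a gradient step only on its assigned
    block of variables, starting from the common point x, yielding k candidates."""
    g = grad(x)
    candidates = []
    for block in blocks:
        y = x.copy()
        y[block] -= step * g[block]  # serial descent step on this block only
        candidates.append(y)
    return candidates

def synchronize(candidates, f, convex=True):
    """Synchronization: for differentiable convex f, take a strong convex
    combination (here, equal positive weights); in general, keep the best point."""
    if convex:
        return sum(candidates) / len(candidates)
    return min(candidates, key=f)

# Hypothetical convex test problem: f(x) = ||x - c||^2 with known minimizer c.
c = np.array([1.0, -2.0, 3.0, 0.5])
f = lambda x: float(np.sum((x - c) ** 2))
grad = lambda x: 2.0 * (x - c)

blocks = [np.array([0, 1]), np.array([2, 3])]  # k = 2 disjoint variable blocks
x = np.zeros(4)
for _ in range(200):
    x = synchronize(pgd_step(x, grad, blocks), f, convex=True)
print(f(x))  # tends to 0, the global minimum, consistent with the convex-case result
```

On this separable convex example, averaging the $k$ candidates amounts to a damped full gradient step, so the iterates contract toward the global solution, matching the convergence statement in the abstract; the `convex=False` branch shows the more general best-point synchronization.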