The nature of statistical learning theory
Coupled optimization in protein docking
RECOMB '99: Proceedings of the third annual international conference on Computational molecular biology
Shrinking the tube: a new support vector regression algorithm
Proceedings of the 1998 conference on Advances in neural information processing systems 11
An introduction to support vector machines: and other kernel-based learning methods
Robust Linear and Support Vector Regression
IEEE Transactions on Pattern Analysis and Machine Intelligence
SSVM: A Smooth Support Vector Machine for Classification
Computational Optimization and Applications
Learning from Data: Concepts, Theory, and Methods
Large Scale Kernel Regression via Linear Programming
Machine Learning
Convex Quadratic Approximation
Computational Optimization and Applications
Multi-funnel optimization using Gaussian underestimation
Journal of Global Optimization
Optimization Methods & Software
A function on R^n with multiple local minima is approximated from below, via linear programming, by a linear combination of convex kernel functions evaluated at sample points of the given function. The resulting convex kernel underestimator is then minimized, using a linear equation solver for a linear-quadratic kernel or Newton's method for a Gaussian kernel, to obtain an approximation to a global minimum of the original function. Successively shrinking the search region to which this procedure is applied yields accurate estimates, within 0.0001% for a Gaussian kernel, of the global minima of synthetic nonconvex piecewise-quadratic functions whose global minima are known exactly. Gaussian kernel underestimation improves by a factor of ten the relative error obtained with a piecewise-linear underestimator (O.L. Mangasarian, J.B. Rosen, and M.E. Thompson, Journal of Global Optimization 32(1):1-9, 2005), while cutting computation time by an average factor of more than 28.
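The underestimate-then-minimize idea can be sketched in a much-simplified form. The toy below is not the paper's algorithm: instead of a linear combination of kernel functions it fits a single convex quadratic q(x) = a*x^2 + b*x + c to a 1-D nonconvex function, using a linear program that keeps q below the sampled function values while maximizing the sum of q at the samples (so the fit is tight from below); the test function and all parameter choices are illustrative assumptions. Because a >= 0 makes q convex, its minimizer has the closed form x* = -b / (2a), mirroring the "linear equation solver" step for the linear-quadratic kernel.

```python
import numpy as np
from scipy.optimize import linprog

def convex_underestimator(xs, fs):
    """Fit q(x) = a*x^2 + b*x + c from below via LP.

    Constraints: q(x_i) <= f(x_i) for every sample, a >= 0 (convexity).
    Objective:   maximize sum_i q(x_i), i.e. minimize -sum_i q(x_i),
                 so the underestimator hugs the function as tightly
                 as the samples allow.
    """
    n = len(xs)
    # Each row gives q(x_i) as a linear function of the variables [a, b, c].
    A_ub = np.column_stack([xs**2, xs, np.ones(n)])
    cvec = -A_ub.sum(axis=0)                 # minimize -sum_i q(x_i)
    bounds = [(0, None), (None, None), (None, None)]  # a >= 0 keeps q convex
    res = linprog(cvec, A_ub=A_ub, b_ub=fs, bounds=bounds, method="highs")
    return res.x                             # coefficients a, b, c

# Illustrative nonconvex function: a quadratic bowl plus local wiggles.
f = lambda x: x**2 + 0.3 * np.sin(8 * x)
xs = np.linspace(-2.0, 2.0, 41)
a, b, c = convex_underestimator(xs, f(xs))
x_star = -b / (2 * a)   # minimizer of the convex underestimator (valid since a > 0)
```

The paper's scheme would now shrink the search region around x_star, resample, and refit, repeating until the estimate of the global minimum stabilizes; with a Gaussian kernel the minimization step would use Newton's method instead of the closed form above.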