Singly linearly constrained quadratic programs with upper and lower bounds on the variables arise in many applications. For the case in which the Hessian matrix is diagonal and positive definite, this paper presents a new algorithm based on secant approximation. For the general case of a non-diagonal Hessian, a new, efficient projected gradient algorithm is proposed. Its main features are: 1) a new formula for the stepsize; 2) a recently established adaptive non-monotone line search; and 3) an optimal stepsize determined by quadratic interpolation whenever the non-monotone line search criterion fails to be satisfied. Numerical experiments on large-scale random test problems, and on medium-scale quadratic programs arising in training Support Vector Machines, demonstrate the usefulness of both algorithms.
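To make the problem setting concrete, the sketch below shows a plain projected gradient method for a singly linearly constrained QP with box bounds, min (1/2) x'Hx - q'x subject to a'x = b and l <= x <= u. The projection onto the feasible set is computed by bisection on the multiplier of the equality constraint (a simple stand-in for the secant approach described for the diagonal case), and a Barzilai-Borwein stepsize is used in place of the paper's new stepsize formula and adaptive non-monotone line search, which are not reproduced here. All function names and tolerances are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def project(y, a, b, l, u, tol=1e-12):
    """Project y onto {x : a@x = b, l <= x <= u}.

    Uses bisection on the multiplier lam of the equality constraint;
    a@clip(y + lam*a, l, u) is monotone non-decreasing in lam.
    The bracket below is an assumption, not a derived bound.
    """
    def x_of(lam):
        return np.clip(y + lam * a, l, u)

    lo, hi = -1e6, 1e6  # assumed bracket for the multiplier
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if a @ x_of(mid) < b:
            lo = mid
        else:
            hi = mid
    return x_of(0.5 * (lo + hi))

def projgrad_qp(H, q, a, b, l, u, x0, max_iter=500):
    """Minimize 0.5*x'Hx - q'x  s.t.  a'x = b, l <= x <= u.

    Plain projected gradient with a Barzilai-Borwein stepsize,
    standing in for the paper's stepsize rule and line search.
    """
    x = project(x0, a, b, l, u)
    g = H @ x - q
    alpha = 1.0
    for _ in range(max_iter):
        x_new = project(x - alpha * g, a, b, l, u)
        s = x_new - x
        if np.linalg.norm(s) < 1e-9:  # projected step is negligible
            x = x_new
            break
        g_new = H @ x_new - q
        y = g_new - g
        sy = s @ y
        # BB1 stepsize; fall back to 1.0 when the curvature pair degenerates.
        alpha = (s @ s) / sy if sy > 1e-16 else 1.0
        x, g = x_new, g_new
    return x
```

For instance, minimizing x1^2 + x2^2 - 2*x1 - 4*x2 over x1 + x2 = 1, 0 <= x <= 1 has the KKT solution (0, 1), which the iteration above reaches in a few steps from the feasible point (0.5, 0.5).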