Parallel Software for Training Large Scale Support Vector Machines on Multiprocessor Systems
The Journal of Machine Learning Research
Support vector machines are a powerful machine learning technique, but training one requires solving a dense quadratic optimization problem, which is computationally challenging. A parallel implementation of linear support vector machine training has been developed using a combination of MPI and OpenMP. By applying an interior point method to the optimization and a reformulation that avoids the dense Hessian matrix, the structure of the augmented system matrix is exploited to partition data and computations efficiently among parallel processors. The new implementation has been applied to problems from the PASCAL Challenge on Large-scale Learning. We show that our approach is competitive and solves Challenge problems many times faster than other parallel approaches. We also demonstrate that the hybrid MPI/OpenMP version performs more efficiently than the pure-MPI version.
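The "reformulation that avoids the dense Hessian" exploits the fact that, for a linear SVM, the dual Hessian has the low-rank-plus-diagonal form D + XXᵀ, where X is the n-by-m data matrix with far fewer features m than samples n. The sketch below (an illustration under that assumption, not the paper's actual implementation) uses the Sherman-Morrison-Woodbury identity so that each interior-point linear solve needs only an m-by-m dense factorization instead of an n-by-n one:

```python
import numpy as np

# Hedged sketch: the dual Hessian of a linear SVM is Q = D + X @ X.T,
# with D diagonal and X an n-by-m data matrix (m << n). Forming Q
# densely costs O(n^2) memory. The Woodbury identity
#   (D + X X^T)^{-1} = D^{-1} - D^{-1} X (I_m + X^T D^{-1} X)^{-1} X^T D^{-1}
# reduces each interior-point solve to an m-by-m system, and the
# products X^T D^{-1} X and X^T D^{-1} r are what a parallel code can
# partition row-wise across processors.

rng = np.random.default_rng(0)
n, m = 500, 10                       # many samples, few features
X = rng.standard_normal((n, m))
d = rng.uniform(1.0, 2.0, n)         # diagonal of D (positive)
r = rng.standard_normal(n)           # right-hand side of the IPM solve

# Direct solve with the dense n-by-n Hessian (what we want to avoid).
u_dense = np.linalg.solve(np.diag(d) + X @ X.T, r)

# Woodbury solve: only an m-by-m dense system is factorized.
Dinv_r = r / d
Dinv_X = X / d[:, None]
small = np.eye(m) + X.T @ Dinv_X     # m-by-m, cheap to form and solve
u_woodbury = Dinv_r - Dinv_X @ np.linalg.solve(small, X.T @ Dinv_r)

assert np.allclose(u_dense, u_woodbury)
```

In a hybrid MPI/OpenMP setting of the kind the abstract describes, each MPI process would hold a block of rows of X and contribute its local XᵀD⁻¹X to a global reduction, with OpenMP threads parallelizing the local matrix products.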