The nature of statistical learning theory
Fast training of support vector machines using sequential minimal optimization. Advances in Kernel Methods.
A parallel solver for large quadratic programs in training support vector machines. Parallel Computing, special issue: parallel computing in numerical optimization.
RCV1: A New Benchmark Collection for Text Categorization Research. Journal of Machine Learning Research.
Working Set Selection Using Second Order Information for Training Support Vector Machines. Journal of Machine Learning Research.
Parallel Software for Training Large Scale Support Vector Machines on Multiprocessor Systems. Journal of Machine Learning Research.
Fast support vector machine training and classification on graphics processors. Proceedings of the 25th International Conference on Machine Learning.
P-packSVM: Parallel Primal grAdient desCent Kernel SVM. Proceedings of the 2009 Ninth IEEE International Conference on Data Mining (ICDM '09).
Hybrid MPI/OpenMP Parallel Linear Support Vector Machine Training. Journal of Machine Learning Research.
LIBSVM: A library for support vector machines. ACM Transactions on Intelligent Systems and Technology (TIST).
Parallel sequential minimal optimization for the training of support vector machines. IEEE Transactions on Neural Networks.
Support vector machines (SVMs) are a widely used technique for classification, clustering, and data analysis. While efficient algorithms for training SVMs are available, large datasets make training and classification computationally challenging. In this paper we exploit modern processor architectures to improve the training speed of LIBSVM, a well-known implementation of the sequential minimal optimization (SMO) algorithm. We describe LIBSVM-CBE, an optimized version of LIBSVM that takes advantage of the peculiar architecture of the Cell Broadband Engine. We assess the performance of LIBSVM-CBE on real-world training problems, and we show that this optimization is particularly effective on large, dense datasets.