Sequential minimal optimization (SMO) is a popular algorithm for training support vector machines (SVMs), but it still requires a large amount of computation time to solve large-scale problems. This paper proposes a parallel implementation of SMO for training SVMs, developed using the Message Passing Interface (MPI). Specifically, the parallel SMO first partitions the entire training data set into smaller subsets and then runs multiple CPU processors simultaneously, each handling one of the partitioned subsets. Experiments show substantial speedup on the Adult data set and the Modified National Institute of Standards and Technology (MNIST) data set when many processors are used, and satisfactory results on the Web data set.
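The abstract gives no code, but the partition-then-combine pattern it describes is easy to illustrate. The following is a minimal sketch using mpi4py: the root process splits the training set into one subset per MPI process, each process computes a local quantity on its subset, and an allreduce combines the per-process results, as a parallel SMO must do when selecting a working set across all subsets. The random data and the local scoring step (local_score) are illustrative assumptions, not the authors' actual SMO update rules.

# Minimal sketch of MPI-based data partitioning for parallel SMO.
# Assumes mpi4py and NumPy; run with e.g. `mpirun -n 4 python sketch.py`.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

if rank == 0:
    # Root holds the full training set and splits it into one
    # subset per MPI process (hypothetical random data here).
    X = np.random.rand(1000, 16)
    y = np.random.choice([-1.0, 1.0], size=1000)
    chunks = list(zip(np.array_split(X, size), np.array_split(y, size)))
else:
    chunks = None

# Each process receives its own partition of the training data.
X_local, y_local = comm.scatter(chunks, root=0)

# Hypothetical local computation standing in for the per-subset work
# (e.g., finding the most-violating sample within this subset).
local_score = float(np.abs(y_local * X_local.sum(axis=1)).max())

# Combine per-process results into a global quantity, mirroring how
# a parallel SMO selects the global working set from local candidates.
global_score = comm.allreduce(local_score, op=MPI.MAX)

if rank == 0:
    print("global max violation (illustrative):", global_score)

The scatter/allreduce structure is the essential point: the expensive per-sample scans run concurrently on disjoint subsets, and only small scalar summaries cross the network each iteration, which is what makes the reported speedups on large data sets plausible.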