This article focuses on the problem of shortening the time required for training and decision making by classifiers based on Support Vector Machine (SVM) techniques. We propose a hybrid implementation of this algorithm that combines a GPU-based parallel SVM implementation with a distributed computing system communicating over the MPI protocol. To estimate the computational efficiency of the proposed model, a number of experiments were carried out on UCI benchmark datasets. The results show that the parallel model running in a distributed computing environment reduces computation time compared both to the classical single-processor SVM and to the GPU-only SVM implementation.
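The abstract does not detail the hybrid scheme, but its general shape — sharding the training data across workers (as MPI ranks would) and training a local SVM on each shard — can be sketched in plain Python. The mini-batch Pegasos-style linear SVM and the weight-averaging combination step below are illustrative assumptions standing in for per-node GPU training and an MPI reduction; they are not the authors' algorithm.

```python
import random

def train_linear_svm(data, lam=0.01, epochs=50):
    """Pegasos-style subgradient training of a linear SVM on one data shard.
    data: list of (features, label) pairs with label in {-1, +1}."""
    dim = len(data[0][0])
    w = [0.0] * dim
    t = 0
    for _ in range(epochs):
        random.shuffle(data)
        for x, y in data:
            t += 1
            eta = 1.0 / (lam * t)
            margin = y * sum(wi * xi for wi, xi in zip(w, x))
            # Subgradient step: shrink w, then add eta*y*x if the margin is violated.
            w = [(1.0 - eta * lam) * wi for wi in w]
            if margin < 1.0:
                w = [wi + eta * y * xi for wi, xi in zip(w, x)]
    return w

def distributed_train(dataset, n_workers=4):
    """Shard the data as an MPI scatter would, train one local model per shard,
    then average the local weight vectors -- a simple stand-in for the
    combination step an MPI allreduce could perform across nodes."""
    shards = [dataset[i::n_workers] for i in range(n_workers)]
    models = [train_linear_svm(list(s)) for s in shards]
    dim = len(models[0])
    return [sum(m[j] for m in models) / n_workers for j in range(dim)]

def predict(w, x):
    """Classify a point by the sign of the decision function."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) >= 0.0 else -1
```

In a real MPI deployment each shard would live on a separate rank (and the inner loop would run on that rank's GPU); here the shards are simply trained in sequence to keep the sketch self-contained.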