In conventional distributed machine learning methods, distributed support vector machine (SVM) algorithms are trained over pre-configured intranet/internet environments to find an optimal classifier. These methods become complicated and costly for large datasets. We therefore propose the Cloud SVM training mechanism (CloudSVM), which trains SVMs in a cloud computing environment using the MapReduce technique for distributed machine learning applications. Accordingly, (i) the SVM algorithm is trained on distributed cloud storage servers that work concurrently; (ii) the support vectors from every trained cloud node are merged; and (iii) these two steps are iterated until the SVM converges to the optimal classifier function. A single computer is incapable of training the SVM algorithm on large-scale data sets. The results of this study are important for training machine learning applications on large-scale data sets. We show that iterative training of the split data set in a cloud computing environment using SVMs converges to a globally optimal classifier in a finite number of iterations.
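The loop below is a minimal, single-process sketch of the train-and-merge cycle described in the abstract, not the paper's MapReduce implementation: scikit-learn's SVC is assumed as the per-node solver, and the function name cloud_svm, the partitions argument, the linear kernel, C=1.0, and the stopping test are illustrative choices rather than details taken from the paper.

```python
import numpy as np
from sklearn.svm import SVC  # stand-in for the per-node SVM solver


def cloud_svm(partitions, max_iter=20):
    """Iterative train-and-merge loop sketched from the abstract.

    partitions -- list of (X, y) arrays, one per simulated cloud node.
    Each pass trains one SVM per partition (the "map" step), merges the
    resulting support vectors (the "reduce" step), broadcasts them back
    to every node, and repeats until the merged set stops changing.
    """
    global_sv = None   # (X_sv, y_sv) merged from the previous pass
    prev_keys = None   # previous pass's support vectors, as a set of rows
    for _ in range(max_iter):
        sv_X, sv_y = [], []
        for X, y in partitions:                   # map: one SVM per node
            if global_sv is not None:             # append current global SVs
                X = np.vstack([X, global_sv[0]])
                y = np.concatenate([y, global_sv[1]])
            clf = SVC(kernel="linear", C=1.0).fit(X, y)
            sv_X.append(X[clf.support_])          # keep only this node's SVs
            sv_y.append(y[clf.support_])
        # reduce: merge all nodes' support vectors (duplicates tolerated here)
        global_sv = (np.vstack(sv_X), np.concatenate(sv_y))
        keys = {tuple(row) for row in global_sv[0]}
        if keys == prev_keys:                     # SV set unchanged -> converged
            break
        prev_keys = keys
    # final classifier trained on the merged support vectors only
    return SVC(kernel="linear", C=1.0).fit(*global_sv)
```

For example, calling cloud_svm([(X1, y1), (X2, y2)]) with two data chunks simulates two cloud nodes on a single machine; in the distributed setting each (X, y) pair would instead live on its own storage server and only the support vectors would be exchanged between passes.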