In this paper, we propose a novel distributed machine learning method, the Parallel Covering Algorithm, inspired by the modular structure of the Covering Algorithm (CA). We first present the classic CA and analyze which of its parts can be computed independently, then develop the Parallel CA by exploiting this modularity together with data-set decomposition. A detailed implementation of the parallel computing process is described. In the experiments, three data sets are used to evaluate the Parallel CA against the classic CA, with speedup and efficiency as the two performance criteria. Both the analysis and the comparison indicate that the Parallel CA is more efficient than the classic CA. We also empirically compare our results with those of a parallel SVM on a large data set, which further shows that the proposed algorithm is effective.
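To make the data-decomposition idea concrete, the following is a minimal sketch, not the paper's implementation: a simplified sphere-cover learner (the greedy radius rule here is a stand-in for the actual CA construction), with the data set split into parts that are trained concurrently and the resulting partial cover sets merged. The function names (`build_covers`, `parallel_covers`, `predict`) are hypothetical. In this scheme, speedup is measured as S = T_1 / T_p and efficiency as E = S / p for p workers.

```python
import math
from concurrent.futures import ThreadPoolExecutor


def build_covers(samples):
    """Greedily build sphere covers over (x, y, label) samples.

    Each cover is (center, radius, label); the radius reaches just short of
    the nearest sample of a different class (a simplified radius rule).
    """
    covers = []
    remaining = list(samples)
    while remaining:
        cx, cy, clabel = remaining[0]
        opposite = [math.dist((cx, cy), (x, y))
                    for x, y, lbl in samples if lbl != clabel]
        radius = min(opposite) * 0.99 if opposite else 1.0
        covers.append(((cx, cy), radius, clabel))
        # Drop the same-class samples this cover now handles; keep the rest.
        remaining = [s for s in remaining
                     if s[2] != clabel or math.dist((s[0], s[1]), (cx, cy)) > radius]
    return covers


def parallel_covers(samples, n_parts=2):
    """Decompose the data set, train covers on each part concurrently,
    and merge the partial cover sets (CA's modularity makes the merged
    set usable directly as a classifier)."""
    parts = [samples[i::n_parts] for i in range(n_parts)]
    with ThreadPoolExecutor(max_workers=n_parts) as ex:
        partial = list(ex.map(build_covers, parts))
    return [c for covers in partial for c in covers]


def predict(covers, point):
    """Assign the label of the cover whose boundary is closest to the point."""
    return min(covers, key=lambda c: math.dist(point, c[0]) - c[1])[2]
```

Threads are used here only to keep the sketch self-contained; a CPU-bound training step would use processes (or separate nodes) instead, since cover construction on each partition needs no communication until the final merge.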