Using and designing massively parallel computers for artificial neural networks
Journal of Parallel and Distributed Computing - Special issue on neural computing on massively parallel processing
Using MPI (2nd ed.): portable parallel programming with the message-passing interface
Parallel Implementations of Backpropagation Neural Networks on Transputers: A Study of Training Set Parallelism
In this article we develop a batch variant of the ART 2 classification algorithm introduced by Carpenter and Grossberg. Our algorithm exploits training-example parallelism while leaving the overall design of the ART 2 network unchanged, so that a significant reduction of the execution time can be achieved on a multiprocessor system. We present a parallel implementation strategy and analyze it with respect to execution time and speedup. Since our algorithm naturally benefits from data parallelism, the implementation uses the data-parallel skeletons of the Muenster skeleton library Muesli. We show that skeletons are an efficient way to write parallel applications compared with a manual MPI implementation.
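The core idea of training-example parallelism can be illustrated with a minimal sketch: the batch of examples is split across workers, each worker accumulates partial prototype updates independently, and a single reduction combines them before the prototypes are updated once per batch. The sketch below is a hypothetical simplification, not Carpenter and Grossberg's full ART 2 dynamics and not the Muesli API; the function `batch_art_step`, its `vigilance` parameter, and the simulated workers are assumptions for illustration only.

```python
import numpy as np

def normalize(v):
    """Scale a vector to unit length (leave zero vectors unchanged)."""
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

def batch_art_step(examples, prototypes, vigilance=0.8, n_workers=4):
    """One batch step of a simplified ART-like update.

    Each 'worker' processes its chunk of examples independently
    (training-example parallelism); the loop below simulates what
    would run concurrently on separate processors.
    """
    chunks = np.array_split(examples, n_workers)
    partial = []
    for chunk in chunks:
        sums = np.zeros_like(prototypes)
        counts = np.zeros(len(prototypes))
        for x in chunk:
            xn = normalize(x)
            sims = prototypes @ xn          # similarity to each prototype
            j = int(np.argmax(sims))        # best-matching prototype
            if sims[j] >= vigilance:        # vigilance test: accept the match
                sums[j] += xn
                counts[j] += 1
        partial.append((sums, counts))
    # Reduction step: in a message-passing version this would be an
    # MPI_Allreduce (or a Muesli fold skeleton) over the partial sums.
    total_sums = sum(s for s, _ in partial)
    total_counts = sum(c for _, c in partial)
    new_protos = prototypes.copy()
    for j in range(len(prototypes)):
        if total_counts[j] > 0:
            new_protos[j] = normalize(total_sums[j] / total_counts[j])
    return new_protos
```

Because each worker touches only its own chunk and the combine step is a simple sum, the pattern maps directly onto data-parallel skeletons, which is why a skeleton library fits this algorithm naturally.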