This study addresses the efficient implementation of a novel neural-network learning algorithm on distributed systems for concurrent execution. We assume a distributed system of heterogeneous computers, with the neural network replicated on each computer. We propose an architecture model with an efficient pattern-allocation scheme that accounts for processor speeds and overlaps communication with computation. The training-pattern set is distributed among the heterogeneous processors, and the mapping remains fixed throughout the learning process. We present a heuristic pattern-allocation algorithm that minimizes the execution time of neural-network learning while overlapping computation with communication. Under the condition that each processor performs an amount of work directly proportional to its speed, we show that pattern allocation is a polynomial-time problem, solvable by dynamic programming.
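The proportionality condition above — each processor receiving work in direct proportion to its speed — can be illustrated with a minimal sketch. The function below is a hypothetical helper (not the paper's algorithm) that splits a fixed-size training-pattern set among heterogeneous processors proportionally to their relative speeds, handing leftover patterns to the processors with the largest fractional shares:

```python
def allocate_patterns(num_patterns, speeds):
    """Split num_patterns among processors proportionally to speeds.

    Returns a list of integer pattern counts, one per processor,
    summing to num_patterns. Leftover patterns from rounding go to
    the processors with the largest fractional remainders.
    """
    total = sum(speeds)
    # Ideal (fractional) share for each processor.
    shares = [num_patterns * s / total for s in speeds]
    counts = [int(share) for share in shares]
    # Distribute the rounding leftover by largest fractional remainder.
    leftover = num_patterns - sum(counts)
    order = sorted(range(len(speeds)),
                   key=lambda i: shares[i] - counts[i],
                   reverse=True)
    for i in order[:leftover]:
        counts[i] += 1
    return counts

# Example: 1000 training patterns, three processors with
# relative speeds 1, 2, and 5.
print(allocate_patterns(1000, [1, 2, 5]))  # [125, 250, 625]
```

This fixed, speed-proportional mapping is what allows each processor to finish its share of a training epoch at roughly the same time, which is the precondition for overlapping communication with computation effectively.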